Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Error

Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin


Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
checking for Ruby version >= 1.8.5... yes
checking for gcc... yes
checking for Magick-config... no
Can't install RMagick 2.13.2. Can't find Magick-config in /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

Provided configuration options:
    --with-opt-dir
    --without-opt-dir
    --with-opt-include
    --without-opt-include=${opt-dir}/include
    --with-opt-lib
    --without-opt-lib=${opt-dir}/lib
    --with-make-prog
    --without-make-prog
    --srcdir=.
    --curdir
    --ruby=/usr/bin/ruby1.9.1


Gem files will remain installed in /var/lib/gems/1.9.1/gems/rmagick-2.13.2 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/rmagick-2.13.2/ext/RMagick/gem_make.out

An error occurred while installing rmagick (2.13.2), and Bundler cannot
continue.


Solution

Install ImageMagick along with its development headers

Ubuntu

sudo apt-get install imagemagick libmagickcore-dev libmagickwand-dev
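If the install succeeded, Magick-config will be on the PATH and the gem should now build. A quick sanity check (a defensive sketch that prints whichever case applies):

```shell
# Check whether Magick-config is now reachable on the PATH
if command -v Magick-config >/dev/null 2>&1; then
  echo "Magick-config found: $(Magick-config --version)"
else
  echo "Magick-config still missing - check the ImageMagick dev packages"
fi
```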


Error installing json Failed to build gem native extension.

jagat@nanak-P570WM:/var/www/redmine-2.4.2$ gem install json
Fetching: json-1.8.1.gem (100%)
ERROR:  While executing gem ... (Gem::FilePermissionError)
    You don't have write permissions into the /var/lib/gems/1.9.1 directory.
jagat@nanak-P570WM:/var/www/redmine-2.4.2$ sudo gem install json
Building native extensions.  This could take a while...
ERROR:  Error installing json:
    ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
    from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
    from extconf.rb:1:in `<main>'


Gem files will remain installed in /var/lib/gems/1.9.1/gems/json-1.8.1 for inspection.
Results logged to /var/lib/gems/1.9.1/gems/json-1.8.1/ext/json/ext/generator/gem_make.out



Solution

Install the ruby-dev package as well; it provides mkmf and the headers needed to build native extensions

Ubuntu

sudo apt-get install ruby-dev
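With ruby-dev installed, mkmf becomes loadable. A quick way to confirm (a sketch that prints either outcome):

```shell
# mkmf ships with the ruby dev package; require-ing it confirms the install
if ruby -e 'require "mkmf"' >/dev/null 2>&1; then
  echo "mkmf OK"
else
  echo "mkmf missing - ruby-dev may not match your ruby version"
fi
```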

rgdal and ggmap R packages install

To install the rgdal and ggmap R packages we need additional system dependencies.

I spent a lot of time searching, so I am dumping everything here.

Versions and package names below are for Red Hat; look up the corresponding names if you are on Debian.

geos-devel      3.3.2-1.el6
geos            3.3.2-1.el6
gdal            1.7.3-15.el6
gdal-devel      1.7.3-15.el6
proj-devel      4.7.0-1.el6
proj-epsg       4.7.0-1.el6
proj-nad        4.7.0-1.el6
libpng          2:1.2.49-1.el6_2
libpng-devel    2:1.2.49-1.el6_2
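On Red Hat, the whole dependency list can be pulled in with a single yum command. The sketch below only assembles and prints that command, since actually running it needs root; the install.packages call at the end is the usual way to then install the R packages:

```shell
# The dependency list from above, as one yum command (RHEL/CentOS names;
# Debian package names differ)
PKGS="geos geos-devel gdal gdal-devel proj-devel proj-epsg proj-nad libpng libpng-devel"
echo "sudo yum install $PKGS"
# Once the system libraries are in place, inside R:
#   install.packages(c("rgdal", "ggmap"))
```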


Here are some of the related errors and their fixes.

Error: proj/epsg not found

Install

proj-epsg                  
proj-nad 

read.c:3:17: error: png.h: No such file or directory

Install
libpng-devel

configure: error: proj_api.h not found in standard or given locations.
ERROR: configuration failed for package ‘rgdal’

Install
proj-devel


Error: gdal-config not found
Make sure gdal-config is in your path; try typing gdal-config.

Install gdal-devel



How to help kernel developers help you

Lately, my new machine's syslog has been full of kernel errors.

This Clevo machine is not officially supported on Linux by Clevo, so I cannot ask the manufacturer.

I found this interesting link on how to find which function is throwing the error and how to contact the maintainer of that kernel code:

http://lwn.net/Articles/395178/


Download Kernel code

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git linux-git 

Depending on the error you get, search the code to find the file the error message is being thrown from.

After that, use the Ubuntu bug reporting tool to file a bug or ask a question.
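The "search the code" step can be sketched like this. The message string and file name are placeholders, and the sketch only prints the commands you would run inside the linux-git checkout:

```shell
# MSG is a placeholder for the exact message you saw in syslog
MSG="example kernel error text"
# Inside the linux-git checkout, this search would locate the message:
echo "git grep -n \"$MSG\""
# scripts/get_maintainer.pl then reports who to contact for the matching file:
echo "./scripts/get_maintainer.pl -f <matching-file.c>"
```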



Install Hue from tar ball or source

To install Hue from a tarball or from source, follow these steps.

Get the code

Download the tarball from GitHub or the Cloudera website:

https://github.com/cloudera/hue

or

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/CDH-Version-and-Packaging-Information/cdhvd_topic_3.html

Extract it to some location.

If you are building from source, clone the code into some location instead.

Development tools

You need to install the following additional packages to build from source:

https://github.com/cloudera/hue#development-prerequisites

I am using Ubuntu, so I installed them with:

$ sudo apt-get install ant gcc g++ libkrb5-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev libldap2-dev python-dev python-simplejson python-setuptools


Build the package

$ git clone http://github.com/cloudera/hue.git
$ cd hue
$ make apps
$ build/env/bin/supervisor 
 
This will start the Hue server on the default port.

You can configure various properties of Hue by editing the desktop/conf/hue.ini file.
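For example, the HTTP bind address and port live in the [desktop] section of hue.ini; the fragment below shows the stock defaults as an illustration:

```ini
[desktop]
  # Address and port the Hue web server binds to
  http_host=0.0.0.0
  http_port=8888
```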
 
 

Update Hive Metastore


Hive 0.12 includes the Schema update tool


$ bin/schematool -help
usage: schemaTool
-dbType <databaseType>             Metastore database type
-dryRun                            list SQL scripts (no execute)
-help                              print this message
-info                              Show config and schema details
-initSchema                        Schema initialization
-initSchemaTo <initTo>             Schema initialization to a version
-passWord <password>               Override config file password
-upgradeSchema                     Schema upgrade
-upgradeSchemaFrom <upgradeFrom>   Schema upgrade from a version
-userName <user>                   Override config file user name
-verbose                           only print SQL statements


Depending on which metastore database you are using, dbType can be mysql, postgres, derby, or oracle.

Update MySQL Hive metastore from 0.10.0 to 0.12.0
For example:

bin/schematool -dbType mysql -upgradeSchemaFrom 0.10.0

You can also do a dry run by adding the -dryRun option:

bin/schematool -dbType mysql -upgradeSchemaFrom 0.10.0 -dryRun



Hive schematool to create metastore

Hive 0.12 has a new tool called schematool.
This tool is used for two purposes:
1) Create a new metastore database when you install a new Hive

2) Upgrade an existing Hive metastore to a new version


To create a new metastore when you install a new Hive:

Create hive-site.xml

Inside it, specify the connection details for your MySQL (or other) database.
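A minimal sketch of the MySQL connection properties in hive-site.xml; the host, database name, user, and password below are placeholders:

```xml
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://dbhost/metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>
```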

Use the following command to populate that database

$ bin/schematool -dbType mysql -initSchema

You can also see the schema details with:

$ bin/schematool -dbType mysql -info



Impala Failed to load metadata for table

Error

Failed to load metadata for table: <tableName>
CAUSED BY: TableLoadingException: Failed to load metadata for table: <tableName>
CAUSED BY: TTransportException: java.net.SocketException: Broken pipe
CAUSED BY: SocketException: Broken pipe

Solution

Execute refresh via the impala shell
or
Restart the impala service
or
Execute refreshschema via the ODBC driver
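The refresh from the impala shell looks like this; the table name is a placeholder, and the guard is only there so the sketch degrades gracefully where impala-shell is not installed:

```shell
# my_table is a placeholder for the affected table
TABLE="my_table"
if command -v impala-shell >/dev/null 2>&1; then
  # Re-load the table metadata in Impala
  impala-shell -q "refresh $TABLE"
else
  echo "impala-shell not found; would run: impala-shell -q \"refresh $TABLE\""
fi
```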
Reference

http://knowledgebase.progress.com/articles/Article/000041831

Debug Hive query and startup

Start Hive with the following command:
 
bin/hive --hiveconf hive.root.logger=DEBUG,console
 
Logs
 
User query and metastore logs can be analysed as follows.

User query logs: configure the hive.log.dir variable in /etc/hive/hive-log4j.properties

Hive metastore logs: configure the hive.log.dir variable in /etc/hive-metastore/hive-log4j.properties
 
A very good presentation on how to debug hive errors
 
 

Compare contents of two jar files

Download the jarcomp jar from the URL below.

Use the following command to compare:


java -jar jarcomp_01.jar file1.jar file2.jar

HBase transfer to another cluster using distcp

We had to take a full copy of HBase from one cluster to another.

We decided to take the brute-force approach of copying via distcp.
Although it's not the recommended approach, we chose it because time was short and we knew it works quickly.

Source cluster: clusterA

Destination cluster: clusterB

Steps

This assumes that you don't have any tables on the destination side. If you do, back them up first.

Stop HBase on both sides; this ensures all in-memory data is flushed to disk.

Start distcp to copy the data.

Commands executed on clusterA side

# Create directory on destination side

sudo -u hdfs hadoop fs -mkdir hdfs://clusterB/hbase_copy_20130320

# Start distcp job
sudo -u hdfs hadoop distcp -update /hbase hdfs://clusterB/hbase_copy_20130320

Verify that the data size matches on both sides:

clusterA
hadoop fs -du -h /hbase
clusterB
hadoop fs -du -h /hbase_copy_20130320

Commands executed on destination side clusterB

sudo -u hdfs hadoop fs -chown -R hbase:hbase /hbase_copy_20130320
sudo -u hdfs hadoop fs -mv /hbase /hbase_clusterB_backup
sudo -u hdfs hadoop fs -mv /hbase_copy_20130320 /hbase

Do meta repair

sudo -u hbase hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair -base hdfs://clusterB_NamenodeService/hbase

This will take some time.

Once it's done, restart HBase on the destination side and let region balancing happen.

You can verify the data by listing the tables in the HBase shell:

echo "list" | hbase shell

Lastly, the recommended approach is snapshots and CopyTable, but we did not use those this time. I will write another post on using snapshots and CopyTable.

Thanks for reading