Recently, while working with one of our customers, we ran into an issue where the cluster showed only 2 cores per node.
$ cat /proc/cpuinfo
The processor in these systems was an Intel Xeon E5620, which should have 4 cores and 8 threads.
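For example, a quick way to count the logical CPUs the kernel has detected (this grep is just an illustration, not necessarily the exact command we ran):

$ grep -c ^processor /proc/cpuinfo
2

On an E5620 that count should have been 8 per node, not 2.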
After some analysis we found that the core count was being reported incorrectly because ACPI was turned off in the kernel boot parameters on all the nodes:
/etc/grub.conf
Changing the boot parameter to acpi=ht on all 12 nodes made Red Hat detect all the threads in the system, which had been left undetected earlier.
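For reference, acpi=ht limits ACPI to just what the kernel needs to enumerate the hyper-threaded CPUs. A rough sketch of what the kernel line in /etc/grub.conf looked like after the change (the kernel version and root device below are placeholders, not the actual values from this cluster):

title Red Hat Enterprise Linux
    root (hd0,0)
    kernel /vmlinuz-2.6.32-xx.el6.x86_64 ro root=/dev/sda1 rhgb quiet acpi=ht
    initrd /initramfs-2.6.32-xx.el6.x86_64.img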
This made the Hadoop cluster perform dramatically better; jobs processed much faster and the customer was happy. I am not sure who was at fault or why this was turned off earlier, but I found it and we fixed it, and that is the happy part.
How do you handle your installations so that you avoid these kinds of errors?
Just after that I tuned the Hadoop map and reduce task slots, the first performance tuning step we all do.
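Here is a minimal sketch of those slot settings in mapred-site.xml on each TaskTracker (the values are only illustrative; the right numbers depend on the cores and memory available on your nodes):

<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>6</value>
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>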
Please share your views and comments below.
Thank You.