车立方" type="application/atom+xml">

hahakubile Blog, Powered by 车立方

Welcome to hahakubile's blog, You should know him. Thanks to 车立方

Configuring ulimit for HBase in Cloudera CDH3u6

Error in the HBase RegionServer log

2014-02-13 11:38:13,028 INFO org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.101.21:50010, add to deadNodes and continue
java.net.SocketException: Too many open files

About ulimit

http://hbase.apache.org/book.html#ulimit

To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration.
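Before raising anything, it helps to see what the HBase user currently gets. A minimal sketch (the hbase user name and the shell override are assumptions; substitute the account that actually runs the RegionServer):

# Print the open-files and max-processes limits for the hbase user
su -s /bin/bash -c 'ulimit -n -u' hbase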

Raising ulimit on a CentOS server

fs.file-max

  1. vim /etc/sysctl.conf
  2. Add the line fs.file-max = 122880 to /etc/sysctl.conf.
  3. Run sysctl -p to apply the kernel parameter immediately; the entry in /etc/sysctl.conf makes it persist across reboots.

ulimit

  4. vim /etc/security/limits.conf
  5. Add the lines below (limits.conf entries need a domain field; * applies to all users, or name the user that runs HBase, e.g. hbase):
    • * soft nofile 122880
    • * hard nofile 122880
  6. Log out and log back in, then run ulimit -n in your terminal. If everything is set correctly, it should return 122880. The sketch after this list puts these steps together.
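Putting the steps above together, a minimal sketch run as root (the * domain and the here-doc append are one way to do it; 122880 is the value used throughout this post):

# Kernel-wide descriptor ceiling: persist it, then apply without a reboot
echo 'fs.file-max = 122880' >> /etc/sysctl.conf
sysctl -p

# Per-user limit: format is <domain> <type> <item> <value>
cat >> /etc/security/limits.conf <<'EOF'
* soft nofile 122880
* hard nofile 122880
EOF

# Log out and back in, then verify
ulimit -n    # expect 122880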

CM users

For CM users, a supervisor process monitors the launched daemons, so the ulimits are inherited from it; they are pre-set in /usr/sbin/cmf-agent (look for lines that start with “ulimit”).

  1. Edit /usr/sbin/cmf-agent and change the ulimit -n setting (sketched after this list).
  2. Shut down the HDFS/MapReduce/HBase services on the affected nodes.
  3. On these stopped nodes, run: service cloudera-scm-agent hard_restart
  4. Restart the services.
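A minimal sketch of steps 1 and 3 on one node. The sed pattern is an assumption: it presumes the agent script contains a line of the form ulimit -n <value>, so run the grep first and adjust to what you actually find:

# Locate the current ulimit settings in the agent script
grep -n '^ulimit' /usr/sbin/cmf-agent
# Raise the open-files limit to 122880 (back up the script first)
cp /usr/sbin/cmf-agent /usr/sbin/cmf-agent.bak
sed -i 's/^ulimit -n .*/ulimit -n 122880/' /usr/sbin/cmf-agent
# With the services stopped, restart the agent under the new limit
service cloudera-scm-agent hard_restart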

How to confirm

To confirm, go to any CM-managed node, run ps -ef | grep hdfs, and grab the PID of an HDFS process. Then cat /proc/<pid>/limits to make sure the new limits are sticking.
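A minimal sketch of the check (the pgrep pattern is illustrative; any HDFS daemon PID will do):

# Grab the PID of an HDFS daemon and inspect its effective limits
pid=$(pgrep -f 'org.apache.hadoop.hdfs' | head -1)
cat /proc/$pid/limits        # full table, as excerpted below
ls /proc/$pid/fd | wc -l     # number of descriptors currently open

The relevant rows should look like this: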

Limit                     Soft Limit           Hard Limit           Units     
Max processes             65536                65536                processes 
Max open files            122880               122880               files