Error in HBase RegionServer log
2014-02-13 11:38:13,028 INFO org.apache.hadoop.hdfs.DFSClient: Failed to connect to /192.168.101.21:50010, add to deadNodes and continue
java.net.SocketException: Too many open files
About ulimit
http://hbase.apache.org/book.html#ulimit
To be clear, upping the file descriptors and nproc for the user who is running the HBase process is an operating system configuration, not an HBase configuration.
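Before changing anything, you can check the limits the current shell session is running with; ulimit -n and ulimit -u are standard shell builtins:
ulimit -n   # max open file descriptors for this session
ulimit -u   # max user processes for this session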
Raising ulimit on a CentOS server
fs.file-max
vim /etc/sysctl.conf
- Add this line to /etc/sysctl.conf:
fs.file-max = 122880
- You can execute the command
sysctl -p
or reboot to permanently apply the kernel parameter above.
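If you prefer a non-interactive version of the two steps above, something like this works (assumes you are root; 122880 is the value used throughout this post):
echo 'fs.file-max = 122880' >> /etc/sysctl.conf   # append the setting
sysctl -p                                         # apply it without a reboot
cat /proc/sys/fs/file-max                         # verify the running kernel picked it up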
ulimit
vim /etc/security/limits.conf
- Add the lines below (replace * with the user that runs HBase if you want a per-user limit):
* soft nofile 122880
* hard nofile 122880
- Log out and log back in, then execute the command
ulimit -n
in your terminal. If everything was done correctly, it should now return 122880.
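To spot-check without a fresh login, you can start a login shell for the daemon user directly; "hbase" here is an assumed user name, so substitute whichever user actually runs your HBase processes:
su - hbase -c 'ulimit -Sn; ulimit -Hn'   # soft and hard nofile limits for the (assumed) hbase user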
CM users
For CM users, the daemons are launched under a supervisor process, so their ulimits are inherited from it and are pre-set in /usr/sbin/cmf-agent (look for the lines that start with "ulimit").
- Edit /usr/sbin/cmf-agent and change the ulimit -n setting.
- Shut down the HDFS/MapReduce/HBase services on the affected nodes.
- On these stopped nodes run:
service cloudera-scm-agent hard_restart
- Restart the services.
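Sketching the whole cycle on one node (the grep just locates the lines to edit; hard_restart is the same command as in the steps above):
grep -n 'ulimit' /usr/sbin/cmf-agent      # locate the pre-set limits
vim /usr/sbin/cmf-agent                   # change the 'ulimit -n' value, e.g. to 122880
service cloudera-scm-agent hard_restart   # restart the agent so daemons inherit the new limit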
How to confirm
To confirm, go to any CM-managed node, run
ps -ef | grep hdfs
and grab the PID of an HDFS process, then
cat /proc/<pid>/limits
to make sure the new limits are sticking. The output should include lines like:
Limit            Soft Limit   Hard Limit   Units
Max processes    65536        65536        processes
Max open files   122880       122880       files
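The same check as a quick one-liner (the [h]dfs grep trick keeps the grep process itself out of the match; adjust the pattern if your daemons run under a different user):
pid=$(ps -ef | grep '[h]dfs' | awk 'NR==1 {print $2}')   # PID of the first hdfs-owned process
grep -E 'Max (open files|processes)' /proc/$pid/limits   # effective limits for that daemon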