I have on several occasions needed to troubleshoot issues which wound up being problems with Linux limiting the number of open files for a given process. This can be an annoying issue to troubleshoot, since many programs do not handle the condition gracefully and, by default, Linux does not log anything that points to it. The same really applies to all of the Linux process limits, not just open files.
So what do we do about it?
When you google the problem, you will typically find references to running ulimit as the service user to determine the existing limits. You will quickly discover that this doesn't work. For one thing, most service users don't have shells. Additionally, as you will see in my next post, many services already have configuration which attempts to set the limits at program startup.
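Even when a user does have a shell, ulimit only reports the limits of the shell you just started, not those of an already running daemon. A subshell makes this easy to see:

```shell
# Lower the soft open-file limit in a subshell, then show that the
# parent shell (a separate process) is unaffected. A running daemon's
# limits are similarly independent of any shell you open to inspect them.
( ulimit -Sn 256; echo "subshell soft limit: $(ulimit -Sn)" )
echo "parent shell soft limit: $(ulimit -Sn)"
```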
Enter the proc filesystem…
cat /proc/805/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             7807                 7807                 processes
Max open files            1024                 4096                 files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       7807                 7807                 signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Here, 805 is the PID of the process you want to check.
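Since /proc/&lt;pid&gt;/limits is plain text, it is easy to script against. A small sketch (the awk field numbers are an assumption based on the column layout shown above) that pairs the open-file limit with the process's actual file-descriptor count, read from /proc/&lt;pid&gt;/fd:

```shell
# PID to inspect; defaults to the current shell so the example is runnable.
pid=${1:-$$}

# The soft and hard limits are columns 4 and 5 of the "Max open files" row.
awk '/^Max open files/ {print "soft limit:", $4, " hard limit:", $5}' \
    "/proc/$pid/limits"

# Every open file descriptor appears as an entry under /proc/<pid>/fd,
# so counting those entries shows current usage against the limit.
echo "open fds: $(ls "/proc/$pid/fd" | wc -l)"
```

Comparing the fd count against the soft limit is usually the quickest way to confirm that a misbehaving process really is running out of file descriptors. Note that reading another user's /proc/&lt;pid&gt;/fd requires root.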
Have fun…