I am running a Kubernetes v1.11.1 cluster, and sometimes my kube-apiserver starts throwing 'too many open files' messages. I noticed a large number of open TCP connections to the kubelet port 10250 on the nodes.
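For context, the connections can be counted with something like this (assuming ss is available on the node; netstat works similarly):
$ ss -tan | grep ':10250' | wc -l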
My server is configured with 65536 file descriptors. Do I need to increase the number of open files for the container host? What are the recommended ulimit settings for the container host?
API server log messages:
I1102 13:57:08.135049 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:09.135191 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:10.135437 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:11.135589 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:12.135755 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
My host's ulimit values:
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 64
-p: processes unlimited
-n: file descriptors 65536
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
Thanks SR
65536 seems a bit low, although there are many apps that recommend that number. This is what I have on one K8s cluster for the kube-apiserver:
# kube-apiserver container
# |
# \|/
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 16384
-p: processes unlimited
-n: file descriptors 1048576 <====
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
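If you want to verify this on your own control plane node, one way (assuming you have shell access to the host running the API server) is to read the limits of the live kube-apiserver process from /proc:
$ grep 'open files' /proc/$(pgrep -f kube-apiserver | head -1)/limits
The line shows the soft and hard limits currently in effect for that process.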
This is different from the system limits of a regular bash process:
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15447
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024 <===
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15447
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
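The container does not inherit the login shell's 1024; the container runtime sets its own limit. As a quick illustration (assuming Docker is the runtime), you can see this per container:
$ docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c 'ulimit -n'
1048576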
Yet the total maximum for the whole system is:
$ cat /proc/sys/fs/file-max
394306
Nothing on the system can exceed /proc/sys/fs/file-max, so I would check that value as well. I would also check the number of file descriptors currently in use (the first column of /proc/sys/fs/file-nr); that gives you an idea of how many open files you have:
$ cat /proc/sys/fs/file-nr
2176 0 394306
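If you do need to raise the system-wide ceiling, it can be changed at runtime with sysctl and persisted under /etc/sysctl.d (the value here is only an example, adjust to your workload):
$ sysctl -w fs.file-max=2097152
$ echo 'fs.file-max = 2097152' > /etc/sysctl.d/99-file-max.conf
The per-process limit of a systemd-managed service (for example the kubelet or the container runtime) can similarly be raised with LimitNOFILE= in a drop-in unit, followed by a daemon-reload and a restart of the service.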