Kubernetes pod shows a different ulimit for open files than the host

1/28/2018

I have a K8s cluster up and running on Azure VMs.

After getting:

java.io.IOException: Too many open files in system

on one of the pods, I checked the open-files limit by running:

ulimit -a | grep "open files"

on both the host (via SSH), where I got:

open files (-n) 1024

and from within the pod (by exec'ing into it), where I got:

bash-4.3# ulimit -a | grep "open files"

open files (-n) 1048576

My question is: how is it possible to have different values (the pod 'sees' a higher limit than the underlying host), and also which of the limits counts - will things break if more than 1024 FDs get opened?

The relevant pod image is based on 'alpine-java'.

The host OS is CentOS Linux release 7.4.1708.

-- Victor Bouhnik
azure
kubernetes
linux

1 Answer

1/28/2018

Containers inherit their resource limits from the Docker daemon that starts them, not from your SSH login shell, which is why the pod reports a different value than the host. To change the ulimit defaults for containers, set them in Docker's systemd unit file on your VM:

    # Excerpt from the Docker daemon's systemd unit
    # (e.g. /etc/systemd/system/docker.service)
    [Service]
    ExecStart=/usr/bin/dockerd \
      --iptables=false \
      --ip-masq=false \
      --host=unix:///var/run/docker.sock \
      --log-level=error \
      --storage-driver=overlay \
      --default-ulimit nofile=70000:70000 \
      --default-ulimit nproc=70000:70000
    Restart=on-failure
    RestartSec=5
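
After editing the unit file, reload systemd and restart the Docker daemon so newly started containers pick up the defaults (note that restarting Docker restarts the containers on that node). A quick sketch; 'my-pod' is a placeholder for one of your pod names:

    # Re-read the edited unit file, then restart the daemon
    systemctl daemon-reload
    systemctl restart docker

    # Verify the new default from inside a pod
    # ('my-pod' is a placeholder name)
    kubectl exec my-pod -- sh -c 'ulimit -n'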
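
One caveat worth noting: the exact message 'Too many open files in system' usually points at the kernel-wide file handle limit (fs.file-max) rather than the per-process ulimit, so it can also be worth comparing that limit against current usage on the node:

    # Kernel-wide ceiling on open file handles
    sysctl fs.file-max

    # Currently allocated handles, free handles, and the maximum
    cat /proc/sys/fs/file-nr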
-- Pamir Erdem
Source: StackOverflow