How to find out why kube-proxy stopped

5/2/2020

Today I found that the kube-proxy process on one node of my Kubernetes cluster had stopped. This is the stopped status:

[root@uat-k8s-01 ~]# systemctl status -l kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Sat 2020-04-18 08:04:18 CST; 2 weeks 0 days ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
  Process: 937394 ExecStart=/opt/k8s/bin/kube-proxy --config=/etc/kubernetes/kube-proxy-config.yaml --logtostderr=true --v=2 (code=killed, signal=PIPE)
 Main PID: 937394 (code=killed, signal=PIPE)

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

From these hints I could not tell why the kube-proxy process stopped. This is the kube-proxy service config:

[root@uat-k8s-01 ~]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/k8s/k8s/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy-config.yaml \
  --logtostderr=true \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Is there any way to find out why kube-proxy failed and to avoid the stop next time? This is the journal log output:

[root@uat-k8s-01 ~]# journalctl -u kube-proxy.service
-- No entries --


-- Dolphin
kubernetes

1 Answer

5/2/2020

Use journalctl -u kube-proxy.service or check /var/log/kube-proxy.log to see the kube-proxy logs. In a real production setup you should also ship logs to a log aggregation system such as ELK or Splunk so that they are not lost.
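The "Journal has been rotated since unit was started" warning explains the empty journalctl output: the unit had already been dead for two weeks, and the rotating journal dropped its entries before you looked. Besides shipping logs off the node, one way to keep local history across rotation is persistent journald storage. A minimal sketch, assuming an otherwise default /etc/systemd/journald.conf (the 1G cap is just an example value):

# Store the journal on disk instead of the volatile /run/log/journal
mkdir -p /var/log/journal

# In /etc/systemd/journald.conf, under [Journal], set:
#   Storage=persistent
#   SystemMaxUse=1G
systemctl restart systemd-journald

# After the next incident the history is still readable, e.g.:
journalctl -u kube-proxy.service --since "2 weeks ago" --no-pager

Separately, the status output shows the main process was killed by SIGPIPE, and Restart=on-failure does not restart a service terminated by SIGHUP, SIGINT, SIGTERM or SIGPIPE, which systemd counts as clean exits. If kube-proxy should come back no matter how it died, a drop-in created with systemctl edit kube-proxy.service that sets Restart=always would cover that case as well.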

-- Arghya Sadhu
Source: StackOverflow