Kubernetes: readinessProbe failing but the livenessProbe is succeeding with the same settings

10/8/2018

I have a livenessProbe configured for my pod which does an HTTP GET on a path on the same pod and a particular port. It works perfectly. But if I use the same settings to configure a readinessProbe, it fails with the error below.

Readiness probe failed: wsarecv: read tcp :50578->:80: An existing connection was forcibly closed by the remote host

Actually, after a certain point I even see the liveness probes failing, and I'm not sure why. A succeeding liveness probe should indicate that kube-dns is working fine and that the pod is reachable from the node. Here's the readinessProbe from my pod's spec:

readinessProbe:
  httpGet:
    path: /<path> # -> this works for livenessProbe
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 10
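
For completeness, a sketch of the container spec with both probes declared side by side using identical settings (container name and image are placeholders, and /<path> is kept from above):

```yaml
# Sketch only: liveness and readiness probes pointing at the same
# endpoint and port. Container name and image are placeholders.
containers:
  - name: app
    image: app:latest
    livenessProbe:
      httpGet:
        path: /<path>
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 10
    readinessProbe:
      httpGet:
        path: /<path>
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 10
```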

Does anyone have an idea what might be going on here?

-- sai guru datt manchikanti
kubernetes
kubernetes-helm

1 Answer

10/9/2018

I don't think it has anything to do with kube-dns or CoreDNS. The most likely cause is that your pod/container/application is crashing or has stopped serving requests.

The timeline seems to be:

  • Pod/container comes up.
  • Liveness probe passes OK.
  • Some time passes.
  • The app probably crashes or errors out.
  • Readiness probe fails.
  • Liveness probe fails too.
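
If that is what's happening, the restart count, the pod's event list, and the previous container instance's logs should confirm it. A sketch of the usual checks against a live cluster (the pod name is a placeholder):

```shell
# Check whether the container has been restarting (RESTARTS column)
kubectl get pods

# Probe failure events show up at the bottom of the pod description
kubectl describe pod <pod-name>

# Logs from the previous (crashed) container instance, if it restarted
kubectl logs <pod-name> --previous
```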

More information about what that error means here: An existing connection was forcibly closed by the remote host

-- Rico
Source: StackOverflow