Running kubectl proxy from same pod vs different pod on same node - what's the difference?

4/4/2018

I'm experimenting with kubectl proxy, and I'm noticing a difference in behavior that I'm having trouble understanding, namely between running kubectl proxy from within the same pod vs running it in a different pod on the same node.

The sample configuration runs kubectl proxy and the container that needs it* in the same pod of a DaemonSet, i.e.

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
# ...
spec:
  template:
    metadata:
    # ...
    spec:
      containers:

      # this container needs kubectl proxy to be running:
      - name: l5d
        # ...

      # so, let's run it:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
         - "proxy"
         - "-p"
         - "8001"

When doing this on my cluster, I get the expected behavior. However, I will be running other services that also need kubectl proxy, so I figured I'd factor it out into its own DaemonSet to ensure it's running on all nodes. I therefore removed the kube-proxy container and deployed the following DaemonSet:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"

In other words, the same container configuration as before, but now running in its own pod on each node instead of in the same pod as the container that needs it. With this configuration, "stuff doesn't work anymore"**.

I realize the solution (at least for now) is to just run the kube-proxy container in every pod that needs it, but I'd like to know why that's necessary. Why isn't running it in a DaemonSet enough?

I've tried to find more information about running kubectl proxy like this, but my searches drown in results about using it to access a remote cluster from a local environment, which is not at all what I'm after.


I include these details not because I think they're relevant, but because they might be, even though I'm convinced they're not:

*) a Linkerd ingress controller, but I think that's irrelevant

**) in this case, the "working" state is that the ingress controller complains that the destination is unknown because there's no matching ingress rule, while the "not working" state is a network timeout.

-- Tomas Aschan
kubernetes

1 Answer

4/5/2018

namely between running kubectl proxy from within a pod vs running it in a different pod.

Assuming your cluster uses a software-defined network such as flannel or calico, each Pod has its own IP and all containers within a Pod share the same network namespace. Thus:

containers:
- name: c0
  command: ["curl", "127.0.0.1:8001"]
- name: c1
  command: ["kubectl", "proxy", "-p", "8001"]

will work, whereas with a DaemonSet the two containers are by definition not in the same Pod, so the hypothetical c0 above would need to use the DaemonSet Pod's IP to reach port 8001.

That story is made more complicated by the fact that kubectl proxy by default only listens on 127.0.0.1, so you would need to alter the DaemonSet Pod's kubectl proxy to include --address='0.0.0.0' --accept-hosts='.*' to even permit such cross-Pod communication. I believe you also need to declare the ports: array in the DaemonSet configuration, since you are now exposing that port into the cluster, but I'd have to double-check whether ports: is merely polite or actually required.
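For illustration, here's a minimal sketch of what the DaemonSet from the question might look like with those flags added; this is untested, and the wide-open --address/--accept-hosts values are only reasonable for an experiment:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-proxy
  labels:
    app: kube-proxy
spec:
  template:
    metadata:
      labels:
        app: kube-proxy
    spec:
      containers:
      - name: kube-proxy
        image: buoyantio/kubectl:v1.8.5
        args:
        - "proxy"
        - "-p"
        - "8001"
        # assumption: these two flags let the proxy accept traffic from outside its own Pod
        - "--address=0.0.0.0"
        - "--accept-hosts=.*"
        # declare the port now being exposed into the cluster
        ports:
        - name: http
          containerPort: 8001

Even then, the consuming container (l5d in the question) would have to be pointed at that Pod's IP instead of 127.0.0.1:8001.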

-- mdaniel
Source: StackOverflow