A proxy inside a Kubernetes pod doesn't intercept any HTTP traffic

3/30/2020

What I want is to have two applications running in a pod, each in its own container. Application A is a simple Spring Boot application that makes HTTP requests to another application deployed on Kubernetes. The purpose of Application B (the proxy) is to intercept each HTTP request and add an Authorization token to its header. Application B is mitmdump with a Python script. The issue is that when I deploy this on Kubernetes, the proxy does not seem to intercept any traffic at all (I tried to reproduce the issue on my local machine and had no trouble there, so I guess the problem lies somewhere within the networking inside the pod). Can someone have a look and guide me on how to solve it?
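
The question does not include get_token.py, but a minimal mitmdump script of this shape might look like the sketch below, where fetch_token() is a hypothetical stand-in for however the token is actually obtained:

from mitmproxy import http

def fetch_token() -> str:
    # placeholder: the real script presumably obtains the token from an auth server
    return "dummy-token"

def request(flow: http.HTTPFlow) -> None:
    # mitmdump calls this hook for every request passing through the proxy
    flow.request.headers["Authorization"] = "Bearer " + fetch_token()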


Here's the deployment and service file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  namespace: myown
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: application-a
        image: registry.gitlab.com/application-a
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        env:
        - name: "HTTP_PROXY"
          value: "http://localhost:1030"
      - name: application-b-proxy
        image: registry.gitlab.com/application-b-proxy
        resources:
          requests:
            memory: "230Mi"
            cpu: "100m"
          limits:
            memory: "460Mi"
            cpu: "200m"
        imagePullPolicy: Always
        ports:
        - containerPort: 1030
---
kind: Service
apiVersion: v1
metadata:
  name: proxy-svc
  namespace: myown
spec:
  ports:
  - nodePort: 31000
    port: 8090
    protocol: TCP
    targetPort: 8090
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort

And here's how I build the Docker image of mitmproxy/mitmdump:

FROM mitmproxy/mitmproxy:latest

WORKDIR /mit_docker
COPY get_token.py .
EXPOSE 1030
ENTRYPOINT ["mitmdump", "--listen-port", "1030", "-s", "get_token.py"]

EDIT

I created two dummy Docker images in order to recreate this scenario locally.

APPLICATION A - a Spring Boot application with a job that makes an HTTP GET request every minute to a specified (but irrelevant) address; the address must be accessible. The expected response is 302 FOUND. Every time an HTTP request is made, a message appears in the application's logs.

APPLICATION B - a proxy application that is supposed to proxy the Docker container running Application A. Every request is logged.

  1. Make sure your Docker proxy config points at http://localhost:8080 - you can check how to do so here (a sketch of such a config follows these steps)

  2. Open a terminal and run this command:

     docker run -p 8080:8080 -ti registry.gitlab.com/dyrekcja117/proxyexample:application-b-proxy

  3. Open another terminal and run this command:

     docker run --network="host" registry.gitlab.com/dyrekcja117/proxyexample:application-a

  4. Go into the shell of the Application A container in a 3rd terminal:

     docker exec -ti <name of docker container> sh

and try to curl whatever address you want.
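
For step 1, the Docker client's proxy settings live in ~/.docker/config.json; with this in place, Docker injects the HTTP_PROXY/HTTPS_PROXY environment variables into newly started containers. A minimal sketch matching the address above:

{
  "proxies": {
    "default": {
      "httpProxy": "http://localhost:8080",
      "httpsProxy": "http://localhost:8080"
    }
  }
}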

And the issue I am struggling with: when I curl from inside the container with Application A, the request is intercepted by my proxy and shows up in its logs. But whenever Application A itself makes the same request, it is not intercepted. The same thing happens on Kubernetes.
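
One detail worth checking, though it is not settled in this thread: curl honors the HTTP_PROXY environment variable, but the JVM's standard HTTP clients do not, so a Java application usually has to be told about the proxy through the http.proxyHost/http.proxyPort system properties instead. A hedged sketch of how that could be passed to application-a's container, assuming the default JVM HTTP client is in use:

env:
- name: JAVA_TOOL_OPTIONS   # read by the JVM at startup; hypothetical addition
  value: "-Dhttp.proxyHost=localhost -Dhttp.proxyPort=1030"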

-- uiguyf ufdiutd
kubernetes
kubernetes-pod
mitmproxy
networking
proxy

1 Answer

4/7/2020

Let's first wrap up the facts we discovered during our troubleshooting discussion in the comments:

  • Your need is for APP-A to receive an HTTP request and have a token added in flight by PROXY before the request is sent on to your data storage.
  • Every container in a Pod shares the network namespace, including the IP address and network ports. Containers inside a Pod can communicate with one another using localhost, source here.
  • You were able to log in to container application-a and send a curl request to container application-b-proxy on port 1030, proving the above statement.
  • The problem is that your proxy is not intercepting the request as expected.
  • You mention that you were able to make it work on localhost, but on localhost the proxy has more power than it does inside a container.
  • Since I have access neither to your app-a code nor to the mitmproxy token.py, I will give you a general example of how to redirect traffic from container-a to container-b.
  • To make it work, I'll use NGINX Proxy Pass: it simply proxies the request on to container-b.

Reproduction:

  • I'll use an nginx server as container-a.

  • I'll build it with this Dockerfile:

FROM nginx:1.17.3
RUN rm /etc/nginx/conf.d/default.conf
COPY frontend.conf /etc/nginx/conf.d
  • I'll add this configuration file frontend.conf:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

It instructs nginx to send the traffic to container-b, which is listening on port 8080 inside the same pod.

  • I'll build this image as nginxproxy in my local repo:
$ docker build -t nginxproxy .

$ docker images 
REPOSITORY        TAG       IMAGE ID        CREATED          SIZE
nginxproxy    latest    7c203a72c650    4 minutes ago    126MB
  • Now the full.yaml deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-deployment
  labels:
    app: application-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application-a
  template:
    metadata:
      labels:
        app: application-a
    spec:
      containers:
      - name: container-a
        image: nginxproxy:latest
        ports:
        - containerPort: 80
        imagePullPolicy: Never
      - name: container-b
        image: echo8080:latest
        ports:
        - containerPort: 8080
        imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
  name: proxy-svc
spec:
  ports:
  - nodePort: 31000
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: application-a
  sessionAffinity: None
  type: NodePort    

NOTE: I set imagePullPolicy to Never because I'm using my local Docker image cache.
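
The node IP 192.168.39.51 in the outputs below suggests a minikube VM; if that is your setup too, the images must be built against the cluster's Docker daemon for imagePullPolicy: Never to find them. One way to do that, assuming minikube:

# point the local docker CLI at minikube's Docker daemon, then build there
eval $(minikube docker-env)
docker build -t nginxproxy .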

I'll list the changes I made to help you link it to your current environment:

  • container-a is doing the work of your application-a, and I'm serving nginx on port 80 where you are using port 8090.
  • container-b is receiving the requests, like your application-b-proxy. The image I'm using is based on mendhak/http-https-echo; it normally listens on port 80, so I made a custom image that listens on port 8080 instead and named it echo8080 (a sketch of such an image is just below).
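
The exact echo8080 Dockerfile is not shown here; one plausible way to build an equivalent image, assuming a version of mendhak/http-https-echo that honors the HTTP_PORT environment variable:

# hypothetical recreation of the echo8080 image
FROM mendhak/http-https-echo
ENV HTTP_PORT=8080
EXPOSE 8080

Built into the local cache with: docker build -t echo8080 .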

  • First I created an nginx pod and exposed it alone to show you it's running (since there is no content behind it, it returns 502 Bad Gateway, but you can see the output comes from nginx):

$ kubectl apply -f nginx.yaml 
pod/nginx created
service/nginx-svc created

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx                              1/1     Running   0          64s
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc    NodePort    10.103.178.109   <none>        80:31491/TCP   66s

$ curl http://192.168.39.51:31491
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.17.3</center>
</body>
</html>
  • I deleted the nginx pod, then created an echo-app pod and exposed it to show you the response it gives when curled directly from outside:
$ kubectl apply -f echo.yaml 
pod/echo created
service/echo-svc created

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
echo                               1/1     Running   0          118s
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
echo-svc     NodePort    10.102.168.235   <none>        8080:32116/TCP   2m

$ curl http://192.168.39.51:32116
{
  "path": "/",
  "headers": {
    "host": "192.168.39.51:32116",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "192.168.39.51",
  "ip": "::ffff:172.17.0.1",
  "protocol": "http",
  "os": {
    "hostname": "echo"
  },
  • Now I'll apply the full.yaml:
$ kubectl apply -f full.yaml 
deployment.apps/proxy-deployment created
service/proxy-svc created
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
proxy-deployment-9fc4ff64b-qbljn   2/2     Running   0          1s

$ k get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
proxy-svc    NodePort    10.103.238.103   <none>        80:31000/TCP   31s
  • Now the proof of concept: from outside the cluster, I'll send a curl to my node IP 192.168.39.51 on port 31000, which sends the request to port 80 on the pod (handled by nginx):
$ curl http://192.168.39.51:31000
{
  "path": "/",
  "headers": {
    "host": "127.0.0.1:8080",
    "user-agent": "curl/7.52.1",
  },
  "method": "GET",
  "hostname": "127.0.0.1",
  "ip": "::ffff:127.0.0.1",
  "protocol": "http",
  "os": {
    "hostname": "proxy-deployment-9fc4ff64b-qbljn"
  },
  • As you can see, the response carries the pod's parameters: the request reached the echo server from 127.0.0.1 rather than from a public IP, showing that nginx is proxying the request to container-b.

I hope this example helps you.

-- willrof
Source: StackOverflow