Why does this setup with contour on kubernetes (GKE) result in 2 functioning external IPs?

5/7/2018

I've been experimenting with contour as an alternative ingress controller on a test GKE kubernetes cluster.

Following the contour deployment docs with a few modifications, I've got a working setup serving test HTTP responses.

First, I created a "helloworld" deployment that serves HTTP responses, exposed via a NodePort service and an ingress:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: "helloworld-http"
          image: "nginxdemos/hello:plain-text"
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - helloworld
              topologyKey: "kubernetes.io/hostname"
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: helloworld
  sessionAffinity: None
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
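
To apply and sanity-check this part (a minimal sketch; it assumes the manifest above is saved as helloworld.yaml):

$ kubectl apply -f helloworld.yaml
$ kubectl get pods -l app=helloworld
$ kubectl get svc helloworld-svc     # note the NodePort mapped to port 80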

Then, I created a deployment for contour that's directly copied from their docs:

apiVersion: v1
kind: Namespace
metadata:
  name: heptio-contour
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: contour
  namespace: heptio-contour
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: contour
  name: contour
  namespace: heptio-contour
spec:
  selector:
    matchLabels:
      app: contour
  replicas: 2
  template:
    metadata:
      labels:
        app: contour
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9001"
        prometheus.io/path: "/stats"
        prometheus.io/format: "prometheus"
    spec:
      containers:
      - image: docker.io/envoyproxy/envoy-alpine:v1.6.0
        name: envoy
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
        command: ["envoy"]
        args: ["-c", "/config/contour.yaml", "--service-cluster", "cluster0", "--service-node", "node0", "-l", "info", "--v2-config-only"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: contour
        command: ["contour"]
        args: ["serve", "--incluster"]
      initContainers:
      - image: gcr.io/heptio-images/contour:master
        imagePullPolicy: Always
        name: envoy-initconfig
        command: ["contour"]
        args: ["bootstrap", "/config/contour.yaml"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}
      dnsPolicy: ClusterFirst
      serviceAccountName: contour
      terminationGracePeriodSeconds: 30
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: contour
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 8080
  - port: 443
    name: https
    protocol: TCP
    targetPort: 8443
  selector:
    app: contour
  type: LoadBalancer
---
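
To roll this out (again a sketch, assuming the manifest is saved as contour.yaml), apply it and wait for GCP to assign an external IP to the LoadBalancer service:

$ kubectl apply -f contour.yaml
$ kubectl -n heptio-contour get svc contour -w    # EXTERNAL-IP changes from <pending> to a real address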

The default and heptio-contour namespaces now look like this:

$ kubectl get pods,svc,ingress -n default
NAME                              READY     STATUS    RESTARTS   AGE
pod/helloworld-7ddc8c6655-6vgdw   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-92j7x   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-mlvmc   1/1       Running   0          6h
pod/helloworld-7ddc8c6655-w5g7f   1/1       Running   0          6h

NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/helloworld-svc   NodePort    10.59.240.105   <none>        80:31481/TCP   34m
service/kubernetes       ClusterIP   10.59.240.1     <none>        443/TCP        7h

NAME                                    HOSTS     ADDRESS         PORTS     AGE
ingress.extensions/helloworld-ingress   *         y.y.y.y         80        34m

$ kubectl get pods,svc,ingress -n heptio-contour
NAME                          READY     STATUS    RESTARTS   AGE
pod/contour-9d758b697-kwk85   2/2       Running   0          34m
pod/contour-9d758b697-mbh47   2/2       Running   0          34m

NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
service/contour   LoadBalancer   10.59.250.54   x.x.x.x         80:30882/TCP,443:32746/TCP   34m

There are two publicly routable IP addresses (see the gcloud check after this list for one way to confirm what each maps to):

  • x.x.x.x - a GCE TCP load balancer that forwards to the contour pods
  • y.y.y.y - a GCE HTTP load balancer that forwards to the helloworld pods via the helloworld-ingress
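
One way to confirm which GCE resources the two addresses map to (assuming the gcloud CLI is pointed at the same project) is to list the forwarding rules; the TCP rule created for the contour LoadBalancer service should carry x.x.x.x, and the HTTP rule created by the GCE ingress controller should carry y.y.y.y:

$ gcloud compute forwarding-rules list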

A curl on both public IPs returns a valid HTTP response from the helloworld pods.

# the TCP load balancer
$ curl -v x.x.x.x
* Rebuilt URL to: x.x.x.x/  
*   Trying x.x.x.x...
* TCP_NODELAY set
* Connected to x.x.x.x (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: x.x.x.x
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 07 May 2018 14:14:39 GMT
< content-type: text/plain
< content-length: 155
< expires: Mon, 07 May 2018 14:14:38 GMT
< cache-control: no-cache
< x-envoy-upstream-service-time: 1
<
Server address: 10.56.4.6:80
Server name: helloworld-7ddc8c6655-w5g7f
Date: 07/May/2018:14:14:39 +0000
URI: /
Request ID: ec3aa70e4155c396e7051dc972081c6a

# the HTTP load balancer
$ curl -v http://y.y.y.y
* Rebuilt URL to: y.y.y.y/
*   Trying y.y.y.y...
* TCP_NODELAY set
* Connected to y.y.y.y (y.y.y.y) port 80 (#0)
> GET / HTTP/1.1
> Host: y.y.y.y
> User-Agent: curl/7.58.0
> Accept: */*
> 
< HTTP/1.1 200 OK
< Server: nginx/1.13.8
< Date: Mon, 07 May 2018 14:14:24 GMT
< Content-Type: text/plain
< Content-Length: 155
< Expires: Mon, 07 May 2018 14:14:23 GMT
< Cache-Control: no-cache
< Via: 1.1 google
< 
Server address: 10.56.2.8:80
Server name: helloworld-7ddc8c6655-mlvmc
Date: 07/May/2018:14:14:24 +0000
URI: /
Request ID: 41b1151f083eaf30368cf340cfbb92fc

Is it by design that I have two public IPs? Which one should I use for customers? Can I choose based on my preference between a TCP and HTTP load balancer?

-- James Healy
google-cloud-platform
google-kubernetes-engine
kubernetes
kubernetes-ingress

1 Answer

5/7/2018

You probably have the GLBC (GCE) ingress controller configured as well; GKE clusters run it by default (see https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller).
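
One way to check whether the GCE ingress controller has claimed the ingress (a sketch; exact annotation names can vary by GKE version) is to describe it and look for GLBC-managed annotations such as ingress.kubernetes.io/backends or ingress.kubernetes.io/url-map:

$ kubectl describe ingress helloworld-ingress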

Could you try using the following ingress definition?

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "contour"
  name: helloworld-ingress
spec:
  backend:
    serviceName: helloworld-svc
    servicePort: 80
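
The kubernetes.io/ingress.class: "contour" annotation tells the GCE ingress controller to ignore this Ingress, leaving it to contour alone. A minimal way to apply the change (assuming the manifest is saved as helloworld-ingress.yaml; you may need to delete and recreate the Ingress before the already-provisioned GCE HTTP load balancer is cleaned up):

$ kubectl apply -f helloworld-ingress.yaml
$ kubectl get ingress helloworld-ingress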

If you would like to be sure that your traffic goes via contour, you should use the x.x.x.x IP.
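
As a quick check (based on the curl output in the question), responses that pass through contour carry a "server: envoy" header, while responses from the GCE HTTP load balancer show "Via: 1.1 google" instead:

$ curl -sI http://x.x.x.x | grep -i '^server'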

-- Maciek Sawicki
Source: StackOverflow