LoadBalancer service won't redirect to desired pod

2/15/2017

I'm playing around with Kubernetes and have set up my environment with four deployments:

  • hello: basic "hello world" service
  • auth: provides authentication and encryption
  • frontend: an nginx reverse proxy that serves as the single point of entry from the outside and routes to the appropriate pods internally
  • nodehello: basic "hello world" service, written in Node.js (this is what I contributed)

For the hello, auth, and nodehello deployments I've set up one internal service each.

For the frontend deployment I've set up a LoadBalancer service that is exposed to the outside world. It uses a ConfigMap named nginx-frontend-conf to route requests to the appropriate pods; the ConfigMap has the following contents:

upstream hello {
    server hello.default.svc.cluster.local;
}
upstream auth {
    server auth.default.svc.cluster.local;
}
upstream nodehello {
    server nodehello.default.svc.cluster.local;
}          
server {
    listen 443;
    ssl    on;
    ssl_certificate     /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;
    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    } 
}
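
For what it's worth, the configuration that the running nginx container actually serves can be checked like this (a sketch; <frontend-pod-name> is a placeholder for the real pod name, and the path follows from the volume mount in the frontend deployment below):

# Print the config file that the frontend pod mounted from the ConfigMap
kubectl exec <frontend-pod-name> -- cat /etc/nginx/conf.d/frontend.conf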

When calling the frontend endpoint with curl -k https://<frontend-external-ip> I get routed to an available hello pod, which is the expected behavior. When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod but to a hello pod again.

I suspect the upstream nodehello configuration is the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local is exposed. I'd appreciate an explanation of how it works and what I did wrong.
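
One check I can think of is whether the service name resolves inside the cluster at all (a sketch using a throwaway busybox pod; the pod name dnstest is arbitrary):

# Resolve the nodehello Service name from inside the cluster, removing the pod afterwards
kubectl run -it --rm dnstest --image=busybox --restart=Never -- \
    nslookup nodehello.default.svc.cluster.local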

YAML files used

deployments/hello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
        - name: hello
          image: "udacity/example-hello:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/auth.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
        - name: auth
          image: "udacity/example-auth:1.0.0"
          ports:
            - name: http
              containerPort: 80
            - name: health
              containerPort: 81
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /readiness
              port: 81
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 1

deployments/frontend.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "nginx:1.9.14"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
          volumeMounts:
            - name: "nginx-frontend-conf"
              mountPath: "/etc/nginx/conf.d"
            - name: "tls-certs"
              mountPath: "/etc/tls"
      volumes:
        - name: "tls-certs"
          secret:
            secretName: "tls-certs"
        - name: "nginx-frontend-conf"
          configMap:
            name: "nginx-frontend-conf"
            items:
              - key: "frontend.conf"
                path: "frontend.conf"

deployments/nodehello.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello 
        track: stable
    spec:
      containers:
        - name: nodehello 
          image: "thezebra/nodehello:0.0.2"
          ports:
            - name: http
              containerPort: 80
          resources:
            limits:
              cpu: 0.2
              memory: "10Mi"

services/hello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/auth.yaml

kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80

services/frontend.yaml

kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
    - protocol: "TCP"
      port: 443
      targetPort: 443
  type: LoadBalancer

services/nodehello.yaml

kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 80
-- Ronin
kubernetes
nginx
node.js

1 Answer

2/22/2017

This works perfectly :-)

$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!

I suspect you might have updated the nginx-frontend-conf ConfigMap when you added /nodehello but haven't restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:

kubectl delete pod -l app=frontend

Until versioned ConfigMaps happen, there isn't a nicer solution.
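
For completeness, a sketch of the full update cycle, assuming the ConfigMap was originally created from a local file named frontend.conf:

# Re-create the ConfigMap from the edited local file (file name assumed)
kubectl create configmap nginx-frontend-conf --from-file=frontend.conf \
    --dry-run -o yaml | kubectl apply -f -

# Delete the frontend pod so the Deployment replaces it and the new config gets mounted
kubectl delete pod -l app=frontend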

-- Janos Lenart
Source: StackOverflow