udp ingress port in kubernetes using nginx-controller

3/1/2017

I am trying to configure a UDP ingress port using the nginx-controller, but I continuously get the following error from the nginx-controller:

$ kubectl -n kube-system logs -f nginx-ingress-controller-2391389042-xzmc7

2017/03/01 18:08:20 [error] 62#62: *8 no live upstreams while connecting to upstream, udp client: 192.168.0.20, server: 0.0.0.0:53, upstream: "udp-kube-system-kube-dns-53", bytes from/to client:1/0, bytes from/to upstream:0/0

As you can see in the nginx configuration below, the upstream server was not mapped to the corresponding endpoint IP: it points at the placeholder 127.0.0.1:8181 and is marked down.

Configuration

I configured my environment as follows:

# 1. install kubernetes with kubeadm
kubeadm init --pod-network-cidr 10.244.0.0/16

# 2. use flannel as virtual network backend
curl -sSL https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml |  kubectl create -f -

# 3. install the nginx-controller from https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
# edit the controller to specify the host and enable the UDP ports (see bottom of the entry for reference)

# 4. create the ConfigMap for the udp ports
# udp example: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp
curl -sSL https://raw.githubusercontent.com/kubernetes/contrib/master/ingress/controllers/nginx/examples/udp/udp-configmap-example.yaml | kubectl create -f -
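
For reference, the udp-services ConfigMap from that example maps an external port to a service in `<namespace>/<service>:<port>` form. A minimal sketch (the name `udp-configmap-example` matches the `--udp-services-configmap` flag used below; the exact contents of the example file may differ slightly):

```yaml
# Sketch of the udp-services ConfigMap:
# each key is the listening port, each value is <namespace>/<service>:<port>.
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
  namespace: kube-system
data:
  "53": kube-system/kube-dns:53
```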

Debugging

Nginx configuration:

$ kubectl -n kube-system exec nginx-ingress-controller-2391389042-xzmc7 -- cat /etc/nginx/nginx.conf| grep -i udp -C 10

    upstream udp-kube-system-kube-dns-53 {
        server 127.0.0.1:8181 down;
    }

    # TCP services

    # UDP services

        server {
            listen 53 udp;
            proxy_responses        1;
            proxy_pass             udp-kube-system-kube-dns-53;
        }

}
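
For comparison, a correctly generated upstream block would be expected to point at the kube-dns endpoint IP (10.244.0.13:53, per the service description below) instead of the 127.0.0.1:8181 placeholder. A sketch of what the controller should have rendered (not actual controller output):

```
    # Expected upstream, resolved to the kube-dns endpoint:
    upstream udp-kube-system-kube-dns-53 {
        server 10.244.0.13:53;
    }
```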

The description of the kube-dns service:

$ kubectl -n kube-system describe svc kube-dns
Name:                   kube-dns
Namespace:              kube-system
Labels:                 component=kube-dns
                        k8s-app=kube-dns
                        kubernetes.io/cluster-service=true
                        kubernetes.io/name=KubeDNS
                        name=kube-dns
                        tier=node
Selector:               name=kube-dns
Type:                   ClusterIP
IP:                     10.96.0.10
Port:                   dns     53/UDP
Endpoints:              10.244.0.13:53
Port:                   dns-tcp 53/TCP
Endpoints:              10.244.0.13:53
Session Affinity:       None
No events.

The description of the nginx controller pod:

$ kubectl -n kube-system describe po nginx-ingress-controller-2391389042-xzmc7 
Name:           nginx-ingress-controller-2391389042-xzmc7
Namespace:      kube-system
Node:           kubeworker-1/192.168.0.20
Start Time:     Wed, 01 Mar 2017 19:07:26 +0100
Labels:         k8s-app=nginx-ingress-controller
                pod-template-hash=2391389042
Status:         Running
IP:             192.168.0.20
Controllers:    ReplicaSet/nginx-ingress-controller-2391389042
Containers:
  nginx-ingress-controller:
    Container ID:       docker://65b3b9d2ce55932ca0940d561cec6b60dad26a317f2bcf54bbfa3a85e5908a65
    Image:              gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
    Image ID:           docker-pullable://gcr.io/google_containers/nginx-ingress-controller@sha256:977a68f887e1621fb30e80939b3a8f875cbb20c549af1e42d12f2fef272b8e9b
    Ports:              80/TCP, 443/TCP, 53/UDP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example
    State:              Running
      Started:          Wed, 01 Mar 2017 19:07:26 +0100
    Ready:              True
    Restart Count:      0
    Liveness:           http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:          http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8qk9 (ro)
    Environment Variables:
      POD_NAME:         nginx-ingress-controller-2391389042-xzmc7 (v1:metadata.name)
      POD_NAMESPACE:    kube-system (v1:metadata.namespace)
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
Volumes:
  default-token-x8qk9:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-x8qk9
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                    -------------   --------        ------                  -------
  25m           25m             3       {default-scheduler }                    Warning         FailedScheduling        pod (nginx-ingress-controller-2391389042-xzmc7) failed to fit in any node
fit failure summary on nodes : MatchNodeSelector (1), PodFitsHostPorts (1), PodToleratesNodeTaints (1)
  25m   25m     1       {default-scheduler }                                                    Normal  Scheduled       Successfully assigned nginx-ingress-controller-2391389042-xzmc7 to kubeworker-1
  25m   25m     1       {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}       Normal  Pulled          Container image "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2" already present on machine
  25m   25m     1       {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}       Normal  Created         Created container with docker id 65b3b9d2ce55; Security:[seccomp=unconfined]
  25m   25m     1       {kubelet kubeworker-1}  spec.containers{nginx-ingress-controller}       Normal  Started         Started container with docker id 65b3b9d2ce55

Modified nginx-ingress-controller.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      hostNetwork: true
      nodeSelector:
          kubernetes.io/hostname: kubeworker-1
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.2
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        - containerPort: 53
          hostPort: 53
          protocol: UDP
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example
-- aitorhh
kubernetes
nginx
udp

1 Answer

3/2/2017

The problem exists only in versions 0.9.0-beta.1 and 0.9.0-beta.2. Rolling back to 0.8.3 solves the issue.

According to https://github.com/kubernetes/ingress/issues/199, efforts are ongoing to fix the issue in 0.9.0.
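
The rollback only requires changing the image tag in the Deployment shown in the question; the rest of the manifest (flags, ports, probes) stays the same. A sketch of the relevant fragment:

```yaml
      # In nginx-ingress-controller.yaml, pin the controller image to 0.8.3:
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-controller
```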

-- aitorhh
Source: StackOverflow