Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

2/5/2019

I am trying to expose a deployment using an Ingress whose controller runs as a DaemonSet with hostNetwork=true, which should let me skip the additional LoadBalancer layer and expose my service directly on the Kubernetes node's external IP. Unfortunately, I can't reach the Ingress controller from the external network.

I am running Kubernetes version 1.11.16-gke.2 on GCP.

I set up my fresh cluster like this:

gcloud container clusters get-credentials gcp-cluster

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade

helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
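
With those flags, the chart renders the controller as a DaemonSet whose pods share each node's network namespace, roughly like this (an abridged sketch of the rendered output, not the full chart):

```yaml
# Abridged sketch of the controller DaemonSet the chart renders
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-ingress-controller
spec:
  template:
    spec:
      hostNetwork: true          # pod uses the node's network namespace,
      containers:                # so nginx binds to :80/:443 on the node itself
      - name: nginx-ingress-controller
        ports:
        - containerPort: 80
        - containerPort: 443
```

This is why, in theory, the node's external IP should answer on port 80 with no Service in front of it.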

I create the deployment:

cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
      matchLabels:
        app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF

Then I create the service:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF

and the Ingress resource:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress    
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF

I get the node external IP:

12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
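
If you want to sanity-check the jq filter itself without cluster access, the same expression works against a saved copy of the node JSON (the sample document below is illustrative, trimmed to just the fields the filter touches):

```shell
# Minimal stand-in for `kubectl get nodes -o json` (illustrative values)
cat > /tmp/nodes.json <<'EOF'
{"items":[{"status":{"addresses":[{"type":"InternalIP","address":"10.0.0.2"},{"type":"ExternalIP","address":"35.197.204.75"}]}}]}
EOF

# Same filter as above; -r prints the bare address without quotes
jq -r '.items[] | .status.addresses[] | select(.type=="ExternalIP") | .address' /tmp/nodes.json
```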

Check that the Ingress is running:

12:50 $ kubectl get ing
NAME                        HOSTS   ADDRESS         PORTS   AGE
hello-node-single-ingress   *       35.197.204.75   80      8m

12:50 $ kubectl get pods --namespace ingress-nginx
NAME                                                     READY   STATUS    RESTARTS   AGE
ingress-nginx-ingress-controller-7kqgz                   1/1     Running   0          23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db   1/1     Running   0          23m

12:50 $ kubectl get svc --namespace ingress-nginx
NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
ingress-nginx-ingress-controller        ClusterIP   10.43.250.102   <none>        80/TCP,443/TCP   24m
ingress-nginx-ingress-default-backend   ClusterIP   10.43.255.43    <none>        80/TCP           24m

Then I try to connect from the external network:

curl 35.197.204.75 

Unfortunately, it times out.

The ingress-nginx documentation has a page about the hostNetwork: true setup: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network

which mentions:

"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."

I've tried to follow that and deleted the ingress-nginx services:

kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend

but this doesn't help.
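
One way to narrow this down is to check whether nginx is actually listening on the node, independent of anything GCP does in front of it: curl the node's internal IP from a pod inside the cluster. If that returns a response (even the default backend's 404) while the external IP times out, traffic is being dropped before it ever reaches the node. The pod name and image below are just illustrative:

```shell
# Find a node's internal IP
kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'

# Curl it from a throwaway pod inside the cluster (substitute <internal-ip>)
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sS --max-time 5 http://<internal-ip>/
```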

Any ideas how to expose the Ingress on the node's external IP? What am I doing wrong? The amount of confusion around running Ingress reliably without a load balancer is overwhelming. Any help much appreciated!

EDIT: When I create another service exposing my deployment via NodePort:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF

kubectl get svc now shows:

NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-node    ClusterIP   10.47.246.91   <none>        80/TCP         2m
hello-node2   NodePort    10.47.248.51   <none>        80:31151/TCP   6s

I still can't access my service, e.g. using curl 35.197.204.75:31151.

However, when I create a third service of type LoadBalancer:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF

NAME          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
hello-node    ClusterIP      10.47.246.91   <none>           80/TCP         7m
hello-node2   NodePort       10.47.248.51   <none>           80:31151/TCP   4m
hello-node3   LoadBalancer   10.47.250.47   35.189.106.111   80:31367/TCP   56s

I can access my service using the external LB IP 35.189.106.111.

-- NeverEndingQueue
google-kubernetes-engine
kubernetes
kubernetes-helm
kubernetes-ingress

1 Answer

2/5/2019

The problem was missing firewall rules on GCP.

Found the answer: https://stackoverflow.com/a/42040506/2263395

Running:

gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301

Here 80 is the ingress port and 30301 stands for the NodePort (31151 in the example above). In production you would probably open just the ingress port.
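
If you want the rule scoped more tightly than the whole network, you can restrict it to the cluster's node tag (the tag name below is hypothetical; list your instances' actual tags first):

```shell
# List the network tags attached to the cluster's nodes
gcloud compute instances list --format="table(name,tags.items)"

# Hypothetical tag name; substitute the tag shown above
gcloud compute firewall-rules create allow-nginx-ingress \
  --allow tcp:80,tcp:443 \
  --target-tags gke-gcp-cluster-node
```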

-- NeverEndingQueue
Source: StackOverflow