Kubernetes: Unable to Access Pods

1/17/2020

I have one master node and one worker node, and both are up and running. I deployed an Angular application in my k8s cluster. When I inspect the pod's log, everything is working fine without any error.

I am trying to access the application in the browser using the master and worker IP addresses followed by the node port, like below, and I get an "unable to connect" error.

http://10.0.0.1:32394/
Name:         frontend-app-6848bc9666-9ggz7
Namespace:    pre-release
Priority:     0
Node:         SBT-poc-worker2/10.0.0.5
Start Time:   Fri, 17 Jan 2020 05:04:10 +0000
Labels:       app=frontend-app
              pod-template-hash=6848bc9666
Annotations:  <none>
Status:       Running
IP:           10.32.0.3
IPs:
  IP:           10.32.0.3
Controlled By:  ReplicaSet/frontend-app-6848bc9666
Containers:
  frontend-app:
    Container ID:   docker://292199347e391c9feecd667e1668f32931f1fd7c670514eb1e05e4a37b8109ad
    Image:          frontend-app:future-master-fix-7ba35fbe
    Image ID:       docker://sha256:0099587db89de9ef999a7d1f087d4781e73c491b17e89392e92b08d2f935ad27
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 17 Jan 2020 05:04:15 +0000
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     250m
      memory:  256Mi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r67p7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-r67p7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-r67p7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                       Message
  ----    ------     ----   ----                       -------
  Normal  Scheduled  5m44s  default-scheduler          Successfully assigned pre-release/frontend-app-6848bc9666-9ggz7 to SBT-poc-worker2
  Normal  Pulled     5m41s  kubelet, SBT-poc-worker2  Container image "frontend-app:future-master-fix-7ba35fbe" already present on machine
  Normal  Created    5m39s  kubelet, SBT-poc-worker2  Created container frontend-app
  Normal  Started    5m39s  kubelet, SBT-poc-worker2  Started container frontend-app

root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods -n pre-release
NAME                            READY   STATUS    RESTARTS   AGE
frontend-app-6848bc9666-9ggz7   1/1     Running   0          7m26s

root@jenkins-linux-vm:/home/SBT-admin# kubectl get services -n pre-release
NAME           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
frontend-app   NodePort   10.96.6.77   <none>        8080:32394/TCP   7m36s

root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment -n pre-release
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend-app   1/1     1            1           11m
root@jenkins-linux-vm:/home/SBT-admin# kubectl get -o yaml -n pre-release svc frontend-app
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"name":"frontend-app"},"name":"frontend-app","namespace":"pre-release"},"spec":{"ports":[{"port":8080,"targetPort":8080}],"selector":{"name":"frontend-app"},"type":"NodePort"}}
  creationTimestamp: "2020-01-17T05:04:10Z"
  labels:
    name: frontend-app
  name: frontend-app
  namespace: pre-release
  resourceVersion: "1972713"
  selfLink: /api/v1/namespaces/pre-release/services/frontend-app
  uid: 91b87f9e-d723-498c-af05-5969645a82ee
spec:
  clusterIP: 10.96.6.77
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32394
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: frontend-app
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods --selector="app=frontend-app" --output=wide
NAME                            READY   STATUS    RESTARTS   AGE   IP          NODE               NOMINATED NODE   READINESS GATES
frontend-app-7c7cf68f9c-n9tct   1/1     Running   0          58m   10.32.0.5   SBT-poc-worker2   <none>           <none>

root@jenkins-linux-vm:/home/SBT-admin# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
frontend-app-7c7cf68f9c-n9tct   1/1     Running   0          58m

root@jenkins-linux-vm:/home/SBT-admin# kubectl get svc
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
frontend-app   NodePort   10.96.21.202   <none>        8080:31098/TCP   59m

root@jenkins-linux-vm:/home/SBT-admin# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
frontend-app   1/1     1            1           59m

Can someone please help me fix this?

-- Anonymuss
kubernetes

2 Answers

1/17/2020

The label on the pod is app=frontend-app, as seen in your problem statement. Your pod description shows the label below:

Name:         frontend-app-6848bc9666-9ggz7
Namespace:    pre-release
Priority:     0
Node:         SBT-poc-worker2/10.0.0.5
Start Time:   Fri, 17 Jan 2020 05:04:10 +0000
Labels:       app=frontend-app
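
You can confirm the labels on the pod directly with the --show-labels flag; it should print app=frontend-app,pod-template-hash=6848bc9666 for this pod:

$ kubectl get pods -n pre-release --show-labels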

The selector field in the service YAML is name: frontend-app; you should change it to app: frontend-app and update the service.

Your current selector value, shown below, does not match the label on the pod:

  ports:
  - nodePort: 32394
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    name: frontend-app

Change it to

  selector:
    app: frontend-app
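
For reference, a corrected Service manifest could look like the sketch below, assembled from the YAML in your question; only the selector changes:

apiVersion: v1
kind: Service
metadata:
  name: frontend-app
  namespace: pre-release
  labels:
    name: frontend-app
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 32394
    protocol: TCP
  selector:
    app: frontend-app

After re-applying it (kubectl apply -f service.yaml), confirm the selector now matches the pod by checking the service endpoints: kubectl get endpoints -n pre-release frontend-app should list the pod IP and port (10.32.0.3:8080). With the wrong selector, the ENDPOINTS column shows <none>.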
-- DT.
Source: StackOverflow

1/17/2020

First, establish that there are no rules blocking the default node-port range (ports 30000-32767) in the security rules or firewall on the cluster network.

For example, verify that a security rule like the one below is open on the cluster network, so that the node-port range is reachable from your browser.

Ingress IPv4    TCP 30000 - 32767   0.0.0.0/0
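
To probe reachability directly, you can test the node port from a machine outside the cluster (using the node IP and node port from your question; adjust if yours differ):

$ nc -zv 10.0.0.1 32394
$ curl -v http://10.0.0.1:32394/

If the TCP connection is refused or times out even though the service selector is correct, a firewall or security-group rule is the likely cause.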

Once you have confirmed there is no security-rule issue, I would take the following approach to debug port reachability at the node level: run a basic test to check whether an NGINX web server can be deployed and reached in the browser via a node port.

Steps:

Deploy an NGINX Deployment using the nginx.yaml below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

Apply it and verify the deployment is up and running:

$ kubectl apply -f nginx.yaml

$ kubectl get all
NAME                            READY   STATUS        RESTARTS   AGE
pod/my-nginx-75897978cd-ptqv9   1/1     Running       0          32s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d11h

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           33s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/my-nginx-75897978cd   1         1         1       33s

Now create a Service to expose the NGINX deployment, using the example service.yaml below:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: my-nginx

Apply it, verify the service is created, and identify the node port that was assigned (since we did not specify a fixed nodePort in service.yaml, one is allocated automatically; below it is 32502):

$ kubectl apply -f service.yaml

$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          4d11h
my-nginx     NodePort    10.96.174.234   <none>        8080:32502/TCP   12s

In addition to the node port, identify the IP of your master node (i.e. 131.112.113.101 below):

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master-1   Ready    master   4d11h   v1.17.0   131.112.113.101   <none>        Ubuntu 16.04.6 LTS   4.4.0-169-generic   docker://18.6.2
node-1     Ready    <none>   4d11h   v1.17.0   131.112.113.102   <none>        Ubuntu 16.04.6 LTS   4.4.0-169-generic   docker://18.6.2
node-2     Ready    <none>   4d11h   v1.17.0   131.112.113.103   <none>        Ubuntu 16.04.6 LTS   4.4.0-169-generic   docker://18.6.2

Now if you access the NGINX application in your browser using the master node's IP and the node port, i.e. <masternode>:<nodeport> (here 131.112.113.101:32502), you should see something like the following:

[Screenshot: the default "Welcome to nginx!" page]
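
You can run the same check from the command line; with the default NGINX image, the page title is "Welcome to nginx!":

$ curl -s http://131.112.113.101:32502 | grep '<title>'
<title>Welcome to nginx!</title>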

Note the containerPort in nginx.yaml and the targetPort in service.yaml (both 80); with that mapping in mind, you should be able to work out the equivalent values for your frontend-app. Hopefully this helps you rule out any issue at the node/cluster level.
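
In other words, the traffic path in this example is:

browser -> 131.112.113.101:32502 (nodePort, on any node)
        -> service my-nginx, port 8080
        -> pod targetPort/containerPort 80

For your frontend-app, the equivalent chain would be <nodeIP>:32394 -> service port 8080 -> containerPort 8080.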

-- DT.
Source: StackOverflow