Can't get access to the application via a Kubernetes NodePort

12/1/2019

I set up my configuration so that my service runs on port 8080.

My Docker image also exposes port 8080.

I created my ReplicaSet with a configuration like this:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-backend-rs
spec:
  containers:
  - name: my-app-backend
    image: go-my-app-backend
    ports:
    - containerPort: 8080
    imagePullPolicy: Never

Finally, I create a Service of type NodePort, also on port 8080, with the configuration below:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-app-backend-rs
  name: my-app-backend-svc-nodeport
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: my-app-backend

After running kubectl describe on the NodePort Service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.

Type:                     NodePort
IP:                       10.110.250.176
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31859/TCP
Endpoints:                172.17.0.6:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

What am I not understanding, and what am I doing wrong? Can anyone explain this to me?

-- Adriano
docker
kubernetes

2 Answers

12/2/2019

From your output, I can see that the endpoint below has been created, so one pod is ready to serve this NodePort service. That means the labels are not the issue here.

Endpoints:                172.17.0.6:8080

First, ensure you can reach the app from inside the pod: log into the pod with kubectl exec -it podname -- sh and run curl http://localhost:8080 (assuming curl is installed in the image running in that pod's container). If it is not, run an ambassador container as a sidecar in the pod, and from there try to access http://<pod-ip>:8080 to make sure the app itself is working.
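Put together, the in-pod check could look like this (a sketch; the pod name is a placeholder you would take from kubectl get pods):

```shell
# List pods to find the one backing the service.
kubectl get pods -o wide

# Open a shell in the pod (note the "--" separating kubectl flags from the command)...
kubectl exec -it my-app-backend-rs-xxxxx -- sh

# ...then, inside the pod, verify the app answers on its container port:
curl http://localhost:8080
```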

Remember that you can't reach the NodePort service via localhost if you are running the command from the master node, since localhost will point to the master itself. You have to access this service by the methods below:

<CLUSTER-IP>:<PORT>   - in your case: 10.110.250.176:8080
<1st node's IP>:31859
<2nd node's IP>:31859
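As a concrete sketch (the node IPs are placeholders for your cluster's actual node addresses):

```shell
# Via the ClusterIP and the service port (only reachable from inside the cluster):
curl http://10.110.250.176:8080

# Via any node's IP and the NodePort (reachable from outside the cluster too):
curl http://<1st-node-ip>:31859
curl http://<2nd-node-ip>:31859
```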
-- user10912187
Source: StackOverflow

12/12/2019

I tried to use curl after kubectl exec -it podname sh

In this very example the double dash is missing in front of the sh command. Please note that the correct syntax can be checked anytime with kubectl exec -h and looks like:

kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]

If you have only one container per Pod, it can be simplified to:

kubectl exec -it PODNAME -- COMMAND

The caveat of not specifying the container is that, in case of multiple containers in that Pod, you'll be connected to the first one :)
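For a multi-container Pod you can select the container explicitly with -c (the names here are illustrative placeholders):

```shell
# Run a command in a specific container of a multi-container Pod.
kubectl exec -it PODNAME -c CONTAINERNAME -- COMMAND
```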

Example: kubectl exec -it pod/frontend-57gv5 -- curl localhost:80

I also tried to hit 10.110.250.176:80:31859, but I think this is incorrect. Sorry, I'm a beginner at network stuff.

Yes, that is not correct, as the :port value occurs twice. In that example you need to hit 10.110.250.176:8080 (as 10.110.250.176 is the Cluster IP, and 8080 is your Service's port).

After running kubectl describe on the NodePort Service, I see that I should be able to reach my app at http://127.0.0.1:31859 (e.g. curl http://127.0.0.1:31859/), but I get no response.

It depends on where you are going to run that command.

In this very case it is not clear what exactly you have put into the ReplicaSet config (whether the Service's selector matches the ReplicaSet's labels), so let me explain how this is supposed to work.

Assuming we have the following ReplicaSet (the example below is a slightly modified version of the official documentation on the topic):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
  labels:
    app: guestbook
    tier: frontend-meta
spec:
  # modify replicas according to your case
  replicas: 2
  selector:
    matchLabels:
      tier: frontend-label 
  template:
    metadata:
      labels:
        tier: frontend-label      ## shall match spec.selector.matchLabels.tier
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

And the following service:

apiVersion: v1
kind: Service
metadata:
 labels:
  app: frontend
 name: frontend-svc-tier-nodeport
spec:
 type: NodePort
 ports:
 - port: 80
   protocol: TCP
   targetPort: 80
 selector:
   tier: frontend-label   ## shall match labels from ReplicaSet spec
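A quick way to verify that the Service's selector actually matches some Pods (a mismatch is the usual cause of an empty Endpoints list) is to query Pods by that same label:

```shell
# Pods matched by the Service's selector; if this prints nothing,
# the Service will have no Endpoints and NodePort access will fail.
kubectl get pods -l tier=frontend-label -o wide

# The Endpoints object should list each matched Pod's IP:port.
kubectl get ep frontend-svc-tier-nodeport
```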

We can now create the ReplicaSet (RS) and the Service. As a result, we should be able to see the RS, Pods, Service and Endpoints:

kubectl get rs -o wide
NAME          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                                 SELECTOR
frontend-rs   2         2         2       10m   php-redis    gcr.io/google_samples/gb-frontend:v3   tier=frontend-label

kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE                                            
frontend-rs-76sgd       1/1     Running   0          11m    10.12.0.31   gke-6v3n
frontend-rs-fxxq8       1/1     Running   0          11m    10.12.1.33   gke-m7z8 

kubectl get svc -o wide
NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE     SELECTOR
frontend-svc-tier-nodeport   NodePort    10.0.5.10    <none>        80:32113/TCP                     9m41s   tier=frontend-label

kubectl get ep -o wide
NAME                         ENDPOINTS                                                     AGE
frontend-svc-tier-nodeport   10.12.0.31:80,10.12.1.33:80                                   10m

kubectl describe svc/frontend-svc-tier-nodeport
Selector:                 tier=frontend-label
Type:                     NodePort
IP:                       10.0.5.10
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  32113/TCP
Endpoints:                10.12.0.31:80,10.12.1.33:80

The important thing we can see from my example is that PORT(S) was set to 80:32113/TCP for the Service we created.

That allows us to access the "gb-frontend:v3" app in a few different ways:

  • from inside the cluster: curl 10.0.5.10:80
    (CLUSTER-IP:PORT) or curl frontend-svc-tier-nodeport:80
  • from an external network (internet): curl PUBLIC_IP:32113, where PUBLIC_IP is any IP at which you can reach a Node in your cluster. All the nodes in the cluster listen on the NodePort and forward requests according to the Service's selector.
  • from a Node: curl localhost:32113
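In command form (PUBLIC_IP is a placeholder for a node address reachable from your network):

```shell
# 1. From inside the cluster (e.g. from another pod):
curl http://10.0.5.10:80
curl http://frontend-svc-tier-nodeport:80   # service DNS name, same namespace

# 2. From an external network:
curl http://PUBLIC_IP:32113

# 3. From a node itself:
curl http://localhost:32113
```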

Hope that helps.

-- Nick
Source: StackOverflow