Can't access my service in a remote Kubernetes cluster

6/28/2020

I've been looking for a similar question for a while but haven't found one. I have a remote Kubernetes cluster with one master and two workers. The installed versions are:

Kubernetes: 1.15.1-0
Docker: 18.09.1-3.el7

I'm trying to deploy and expose a Spring project JAR that has one REST endpoint.

Deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservices-deployment
  labels:
    app: microservices-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: microservices-deployment
  template:
    metadata:
      name: microservices-deployment
      labels:
        app: microservices-deployment
    spec:
      containers:
        - name: microservices-deployment
          image: <my_repo>/<repo_name>:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 8085
      restartPolicy: Always

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: microservices-service
spec:
  selector:
    app: microservices-deployment
  ports:
    - port: 8085
      targetPort: 8085
  type: NodePort

application.properties:

server.port=8085

Dockerfile:

FROM openjdk:8
# Copy the built Spring JAR into the image
COPY target/microservices.jar microservices.jar
EXPOSE 8085
ENTRYPOINT ["java", "-jar", "microservices.jar"]

My pods are ready and everything looks good, but I can't access the service I exposed, even from the master's terminal. Does anyone have any idea? Thanks in advance.

UPDATE

I'm able to telnet from my master to port 30000 on my nodes (after I specified 30000 as my NodePort), and to telnet to my pods on port 8085. When I try to telnet from the master to any other port on the nodes/pods I get connection refused, so I think that's a good start. Still, I'm unable to access the REST endpoint I specified, although it works locally in Docker: docker run -p 8085:8085 IMAGE_NAME
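
For reference, the checks look roughly like this (/hello stands in for my actual endpoint path, which isn't shown here):

# Works: the NodePort answers on the nodes
telnet <node-ip> 30000

# Works: the pods answer directly
telnet <pod-ip> 8085

# Fails: the REST endpoint itself is unreachable
curl http://<node-ip>:30000/hello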

-- Yaakov Shami
docker
kubernetes
microservices

2 Answers

6/28/2020

You defined:

 - port: 8085
   targetPort: 8085

For information: targetPort is the port your containerized application listens on, and port is the port exposed on the service's cluster IP (internal to the cluster).
But you didn't define a value for nodePort, which means that K8s will allocate one for you:

If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).

But you can also specify that port:

If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
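
Concretely, the three port fields relate like this (values are illustrative; the nodePort line is optional):

ports:
  - port: 8085        # cluster-internal port on the service's ClusterIP
    targetPort: 8085  # port the container listens on (your server.port)
    nodePort: 30000   # port opened on every node; auto-allocated if omitted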

In any case, you can check the value of the nodePort chosen by K8s (or the one you chose) via kubectl -n YOUR_NAMESPACE describe service YOUR_SERVICE.
More generally, make liberal use of the describe subcommand to diagnose and debug a K8s deployment.
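
For example (assuming the service lives in the default namespace):

# Full service details, including the NodePort line
kubectl -n default describe service microservices-service

# Or extract just the allocated nodePort
kubectl -n default get service microservices-service -o jsonpath='{.spec.ports[0].nodePort}'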

Then, from any node in the cluster or from outside it (since the service is of type NodePort), request any cluster node on the allocated port (whether auto-generated or chosen) and you should be able to reach your service.
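
For instance, something along these lines (the node IP is a placeholder, 30000 matches the NodePort from the question's update, and /hello stands in for the actual endpoint):

curl http://<any-node-ip>:30000/hello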

If you want to define that external port yourself, do so explicitly
(beware: nodePort has to be unique inside the cluster), such as:

ports:
  - port: 8085
    targetPort: 8085
    nodePort: 30085   # must be inside the NodePort range (default 30000-32767)

In that way, every cluster node will expose the service on port 30085.

-- davidxxx
Source: StackOverflow

7/5/2020

The problem turned out to be a network issue: accessing the endpoint from one of the workers did the trick. Thanks, all.
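
In other words, running the request from a worker node's terminal succeeded, e.g. (/hello is a placeholder for the actual endpoint):

# Run on one of the worker nodes
curl http://<worker-ip>:30000/hello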

-- Yaakov Shami
Source: StackOverflow