Unable to access StreamSets through URL on K8s

4/30/2020

I'm using an Ansible script to deploy StreamSets on the Kubernetes master node. One play checks whether the StreamSets dashboard is accessible at http://127.0.0.1:{{streamsets_nodePort}}, where streamsets_nodePort: 30029. The default port, 30024, was already assigned to another service, so I changed it.

The service is up and the pods are running.

```
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/streamsets-service   NodePort   10.104.162.67   <none>        18630:30029/TCP   24m
```

When I look at the logs I can see:

```
2020-04-30 13:45:58,149 [user:] [pipeline:] [runner:] [thread:main] [stage:] INFO WebServerTask - Running on URI : 'http://streamsets-0.streamsets-service.streamsets-ns.svc.cluster.local:18630'
```

Below is my service.yml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: streamsets-service
  labels:
    name: streamsets
spec:
  type: NodePort
  ports:
    - port: {{streamsets_port}}
      targetPort: 18630
      nodePort: {{streamsets_nodePort}}
  selector:
    role: streamsets
```

These are the assigned port variables:

```yaml
streamsets_port: 8630
streamsets_nodePort: 30029
streamsets_targetPort: 18630
```

When my play executes the block below:

```yaml
- name: Check if Streamsets is accessible.
  uri:
    url: http://localhost:{{streamsets_nodePort}}
    method: GET
    status_code: 200
  register: streamsets_url_status

- debug:
    var: streamsets_url_status.msg
```

This is the output I get when the block runs:

```
fatal: [127.0.0.1]: FAILED! => {"changed": false, "content": "", "elapsed": 30, "msg": "Status code was -1 and not [200]: Connection failure: timed out", "redirected": false, "status": -1, "url": "http://localhost:30029"}
```

Can someone help me to understand what is the issue?

-- Shuvodeep Ghosh
kubernetes
service-node-port-range
streamsets

1 Answer

5/1/2020

Perhaps I'm not understanding correctly, but why would the service be responsive on the localhost IP of 127.0.0.1?

You're creating a NodePort service, which automatically allocates a ClusterIP; you can see it in your services listing: 10.104.162.67. That IP, in combination with the 'port' value you specified in the spec (8630 in this case), is what should be used to reach the application the service exposes.
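For example, from a shell on any cluster node, a quick check against the ClusterIP might look like the sketch below. Note one discrepancy worth checking: your variables say streamsets_port: 8630, but your kubectl listing reports 18630:30029/TCP, i.e. a live service port of 18630, so use whatever port the live Service actually shows.

```sh
# Hit the Service on its ClusterIP from any cluster node.
# IP and port are taken from the `kubectl get svc` listing above
# (18630:30029/TCP means port 18630, nodePort 30029).
curl -v http://10.104.162.67:18630/
```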

Alternatively, if you want to hit the NodePort you created directly, use the internal IP of the node(s) on which the pod is running. Execute kubectl get nodes -o wide, note the InternalIP of the node you're interested in, and then make the call against that IP in combination with the nodePort you specified for the service (30029 in this case).
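Applied to the original play, a minimal sketch of that check might look like the task below. This is an assumption-laden rewrite, not a drop-in fix: it assumes the play runs on the node itself with fact gathering enabled, so ansible_default_ipv4.address resolves to the node's InternalIP, and it adds a retry loop as a common pattern for waiting out the StreamSets web server's startup.

```yaml
- name: Check if Streamsets is accessible via the NodePort.
  uri:
    # ansible_default_ipv4.address assumes gathered facts and that the
    # default interface carries the node's InternalIP; substitute the
    # address reported by `kubectl get nodes -o wide` if it differs.
    url: "http://{{ ansible_default_ipv4.address }}:{{ streamsets_nodePort }}"
    method: GET
    status_code: 200
  register: streamsets_url_status
  # Retry while the StreamSets web server finishes starting up.
  retries: 10
  delay: 15
  until: streamsets_url_status.status == 200
```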

Depending on which layer you're SSH-ing or exec-ing into (pod, node, container, etc.), 127.0.0.1 can resolve to something completely different: a container you've exec'd into doesn't resolve 127.0.0.1 to the host it's running on, but to the network namespace of the pod it's running in.
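You can see this by running the check from inside the pod itself, where 127.0.0.1 does reach the container's own port. The pod name and namespace below are taken from the log line in the question, and this assumes curl is present in the StreamSets image:

```sh
# Inside the pod, 127.0.0.1 is the pod's own network namespace,
# so the container port (18630) answers directly.
kubectl -n streamsets-ns exec streamsets-0 -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:18630/
```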

-- Mitch Barnett
Source: StackOverflow