I want to start by saying that I do not know the exact architecture of the servers involved. All I know is that they are Ubuntu machines in the cloud.
I have set up a 1 master/1 worker k8s cluster using two servers.
kubectl cluster-info
gives me:
Kubernetes master is running at https://10.62.194.4:6443
KubeDNS is running at https://10.62.194.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
I have created a simple deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
This spins up two nginx pods, each listening on container port 80.
I have exposed this deployment using:
kubectl expose deployment nginx-deploy --type NodePort
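For reference, the Service that kubectl expose creates should be roughly equivalent to this manifest (a sketch; the nodePort value is allocated by the cluster at creation time, 30682 in this case):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy
spec:
  type: NodePort
  selector:
    run: nginx        # matches the pod labels from the Deployment
  ports:
  - port: 80          # ClusterIP port
    targetPort: 80    # containerPort on the pods
    nodePort: 30682   # allocated from the default range 30000-32767
```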
When I run kubectl get svc, I get:
nginx-deploy NodePort 10.99.103.239 <none> 80:30682/TCP 29m
kubectl get pods -o wide
gives me:
nginx-deploy-7c45b84548-ckqzb 1/1 Running 0 33m 192.168.1.5 myserver1 <none> <none>
nginx-deploy-7c45b84548-vl4kh 1/1 Running 0 33m 192.168.1.4 myserver1 <none> <none>
Since I exposed the deployment using NodePort, I was under the impression that I could access it at <NodeIP>:<NodePort>.
The Node IP of the worker node is 10.62.194.5 and when I try to access http://10.62.194.5:30682 I do not get the nginx landing page.
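One way to narrow this down is to test the same port from different vantage points (a diagnostic sketch; the IPs are taken from the outputs above):

```shell
# On the worker node (myserver1) itself: if this works, kube-proxy is
# routing the NodePort correctly, and the problem is the network path
# between your client and the node.
curl -m 5 http://localhost:30682

# Still on the node: hit one pod directly on the pod network, to confirm
# nginx itself is serving.
curl -m 5 http://192.168.1.5:80

# From your own terminal: if ping to 10.62.194.5 works but this times out,
# a firewall rule between you and the node is the likely culprit.
curl -m 5 http://10.62.194.5:30682
```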
One part I do not understand: when I run kubectl describe node myserver1, the long output includes:
Addresses:
InternalIP: 10.62.194.5
Hostname: myserver1
Why does it say InternalIP? I can ping this IP.
EDIT: Output of sudo lsof -i -P -n | grep LISTEN
systemd-r 846 systemd-resolve 13u IPv4 24990 0t0 TCP 127.0.0.53:53 (LISTEN)
sshd 1157 root 3u IPv4 30168 0t0 TCP *:22 (LISTEN)
sshd 1157 root 4u IPv6 30170 0t0 TCP *:22 (LISTEN)
xrdp-sesm 9840 root 7u IPv6 116948 0t0 TCP [::1]:3350 (LISTEN)
xrdp 9862 xrdp 11u IPv6 117849 0t0 TCP *:3389 (LISTEN)
kubelet 51562 root 9u IPv4 560219 0t0 TCP 127.0.0.1:42735 (LISTEN)
kubelet 51562 root 24u IPv4 554677 0t0 TCP 127.0.0.1:10248 (LISTEN)
kubelet 51562 root 35u IPv6 558616 0t0 TCP *:10250 (LISTEN)
kube-prox 52427 root 10u IPv4 563401 0t0 TCP 127.0.0.1:10249 (LISTEN)
kube-prox 52427 root 11u IPv6 564298 0t0 TCP *:10256 (LISTEN)
kube-prox 52427 root 12u IPv6 618851 0t0 TCP *:30682 (LISTEN)
bird 52925 root 7u IPv4 562993 0t0 TCP *:179 (LISTEN)
calico-fe 52927 root 3u IPv6 562998 0t0 TCP *:9099 (LISTEN)
Output of ss -ntlp | grep 30682
LISTEN 0 128 *:30682 *:*
As far as I understand, you are trying to reach 10.62.194.5 from a host in a different subnet, for example your own terminal. I guess that in Azure each node has both a private IP and a public IP; 10.62.194.5 is the node's private address, which is why kubectl describe node reports it as InternalIP and why it only answers from inside that network. So, to reach the Kubernetes Service from your terminal, you should use the public IP of the node together with the NodePort, and also make sure that port is open in your Azure firewall (Network Security Group).
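If the node sits behind an Azure Network Security Group, the rule could look roughly like this (a sketch; the resource group and NSG names are placeholders you would replace with your own):

```shell
# Allow inbound TCP to the NodePort on the worker node's NSG.
# "my-rg" and "myserver1-nsg" are hypothetical names.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name myserver1-nsg \
  --name allow-nodeport-30682 \
  --priority 1001 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 30682

# Then test from your terminal against the node's *public* IP:
curl http://<public-ip-of-myserver1>:30682
```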