I have a cluster with 3 nodes in a VirtualBox environment. I created the cluster with:
kubeadm init --pod-network-cidr=10.244.0.0/16
then I installed Flannel and joined the other two nodes to the cluster. After that, a new virtual machine was created to host a private registry for Docker images.
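The Flannel install and the node joins looked roughly like this (the manifest URL, token, and hash below are placeholders, not my exact values):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubeadm join 192.168.2.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # run on each worker

Next, I used this .yaml to create a Deployment for my app: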
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gunicorn
spec:
  selector:
    matchLabels:
      app: gunicorn
  replicas: 1
  template:
    metadata:
      labels:
        app: gunicorn
    spec:
      imagePullSecrets:
        - name: my-registry-key
      containers:
        - name: ipcheck2
          image: 192.168.2.4:8083/ipcheck2:1
          imagePullPolicy: Always
          command:
            - sleep
            - "infinity"
          ports:
            - containerPort: 8080
              hostPort: 8080
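The my-registry-key secret referenced above was created beforehand, and the Deployment applied, with commands along these lines (the credentials and file name are placeholders):

kubectl create secret docker-registry my-registry-key \
    --docker-server=192.168.2.4:8083 \
    --docker-username=<user> --docker-password=<password>
kubectl apply -f gunicorn-deployment.yaml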
The image was created from the following Dockerfile and pushed to the registry:
FROM python:3
EXPOSE 8080
ADD /IP_check/ /
WORKDIR /
RUN pip install pip --upgrade
RUN pip install -r requirements.txt
CMD ["gunicorn", "IP_check.wsgi", "-b :8080"]
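The build and push were roughly this (run from the directory containing the Dockerfile):

docker build -t 192.168.2.4:8083/ipcheck2:1 .
docker push 192.168.2.4:8083/ipcheck2:1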
At this point I can confirm that if I run the container directly with the Docker engine and publish this port, I am able to connect to the app.
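That test looked something like this (the exact flags are a reconstruction):

docker run --rm -p 8080:8080 192.168.2.4:8083/ipcheck2:1   # CMD runs gunicorn on :8080
curl http://localhost:8080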
Next, I created a NodePort Service for my app:
apiVersion: v1
kind: Service
metadata:
  name: ipcheck
spec:
  selector:
    app: gunicorn
  ports:
    - port: 70
      targetPort: 8080
      nodePort: 30000
  type: NodePort
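I applied and checked it with something like this (the file name is a placeholder):

kubectl apply -f ipcheck-service.yaml
kubectl get svc ipcheck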
And here is the issue. With kubectl describe pods I checked which node is running the pod with my app, then tried curl <nodeIP>:30000 to reach the app, but it doesn't work:
curl: (7) Failed connect to 192.168.2.3:30000; Connection refused
I also installed the hello-world app from the Kubernetes documentation and exposed it with a NodePort. That didn't work either.
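That test followed the docs example, approximately (image and names taken from the tutorial; my exact commands may have differed):

kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0
kubectl expose deployment hello-world --type=NodePort --port=8080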
Does anyone have an idea why I can't reach the pod through the NodePort, either from inside or from outside the cluster?
OS: CentOS 7
IP addresses:
Node1 192.168.2.1 - Master
Node2 192.168.2.2 - Worker
Node3 192.168.2.3 - Worker
Node4 192.168.2.4 - Private registry (outside the cluster)
Pod describe:
Name:           gunicorn-5f7f485585-wjdnf
Namespace:      default
Priority:       0
Node:           node3/192.168.2.3
Start Time:     Thu, 16 Jul 2020 18:01:54 +0200
Labels:         app=gunicorn
                pod-template-hash=5f7f485585
Annotations:    <none>
Status:         Running
IP:             10.244.1.20
IPs:
  IP:           10.244.1.20
Controlled By:  ReplicaSet/gunicorn-5f7f485585
Containers:
  ipcheck2:
    Container ID:   docker://9aa18f3fff1d13dfc76355dde72554fd3af304435c9b7fc4f7365b4e6ac9059a
    Image:          192.168.2.4:8083/ipcheck2:1
    Image ID:       docker-pullable://192.168.2.4:8083/ipcheck2@sha256:e48469c6d1bec474b32cd04ca5ccbc057da0377dff60acc37e7fa786cbc39526
    Port:           8080/TCP
    Host Port:      8080/TCP
    Command:
      sleep
      infinity
    State:          Running
      Started:      Thu, 16 Jul 2020 18:01:55 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9q77c (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-9q77c:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9q77c
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age  From               Message
  ----    ------     ---  ----               -------
  Normal  Scheduled  40m  default-scheduler  Successfully assigned default/gunicorn-5f7f485585-wjdnf to node3
  Normal  Pulling    40m  kubelet, node3     Pulling image "192.168.2.4:8083/ipcheck2:1"
  Normal  Pulled     40m  kubelet, node3     Successfully pulled image "192.168.2.4:8083/ipcheck2:1"
  Normal  Created    40m  kubelet, node3     Created container ipcheck2
  Normal  Started    40m  kubelet, node3     Started container ipcheck2
Service describe:
Name: ipcheck
Namespace: default
Labels: <none>
Annotations:
Selector: app=gunicorn
Type: NodePort
IP: 10.111.7.129
Port: <unset> 70/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30000/TCP
Endpoints: 10.244.1.20:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Node3 iptables:
Chain INPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain FORWARD (policy DROP)
target prot opt source destination
KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- 10.244.0.0/16 anywhere
ACCEPT all -- anywhere 10.244.0.0/16
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL all -- anywhere anywhere
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-EXTERNAL-SERVICES (1 references)
target prot opt source destination
REJECT tcp -- anywhere anywhere /* default/gunicorn-ipcheck: has no endpoints */ ADDRTYPE match dst-type LOCAL tcp dpt:30384 reject-with icmp-port-unreachable
Chain KUBE-FIREWALL (2 references)
target prot opt source destination
DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
Chain KUBE-FORWARD (1 references)
target prot opt source destination
DROP all -- anywhere anywhere ctstate INVALID
ACCEPT all -- anywhere anywhere /* kubernetes forwarding rules */ mark match 0x4000/0x4000
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED
Chain KUBE-KUBELET-CANARY (0 references)
target prot opt source destination
Chain KUBE-PROXY-CANARY (0 references)
target prot opt source destination
Chain KUBE-SERVICES (3 references)
target prot opt source destination
REJECT tcp -- anywhere 10.104.59.152 /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
REJECT tcp -- anywhere 192.168.2.240 /* default/gunicorn-ipcheck: has no endpoints */ tcp dpt:webcache reject-with icmp-port-unreachable
'ip a' on Node3:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 86181sec preferred_lft 86181sec
    inet6 fe80::1272:64b5:b03b:2b75/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:14:7f:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::2704:2b92:cc02:e88/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:a1:17:41:be brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 6e:c6:9c:0f:ab:55 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::6cc6:9cff:fe0f:ab55/64 scope link
       valid_lft forever preferred_lft forever
6: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 4a:66:88:71:56:6a brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::4866:88ff:fe71:566a/64 scope link
       valid_lft forever preferred_lft forever
7: veth0ded1d29@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 22:c2:6b:c7:cc:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::20c2:6bff:fec7:cc7a/64 scope link
       valid_lft forever preferred_lft forever
Endpoints:
NAME         ENDPOINTS          AGE
ipcheck      10.244.1.21:8080   51m
kubernetes   192.168.2.1:6443   9d
I hope you are able to curl the ClusterIP internally, using curl http://10.111.7.129:70
It seems the port is not open. Try opening port 30000 at the VirtualBox level; if you were using AKS or IBM Cloud, you would open the port in the security groups instead. On the nodes themselves:
firewall-cmd --permanent --add-port=30000/tcp
firewall-cmd --reload
Then use curl http://<workerNodeIP>:30000
Thanks, VB