I created my cluster with kubenet as the network plugin, configuring the kubelet via

echo 'KUBELET_KUBEADM_ARGS="--network-plugin=kubenet --pod-cidr=10.20.0.0/24 --pod-infra-container-image=k8s.gcr.io/pause:3.6"' > /etc/default/kubelet

The setup runs in an Ubuntu VM using a NAT network configuration.
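For completeness, this is roughly how I'd expect that override to be picked up (a sketch; the restart steps are an assumption on my part, assuming a systemd-managed kubelet as set up by kubeadm):

# /etc/default/kubelet is read as an EnvironmentFile by the kubeadm drop-in,
# so the kubelet has to be restarted to pick up the new flags
systemctl daemon-reload
systemctl restart kubelet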
There is one cluster partitioned into two namespaces, each with one deployment of an application instance (think one application per client). I'm trying to access each individual application instance via <nodeIP:nodePort>. I can access the application via <nodeIP>; however, that way I can't reach the applications belonging to client A and client B separately.
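In other words, what I'm hoping to end up with is something like this (a sketch; the NodePorts 32370 and 32226 are the ones shown in the service output further down, and I'm assuming eramba-1 is client A and eramba-2 is client B):

curl http://<nodeIP>:32370   # client A's instance (eramba-1)
curl http://<nodeIP>:32226   # client B's instance (eramba-2)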
If you're interested in the exact steps taken, see https://stackoverflow.com/questions/70637470/kubernetes-deployment-not-reachable-via-browser-exposed-with-service?noredirect=1#comment124871687_70637470
Below is the YAML file for the deployment in the eramba-1 namespace (for the second deployment, I just use namespace: eramba-2):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: eramba-web
  template:
    metadata:
      labels:
        app: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_HOSTNAME
          value: eramba-mariadb
        - name: MYSQL_DATABASE
          value: erambadb
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PASSWORD
          value: eramba
        - name: DATABASE_PREFIX
          value: ""
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: eramba-web
  type: NodePort
...
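For what it's worth, the pod labels and the service selector can be listed side by side like this (a sketch; I did not capture this output in my original session):

kubectl get pods -n eramba-1 --show-labels
kubectl get svc eramba-web -n eramba-1 -o jsonpath='{.spec.selector}'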
Service output for eramba-1 namespace
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web -n eramba-1
Name: eramba-web
Namespace: eramba-1
Labels: app.kubernetes.io/name=eramba-web
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.17.120
IPs: 10.100.17.120
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32370/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Service output for eramba-2 namespace
root@osboxes:/home/osboxes/eramba# kubectl describe svc eramba-web2 -n eramba-2
Name: eramba-web2
Namespace: eramba-2
Labels: app.kubernetes.io/name=eramba-web2
Annotations: <none>
Selector: app.kubernetes.io/name=eramba-web2
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.98.240.243
IPs: 10.98.240.243
Port: http 8080/TCP
TargetPort: 8080/TCP
NodePort: http 32226/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
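Both services show Endpoints: <none>. The same thing can be confirmed directly (a sketch, command assumed rather than taken from my session):

kubectl get endpoints -n eramba-1 eramba-web
kubectl get endpoints -n eramba-2 eramba-web2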
I've verified that both NodePorts are listening:
root@osboxes:/home/osboxes/eramba# netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:32370 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 3476/kube-scheduler
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 535/systemd-resolve
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 587/cupsd
tcp 0 0 0.0.0.0:32226 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 3776/kube-proxy
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 809/mysqld
tcp 0 0 172.16.42.135:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 172.16.42.135:2380 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 3495/etcd
tcp 0 0 127.0.0.1:39469 0.0.0.0:* LISTEN 2983/kubelet
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 3521/kube-controlle
tcp6 0 0 ::1:631 :::* LISTEN 587/cupsd
tcp6 0 0 :::33060 :::* LISTEN 809/mysqld
tcp6 0 0 :::10250 :::* LISTEN 2983/kubelet
tcp6 0 0 :::6443 :::* LISTEN 3485/kube-apiserver
tcp6 0 0 :::10256 :::* LISTEN 3776/kube-proxy
tcp6 0 0 :::80 :::* LISTEN 729/apache2
udp 0 0 0.0.0.0:35922 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 589/avahi-daemon: r
udp 0 0 127.0.0.53:53 0.0.0.0:* 535/systemd-resolve
udp 0 0 172.16.42.135:68 0.0.0.0:* 586/NetworkManager
udp 0 0 0.0.0.0:631 0.0.0.0:* 654/cups-browsed
udp6 0 0 :::5353 :::* 589/avahi-daemon: r
udp6 0 0 :::37750 :::* 589/avahi-daemon: r
Here's the iptables output:
root@osboxes:/home/osboxes/eramba# iptables --list-rules
-P INPUT ACCEPT
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N KUBE-EXTERNAL-SERVICES
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N KUBE-KUBELET-CANARY
-N KUBE-NODEPORTS
-N KUBE-PROXY-CANARY
-N KUBE-SERVICES
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32226 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 32370 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.98.240.243/32 -p tcp -m comment --comment "eramba-2/eramba-web2:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
-A KUBE-SERVICES -d 10.100.17.120/32 -p tcp -m comment --comment "eramba-1/eramba-web:http has no endpoints" -m tcp --dport 8080 -j REJECT --reject-with icmp-port-unreachable
I'm sure there are other ways to access the individual application instances that I'm unaware of, so please advise if there's a better approach.
Endpoints: <none>
is an indication your Service is configured wrong; its selector doesn't match any of the Pods. If you look at the Service, it looks for
spec:
  selector:
    app.kubernetes.io/name: eramba-web
But if you look at the Deployment, it generates Pods with different labels
spec:
  template:
    metadata:
      labels:
        app: eramba-web # not app.kubernetes.io/name: ...
I'd consistently use the app.kubernetes.io/name label everywhere. You will have to delete and recreate the Deployment to change its selector value to match, since that field is immutable.
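A minimal sketch of what the eramba-1 pair could look like with consistent labels (env vars omitted for brevity; only the labels and selectors change, everything else stays as in your manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: eramba-web
  namespace: eramba-1
  labels:
    app.kubernetes.io/name: eramba-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: eramba-web   # now matches the pod template labels
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eramba-web
    spec:
      containers:
      - name: eramba-web
        image: markz0r/eramba-app:c281
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: eramba-web
  namespace: eramba-1
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: eramba-web   # matches the pods, so Endpoints get populated
  ports:
  - name: http
    port: 8080
    targetPort: 8080

Because the Deployment's selector can't be edited in place, applying this looks roughly like the following (the filename eramba-web.yaml is just a placeholder):

kubectl delete deployment eramba-web -n eramba-1
kubectl apply -f eramba-web.yaml
kubectl get endpoints eramba-web -n eramba-1   # should now list the pod IP instead of <none>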