I thought that, by design, one should access pods via exposed Services. However, I find that on GKE and EKS I can ping a pod's IP address from an instance outside the Kubernetes cluster.
>> kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP              NODE                                               NOMINATED NODE   READINESS GATES
dnsutils                 1/1     Running   4          4h5m    10.80.12.19     ip-10-80-26-113.ap-northeast-2.compute.internal    <none>           <none>
network-test             1/1     Running   0          4h11m   10.80.11.192    ip-10-80-26-113.ap-northeast-2.compute.internal    <none>           <none>
ntest-6877545bdb-7h498   1/1     Running   0          8h      10.80.29.36     ip-10-80-60-104.ap-northeast-2.compute.internal    <none>           <none>
ntest2-854bd7cb6-tnbgt   1/1     Running   0          8h      10.80.116.168   ip-10-80-111-130.ap-northeast-2.compute.internal   <none>           <none>
The above is output from EKS. I can ping the 10.80.x.x (pod) IP addresses from within the same VPC/subnet.
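For example, from a plain EC2 instance in the same VPC (not a cluster node; assuming security groups and NACLs allow ICMP between the instance and the worker nodes), the pod IPs from the output above answer directly:

# run on an EC2 instance in the same VPC, outside the cluster
ping -c 3 10.80.12.19    # dnsutils pod
ping -c 3 10.80.29.36    # ntest pod on a different node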
I can't do this when I try the same on my on-prem Kubernetes cluster.
Is it supposed to work this way? If so, how can I set up the same on my on-prem cluster?
This is made possible by the CNI plugin in use. GKE uses its native VPC CNI, and EKS uses the Amazon VPC CNI plugin.
From the EKS docs:
Amazon EKS supports native VPC networking via the Amazon VPC CNI plugin for Kubernetes. Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network.
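You can confirm that this plugin is what is handing out your pod addresses: the VPC CNI runs as the aws-node DaemonSet in kube-system, and each pod IP it assigns is a secondary IP on one of the node's ENIs, so the VPC route tables reach it like any other instance address.

# the Amazon VPC CNI plugin runs as the aws-node DaemonSet on every node
kubectl get daemonset aws-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=aws-node -o wide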
You will not be able to use these CNIs on an on-prem cluster, because they depend on the cloud provider's VPC networking. To reach pod IPs from outside an on-prem cluster, you need a CNI that makes pod routes known to your physical network, as sketched below.
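As one way to get comparable behavior on-prem, here is a minimal sketch using Calico (the peer IP, AS number, and pod CIDR below are placeholders to replace with your own; assumes calicoctl is installed): peer the cluster with your router over BGP and disable encapsulation and outgoing NAT, so pod routes are advertised to the physical network and pod source IPs stay on the wire.

# sketch: advertise pod routes to a top-of-rack router over BGP
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: tor-router
spec:
  peerIP: 192.168.1.1    # placeholder: your router's IP
  asNumber: 64512        # placeholder: your router's AS number
EOF

# sketch: carry pod traffic unencapsulated, without NAT
calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.244.0.0/16    # placeholder: your pod CIDR
  ipipMode: Never        # no IP-in-IP encapsulation
  vxlanMode: Never       # no VXLAN encapsulation
  natOutgoing: false     # keep pod source IPs visible on the network
EOF

Once the router has learned these routes, any machine it serves can ping pod IPs directly, much like the EC2 instance in the VPC above.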