I have a k8s service/deployment in a minikube cluster (named amq, in the default namespace):
D20181472:argo-k8s gms$ kubectl get svc --all-namespaces
NAMESPACE    NAME        TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)          AGE
argo         argo-ui     ClusterIP     10.97.242.57    <none>       80/TCP           5h19m
default      amq         LoadBalancer  10.102.205.126  <pending>    61616:32514/TCP  4m4s
default      kubernetes  ClusterIP     10.96.0.1       <none>       443/TCP          5h23m
kube-system  kube-dns    ClusterIP     10.96.0.10      <none>       53/UDP,53/TCP    5h23m
I spun up infoblox/dnstools and tried nslookup, dig, and ping of amq.default, with the following results:
dnstools# nslookup amq.default
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: amq.default.svc.cluster.local
Address: 10.102.205.126
dnstools# ping amq.default
PING amq.default (10.102.205.126): 56 data bytes
^C
--- amq.default ping statistics ---
28 packets transmitted, 0 packets received, 100% packet loss
dnstools# dig amq.default
; <<>> DiG 9.11.3 <<>> amq.default
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 15104
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;amq.default. IN A
;; Query time: 32 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Sat Jan 26 01:58:13 UTC 2019
;; MSG SIZE rcvd: 29
dnstools# ping amq.default
PING amq.default (10.102.205.126): 56 data bytes
^C
--- amq.default ping statistics ---
897 packets transmitted, 0 packets received, 100% packet loss
(NB: pinging the IP address directly gives the same result.)
I admittedly am not very knowledgeable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it.
That’s because the service’s cluster IP is a virtual IP, and only has meaning when combined with the service port.
Whenever a service is created, the API server immediately assigns it a virtual IP address and then notifies all kube-proxy agents running on the worker nodes that a new Service has been created. It's then kube-proxy's job to make that service addressable on the node it's running on. kube-proxy does this by setting up a few iptables rules, which ensure that each packet destined for the service IP/port pair is intercepted and has its destination address rewritten, so the packet is redirected to one of the pods backing the service.
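You can see those rules for yourself by looking at the NAT table on the minikube node. A rough sketch, assuming kube-proxy is running in its default iptables mode (chain names and rule layout vary by version):

$ minikube ssh
# kube-proxy tags its rules with the service name, so grep for it:
$ sudo iptables-save -t nat | grep 'default/amq'
# or list the service dispatch chain and filter for the cluster IP:
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.102.205.126

Note that the rules match on the IP and port together, which is why the virtual IP alone never answers a ping.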
I admittedly am not very knowledgeable about the deep workings of DNS, so I am not sure why I can do a lookup and dig for the hostname, but not ping it.
Because Service IP addresses are figments of your cluster's imagination, implemented by either iptables or ipvs, and don't actually exist. You can see them with iptables -t nat -L -n on any Node that is running kube-proxy (or with ipvsadm -ln), as described on the helpful Debug[-ing] Services page. Since they are not real IPs bound to actual NICs, they don't respond to any traffic other than on the port numbers registered in the Service resource. The correct way to test connectivity against a service is with something like curl or netcat, using the port number on which you expect application traffic to travel.
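For example, from the dnstools pod, something along these lines should show the service answering on its registered port (61616 here). This is just a sketch; it assumes the image ships a netcat that supports -z, and curl's telnet:// URL is only being used as a bare TCP connection check:

dnstools# nc -vz amq.default 61616
dnstools# curl -v telnet://amq.default:61616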