Connecting pod to external world

9/3/2019

I'm new to Kubernetes, so this might be a silly question; bear with me.

I created a cluster with one node and applied a sample deployment like the one below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffedep
spec:
  selector:
    matchLabels:
      app: coffedepapp
  template:
    metadata:
      labels:
        app: coffedepapp
    spec:
      containers:
      - name: coffepod
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80

Now I want to ping/connect to an external website from this pod. I was expecting the ping to fail, since I thought you need a Service like NodePort or LoadBalancer applied to connect to the outside world. But surprisingly, the ping succeeded. I know I'm horribly wrong somewhere; please correct my understanding here.

Pod's interfaces and traceroute -

/ # traceroute google.com
traceroute to google.com (172.217.194.138), 30 hops max, 46 byte packets
 1  *  *  *
 2  10.244.0.1 (10.244.0.1)  0.013 ms  0.006 ms  0.004 ms
 3  178.128.80.254 (178.128.80.254)  1.904 ms  178.128.80.253 (178.128.80.253)  0.720 ms  178.128.80.254 (178.128.80.254)  5.185 ms
 4  138.197.250.254 (138.197.250.254)  0.995 ms  138.197.250.248 (138.197.250.248)  0.634 ms  138.197.250.252 (138.197.250.252)  0.523 ms
 5  138.197.245.12 (138.197.245.12)  5.295 ms  138.197.245.14 (138.197.245.14)  0.956 ms  138.197.245.0 (138.197.245.0)  1.160 ms
 6  103.253.144.255 (103.253.144.255)  1.396 ms  0.857 ms  0.763 ms
 7  108.170.254.226 (108.170.254.226)  1.391 ms  74.125.242.35 (74.125.242.35)  0.963 ms  108.170.240.164 (108.170.240.164)  1.679 ms
 8  66.249.95.248 (66.249.95.248)  2.136 ms  72.14.235.152 (72.14.235.152)  1.727 ms  66.249.95.248 (66.249.95.248)  1.821 ms
 9  209.85.243.180 (209.85.243.180)  2.813 ms  108.170.230.73 (108.170.230.73)  1.831 ms  74.125.252.254 (74.125.252.254)  2.293 ms
10  209.85.246.17 (209.85.246.17)  2.758 ms  209.85.245.135 (209.85.245.135)  2.448 ms  66.249.95.23 (66.249.95.23)  4.538 ms
11^Z[3]+  Stopped                    traceroute google.com
/ # 
/ # 
/ # 
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether ee:97:21:eb:98:bc brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.183/32 brd 10.244.0.183 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec97:21ff:feeb:98bc/64 scope link 
       valid_lft forever preferred_lft forever

Node's interfaces -

root@pool-3mqi2tbi6-b3dc:~# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 3a:c1:6f:8d:0f:45 brd ff:ff:ff:ff:ff:ff
    inet 178.128.82.251/20 brd 178.128.95.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0:1
       valid_lft forever preferred_lft forever
    inet6 fe80::38c1:6fff:fe8d:f45/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 06:88:c4:23:4b:cc brd ff:ff:ff:ff:ff:ff
    inet 10.130.227.173/16 brd 10.130.255.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::488:c4ff:fe23:4bcc/64 scope link 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:61:08:39:8a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9a:3c:d3:35:b3:35 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::983c:d3ff:fe35:b335/64 scope link 
       valid_lft forever preferred_lft forever
6: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:13:c5:6e:52:bf brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/32 scope link cilium_host
       valid_lft forever preferred_lft forever
    inet6 fe80::5013:c5ff:fe6e:52bf/64 scope link 
       valid_lft forever preferred_lft forever
7: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 4a:ab:3b:3b:0d:b5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::48ab:3bff:fe3b:db5/64 scope link 
       valid_lft forever preferred_lft forever
9: cilium_health@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b6:2f:45:83:e0:44 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::b42f:45ff:fe83:e044/64 scope link 
       valid_lft forever preferred_lft forever
11: lxc1408c930131e@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 8e:45:4d:7b:94:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::8c45:4dff:fe7b:94e5/64 scope link 
       valid_lft forever preferred_lft forever
13: lxc0cef46c3977c@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:eb:36:8b:fb:45 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::14eb:36ff:fe8b:fb45/64 scope link 
       valid_lft forever preferred_lft forever
15: lxca02c5de95d1c@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 62:9d:0c:34:0f:11 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::609d:cff:fe34:f11/64 scope link 
       valid_lft forever preferred_lft forever
17: lxc32eddb70fa07@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:1a:08:95:fb:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::d81a:8ff:fe95:fbf2/64 scope link 
       valid_lft forever preferred_lft forever
-- Pavan
kubernetes
kubernetes-pod

1 Answer

9/3/2019

You don't need a Service, NodePort, or LoadBalancer for a pod to connect to the outside world. As long as your network policies allow egress, pods can talk to external addresses directly; the CNI typically SNATs outbound traffic to the node's IP, which is why your traceroute exits via 10.244.0.1 and then the node's network.
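To illustrate the "if your network policies allow" part: a minimal NetworkPolicy sketch like the following (name hypothetical; this only takes effect with a CNI that enforces NetworkPolicy, such as the Cilium setup visible in your node's interfaces) would block all egress from the pods in your deployment, and the ping would then fail:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: coffedepapp       # matches the pods from the question's deployment
  policyTypes:
  - Egress                   # listing Egress with no egress rules denies all outbound traffic
```

With no NetworkPolicy selecting the pod (the default), all egress is allowed, which matches the behavior you observed.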

You need a Service to reach your pods from within the cluster, and a NodePort or LoadBalancer Service to reach them from outside the cluster.
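For the other direction, a minimal sketch of a NodePort Service (the Service name here is hypothetical) that would expose the pods from the question to the outside world:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: coffesvc             # hypothetical name
spec:
  type: NodePort
  selector:
    app: coffedepapp         # routes to the pods from the question's deployment
  ports:
  - port: 80                 # Service port inside the cluster
    targetPort: 80           # containerPort on the pod
```

After applying this, Kubernetes allocates a port in the NodePort range (30000-32767 by default), and the pod becomes reachable from outside at the node's public IP on that port.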

-- Burak Serdar
Source: StackOverflow