Kubernetes DNS and NetworkPolicy with Calico not working

6/21/2019

I have a minikube cluster with Calico running and I am trying to get NetworkPolicies working. Here are my Pods and Services:

First pod (team-a):

apiVersion: v1
kind: Pod
metadata:
  name: team-a
  namespace: orga-1
  labels:
    run: nginx
    app: team-a
spec:
  containers:
    - image: joshrosso/nginx-curl:v2
      imagePullPolicy: IfNotPresent
      name: nginx

---
apiVersion: v1
kind: Service
metadata:
  name: team-a
  namespace: orga-1
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: 80
  selector:
    app: team-a

Second pod (team-b):

apiVersion: v1
kind: Pod
metadata:
  name: team-b
  namespace: orga-2
  labels:
    run: nginx
    app: team-b
spec:
  containers:
    - image: joshrosso/nginx-curl:v2
      imagePullPolicy: IfNotPresent
      name: nginx

---
apiVersion: v1
kind: Service
metadata:
  name: team-b
  namespace: orga-2
spec:
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: 80
  selector:
    app: team-b

When I open a bash shell in team-a, I cannot curl orga-2.team-b:

dev@ubuntu:~$ kubectl exec -it -n orga-1 team-a /bin/bash
root@team-a:/# curl google.de
      //Body removed...
root@team-a:/# curl orga-2.team-b
curl: (6) Could not resolve host: orga-2.team-b

Next, I applied a network policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all-base-rule
  namespace: orga-1
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress: []
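
The policy object itself is created; for reference, it can be inspected with (output omitted):

kubectl get networkpolicy -n orga-1
kubectl describe networkpolicy deny-all-base-rule -n orga-1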

When I now curl google.de from team-a, it still works. Here are my pods:

kube-system   calico-etcd-hbpqc                           1/1     Running   0          27m
kube-system   calico-kube-controllers-6b86746955-5mk9v    1/1     Running   0          27m
kube-system   calico-node-72rcl                           2/2     Running   0          27m
kube-system   coredns-fb8b8dccf-6j64x                     1/1     Running   1          29m
kube-system   coredns-fb8b8dccf-vjwl7                     1/1     Running   1          29m
kube-system   default-http-backend-6864bbb7db-5c25r       1/1     Running   0          29m
kube-system   etcd-minikube                               1/1     Running   0          28m
kube-system   kube-addon-manager-minikube                 1/1     Running   0          28m
kube-system   kube-apiserver-minikube                     1/1     Running   0          28m
kube-system   kube-controller-manager-minikube            1/1     Running   0          28m
kube-system   kube-proxy-p48xv                            1/1     Running   0          29m
kube-system   kube-scheduler-minikube                     1/1     Running   0          28m
kube-system   nginx-ingress-controller-586cdc477c-6rh6w   1/1     Running   0          29m
kube-system   storage-provisioner                         1/1     Running   0          29m
orga-1        team-a                                      1/1     Running   0          20m
orga-2        team-b                                      1/1     Running   0          7m20s

and my services:

default       kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP                  29m
kube-system   calico-etcd            ClusterIP   10.96.232.136   <none>        6666/TCP                 27m
kube-system   default-http-backend   NodePort    10.105.84.105   <none>        80:30001/TCP             29m
kube-system   kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   29m
orga-1        team-a                 ClusterIP   10.101.4.159    <none>        80/TCP                   8m37s
orga-2        team-b                 ClusterIP   10.105.79.255   <none>        80/TCP                   7m54s

The kube-dns endpoints are available, and so is the service.
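
This can be verified with, for example (output omitted):

kubectl -n kube-system get svc kube-dns
kubectl -n kube-system get endpoints kube-dns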

Why is my network policy not working, and why does the curl to the other pod fail? Can someone help me here?

-- ItFreak
kubernetes

1 Answer

7/1/2019

Please run

curl team-a.orga-1.svc.cluster.local
curl team-b.orga-2.svc.cluster.local

and verify the entries in /etc/resolv.conf (cat /etc/resolv.conf).
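
The reason the original curl failed is the name order: the cluster DNS resolves services as <service>.<namespace>.svc.cluster.local, so from orga-1 the short form is team-b.orga-2, not orga-2.team-b. A minimal check from inside the team-a pod could look like this (output omitted):

kubectl exec -it -n orga-1 team-a -- /bin/bash
# inside the pod: service name first, then namespace
curl team-b.orga-2
curl team-b.orga-2.svc.cluster.local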

If you can reach your pods, then please follow the steps below.

Deny all ingress traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: orga-1
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
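
This policy only lists Ingress under policyTypes, so it blocks incoming connections to pods in orga-1 but does not restrict outbound traffic; that is why curl google.de from team-a keeps working. A quick sanity check, assuming Calico is enforcing the policy, is to curl team-a from the team-b pod; the request should now time out:

kubectl exec -it -n orga-2 team-b -- /bin/bash
# with the deny-ingress policy in place this should fail / time out
curl --max-time 5 team-a.orga-1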

Then allow ingress traffic to the nginx pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
    - from:
      - podSelector:
          matchLabels: {}
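
Keep in mind that a bare podSelector in the from clause only matches pods in the policy's own namespace (orga-1), so team-b in orga-2 would still be blocked. To allow cross-namespace access you also need a namespaceSelector; here is a sketch, assuming you first label the namespace with kubectl label namespace orga-2 name=orga-2 (that label is an assumption, not something created above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-orga-2
  namespace: orga-1
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
    - from:
        # assumes the orga-2 namespace is labeled name=orga-2
        - namespaceSelector:
            matchLabels:
              name: orga-2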

You can find more details in the Kubernetes NetworkPolicy documentation.

Hope this helps.

-- Hanx
Source: StackOverflow