NetworkPolicy on Kubernetes to allow only the UI to talk to the Backend?

12/27/2019

The caveat seems to be that the backend (Flask service) needs to talk to MongoDB to fetch the data. If, in the network policy, I set the podSelector to my Flask service, add the UI to the ingress rules, and add the UI and MongoDB to the egress rules, it still does not work.

NAME                                            READY   STATUS      RESTARTS   AGE
pod/xyz-mongodb-replicaset-0                    1/1     Running     0          10d
pod/xyz-mongodb-replicaset-1                    1/1     Running     0          7d
pod/xyz-mongodb-replicaset-2                    1/1     Running     0          6d23h
pod/xyz-svc-7b589fbd4-25qd6                     1/1     Running     0          20h
pod/xyz-svc-7b589fbd4-9n8jh                     1/1     Running     0          20h
pod/xyz-svc-7b589fbd4-r5q9g                     1/1     Running     0          20h
pod/xyz-ui-7d6f44b57b-8s4mq                     1/1     Running     0          3d20h
pod/xyz-ui-7d6f44b57b-bl8r6                     1/1     Running     0          3d20h
pod/xyz-ui-7d6f44b57b-jwhc2                     1/1     Running     0          3d20h
pod/mongodb-backup-check                        1/1     Running     0          20h

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)     AGE
service/xyz-mongodb-replicaset   ClusterIP   None          <none>        27017/TCP   10d
service/xyz-prod-service         ClusterIP   10.3.92.123   <none>        8000/TCP    20h
service/xyz-prod-ui              ClusterIP   10.3.49.132   <none>        80/TCP      10d

--Deployment--
--ReplicaSet--
--StatefulSet--

My Ingress looks like this -

Name:             xyz-prod-svc
Namespace:        prod-xyz
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  prod terminates xyz.prod.domain.com
Rules:
  Host                      Path  Backends
  ----                      ----  --------
  xyz.prod.domain.com
                            /             xyz-prod-u:80 (10.7.2.4:80,10.7.4.22:80,10.7.5.24:80)
                            /endpoint4    xyz-prod-servic:8000 (IPS...)
                            /endpoint3    xyz-prod-servic:8000 (IPS...)
                            /endpoint2    xyz-prod-servic:8000 (IPS...)
                            /endpoint1    xyz-prod-servic:8000 (IPS...)

Do I have to specify my Ingress in the podSelector option of my Network Policy?

So far my Network Policy looks like this -

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: application-network-policy
  namespace: app-prod-xyz
  labels:
    app: application-network-policy
spec:
  podSelector:
    matchLabels:
      run: xyz-svc
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: xyz-ui
    - podSelector:
        matchLabels:
          app: application-health-check
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: xyz-ui
    - podSelector:
        matchLabels:
          app: xyz-mongodb-replicaset
    - podSelector:
        matchLabels:
          app: mongodb-replicaset

Troubleshooting:

I have already tried spinning up a pod and adding it to the ingress rules. I was able to ping xyz-svc from that pod while it was allowed in the ingress, and was denied once I removed it from the ingress, which shows that the network policy itself is working.
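
A minimal sketch of that kind of test (the pod name np-test, the app=xyz-ui label, the prod-xyz namespace and the busybox image are illustrative; substitute whatever your cluster actually uses):

# throwaway pod carrying a label that the policy's ingress rules allow
kubectl run np-test --rm -it --restart=Never --image=busybox -n prod-xyz -l app=xyz-ui -- sh
# from inside the pod, try to reach the backend service
wget -qO- http://xyz-prod-service:8000
# repeat with a label that is not in the ingress rules; the request then typically hangs and times out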

I want to understand labels, selectors and matchLabels.

I have read through these links, but I want an intuitive explanation for my NetworkPolicy, along these lines:

podSelector: the pods to which the network policy is applied (selected by their labels, e.g. app, tier, or run, rather than by Deployment name)

ingress: incoming traffic that is allowed to reach the above-mentioned pods (anything not explicitly allowed is denied)

egress: outgoing traffic that is allowed to leave the above-mentioned pods. Which labels do the destination pods need to match? (See the sketch below.)

namespaceSelector?

podSelector?
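
For context, my current mental model of how these line up is roughly the following; the Deployment, image and namespace label below are illustrative placeholders, not my real manifests. Labels are defined on a workload's pod template, and every selector in a NetworkPolicy matches on those pod labels, never on Deployment or Service names.

# labels are defined on the pod template of a workload...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz-svc                  # the Deployment name is not what selectors match on
spec:
  selector:
    matchLabels:
      app: xyz-svc
  template:
    metadata:
      labels:
        app: xyz-svc             # <- NetworkPolicy selectors match these labels
    spec:
      containers:
      - name: backend
        image: example/backend:latest   # placeholder image

# ...and the NetworkPolicy refers to those labels via matchLabels
spec:
  podSelector:                   # which pods this policy is applied to
    matchLabels:
      app: xyz-svc
  ingress:
  - from:
    - podSelector:               # which pods may send traffic in, matched by their labels
        matchLabels:
          app: xyz-ui
    - namespaceSelector:         # alternatively, allow all pods from namespaces with this label
        matchLabels:
          team: xyz              # placeholder namespace label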

EDIT: Ingress YAML

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-expires: "3600"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
  name: xyz-{{ .Values.environment }}-ingress-svc
  namespace: acoe-{{ .Values.environment }}-xyz
  labels:
    app: xyz-{{ .Values.environment }}-ingress-svc
spec:
  tls:
  - hosts:
    - xyz{{ .Values.ingressDomain }}
    secretName: {{ .Values.tlsSecret }}
  rules:
  - host: xyz{{ .Values.ingressDomain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: xyz-{{ .Values.environment }}-ui
          servicePort: 80
      - path: /endpoint4
        backend:
          serviceName: xyz-{{ .Values.environment }}-svc
          servicePort: 8000
      - path: /endpoint3
        backend:
          serviceName: xyz-{{ .Values.environment }}-svc
          servicePort: 8000
      - path: /endpoint2
        backend:
          serviceName: xyz-{{ .Values.environment }}-svc
          servicePort: 8000
      - path: /endpoint1
        backend:
          serviceName: xyz-{{ .Values.environment }}-svc
          servicePort: 8000
-- technazi
kubernetes
kubernetes-networkpolicy
microservices

1 Answer

12/27/2019

Does it work without the NetworkPolicy applied? If it does not work even without the NetworkPolicy, the problem is likely elsewhere (e.g. the wrong service endpoint being used for communication), because in the absence of any NetworkPolicy selecting them, pods accept all traffic from other pods in the same namespace.
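
One quick way to rule the policy in or out (names taken from your question; adjust the namespace if yours differs):

# list the NetworkPolicies actually active in the namespace
kubectl get networkpolicy -n app-prod-xyz
# temporarily remove the policy, retest the UI -> backend call, then re-apply it from your manifest
kubectl delete networkpolicy application-network-policy -n app-prod-xyz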

A podSelector like the following would apply the NetworkPolicy to all pods in the namespace. If I understood you correctly, you actually want the policy applied only to the backend pods (xyz-svc).

spec:
  podSelector: {}

So the solution would probably be something like this (assuming the backend service pods carry the label app=xyz-svc):

spec:
  podSelector:
    matchLabels:
      app: xyz-svc
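
Putting it together, the whole policy would look roughly like this, reusing the ingress/egress rules from your question. This still assumes app: xyz-svc and app: xyz-ui are labels your pods actually carry; kubectl get pods -n app-prod-xyz --show-labels shows what is really there, and if the backend pods only have run: xyz-svc, match on that instead.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: application-network-policy
  namespace: app-prod-xyz
spec:
  podSelector:
    matchLabels:
      app: xyz-svc                      # policy applies only to the backend pods
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: xyz-ui                   # UI pods may connect to the backend
    - podSelector:
        matchLabels:
          app: application-health-check # health-check pods may connect as well
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: xyz-ui
    - podSelector:
        matchLabels:
          app: xyz-mongodb-replicaset   # keep whichever label the MongoDB pods really carry
    - podSelector:
        matchLabels:
          app: mongodb-replicaset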
-- Phil
Source: StackOverflow