Combining several network policies for fine-grained, flag-based control

9/19/2019

I want to work out network policies in a Kubernetes cluster to get fine-grained, generalized access control between modules.

I've prepared a Kubernetes setup with 2 manifests:

  1. A 2-container nginx pod, listening on two ports (80 and 81) and returning some generic data
  2. Console pods with two on/off labels, "allow80" and "allow81". If "allow80" is present, a console pod can access dual-nginx via the service entry point on port 80; the same applies to "allow81" and port 81

I have 3 console pods:

  1. console-full - can access both ports 80 and 81, [allow80, allow81]
  2. console-partial - port 80 on, 81 off, [allow80]
  3. console-no-access - both 80 and 81 restricted, []

Test setup. It will create all necessary components in the "net-policy-test" namespace.

To create:

kubectl apply -f net_policy_test.yaml

To cleanup:

kubectl delete -f net_policy_test.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: net-policy-test
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx1
  namespace: net-policy-test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>nginx, instance1</title>
    </head>
    <body>
      <h1>nginx, instance 1, port 80</h1>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx2
  namespace: net-policy-test
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
    <title>nginx, instance2</title>
    </head>
    <body>
      <h1>nginx, instance 2, port 81</h1>
    </body>
    </html>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf1
  namespace: net-policy-test
data:
  default.conf: |
    server {
        listen       80;
        server_name  localhost;


        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf2
  namespace: net-policy-test
data:
  default.conf: |
    server {
        listen       81;
        server_name  localhost;


        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dual-nginx
  namespace: net-policy-test
  labels:
    app: dual-nginx
    environment: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dual-nginx
  template:
    metadata:
      labels:
        app: dual-nginx
        name: dual-nginx
    spec:
      containers:
      - image: nginx
        name: nginx1
        ports:
        - name: http1
          containerPort: 80
        volumeMounts:
          - name: html1
            mountPath: /usr/share/nginx/html
          - name: config1
            mountPath: /etc/nginx/conf.d
      - image: nginx
        name: nginx2
        ports:
        - name: http2
          containerPort: 81
        volumeMounts:
          - name: html2
            mountPath: /usr/share/nginx/html
          - name: config2
            mountPath: /etc/nginx/conf.d
      volumes:
        - name: html1
          configMap:
            name: nginx1
        - name: html2
          configMap:
            name: nginx2
        - name: config1
          configMap:
            name: nginx-conf1
        - name: config2
          configMap:
            name: nginx-conf2

---
apiVersion: v1
kind: Service
metadata:
  name: dual-nginx
  namespace: net-policy-test
spec:
  selector:
    app: dual-nginx
  ports:
  - name: web1
    port: 80
    targetPort: http1
  - name: web2
    port: 81
    targetPort: http2
---
# this console deployment will have full access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-full
  namespace: net-policy-test
  labels:
    app: console-full
    environment: test
    nginx-access: full
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-full
  template:
    metadata:
      labels:
        app: console-full
        name: console-full
        allow80: "true"
        allow81: "true"
    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]

---
# this console deployment will have partial access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-partial
  namespace: net-policy-test
  labels:
    app: console-partial
    environment: test
    nginx-access: partial
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-partial
  template:
    metadata:
      labels:
        app: console-partial
        name: console-partial
        allow80: "true"

    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]
---
# this console deployment will have no access to nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-no-access
  namespace: net-policy-test
  labels:
    app: console-no-access
    environment: test
    nginx-access: none
spec:
  replicas: 1
  selector:
    matchLabels:
      app: console-no-access
  template:
    metadata:
      labels:
        app: console-no-access
        name: console-no-access
    spec:
      containers:
      - image: alpine:3.9
        name: main
        command: ["sh", "-c", "apk update && apk add curl && sleep 10000"]

And policies, again, to apply:

kubectl apply -f policies.yaml

To cleanup:

kubectl delete -f policies.yaml

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx-restrict80
spec:
  podSelector:
    matchLabels:
      app: "dual-nginx"
  policyTypes:
    - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            allow80: "true"
    ports:
      - protocol: TCP
        port: 80
  - from:
      - podSelector:
          matchLabels:
            allow81: "true"
    ports:
      - protocol: TCP
        port: 81

If I leave just one "from" condition, for one port, it works as expected: I do or don't have access to that port, depending on the presence of the corresponding label, allow80 or allow81.
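
For reference, a minimal sketch of that single-rule variant (only the allow80 rule kept):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: nginx-restrict80
spec:
  podSelector:
    matchLabels:
      app: "dual-nginx"
  policyTypes:
    - Ingress
  ingress:
  - from:
      - podSelector:
          matchLabels:
            allow80: "true"
    ports:
      - protocol: TCP
        port: 80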

If both conditions are present, the partial pod has access to both ports, 80 and 81. My reading of the NetworkPolicy spec is that each entry under "ingress" is a separate rule combining its own "from" and "ports", so a pod matching only allow80 should only be granted port 80. Here is what I see instead:

  1. Switch to the right namespace:
kubectl config set-context --current --namespace=net-policy-test
  2. Check the labels:
kubectl get pods -l allow80
NAME                               READY   STATUS    RESTARTS   AGE
console-full-78d5499959-p5kbb      1/1     Running   1          4h14m
console-partial-6679745d79-kbs5w   1/1     Running   1          4h14m

kubectl get pods -l allow81
NAME                            READY   STATUS    RESTARTS   AGE
console-full-78d5499959-p5kbb   1/1     Running   1          4h14m
  3. Check access from pod "console-partial-...", which should have access to port 80 but not 81:
kubectl exec -ti console-partial-6679745d79-kbs5w curl http://dual-nginx:80
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance1</title>
</head>
<body>
  <h1>nginx, instance 1, port 80</h1>
</body>
</html>

kubectl exec -ti console-partial-6679745d79-kbs5w curl http://dual-nginx:81
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance2</title>
</head>
<body>
  <h1>nginx, instance 2, port 81</h1>
</body>
</html>

The partial-access pod has access to both ports 80 and 81.

The pod having no labels (console-no-access-) has no access to either port, which is expected.

This resembles what is described in this presentation: YouTube, Securing Cluster Networking with Network Policies - Ahmet Balkan, Google. So at least one flag, "allow80" or "allow81", gives access to everything. How come?

Now, questions:

  1. Is it expected behaviour?
  2. How can I build simple flag-based access control that can be automated, or handed to admins who could easily produce these policies in large numbers?
-- Vetal
firewall
kubernetes
kubernetes-networkpolicy

2 Answers

9/23/2019

I've recreated the cluster in Azure with Calico network policies and Azure CNI, and it started to work properly for Linux -> Linux communication:

network_plugin="azure" &&\
network_policy="calico"
az aks create ... \
--network-plugin ${network_plugin} \
--network-policy ${network_policy}
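
To confirm the Calico components actually came up (assuming the standard k8s-app=calico-node label that Calico's manifests apply), something like this can be used:

kubectl get pods -n kube-system -l k8s-app=calico-node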

Now a Windows container is put in place on the client side. When policies are enabled, neither port is accessible from the Windows shell to the test Linux container. That is the beginning of another story, I presume.

-- Vetal
Source: StackOverflow

9/20/2019

TLDR: It's working on my cluster exactly how you wanted it to work.

Now a bit of explanations and examples.

I've created a cluster on GKE and enabled Network Policies using the following command:

gcloud beta container clusters create test1 --enable-network-policy --zone us-central1-a
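
To double-check that the policy addon is active on the cluster created above, something along these lines should print the provider:

gcloud container clusters describe test1 --zone us-central1-a \
    --format="value(networkPolicy.provider)"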

Then I copied your exact deployment YAML and NetworkPolicy YAML with no changes and deployed them.

$ kubectl apply -f policy-test.yaml
namespace/net-policy-test created
configmap/nginx1 created
configmap/nginx2 created
configmap/nginx-conf1 created
configmap/nginx-conf2 created
deployment.apps/dual-nginx created
service/dual-nginx created
deployment.apps/console-full created
deployment.apps/console-partial created
deployment.apps/console-no-access created

$ kubectl apply -f policy.yaml
networkpolicy.networking.k8s.io/nginx-restrict80 configured

The Network Policy that you wrote worked exactly as you wanted.

console-partial was only able to access nginx on port 80. console-no-access had no access to any nginx.

I think this is because GKE uses Calico as its CNI:

Google Container Engine (GKE) also provides beta support for Network Policies using the Calico networking plugin

You, meanwhile, used --network-policy azure, which is Azure CNI's own policy implementation. I'm unable to test this on AKS, but you might try changing that to calico. This is explained here: Create an AKS cluster and enable network policy:

  • Creates an AKS cluster in the defined virtual network and enables network policy.
  • The azure network policy option is used. To use Calico as the network policy option instead, use the --network-policy calico parameter.
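
A sketch of what that could look like (the resource group and cluster name here are placeholders):

az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-policy calico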

As for automating flags, maybe this would work for you.

You can see the labels here:

$ kubectl describe pods console-partial | grep -A3 Labels
Labels:             allow80=true
                    app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

Then I started to edit the labels using kubectl label.

First, I removed the label allow80="true":

$ kubectl label pods console-partial-6c6dc7d94f-v8k5q allow80-
pod/console-partial-6c6dc7d94f-v8k5q labeled
$ kubectl describe pods console-partial | grep -A3 Labels
Labels:             app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

and added the label allow81="true":

$ kubectl label pods console-partial-6c6dc7d94f-v8k5q "allow81=true"
pod/console-partial-6c6dc7d94f-v8k5q labeled

$ kubectl describe pods console-partial | grep -3 Labels
Labels:             allow81=true
                    app=console-partial
                    name=console-partial
                    pod-template-hash=6c6dc7d94f

You can see from the test that the policy works as you wanted:

$ kubectl exec -it console-partial-6c6dc7d94f-v8k5q curl http://dual-nginx:81
<!DOCTYPE html>
<html>
<head>
<title>nginx, instance2</title>
</head>
<body>
  <h1>nginx, instance 2, port 81</h1>
</body>
</html>
$ kubectl exec -it console-partial-6c6dc7d94f-v8k5q curl http://dual-nginx:80
^Ccommand terminated with exit code 130
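
The curl to port 80 now hangs and has to be interrupted (exit code 130), which shows the connection is blocked, just as intended.

One caveat: labels added with kubectl label directly on a pod are lost as soon as the Deployment replaces that pod. To make a flag survive restarts, you could patch the Deployment's pod template instead (a sketch, using the names from your manifest):

kubectl patch deployment console-partial -n net-policy-test --type merge \
    -p '{"spec":{"template":{"metadata":{"labels":{"allow81":"true"}}}}}'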

I hope this is remotely helpful.

-- Crou
Source: StackOverflow