Subnetting within Kubernetes Cluster

5/6/2020

I have a couple of Deployments, say Deployment A and Deployment B. The Kubernetes subnet is 10.0.0.0/20. My requirement: is it possible to have all pods in Deployment A get IPs from 10.0.1.0/24 and all pods in Deployment B from 10.0.2.0/24? This keeps the networking clean, and a particular Deployment can be identified from the IP alone.

-- amp
kubernetes

1 Answer

5/6/2020

A Deployment in Kubernetes is a high-level abstraction that relies on a controller to manage basic objects. That is different from the objects themselves, such as a Pod or Service.

If you look at the Deployment spec in the Kubernetes API Overview, you will notice that there is no such thing as defining subnets or IP addresses specific to a Deployment, so you cannot assign subnets to Deployments.

The Kubernetes idea is that Pods are ephemeral. You should not try to identify resources by IP address, since IPs are assigned dynamically and a Pod that dies comes back with a different one. If you are after unique, stable network identifiers, look at something like StatefulSets instead.
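If it helps, here is a minimal sketch of that alternative (the names web and nginx are placeholders): a headless Service plus a StatefulSet gives each pod a stable DNS name such as web-0.web, independent of whatever IP it happens to get.

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None          # headless: exposes per-pod DNS records instead of a VIP
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web         # links the pods to the headless service above
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx

The pods are then reachable as web-0.web and web-1.web within the namespace, which is the stable identifier you might otherwise be tempted to encode in an IP.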

While Kubernetes does not support this feature, I found a workaround using Calico and its IP pools (see the Calico "Migrate pools" documentation).

First you need to have calicoctl installed. There are several ways to do that, described in the install calicoctl docs.

I chose to install calicoctl as a Kubernetes pod:

 kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml

To make the commands shorter you can set up an alias:

alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
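A quick way to check the alias works end to end (the exact output will of course depend on your installation):

 calicoctl version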

I created two YAML files to set up the IP pools:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool1 
spec:
  cidr: 10.0.0.0/24
  ipipMode: Always
  natOutgoing: true

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool2 
spec:
  cidr: 10.0.1.0/24
  ipipMode: Always
  natOutgoing: true
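One caveat worth knowing, and an assumption on my part since it is not exercised in the outputs below: Calico hands out addresses to nodes in blocks, /26 by default, so a /24 pool splits into only four blocks. On clusters with more nodes you can set an explicit blockSize in the pool spec, for example:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: pool1
spec:
  cidr: 10.0.0.0/24
  blockSize: 28            # a /24 then yields 16 per-node blocks instead of 4
  ipipMode: Always
  natOutgoing: true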

Then you have to apply the configuration. Since my YAML files were on the host filesystem and not in the calicoctl pod itself, I piped the YAML into the command:

➜  cat ippool1.yaml | calicoctl apply -f-
Successfully applied 1 'IPPool' resource(s)
➜  cat ippool2.yaml | calicoctl apply -f-
Successfully applied 1 'IPPool' resource(s)

Listing the IP pools, you will notice the newly added ones:

  calicoctl get ippool -o wide
NAME                  CIDR             NAT    IPIPMODE   VXLANMODE   DISABLED   SELECTOR   
default-ipv4-ippool   192.168.0.0/16   true   Always     Never       false      all()      
pool1                 10.0.0.0/24      true   Always     Never       false      all()      
pool2                 10.0.1.0/24      true   Always     Never       false      all() 

Then you can specify which pool you want for your Deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment1-pool1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        # tell the Calico CNI plugin which pool to allocate from
        cni.projectcalico.org/ipv4pools: "[\"pool1\"]"
    spec:
      containers:
        - name: nginx
          image: nginx
---
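As a side note, Calico also accepts the same annotation on a Namespace, so if your two Deployments live in separate namespaces you can pin the pool once per namespace rather than per pod template. A sketch, with team-a as a hypothetical namespace name:

apiVersion: v1
kind: Namespace
metadata:
  name: team-a              # placeholder; use your own namespace
  annotations:
    cni.projectcalico.org/ipv4pools: "[\"pool1\"]"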

I created a similar Deployment called deployment2-pool2 that uses pool2. The (abridged) output of kubectl get pods -o wide shows the expected addressing:

deployment1-pool1-6d9ddcb64f-7tkzs    1/1     Running   0          71m   10.0.0.198        acid-fuji   
deployment1-pool1-6d9ddcb64f-vkmht    1/1     Running   0          71m   10.0.0.199        acid-fuji   
deployment2-pool2-79566c4566-ck8lb    1/1     Running   0          69m   10.0.1.195        acid-fuji   
deployment2-pool2-79566c4566-jjbsd    1/1     Running   0          69m   10.0.1.196        acid-fuji   

It is also worth mentioning that while testing this I found that if a Deployment has many replicas and its assigned pool runs out of IPs, Calico will then allocate from a different pool.
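If you want to see how close a pool is to exhaustion before that fallback kicks in, calicoctl can report IPAM usage per pool:

 calicoctl ipam show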

-- acid_fuji
Source: StackOverflow