Integrating an existing Azure VNET with a Kubernetes cluster using ACS-Engine

7/19/2017

Since deploying a k8s cluster through the Azure Portal does not let me attach an existing Azure VNET to it, I went with ACS-Engine. The default k8s networking environment is as follows:

Private VNet    10.0.0.0/8
Master Subnet   10.240.255.0/24
Agent Subnet    10.240.0.0/24
Pod CIDR        10.244.0.0/16
Service CIDR    10.0.0.0/16

What I want to achieve is this:

Private VNet    10.25.0.0/24
Master Subnet   10.25.0.0/27
Agent Subnet    10.25.0.32/27
Pod CIDR        10.25.0.64/27
Service CIDR    10.0.0.0/16 (Default by ACS) 

To do this, I first created an Azure VNET (acs-vnet) with address space 10.25.0.0/24, and inside it two subnets, "msubnet" (10.25.0.0/27) and "asubnet" (10.25.0.32/27).
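For reference, this is roughly how the VNET and subnets can be created with the Azure CLI 2.0 (the resource group name "acs-rg" and the location are placeholders I am using for illustration):

    # Create the resource group that will hold the VNET and the cluster
    az group create --name acs-rg --location eastus

    # Create the VNET with the master subnet
    az network vnet create \
      --resource-group acs-rg \
      --name acs-vnet \
      --address-prefix 10.25.0.0/24 \
      --subnet-name msubnet \
      --subnet-prefix 10.25.0.0/27

    # Add the agent subnet
    az network vnet subnet create \
      --resource-group acs-rg \
      --vnet-name acs-vnet \
      --name asubnet \
      --address-prefix 10.25.0.32/27

Also, I modified the template JSON (the ACS-Engine cluster definition) as follows: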

 "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorVersion": "1.6.2",
      "kubernetesConfig": {
        "clusterSubnet": "10.25.0.64/27"
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "acsengine",
      "vmSize": "Standard_D2_v2",
      "vnetSubnetId": "/subscriptions/...../resourceGroups/.../providers/.../subnets/msubnet",
      "firstConsecutiveStaticIP": "10.25.0.5"
    },
    "agentPoolProfiles": [
      {
        "name": "agent",
        "count": 2,
        "vmSize": "Standard_A1",
        "availabilityProfile": "AvailabilitySet",
        "vnetSubnetId": "/subscriptions/.../resourceGroups/.../providers/.../subnets/asubnet",
        "osType": "Windows"
      }
    ],
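(The template is truncated above; the remaining sections such as linuxProfile and servicePrincipalProfile are omitted.) For completeness, this is roughly how I generate and deploy the templates, assuming a recent acs-engine build with the generate subcommand (older builds take the file directly); the _output path follows from the dnsPrefix "acsengine", and the cluster definition file name is a placeholder:

    # Generate the ARM templates from the cluster definition
    acs-engine generate kubernetes.json

    # Deploy the generated templates into the resource group
    az group deployment create \
      --resource-group acs-rg \
      --template-file _output/acsengine/azuredeploy.json \
      --parameters @_output/acsengine/azuredeploy.parameters.json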

However, it turned out the master was not ready because no Pod CIDR had been assigned:

user@k8s-master-0000000-0:~$ kubectl get node
NAME                    STATUS     AGE       VERSION
10000acs9001            Ready      31m       v1.6.0-alpha.1.2959+451473d43a2072
k8s-master-10008476-0   NotReady   34m       v1.6.2

And when I ran "kubectl describe node" on the master, it showed:

  Ready                 False   Wed, 14 Jul 2017 04:40:38 +0000         Wed, 14 Jul 2017 04:12:03 +0000         KubeletNotReady                 runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: Kubenet does not have netConfig. This is most likely due to lack of PodCIDR
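One quick way to confirm that the master never received a Pod CIDR is to read it straight from the node spec (node name taken from the output above; empty output means none was assigned):

    kubectl get node k8s-master-10008476-0 -o jsonpath='{.spec.podCIDR}'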

Given this result, I suspected it might be due to the size of the subnet assigned as the Pod CIDR, so I tried two more cases.

Case I

Private VNet    10.25.0.0/16
Master Subnet   10.25.0.0/24
Agent Subnet    10.25.1.0/24
Pod CIDR        10.25.2.0/24
Service CIDR    10.0.0.0/16 (Default by ACS) 

Case II

Private VNet    10.24.0.0/14
Master Subnet   10.25.0.0/24
Agent Subnet    10.25.1.0/24
Pod CIDR        10.24.0.0/16
Service CIDR    10.0.0.0/16 (Default by ACS) 
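In both cases the only change to the cluster definition is the clusterSubnet value in kubernetesConfig (plus recreating the VNET and subnets to match); for Case II, for example:

    "kubernetesConfig": {
      "clusterSubnet": "10.24.0.0/16"
    }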

For Case I, it fails, as 10.25.2.0/24 is assigned only to the master and not to the agents. Moreover, the following message came up. I verified that it is not a problem with the service principal, and I checked in Azure that the created route table has no routes defined.

“NoRouteCreated    RouteController failed to create a route”
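For reference, the route table can also be inspected from the Azure CLI (the resource group and route table names below are placeholders; the actual route table is the one created by the ACS-Engine deployment, which `az network route-table list` will show):

    # List the routes the Kubernetes RouteController should have created
    az network route-table route list \
      --resource-group acs-rg \
      --route-table-name k8s-master-routetable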

For Case II, everything works fine at this stage.

With these results, my questions are:

  1. Is there a minimum subnet size that should be assigned to the Pod CIDR?

  2. If I want to attach a different VNET, say 20.0.0.0/8, to the cluster instead of the original 10.0.0.0/8, what are the steps? Would changing the value in “$env:VIP_CIDR=\"10.0.0.0/8\"\n\n” in the generated azuredeploy.json file help?

  3. If I add vnetSubnetId to integrate my existing VNET, say 20.0.0.0/16, into my k8s cluster, will there be any conflict with the preallocated 10.0.0.0/8? (To my understanding, this private VNET is not known to the Azure SDN.)

  4. I have a VM in my existing VNET environment, and I would like to connect from it to a Kubernetes service using the service VIP (the Service CIDR is not known to the Azure SDN). Any suggestions for this?

Any insights would be appreciated.

-- insanecoder