I have a VPC set up on Google Cloud with 192.24.1.0/24 as the subnet, in which I am trying to set up a k8s cluster for the following servers:
VM : VM NAME : Internal IP
VM 1 : k8s-master : 192.24.1.4
VM 2 : my-machine-1 : 192.24.1.1
VM 3 : my-machine-2 : 192.24.1.3
VM 4 : my-machine-3 : 192.24.1.2
Here k8s-master acts as the master and the other 3 machines act as worker nodes. I am using the following command to initialize my cluster:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=k8s-master-external-ip
(I assume I need to change --pod-network-cidr to the VPC subnet.)
I am using Flannel, and I set up networking for my cluster with the following command:
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Now whenever I deploy a new pod, k8s assigns it an IP from 10.244.0.0/16, which is not reachable from my Eureka server, since Eureka runs on the Google Cloud VPC CIDR.
I want to configure k8s so that pods get an IP from the VPC subnet (the internal IP range of the machine where the pod is deployed).
I even tried manually downloading kube-flannel.yml and changing the CIDR to my subnet, but that did not solve the problem.
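For reference, the edit I made was roughly this: in the kube-flannel-cfg ConfigMap inside kube-flannel.yml, I changed the Network field of net-conf.json from the default 10.244.0.0/16 to my VPC subnet (everything else left at the defaults):

  net-conf.json: |
    {
      "Network": "192.24.1.0/24",
      "Backend": {
        "Type": "vxlan"
      }
    }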
Need help to resolve this. Thanks in advance.
Kubernetes needs 3 subnets:
1 subnet for the nodes (this would be your VPC subnet, 192.24.1.0/24)
1 subnet for your pods
Optionally, 1 subnet for Services
These subnets cannot be the same or overlap; each has to be a distinct range.
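As a sketch, a kubeadm invocation that keeps the three ranges separate could look like this (the pod and service CIDRs here are arbitrary private ranges; the only requirement is that they do not overlap your 192.24.1.0/24 node subnet or each other):

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.24.1.4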
I believe what's missing in your case are routes that let the pods talk to each other across nodes. Have a look at this guide for setting up the routes you need.
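On GCP, such routes are typically one VPC route per node pointing that node's pod range at the node instance. A rough sketch for one node (the route name, VPC network name, zone, and the per-node pod CIDR 10.244.1.0/24 are assumptions; substitute the values from your cluster):

gcloud compute routes create k8s-pods-my-machine-1 \
  --network=my-vpc \
  --destination-range=10.244.1.0/24 \
  --next-hop-instance=my-machine-1 \
  --next-hop-instance-zone=us-central1-a

You would repeat this for each node with the pod CIDR Kubernetes assigned to it.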