I want to create a mixed Kubernetes cluster, with some local nodes and some EC2 nodes. The master is on the local network, and the Docker image has to run on the bridge network.
Everything works fine for the local nodes, but the pods launched on EC2 don't have network access.
Here is a sample YAML file:
---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
  name: "test"
spec:
  containers:
  - image: "my-image"
    imagePullPolicy: "IfNotPresent"
    name: "my-test"
  hostNetwork: false
If I set hostNetwork to true, the pods launch fine in both environments (with network access), but an application requirement says I have to start it on the bridge network.
kubectl version: 1.13.5
docker version: 18.06.1-ce
k8s network: flannel
If I start that Docker image manually on the bridge network, everything is fine both locally and on EC2; the network is accessible. So it must be something related to the Kubernetes configuration.
Do you have any idea?
Thank you!
I managed to solve the issue by adding the line below to the pod's spec file:
dnsPolicy: "Default"
This inherits the name-resolution configuration from the node the pod runs on. By default, dnsPolicy is set to ClusterFirst. More details are available here: https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#inheriting-dns-from-the-node
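For completeness, here is a minimal sketch of the full pod spec with the fix applied, using the same placeholder image and names as the sample above:

---
apiVersion: "v1"
kind: "Pod"
metadata:
  labels:
    jenkins: "slave"
  name: "test"
spec:
  containers:
  - image: "my-image"
    imagePullPolicy: "IfNotPresent"
    name: "my-test"
  hostNetwork: false
  # Default makes the pod inherit the node's resolv.conf instead of
  # pointing DNS lookups at the cluster DNS service (the ClusterFirst behavior).
  dnsPolicy: "Default"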