I created a cluster with 2 VMs running RHEL 7.3, following the steps listed below. Kubernetes 1.7 had already been installed with yum. On the first VM (the master):
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
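To make these bridge settings persist across reboots, they can also be written to a sysctl drop-in file (a minimal sketch; the file name k8s.conf is my choice):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system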
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
systemctl start iptables.service
systemctl enable iptables.service
iptables -F
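Note that iptables -F only flushes the filter table; the rules kube-proxy later installs live mostly in the nat table. To inspect what is currently loaded (standard iptables usage):

iptables -S
iptables -t nat -S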
systemctl restart kubelet
kubeadm init --pod-network-cidr 10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
kubectl describe nodes
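kubectl get nodes normally shows the master as NotReady at this point, since no pod network has been installed yet; roughly (the hostname and age are placeholders):

NAME      STATUS     AGE       VERSION
master    NotReady   1m        v1.7.0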
cd ~/Downloads
kubectl apply -f flannel.yml
kubectl apply -f flannel-rbac.yml
kubectl create -f rolebinding.yml
kubectl create -f role.yml
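To verify that flannel started, the kube-system pods and daemonsets can be listed (standard kubectl; the exact pod names depend on the manifests used):

kubectl get pods -n kube-system -o wide
kubectl get daemonsets -n kube-system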
On the second VM (the worker) I repeated the same preparation and then joined the cluster:
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld
systemctl start iptables.service
systemctl enable iptables.service
iptables -F
kubeadm join --token xxxxxx.xxxxxxxxxxxxxx x.x.x.x:6443
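Back on the master, the worker should show up shortly after the join (standard check):

kubectl get nodes

Both nodes should eventually reach Ready once flannel is running on the worker.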
The issue I am having is that DNS is not working as expected. I have been struggling with this for the past two days and would appreciate any help.
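For reference, a quick way to reproduce the failure is to run a throwaway pod and resolve a service name (a sketch; the pod name dnstest is arbitrary, and busybox is pinned to 1.28 because nslookup is broken in newer busybox images):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

If cluster DNS were healthy, this would resolve to the ClusterIP of the kubernetes service.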
Is the KubeDNS addon running?
You should see something like this in your kube-system namespace when you list pods:
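For example (with a kubeadm 1.7 cluster the addon is kube-dns, and its pod runs three containers; the pod name suffix below is illustrative):

kubectl get pods -n kube-system
...
kube-dns-2425271678-k1nsm   3/3   Running   0   1d
...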
If you don't see those pods, try installing the addon: https://coreos.com/kubernetes/docs/latest/deploy-addons.html
Check firewalld on the nodes. I have to leave mine off (because the rules aren't configured properly), and if someone turns it back on, I get DNS issues with my cluster.
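To see whether firewalld is active on a node (standard systemctl and firewall-cmd usage):

systemctl status firewalld
firewall-cmd --state

If it is running and you cannot maintain proper rules for the pod and service networks, stopping and disabling it (as in the steps above) is the quick workaround.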