I have a problem with service (DNS) discovery in Kubernetes 1.14 on Ubuntu Bionic.
Right now my two pods communicate using IP addresses. How can I enable CoreDNS for service (DNS) discovery?
Here is the output of kubectl for the service and pods in the kube-system namespace:
kubectl get pods,svc --namespace=kube-system | grep dns
pod/coredns-fb8b8dccf-6plz2 1/1 Running 0 6d23h
pod/coredns-fb8b8dccf-thxh6 1/1 Running 0 6d23h
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d23h
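To confirm the pods are actually configured to use this DNS service, you can check resolv.conf inside one of them (the pod name is a placeholder):

kubectl exec -it <your-pod> -- cat /etc/resolv.conf
# Typically shows the kube-dns ClusterIP from the output above, e.g.:
# nameserver 10.96.0.10
# search default.svc.cluster.local svc.cluster.local cluster.local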
These are the steps I followed to set up the master node. First, I installed Docker:

apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
Then I installed the Kubernetes packages:

apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
Pulled the control-plane images and initialized the cluster (swap must be off for the kubelet to start):

kubeadm config images pull
swapoff -a
kubeadm init
Set up kubeconfig as kubeadm init suggests:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Enabled bridged traffic to iptables and installed the Weave Net CNI plugin:

sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pods --all-namespaces
Docker was already installed on the worker node, so I went straight to installing the Kubernetes packages there:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
swapoff -a
Then I joined the worker node to the master.
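The join command is the one printed at the end of kubeadm init; it looks roughly like this (address, token, and hash are placeholders):

kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>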
I think everything was set up correctly by default. My misunderstanding was that I could call a server running in one pod from another pod using the container name and port I had specified in the spec; instead, I should use the service name and port (see the check after the manifests below).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-server1-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-server1
    spec:
      hostname: node-server1
      containers:
      - name: node-server1
        image: bvenkatr/node-server1:1
        ports:
        - containerPort: 5551
kind: Service
apiVersion: v1
metadata:
  name: node-server1-service
spec:
  selector:
    app: node-server1
  ports:
  - protocol: TCP
    port: 5551
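With the Service in place, the name is resolvable cluster-wide. A quick way to check from a throwaway pod (assuming the manifests above are applied in the default namespace; busybox:1.28 is just a convenient image whose nslookup works reliably):

kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- \
    nslookup node-server1-service
# Other pods in the same namespace can then reach node-server1-service:5551;
# from other namespaces, use node-server1-service.default.svc.cluster.local:5551.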
As of Kubernetes v1.12, CoreDNS is the recommended DNS server, replacing kube-dns. In Kubernetes, CoreDNS is installed with the following default Corefile configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
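The kubernetes plugin block above is what provides name-based discovery: it answers queries under the cluster.local zone. A sketch of the naming scheme (the names below are examples):

# <service>.<namespace>.svc.cluster.local resolves to the service ClusterIP:
nslookup kube-dns.kube-system.svc.cluster.local
# With "pods insecure", <pod-ip-with-dashes>.<namespace>.pod.cluster.local
# resolves to a pod IP, e.g. 10-32-0-5.default.pod.cluster.local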
More info you can find here.
You can verify your env by running:
kubectl get cm coredns -n kube-system -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
and:

kubeadm config view
...
dns:
  type: CoreDNS
During kubeadm init you should have noticed:
[addons] Applied essential addon: CoreDNS
If you are moving from kube-dns to CoreDNS, make sure to set the CoreDNS feature gate to true during the upgrade. For example, here is what a v1.11.0 upgrade would look like:

kubeadm upgrade apply v1.11.0 --feature-gates=CoreDNS=true
In Kubernetes version 1.13 and later, the CoreDNS feature gate is removed and CoreDNS is used by default. More information here.
You can see if your coredns pod is working properly by running:
kubectl logs <your coredns pod> -n kube-system
.:53
2019-05-02T13:32:41.438Z [INFO] CoreDNS-1.3.1
CoreDNS-1.3.1
...
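The CoreDNS pods also carry the k8s-app=kube-dns label (kept for compatibility with kube-dns), so if you don't want to look up a pod name first, a label selector works too:

kubectl logs -n kube-system -l k8s-app=kube-dns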