Using kubeadm with a two-node cluster on VirtualBox CentOS 7 VMs. I have an app written in R and a MySQL database, each in its own pod. I've successfully followed instructions to set up the nginx ingress controller so that the app can be reached from outside the VMs by my local machine. Check :)
However, when the app (R) now tries to reach the mysql service, the name doesn't resolve. Same when pinging 'mysql' from bash. This no longer works:
mydb <- dbConnect(MySQL(), user = 'root', password = 'password',
                  dbname = 'prototype', host = 'mysql')
Instead I have to use the pod's IP, which does work:
mydb <- dbConnect(MySQL(), user = 'root', password = 'password',
                  dbname = 'prototype', host = '10.244.1.233')
However, isn't this going to change upon reboots and system changes? I'd like a more static way to refer to the mysql db.
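For reference, a pod IP does change on reschedule, but a Service's ClusterIP is allocated once and kept for the Service's lifetime, and the name mysql (or mysql.default.svc.cluster.local fully qualified) is the stable handle cluster DNS is meant to provide. As a quick sketch, the stable IP can be read with:
$ kubectl get svc mysql -o jsonpath='{.spec.clusterIP}'
10.96.138.132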
$ kubectl get endpoints
NAME         ENDPOINTS                             AGE
kubernetes   192.168.56.101:6443                   5h
mysql        10.244.1.233:3306                     41m
r-user-app   10.244.1.232:8787,10.244.1.232:3838   2h
$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP                         5h
mysql        ClusterIP      10.96.138.132   <none>        3306/TCP                        28m
r-user-app   LoadBalancer   10.100.228.80   <pending>     3838:32467/TCP,8787:31754/TCP   2h
$ kubectl get ing
NAME         HOSTS              ADDRESS   PORTS     AGE
r-user-app   storage.test.com             80, 443   3h
$ kubectl describe service mysql
Name:              mysql
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=neurocore,tier=mysql
Type:              ClusterIP
IP:                10.96.138.132
Port:              <unset>  3306/TCP
TargetPort:        3306/TCP
Endpoints:         10.244.1.236:3306
Session Affinity:  None
Events:            <none>
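One sanity check, since a Service only forwards to pods matching its selector: list the pods carrying those labels (taken from the Selector line above) and confirm their IP matches the Endpoints line.
$ kubectl get pods -l app=neurocore,tier=mysql -o wide
The pod IP this reports should match the 10.244.1.236 endpoint; if it doesn't, the selector is matching the wrong pod.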
$ ps auxw | grep kube-proxy
root 1914 0.1 0.3 44848 21668 ? Ssl 11:03 0:20 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root 29218 0.0 0.0 112660 980 pts/1 R+ 14:23 0:00 grep --color=auto kube-proxy
$ iptables-save | grep mysql
-A KUBE-SEP-7P27CEQL6WJZRBQ5 -s 10.244.1.236/32 -m comment --comment "default/mysql:" -j KUBE-MARK-MASQ
-A KUBE-SEP-7P27CEQL6WJZRBQ5 -p tcp -m comment --comment "default/mysql:" -m tcp -j DNAT --to-destination 10.244.1.236:3306
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.138.132/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.138.132/32 -p tcp -m comment --comment "default/mysql: cluster IP" -m tcp --dport 3306 -j KUBE-SVC-M7XME3WTB36R42AM
-A KUBE-SVC-M7XME3WTB36R42AM -m comment --comment "default/mysql:" -j KUBE-SEP-7P27CEQL6WJZRBQ5
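Since these iptables rules show kube-proxy has wired up the ClusterIP correctly, the remaining suspect is DNS. One way to test it directly, bypassing the pod's resolver (10.96.0.10 is the usual kube-dns ClusterIP on kubeadm clusters; substitute whatever the first command reports):
$ kubectl get svc -n kube-system kube-dns
$ nslookup mysql.default.svc.cluster.local 10.96.0.10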
Based on your svc, you should be able to reach mysql:3306
from within the cluster.
Have you tried kubectl exec -it r-user-app -- bash
and resolving mysql from within the R app container? host mysql
should return something like "mysql.default.svc.cluster.local has address 10.96.138.132" (for example), or else an error. If there isn't an error, then maybe dbConnect() doesn't like the host name?
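Something like this, for instance (use the actual pod name from kubectl get pods; getent works even when host or dig isn't installed in the image):
$ kubectl exec -it r-user-app -- bash
# inside the container:
getent hosts mysql                             # short name, same namespace
getent hosts mysql.default.svc.cluster.local   # fully qualified form
cat /etc/resolv.conf                           # should list the cluster DNS server and search domains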
This is actually an issue with flannel. When I switched to Weave as the CNI, service discovery and kube-dns worked fine.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
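If it helps, one way to confirm DNS recovers after the CNI change is to restart the DNS pod so it gets an address on the new pod network, then resolve from a throwaway pod (the test pod name and busybox image here are just examples):
$ kubectl delete pod -n kube-system -l k8s-app=kube-dns
$ kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup mysql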
Looks like your service is configured correctly.
Tried ping 10.96.138.132, no response :(
Each Service has a static address, so not being able to ping it is normal: it is just a virtual address, and requests to it are processed a bit differently than requests to real addresses.
I see only 2 reasons why you could have this problem:
1. DNS resolution. Try using 10.96.138.132 as the MySQL address instead of mysql. If that fixes your problem, it is a resolving problem. BTW, you can use the Service IP instead of the DNS name; as I already said, it is static.
2. Check the kube-proxy logs in the kube-system namespace; maybe you will get additional info for debugging.
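For instance (the label selectors below are the ones kubeadm's standard add-ons carry; adjust if yours differ):
$ kubectl logs -n kube-system -l k8s-app=kube-proxy
$ kubectl logs -n kube-system -l k8s-app=kube-dns -c kubedns
$ kubectl logs -n kube-system -l k8s-app=kube-dns -c dnsmasq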