I have a client application running in a pod on Kubernetes 1.11.1 that should connect to a RabbitMQ cluster.
I would like to create a service that round-robins between the IP addresses of two hosts. The hosts are not pods; they are external virtual machines acting as a RabbitMQ cluster.
I created a Service and an Endpoints object that share the same name (rabbitmq-service) so that they match. However, from a pod it is not possible to resolve servicename.default.svc via DNS, while it is possible to resolve both host names via names that contain the service name, as in N-N-N-N.servicename.namespace.svc.clusterdomain (where N-N-N-N resembles an IP address).
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-service
spec:
  ports:
  - name: http
    protocol: TCP
    port: 5672
    targetPort: 5672
endpoints.yaml:
apiVersion: v1
kind: Endpoints
metadata:
  name: rabbitmq-service
  namespace: default
subsets:
- addresses:
  - ip: 10.112.63.98
  - ip: 10.112.63.99
  ports:
  - name: http
    port: 5672
    protocol: TCP
How can I configure the servicename.default.svc resolution?
Thank you.
I applied your configurations and this is what I get:
kubectl get endpoints
NAME               ENDPOINTS                             AGE
kubernetes         192.168.99.116:8443                   45s
rabbitmq-service   10.112.63.98:5672,10.112.63.99:5672   13s
I also tried to resolve the DNS name of rabbitmq-service from another pod using the commands below:
kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml
kubectl exec -ti busybox -- nslookup rabbitmq-service.default
The above nslookup command in the pod gave the following output:
Defaulting container name to busybox.
Use 'kubectl describe pod/busybox -n default' to see all of the containers in this pod.
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: rabbitmq-service.default
Address 1: 10.101.126.122 rabbitmq-service.default.svc.cluster.local
which means that the service name is resolvable. The problem is therefore likely in reaching your RabbitMQ hosts from the machines running the Kubernetes cluster.
Please refer to this document if you suspect an issue with Kubernetes CoreDNS.
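If DNS resolves but connections still fail, a quick next check (a sketch, assuming the busybox pod from above is still running and that its nc build supports the -z and -w flags, which varies between BusyBox builds) is to probe raw TCP connectivity to the service and to each backend VM directly:

```shell
# Probe RabbitMQ through the service's cluster DNS name.
kubectl exec -ti busybox -- nc -zv -w 2 rabbitmq-service.default.svc.cluster.local 5672

# Probe each RabbitMQ VM directly, to rule out a routing or firewall
# problem between the cluster nodes and the external hosts.
kubectl exec -ti busybox -- nc -zv -w 2 10.112.63.98 5672
kubectl exec -ti busybox -- nc -zv -w 2 10.112.63.99 5672
```

If the direct probes to the VM IPs fail while the nslookup above succeeds, the issue is network reachability (routes or firewall rules) rather than Kubernetes DNS.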