Connection between pods on the same cluster is failing.
From what I understand, by default the pods are exposed on the port specified in the YAML file. For example, I have configured my deployment file for redis as below:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - env:
        - name: REDIS_PASS
          value: '**None**'
        image: tutum/redis
        ports:
        - containerPort: 6379
        name: redis
      restartPolicy: Always
Below is the deployment file for the pod where the container is trying to access redis:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jks
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      imagePullSecrets:
      - name: myappsecret
      containers:
      - env:
        - name: JOBQUEUE
          value: vae_jobqueue
        - name: PORT
          value: "80"
        image: repo.url
        name: jks
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-vol
          mountPath: /etc/sys0
      volumes:
      - name: config-vol
        configMap:
          name: config
      restartPolicy: Always
I did not create any service yet. But is it required? The pod is going to be accessed by another pod that is part of the same Helm chart. With this setup, there are errors in the second pod when it tries to access redis:
2018-11-21T16:12:31.939Z - warn: Error: Redis connection to redis:6379 failed - getaddrinfo ENOTFOUND redis redis:6379
    at errnoException (dns.js:27:10)
    at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:78:26)
How do I make sure that my pod is able to connect to the redis pod on port 6379?
---- UPDATE ----
This is how my charts look now:
# Source: mychartv2/templates/redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: myapp-redis
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
---
# Source: mychartv2/templates/redis-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  labels:
    app: myapp-redis
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-redis
    spec:
      containers:
      - env:
        - name: REDIS_PASS
          value: '**None**'
        image: tutum/redis
        ports:
        - containerPort: 6379
        name: redis
      restartPolicy: Always
---
# Source: mychartv2/templates/jks-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jks
  labels:
    app: myapp-jks
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-jks
    spec:
      imagePullSecrets:
      - name: jkssecret
      containers:
      - env:
        - name: JOBQUEUE
          value: jks_jobqueue
        - name: PORT
          value: "80"
        image: repo.url
        name: jks
        ports:
        - containerPort: 80
        volumeMounts:
        - name: config-vol
          mountPath: /etc/sys0
      volumes:
      - name: config-vol
        configMap:
          name: jksconfig
      restartPolicy: Always
Note: I am using minikube as my Kubernetes cluster.
Since you didn't create any Service for the redis pod, you need either (1) the pod's DNS name or (2) the pod IP, followed by the port (6379), to connect to it. See dns-pod-service for how the DNS name for a pod is formed.
The DNS name has the format <pod-ip-with-dashes>.<namespace>.pod.cluster.local, so the connection URI is <pod-ip-with-dashes>.<namespace>.pod.cluster.local:6379.
Alternatively, you can read the pod IP from .status.podIP, and the connection URI is <podIP>:6379.
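For example, a minimal sketch of reading .status.podIP with kubectl (the pod name below is hypothetical; find yours with kubectl get pods):

# Read the pod IP straight from the pod's status
kubectl get pod redis-7b44b8c66-abcde -o jsonpath='{.status.podIP}'
# -> e.g. 172.17.0.5, so the connection URI would be 172.17.0.5:6379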
Inside the cluster the pod IP can change for any number of reasons, so it is not wise to rely on it. It would be better to create a Service and use its DNS name followed by the service port (6379 in the YAML below). You can create a Service with the following configuration:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: myapp
  clusterIP: None
  ports:
  - name: redis # actually, no port is needed for a headless Service to resolve
    port: 6379
    targetPort: 6379
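To sanity-check that the name resolves, one option is a throwaway busybox pod, following the usual Kubernetes DNS-debugging pattern (output will vary):

# One-off pod that resolves the Service name and is removed on exit
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup redis
# A headless Service resolves straight to the pod IP(s)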
Update:
You can also check the Redis connection with the redis-cli binary from the pod you want to connect from. If redis-cli is available there, run $ redis-cli -h <host>, where
host = redis_service_host or pod_host or redis_service_ip or pod_ip
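For instance, assuming redis-cli is installed in the jks container (the pod name below is hypothetical):

# PING Redis through the Service name; a healthy connection answers PONG
kubectl exec -it jks-6d8f7c9b4-x2lpq -- redis-cli -h redis -p 6379 ping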
You'd need a Service to get access to the Redis pod. With your current resources, redis:6379 simply does not exist; a Service with metadata.name: redis and the appropriate spec.selector would make it available.
Be aware that the two Deployments you posted have the same metadata.labels.app value of myapp, so you'd have to change one of them, to myapp-redis for example, so that the Service targets the right pods (those labeled app: myapp-redis in the example below) and not the pods from your HTTP application.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis
  labels:
    app: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-redis
    spec:
      containers:
      - env:
        - name: REDIS_PASS
          value: '**None**'
        image: tutum/redis
        ports:
        - containerPort: 6379
        name: redis
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: myapp-redis
  ports:
  - protocol: TCP
    port: 6379
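Once both are applied, you can confirm the selector matches by listing the Service's endpoints; an empty ENDPOINTS column means the labels don't line up (the output below is illustrative):

kubectl get endpoints redis
# NAME    ENDPOINTS         AGE
# redis   172.17.0.5:6379   1m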
Also, you added the tag kubernetes-helm to your question, so if you are using Helm I'd highly recommend this stable chart: just install it with helm install stable/redis and you'll be able to access your Redis master at redis-master:6379 and any read-only slave at redis-slave:6379. You can avoid having slaves if you don't need/want them; just go through the chart's configuration to see how.
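A minimal sketch of that install, using Helm v2 syntax to match the era of the question (the release name my-redis is arbitrary, and cluster.enabled is my assumption for the value that turns slaves off; confirm with helm inspect values stable/redis):

# Install the chart under an arbitrary release name
helm install --name my-redis stable/redis
# To skip the read-only slaves, set the relevant value instead
# (value name assumed; check the chart's values first)
# helm install --name my-redis stable/redis --set cluster.enabled=false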