I have a recurring problem where containers in different pods can't communicate with each other. To keep things simple, I created a cluster with only two containers in different pods: 1. an app that does only one thing: connect to the Redis server; 2. a redis-server container.
To make a long story short: I keep getting 'connection refused' when trying to connect from the app to Redis:
$ kubectl logs app-deployment-86f848b46f-n7672
> app@1.0.0 start
> node ./app.js
LATEST
Error: connect ECONNREFUSED 10.104.95.63:6379
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1133:16) {
errno: -111,
code: 'ECONNREFUSED',
syscall: 'connect',
address: '10.104.95.63',
port: 6379
}
The app resolves redis-service successfully but fails to connect:
$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
app-service     ClusterIP   10.107.18.112   <none>        4000/TCP   2m42s
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP    29h
redis-service   ClusterIP   10.104.95.63    <none>        6379/TCP   29h
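A quick check when a ClusterIP Service exists but connections are refused is whether the Service actually has endpoints behind it; a Service whose selector matches no pod labels lists no endpoints and rejects connections. A diagnostic sketch (the commands assume the default namespace):

```shell
# Show the pod IPs backing the Service; "<none>" here means the
# selector matched no pods, which produces exactly this symptom.
kubectl get endpoints redis-service

# Compare the Service's Selector line against the pod labels.
kubectl describe service redis-service
kubectl get pods --show-labels
```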
The app code:
const redis = require("redis");
const bluebird = require("bluebird");
bluebird.promisifyAll(redis);

console.log('LATEST');

// Redis host/port are injected from the ConfigMap via the environment
const host = process.env.HOST;
const port = process.env.PORT;

// node-redis v3 opens the connection as soon as the client is created
const client = redis.createClient({ host, port });
client.on("error", function (error) {
  console.error(error);
});
The app's Dockerfile:
FROM node
WORKDIR "/app"
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
For the Redis server I first tried the default redis image, and when that didn't work, I used a custom-made image that doesn't bind to a specific IP and has protected mode disabled.
The Redis Dockerfile:
FROM redis:latest
COPY redis.conf /usr/local/etc/redis/redis.conf
CMD [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
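For reference, a redis.conf matching the description above (no bind to a specific IP, protected mode off) would look roughly like this; the actual file isn't shown in the question, so this is an assumption:

```
# Hypothetical redis.conf based on the description above
bind 0.0.0.0        # listen on all interfaces instead of 127.0.0.1 only
protected-mode no   # accept connections from other hosts
port 6379
```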
Finally, I created two deployments with corresponding ClusterIP services:
app deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: app
  template:
    metadata:
      labels:
        component: app
    spec:
      containers:
        - name: app
          image: user/redis-app:latest
          ports:
            - containerPort: 4000
          env:
            - name: HOST
              valueFrom:
                configMapKeyRef:
                  name: app-env
                  key: HOST
            - name: PORT
              valueFrom:
                configMapKeyRef:
                  name: app-env
                  key: PORT
app service:
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  type: ClusterIP
  selector:
    component: app
  ports:
    - port: 4000
      targetPort: 4000
the ConfigMap with the env values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  PORT: "6379"
  HOST: "redis-service.default"
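To confirm that the HOST value resolves inside the cluster, one common check is to resolve the Service DNS name from a throwaway pod (a sketch; the pod name is arbitrary):

```shell
# Run a one-off busybox pod and resolve the Service name from inside
# the cluster; it should return the Service's ClusterIP.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup redis-service.default
```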
redis deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      db: redis
  template:
    metadata:
      labels:
        db: redis
    spec:
      containers:
        - name: redis
          image: user/custome-redis:latest
          ports:
            - containerPort: 6379
redis service:
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
Originally, I used a Windows environment with WSL2 and Kubernetes running over Docker with Docker Desktop installed. When that failed, I provisioned a CentOS 8 VM on VirtualBox and installed Kubernetes with minikube, and got the same results.
Any ideas?
Posting an answer out of the comments, since David Maze found the issue (added as a community wiki, feel free to edit).
It's very important to match labels between pods, deployments, services and other elements.
In the example above, different labels are used for Redis: the service selector uses component: redis, while the deployment's pod template uses db: redis, which caused this issue. Because the selector matched no pods, the Service had no endpoints, and connections to it were refused.
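Concretely, the fix is to make the Service selector match the pod template labels in redis-deployment. Keeping the pods' db: redis label, the Service would become (one way to align them; matching the other direction, relabeling the pods as component: redis, works equally well):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-service
spec:
  type: ClusterIP
  selector:
    db: redis        # must match the pod template labels in redis-deployment
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
```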