I run my Kubernetes cluster on three machines. I created a Redis service and a replication controller (rc) that runs three pod replicas, so three containers are running across two of the machines. But the Redis slave that runs alone on one node fails to connect to the master, which runs on the other:
node1 -> master and slave1
node2 -> slave2
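For reference, the rc is roughly like this (a minimal sketch; the image, names, and labels are illustrative, not the exact manifest):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis
spec:
  replicas: 3              # three pods; the scheduler spreads them across nodes
  selector:
    app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis       # illustrative; a master/slave setup would use role-specific images
        ports:
        - containerPort: 6379
```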
slave2 just complains like this:
Could not connect to Redis at 192.168.0.4:6379: Connection refused
Connecting to master failed. Waiting...
Error: Connection reset by peer
Could not connect to Redis at 192.168.0.4:6379: Connection refused
Connecting to master failed. Waiting...
Error: Connection reset by peer
Could not connect to Redis at 192.168.0.4:6379: Connection refused
Connecting to master failed. Waiting...
... (and many more of the same)
Meanwhile, the other slave (slave1), which runs on the same node as the master, works fine. So I don't know what the problem is.
Why does slave2 try to connect to the IP 192.168.0.4 instead of 127.0.0.1? (It's said that containers in a pod share the same IP.)
By the way, do containers have their own IPs within a pod, so that they can communicate with each other while staying isolated from each other?
Oh, I've got it: containers in the same pod always run together on a single node; they can't be split across different nodes.
You need to use a Service that selects the Redis master, and have the slaves connect to the master through that Service. You can't just use the pod's IP, because the IP may change when the node goes down and the pod is restarted on another node.
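For example, a minimal sketch of such a Service (the name redis-master and the role: master label are assumptions about how the master pod is labeled):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis
    role: master           # assumed label carried only by the master pod
  ports:
  - port: 6379
    targetPort: 6379
```

With this in place, slaves reach the master at redis-master:6379 (via cluster DNS, if it's running) or through the auto-injected REDIS_MASTER_SERVICE_HOST environment variable, and the Service keeps routing to the master even if its pod is rescheduled onto another node.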
Also, 127.0.0.1 is localhost, and that only works between containers within the same pod. (A pod groups containers and runs on a single node; pods are deployed/replicated across multiple nodes.)
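To illustrate, here is a hypothetical two-container pod (the names and images are placeholders): both containers share one network namespace, so the second container can reach Redis at 127.0.0.1:6379, while anything outside the pod has to use the pod's (or a Service's) IP instead.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-with-sidecar
spec:
  containers:
  - name: redis
    image: redis
    ports:
    - containerPort: 6379
  - name: monitor          # hypothetical sidecar sharing the pod's network namespace
    image: redis
    # localhost works here only because this container is in the same pod as redis
    command: ["redis-cli", "-h", "127.0.0.1", "-p", "6379", "MONITOR"]
```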