I have a redis-service running a Redis server. Should I be able to store data in it and read that data back from clients running in different services? My experiments so far have been unsuccessful.
I have a compute-service in its own pod that dials the redis-service and stores a key/value pair:
rClient := redis.NewClient(&redis.Options{
    Addr:     "redis-service:6379",
    Password: "", // no password set
    DB:       0,  // use default DB
})
if err := rClient.Set("trump", "value", 0).Err(); err != nil {
    log.Printf("SET failed: %v", err) // a failed write would surface here
}
I then have a web-service in its own pod that tries to read this value. The error comes back as Nil and the value is blank:
rClient := redis.NewClient(&redis.Options{
    Addr:     "redis-service:6379",
    Password: "", // no password set
    DB:       0,  // use default DB
})
val, err := rClient.Get("trump").Result()
fmt.Fprintf(w, "Print Error: %v \n", err) // prints nil
fmt.Fprintf(w, "Print Value: %s \n", val) // blank
If I set the value from the web-service then I can read it back fine. I just can't seem to read a value that was set by a different service. As far as I know, Redis stores the data on the server side, which here would be redis-service.
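One way I could verify that is to list what redis-service actually holds, from inside either pod. This is just a throwaway diagnostic sketch, not code from either deployment:
rClient := redis.NewClient(&redis.Options{Addr: "redis-service:6379"})

// PING confirms the service name resolves and the server answers.
pong, err := rClient.Ping().Result()
if err != nil {
    log.Fatalf("cannot reach redis-service: %v", err)
}
log.Printf("ping: %s", pong)

// KEYS * is fine for a one-off check on a tiny test instance
// (it scans the whole keyspace, so not something for production).
keys, err := rClient.Keys("*").Result()
if err != nil {
    log.Fatalf("KEYS failed: %v", err)
}
log.Printf("keys on redis-service: %v", keys) // "trump" should appear here if the write landed
If "trump" shows up in that list but the web-service still gets a Nil reply, the two clients are probably not talking to the same Redis instance.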
Here are my redis-service Deployment and Service YAML files. Maybe it is the configuration?
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: redis-service
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis-service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: redis-service
spec:
  selector:
    matchLabels:
      app: redis-service
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis-service
        role: master
        tier: backend
    spec:
      containers:
      - name: redis-service
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
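The Service selector (app: redis-service) does match the pod template labels, so redis-service:6379 should route to the single Redis pod. To rule out the two clients somehow ending up on different Redis instances, each service could log the server's run_id from INFO at startup; if the two logs show different ids, the clients are not sharing a server. Another rough sketch, not part of my actual deployments:
// Fetch the "server" section of INFO; the run_id line uniquely identifies
// the Redis instance this client is actually connected to.
info, err := rClient.Info("server").Result()
if err != nil {
    log.Fatalf("INFO failed: %v", err)
}
for _, line := range strings.Split(info, "\r\n") {
    if strings.HasPrefix(line, "run_id:") {
        log.Printf("connected to Redis instance %s", line)
    }
}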
This issue has been resolved. I set imagePullPolicy to Always in my compute-service Deployment YAML. I also changed its port from :9090 to :8080, thinking the port might have been in use elsewhere. Finally, I upgraded my GCP account from the free trial. I am not sure exactly which of these fixed the issue, but I am relieved it is resolved now.