I have a small Node.js app running replicated with two pods on a cluster with two nodes.
However, it seems the connection is not sticky. I need it to be sticky because I use WebSockets.
Does sessionAffinity not work with a LoadBalancer on GCE? Let me know if I can provide more info. Thanks
Finally I had some time for more experiments:
It seems that sessionAffinity stops working if the rc is deleted and created again after the service was created.
Steps to reproduce:
ServerName.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: servername
  labels:
    name: servername
spec:
  replicas: 10
  selector:
    name: servername
  template:
    metadata:
      labels:
        name: servername
    spec:
      containers:
      - name: app
        image: fibheap/printhostname
        imagePullPolicy: "Always"
        ports:
        - containerPort: 80
ServerNameSv.yaml

apiVersion: v1
kind: Service
metadata:
  name: servername
  labels:
    name: servername
spec:
  ports:
  # the port that this service should serve on
  - port: 80
    targetPort: 80
  selector:
    name: servername
  type: LoadBalancer
  sessionAffinity: ClientIP
Dockerfile
FROM google/nodejs
WORKDIR /app
ADD ./main.js /app/main.js
EXPOSE 80
CMD ["node", "--harmony", "./main.js"]
main.js

// Load the http module to create an HTTP server.
var http = require('http');
var os = require('os');

// Respond to every request with the client IP and the serving pod's hostname.
var server = http.createServer(function (req, res) {
  res.writeHead(200, {"Content-Type": "text/plain"});
  var ip = req.headers['x-forwarded-for'] ||
           req.connection.remoteAddress ||
           req.socket.remoteAddress ||
           (req.connection.socket && req.connection.socket.remoteAddress);
  res.end("CIP:" + ip + " Remote Server:" + os.hostname());
});

// Listen on port 80 on all interfaces.
server.listen(80);

// Put a friendly message on the terminal.
console.log("Server running on port 80");
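The x-forwarded-for fallback chain in main.js can be pulled out into a small helper and exercised locally without a cluster; this is a sketch, and the mock request objects below are illustrative only:

```javascript
// Pick the client IP the same way main.js does, but guard each step so a
// missing connection.socket (the normal case for plain HTTP) cannot throw.
function clientIp(req) {
  return req.headers['x-forwarded-for'] ||
         (req.connection && req.connection.remoteAddress) ||
         (req.socket && req.socket.remoteAddress) ||
         (req.connection && req.connection.socket &&
          req.connection.socket.remoteAddress);
}

// Mock requests, for illustration only.
var proxied = { headers: { 'x-forwarded-for': '203.0.113.7' }, connection: {}, socket: {} };
var direct  = { headers: {}, connection: { remoteAddress: '10.0.0.5' }, socket: {} };

console.log(clientIp(proxied)); // 203.0.113.7
console.log(clientIp(direct));  // 10.0.0.5
```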
1) build the image from the Dockerfile above, or use the fibheap/printhostname repository from Docker Hub directly
2) create the rc and the service (describe the service to get the external IP and to verify that sessionAffinity is ClientIP)
3) curl the load-balancer IP multiple times -> the pod name should stay the same
4) delete the rc and create it again
5) curl multiple times again -> the pod name changes
Please let me know if that helps with reproducing.
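The curl steps above can be scripted; a sketch, assuming LB_IP holds the load balancer's external IP (read from describing the service). With working ClientIP affinity, every response should name the same pod, so uniq -c should print a single line:

```shell
# LB_IP: the service's external IP (placeholder default for illustration).
LB_IP="${LB_IP:-127.0.0.1}"

# Hit the load balancer 10 times and count how many distinct pods answered.
for i in $(seq 1 10); do
  curl -s --max-time 2 "http://$LB_IP/" || echo "request failed"
  echo
done | sort | uniq -c
```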
Affinity should work. Can you read back your service object and see that affinity was accepted and saved properly?
kubectl get svc app-service -o yaml
I just created a GCE load balancer, and I confirmed that the GCE targetPool object also has
sessionAffinity: CLIENT_IP
As concluded in https://github.com/kubernetes/kubernetes/issues/36415, sessionAffinity on GCE will probably only work if you make your service a LoadBalancer with "preserve client ip", i.e. the annotation "service.beta.kubernetes.io/external-traffic": "OnlyLocal", and you set sessionAffinity: ClientIP.
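Putting both pieces together, the service from the question would become something like this (a sketch; the annotation key is the beta form cited in the issue and may differ by Kubernetes version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: servername
  annotations:
    service.beta.kubernetes.io/external-traffic: "OnlyLocal"
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    name: servername
  type: LoadBalancer
  sessionAffinity: ClientIP
```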
Ingress is probably a better bet. I haven't verified this, but the Nginx Ingress controller bypasses Services (it routes to pod endpoints directly), and there's a "sticky-ng" module for stickiness.
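For the Ingress route, a hedged sketch of cookie-based stickiness with the nginx ingress controller; the annotation names here are the nginx-ingress ones and may vary between controller versions, so treat them as assumptions to check against your controller's docs:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: servername
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: servername
          servicePort: 80
```

Cookie affinity sidesteps the ClientIP problem entirely, since the ingress controller pins each browser session to a pod regardless of how the load balancer rewrites source IPs.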