I'm testing blue-green updates on k8s (1.7.4) running on AWS, where the blue-green service is reachable from outside the cluster. The current setup is a blue pod, a green pod, and a service acting as the router. The router is backed by an AWS ELB, and the service is reachable via a CNAME that points to the ELB.
The problem is the switchover. Updating the service results in a new ELB, and therefore a new target for the CNAME. The time spent waiting for DNS to propagate is downtime. What is another approach that avoids this downtime? Service YAML below:
##########
# ROUTER #
##########
# This blue-green approach does not work because the AWS ELB is created
# anew on each changeover. This results in a new DNS record, and clients are
# lost while the new record propagates.
# Expose the container as a service to the outside via DNS record and port.
apiVersion: v1
kind: Service
metadata:
  name: helloworld
  annotations:
    # URL for the service
    external-dns.alpha.kubernetes.io/hostname: helloworld.k8s.example.net
spec:
  type: LoadBalancer
  ports:
  # Outside port mapped to deployed container port
  - port: 80
    targetPort: helloworldport
  selector:
    # HOWTO: change the app name to point to blue or green, then
    # kubectl replace -f bluegreenrouter.yml --force
    app: helloworld-blue
During an update of a Kubernetes LoadBalancer-type service, the underlying ELB should never change. Are you sure you are actually updating the service (`kubectl apply`) and not recreating it (`kubectl delete` / `kubectl create`)? Note that `kubectl replace --force`, as shown in the comment in your manifest, deletes and recreates the object, which destroys the ELB and provisions a new one.
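To make the distinction concrete, here is a minimal sketch of a switchover that keeps the Service object (and therefore the ELB and its DNS name) intact. It assumes the blue and green Deployments label their pods `app: helloworld-blue` and `app: helloworld-green`, matching the manifest above, and that the manifest file is named `bluegreenrouter.yml`:

```shell
# Flip the Service's selector from blue to green in place.
# This mutates the existing Service object rather than recreating it,
# so the ELB survives and the CNAME keeps resolving to the same target.
kubectl patch service helloworld \
  --type merge \
  -p '{"spec": {"selector": {"app": "helloworld-green"}}}'

# Equivalently, edit the selector in the manifest and apply it in place
# (no --force, no delete/create):
kubectl apply -f bluegreenrouter.yml
```

Either way, traffic shifts to the green pods as soon as kube-proxy picks up the new endpoints, with no DNS propagation involved.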