Using GKE Ingress load balancer with services on Compute Engine

12/28/2017

I have been trying to put a mix of services, some running on Google Kubernetes Engine and some on Compute Engine VM(s), behind the same HTTP load balancer so that they are all reachable under one public static IP and DNS alias. What I'm struggling with is making the services on the VMs accessible.

I took the basic setup from https://github.com/kubernetes/ingress-gce/blob/master/README.md, substituting an already existing Deployment for the ReplicationController (basically I added glbc alongside the main application that runs on the GKE cluster).

I've been trying to apply the solution proposed in https://stackoverflow.com/a/35446176/2745865, but the dummy NodePort service created for the Jenkins instance (which runs outside the cluster on a Compute Engine VM) stays unhealthy.

In the main app Deployment under containers I have this:

  - name: projectX-glbc
    image: gcr.io/google_containers/glbc:0.9.7
    livenessProbe:
      httpGet:
        path: /ping
        port: 8080
        scheme: HTTP
      initialDelaySeconds: 30
      timeoutSeconds: 5
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 100m
        memory: 50Mi
    args:
    - --apiserver-host=http://localhost:8080
    - --default-backend-service=default/projectX-test
    - --sync-period=300s
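
For context, that glbc container sits inside the already existing Deployment roughly like this (heavily abbreviated; the image, container port and the main app container's details are placeholders here, not my real values):

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: projectX-backend            # simplified
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: projectX
        role: backend
        env: test
    spec:
      containers:
      - name: projectX-backend      # the main application container, details omitted
        image: gcr.io/my-project/projectX-backend:latest   # placeholder
        ports:
        - name: projectX-backend    # the named port that the projectX-test Service targets
          containerPort: 8080       # placeholder
      - name: projectX-glbc         # the glbc sidecar shown above
        image: gcr.io/google_containers/glbc:0.9.7
        # ...probes, resources and args as shown above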

The rest of the resources (the Services, the Endpoints for Jenkins, and the Ingress):

kind: Service
apiVersion: v1
metadata:
  name: projectX-jenkins-dummyservice
spec:
  type: NodePort
  ports:
  - protocol: TCP
    port: 80

---
kind: Endpoints
apiVersion: v1
metadata:
  name: projectX-jenkins-dummyservice
subsets:
  - addresses:
    - ip: 10.156.0.2 # this IP is the static IP of the Jenkins master VM
    ports:
    - name: jenkins-master-http
      protocol: TCP
      port: 80

---
kind: Service
apiVersion: v1
metadata:
  name: projectX-test
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: projectX-backend
    protocol: TCP
  selector:
    app: projectX
    role: backend
    env: test

---
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: projectX-test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: "projectX-public-ip"
spec:
  backend:
    serviceName: projectX-test
    servicePort: 80
  rules:
  - host: projectX.domain.invalid
    http:
      paths:
      - path: /jenkins
        backend:
          serviceName: projectX-jenkins-dummyservice
          servicePort: 80

The end result of this config is that the main app is reachable on the public IP (as well as via the DNS alias shown as projectX.domain.invalid above), but projectX-test-ingress stays unhealthy because the backend service automatically generated for Jenkins reports "0 of 3 instances healthy". In addition, the GUI oddly lists the main app Pod under both projectX-jenkins-dummyservice and projectX-test, even though it should have nothing to do with the Jenkins dummy service. The main app also gets forcibly restarted periodically, which suggests to me that the setup isn't configured quite right...

The question is: what am I doing wrong? Or have I understood this incorrectly, and is what I'm trying to accomplish simply not possible (or something that should not be done)?

Originally we had a manually built HTTP load balancer on Compute Engine, but with that I couldn't figure out how to include services from the Kubernetes cluster (although I only looked for a solution through the GCP GUI). For example, https://stackoverflow.com/a/35447985/2745865 seems to claim that this could be done without a Kubernetes Ingress object...
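
If I understand that answer correctly, the GKE side would be exposed through a NodePort service pinned to a known port, and the manually built load balancer would then target the cluster's instance group on that port. Roughly something like this (a sketch only; the nodePort value is an arbitrary example and projectX-test-nodeport is not part of my current config):

kind: Service
apiVersion: v1
metadata:
  name: projectX-test-nodeport    # hypothetical name
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: projectX-backend
    nodePort: 30080               # example fixed port for the manual GCE backend to point at
    protocol: TCP
  selector:
    app: projectX
    role: backend
    env: test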

Edit: One solution I was able to come up with was to throw away the NodePort projectX-jenkins-dummyservice and the reference to it in projectX-test-ingress config (I put there the root URL pointing to projectX-test to have something there), and then go and change the Compute Engine load balancer manually from the GUI so that instead of the GKE backend it directs /jenkins, /jenkins/* to the original projectX-jenkins-backend that had been created for the the original HTTP load balancer.
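
The trimmed Ingress ended up roughly like this (sketched from memory, so minor details may be off):

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: projectX-test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.global-static-ip-name: "projectX-public-ip"
spec:
  backend:
    serviceName: projectX-test
    servicePort: 80
  rules:
  - host: projectX.domain.invalid
    http:
      paths:
      - path: /
        backend:
          serviceName: projectX-test
          servicePort: 80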

This way I got the required functionality, but it is a mix of applying YAML and manual work in the GUI, and if someone ever needs to rebuild this, it could be troublesome... A solution built solely from .yaml files would be more self-contained. However, since GKE Ingress automatically keeps the Compute Engine load balancer in line with the configuration given to it, any manual changes made to the load balancer on the Compute Engine side get overwritten after a while.

-- zagrimsan
google-cloud-platform
google-kubernetes-engine
kubernetes
load-balancing

0 Answers