I currently have a Deployment and a Service running fine on GKE. My issue is that I would like to "bind" my external IP:port to a domain name (on OVH), for example:
http://www.example.com/api/grpc -> 12.345.67.89:8080
http://www.example.com/api/rest -> 12.345.67.89:8081
After a lot of searching, I finally found that an Ingress could be my solution. I then updated my YAML to combine the three resources: Deployment, Service, and Ingress.
Here is my yaml:
# Copyright 2016 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
# Use this file to deploy the container for the grpc-bookstore sample
# and the container for the Extensible Service Proxy (ESP) to
# Google Kubernetes Engine (GKE).
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: gcr.io/<project_id>/myservice:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
  selector:
    app: myservice
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts REST (JSON/HTTP) requests.
  - port: 8081
    targetPort: 8081
    protocol: TCP
    name: rest
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - http:
      paths:
      - path: /grpc
        backend:
          serviceName: myservice
          servicePort: 8080
      - path: /rest
        backend:
          serviceName: myservice
          servicePort: 8081
I then tried to run a simple request to my REST API: http://www.example.com/api/rest/test, with a POST JSON body containing my name. The API should return Hello %s, but instead I get either:
I have absolutely no idea what the issue could be, as I followed the Google documentation.
I put http://www.example.com/api/rest in my example, but the following aren't working either:
So, I was able to move forward: my service (which was UNHEALTHY) is now HEALTHY, I can connect to it, and a curl against my readinessProbe/livenessProbe endpoint returns 200 OK.
The updated version of my YAML looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: myservice
  template:
    metadata:
      labels:
        run: myservice
    spec:
      containers:
      - name: myservice
        image: gcr.io/<project_id>/myservice:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /health_check
            port: 8081
        livenessProbe:
          httpGet:
            path: /health_check
            port: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: default
spec:
  type: NodePort
  selector:
    run: myservice
  ports:
  # Port that accepts gRPC and JSON/HTTP2 requests over HTTP.
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc
  # Port that accepts REST (JSON/HTTP) requests.
  - port: 8081
    targetPort: 8081
    protocol: TCP
    name: rest
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  backend:
    serviceName: myservice
    servicePort: 8081
kubectl describe pods
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe pods
Name: myservice-c57d64669-phrzr
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-cluster-kuberne-default-pool-8b65afeb-qgcm/10.166.0.31
Start Time: Thu, 19 Mar 2020 11:36:35 -0400
Labels: pod-template-hash=c57d64669
run=myservice
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myservice
Status: Running
IP: 10.4.2.28
Controlled By: ReplicaSet/myservice-c57d64669
Containers:
myservice:
Container ID: docker://3f9df91ec4e2631d85e0becdb8d1be64bf97fadb5a5b7049c7391eb8cfdf3eee
Image: gcr.io/<project_id>/myservice:latest
Image ID: docker-pullable://gcr.io/<project_id>/myservice@sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Ports: 8080/TCP, 8081/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Thu, 19 Mar 2020 11:36:40 -0400
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Liveness: http-get http://:8081/health_check delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8081/health_check delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-6cppb (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-6cppb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-6cppb
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m36s default-scheduler Successfully assigned default/myservice-c57d64669-phrzr to gke-cluster-kuberne-default-pool-8b65afeb-qgcm
Normal Pulling 8m35s kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm Pulling image "gcr.io/<project_id>/myservice:latest"
Normal Pulled 8m32s kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm Successfully pulled image "gcr.io/<project_id>/myservice:latest"
Normal Created 8m31s kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm Created container myservice
Normal Started 8m31s kubelet, gke-cluster-kuberne-default-pool-8b65afeb-qgcm Started container myservice
kubectl describe ingress myservice-ingress
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe ingress myservice-ingress
Name: myservice-ingress
Namespace: default
Address: XX.XXX.XXX.XXX
Default backend: myservice:8081 (10.4.2.28:8081)
Rules:
Host Path Backends
---- ---- --------
* * myservice:8081 (10.4.2.28:8081)
Annotations:
ingress.kubernetes.io/backends: {"k8s-be-31336--d1838223483f8e56":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-myservice-ingress--d1838223483f8e0
ingress.kubernetes.io/target-proxy: k8s-tp-default-myservice-ingress--d1838223483f8e0
ingress.kubernetes.io/url-map: k8s-um-default-myservice-ingress--d1838223483f8e0
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"myservice-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"myservice","servicePort":8081}}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 11m loadbalancer-controller default/myservice-ingress
Normal CREATE 11m loadbalancer-controller ip: XX.XXX.XXX.XXX
I don't see any error, but I keep getting 404 when I try to hit XX.XXX.XXX.XXX/health_check.
My Ingress now looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - http:
      paths:
      - path: /grpc/*
        backend:
          serviceName: myservice
          servicePort: 8080
      - path: /rest/*
        backend:
          serviceName: myservice
          servicePort: 8081
The /rest/* endpoint returns 404; gRPC hasn't been tested yet. As for health, I now have three backends and one of them isn't healthy, I don't know why:
MacBook-Pro-de-Emixam23:~ emixam23$ kubectl describe ingress myservice-ingress
Name: myservice-ingress
Namespace: default
Address: XX.XXX.XXX.XXX
Default backend: default-http-backend:80 (10.4.2.7:8080)
Rules:
Host Path Backends
---- ---- --------
*
/grpc/* myservice:8080 (10.4.1.23:8080)
/rest/* myservice:8081 (10.4.1.23:8081)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"myservice-ingress","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"myservice","servicePort":8080},"path":"/grpc/*"},{"backend":{"serviceName":"myservice","servicePort":8081},"path":"/rest/*"}]}}]}}
ingress.kubernetes.io/backends: {"k8s-be-30181--d1838223483f8e56":"UNHEALTHY","k8s-be-30368--d1838223483f8e56":"HEALTHY","k8s-be-31613--d1838223483f8e56":"HEALTHY"}
ingress.kubernetes.io/forwarding-rule: k8s-fw-default-myservice-ingress--d1838223483f8e0
ingress.kubernetes.io/target-proxy: k8s-tp-default-myservice-ingress--d1838223483f8e0
ingress.kubernetes.io/url-map: k8s-um-default-myservice-ingress--d1838223483f8e0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ADD 14m loadbalancer-controller default/myservice-ingress
Normal CREATE 13m loadbalancer-controller ip: XX.XXX.XXX.XXX
The errors that you provided are related to a wrong Ingress configuration. When a request does not match any of the configured paths, it is forwarded to the default backend.
Example:
- path: /rest
  backend:
    serviceName: myservice
    servicePort: 8081
This definition will route traffic to myservice only when exactly /rest is specified in the request. It will work with neither /api/rest nor /rest/something.
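The matching behavior described above can be sketched in code. This is only a rough model of what the GCE ingress controller is described as doing here (a plain path matches exactly, a trailing /* also covers sub-paths), not its actual implementation:

```python
# A rough model of the path matching described above (an assumption about
# the GCE ingress controller's behavior, not its exact implementation):
# a plain path matches only exactly, while a trailing "/*" covers sub-paths.
def matches(rule_path: str, request_path: str) -> bool:
    if rule_path.endswith("/*"):
        prefix = rule_path[:-2]              # "/rest/*" -> "/rest"
        return request_path.startswith(prefix + "/")
    return request_path == rule_path

assert matches("/rest", "/rest")             # exact match works
assert not matches("/rest", "/rest/something")
assert not matches("/rest", "/api/rest")
assert matches("/rest/*", "/rest/something") # wildcard covers sub-paths
assert not matches("/rest/*", "/api/rest")
```

This is why a request to /api/rest/test never reaches a backend registered under /rest: no rule matches, so the default backend answers with 404.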
It could happen that your backends were not working properly because the Ingress resource did not have enough time to complete its configuration.
To check whether the Ingress resource is configured correctly, you can describe it with:
$ kubectl describe ingress NAME_OF_INGRESS_RESOURCE
In the output, take a look at the part that shows whether each backend is in a Healthy or Unhealthy state:
ingress.kubernetes.io/backends: {"k8s-be-31720--0838d11870ae50b1":"HEALTHY","k8s-be-32475--0838d11870ae50b1":"HEALTHY"}
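Since the annotation value is plain JSON, the backend states can also be checked programmatically; a minimal sketch, assuming the annotation string has been copied from the kubectl describe output above:

```python
import json

# Value of the ingress.kubernetes.io/backends annotation, copied from
# the `kubectl describe ingress` output shown above.
annotation = ('{"k8s-be-31720--0838d11870ae50b1":"HEALTHY",'
              '"k8s-be-32475--0838d11870ae50b1":"HEALTHY"}')

backends = json.loads(annotation)
unhealthy = [name for name, state in backends.items() if state != "HEALTHY"]
print(unhealthy)  # an empty list means every backend passed its health check
```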
For more information you can visit Google Cloud Platform -> Network services -> Load Balancing. Find the corresponding forwarding rule and take a look at the backend services.
Before you continue, make sure that all backends are in a Healthy state. Please refer to the official documentation on Cloud.google.com: Kubernetes Engine: Ingress health checks, and configure them according to your needs.
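On GKE, the load balancer's health check can be derived from the container's readinessProbe when the probe is an httpGet on the serving port; a minimal sketch of the relevant fragment (the path and port here are examples matching the Deployment above):

```yaml
# Container-level probe; the GCE ingress controller can pick up this path
# and port for the backend service's health check.
readinessProbe:
  httpGet:
    path: /health_check   # must answer 200 OK for the backend to be HEALTHY
    port: 8081
```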
A GKE Ingress resource operates on ports 80 and 443 by default. For the supported protocols, please refer to the official documentation: Cloud.google.com: Load Balance Ingress.
If you are interested in exposing your application on different ports, please consider using one of the custom deployed ingress controllers; please take a look at the official documentation: Kubernetes.io: Ingress controllers.
Additionally, please verify that your pods are running correctly, and test whether they respond appropriately. You can do this by opening a shell inside the pod:
kubectl exec -it NAME_OF_THE_POD -- /path/to/shell
and checking the application from inside the pod.
Please let me know if that helps.
Try this Ingress definition:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: domain-ingress
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /api/*
        backend:
          serviceName: myservice
          servicePort: 8081
      - path: /admin/*
        backend:
          serviceName: myservice
          servicePort: 8081
Please change DOMAIN.NAME to the one appropriate to your case.
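Once the Ingress has an external IP, the domain on OVH can be pointed at it with an A record in the DNS zone; a hypothetical zone entry (the subdomain and IP are placeholders):

```
; OVH zone fragment: point www at the Ingress external IP
www    IN A    XX.XXX.XXX.XXX
```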