I am currently trying to expose a Kubernetes service via an ingress controller, but I cannot seem to do so. For some odd reason the host/path never resolves to the ClusterIP and port that I want to use, even though this should have been resolved by my ingress controller and the Ingress resource...
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  replicas: 1
  revisionHistoryLimit: 3
  template:
    metadata:
      labels:
        app: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      serviceAccount: nginx
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        imagePullPolicy: Always
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 5
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-backend-service
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
        - --v=2
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    name: http
  - port: 443
    targetPort: 443
    name: https
  selector:
    app: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-role
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - update
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses/status
  verbs:
  - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: nginx-role
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-role
subjects:
- kind: ServiceAccount
  name: nginx
  namespace: default
---
# Ingress resource
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-hello
spec:
  rules:
  - host: dev.hello.com
    http:
      paths:
      - backend:
          serviceName: hello-kubernetes
          servicePort: 80
---
## Default backend
apiVersion: v1
kind: Service
metadata:
  name: default-backend-service
  labels:
    app: default-backend
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: default-backend
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: default-backend
  labels:
    app: default-http-backend
spec:
  selector:
    matchLabels:
      app: default-backend
  serviceName: default-backend-service
  replicas: 2
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
I tried to make an MVP, but for some reason I can't resolve the host dev.hello.com.
I want to use that host to tell the ingress which service I want to connect to, but it never resolves to an IP address; requests don't seem to hit anything.
Why? Is this set up incorrectly?
It's hard to understand exactly what you want; please try to be more precise.
1. Ingress with ClusterIP
As Arghya Sadhu wrote, since you are using an Ingress you don't need a Service of type LoadBalancer.
2. Ingress with NodePort
Keep in mind that you can also use a NodePort Service together with an Ingress. A good explanation and example can be found here.
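For illustration, a NodePort Service in front of the ingress controller could look roughly like the sketch below. The nodePort value 30080 is only an assumed example, and the selector app: nginx-ingress-lb is taken from the pod labels of your nginx-ingress-controller Deployment; the selector must match those labels or the Service will have no endpoints.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress-lb        # must match the labels on the ingress controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080              # assumed example; must fall in the NodePort range (default 30000-32767)
  - name: https
    port: 443
    targetPort: 443
With this in place, traffic sent to any node on port 30080 with a matching Host header is handed to the ingress controller.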
3. Ingress YAML
As per the official Kubernetes docs, a minimal Ingress resource looks like:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
In your Ingress I couldn't find spec.rules.http.paths.path.
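Applied to your manifests, an Ingress with an explicit path could look roughly like this. The host dev.hello.com and the hello-kubernetes backend are taken from your question; the catch-all path / is only an assumption, so adjust it to the prefix you actually want to expose.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-hello
spec:
  rules:
  - host: dev.hello.com
    http:
      paths:
      - path: /                  # explicit path, which is missing in your manifest
        backend:
          serviceName: hello-kubernetes
          servicePort: 80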
4. IP of the LoadBalancer
Also very important is where your cluster is running. If you are using a cloud provider such as GKE, AWS, or Azure, a Service of type LoadBalancer will automatically get an externalIP, which allows you to connect to your cluster from outside. However, if you are running on a local or bare-metal cluster, you will need something like MetalLB.
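For reference, a minimal MetalLB layer 2 configuration could look roughly like the sketch below. This uses the ConfigMap-based configuration; the metallb-system namespace and the config name come from the MetalLB installation manifests, and the address range 192.168.1.240-192.168.1.250 is only an assumed example that has to be replaced with free addresses from your own network.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default              # pool MetalLB assigns LoadBalancer IPs from
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # assumed example range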
In addition, please take a look at the Kubernetes docs about Connect a Front End to a Back End Using a Service.
Also check this tutorial; it might help you.
The Service for hello-kubernetes should not be of type LoadBalancer, because you want the ingress to act as the load balancer. So change the hello-kubernetes Service to type ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes