An MCVE is here: https://github.com/chrissound/k8s-metallb-nginx-ingress-minikube (just run ./init.sh and minikube addons enable ingress).
The IP assigned to the ingress keeps getting reset, and I don't know what's causing it. Do I perhaps need additional configuration?
kubectl get ingress --all-namespaces
NAMESPACE       NAME          HOSTS         ADDRESS           PORTS     AGE
chris-example   app-ingress   example.com   192.168.122.253   80, 443   61m
And a minute later:
NAMESPACE       NAME          HOSTS         ADDRESS   PORTS     AGE
chris-example   app-ingress   example.com             80, 443   60m
In terms of configuration, I've applied just the following:
# metallb
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml
# nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
Ingress controller logs:
I0714 22:00:38.056148 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8681", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:19.153298 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8743", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
I0714 22:01:38.051694 7 status.go:296] updating Ingress chris-example/app-ingress status from [{192.168.122.253 }] to []
I0714 22:01:38.060044 7 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"chris-example", Name:"app-ingress", UID:"cbf3b5bf-a67a-11e9-be9a-a4cafa3aa171", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"8773", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress chris-example/app-ingress
And the MetalLB controller logs:
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656725017Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-07-14T21:58:39.656741267Z"}
{"caller":"main.go:49","event":"startUpdate","msg":"start of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.6567588Z"}
{"caller":"main.go:72","event":"noChange","msg":"service converged, no change","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656842026Z"}
{"caller":"main.go:73","event":"endUpdate","msg":"end of service update","service":"chris-example/app-lb","ts":"2019-07-14T21:58:39.656873586Z"}
As a test I deleted the deployment and daemonset relating to MetalLB:
kubectl delete deployment -n metallb-system controller
kubectl delete daemonset -n metallb-system speaker
And even then, after the external IP is set, it once again gets reset...
I was curious, so I recreated your case, and I was able to properly expose the service.
First of all: you don't need to use the minikube ingress addon when deploying your own NGINX. If you do, you have two ingress controllers in the cluster, which leads to confusion later. Run:
minikube addons disable ingress
Sidenote: you can see this confusion in the IP your ingress got assigned: 192.168.122.253, which is not in the CIDR range 192.168.39.160/28 that you defined in configmap-metallb.yaml.
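For reference, with MetalLB v0.7.x the address pool lives in a ConfigMap roughly like the sketch below; the pool name and layer 2 protocol are assumptions on my side, only the address range is taken from your repo:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config             # MetalLB v0.7.x reads its configuration from this ConfigMap
data:
  config: |
    address-pools:
    - name: default        # pool name is an assumption
      protocol: layer2     # layer 2 mode assumed; BGP is the alternative
      addresses:
      - 192.168.39.160/28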
You need to change the service type of ingress-nginx to LoadBalancer. You can do this by running:
kubectl edit -n ingress-nginx service ingress-nginx
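If you prefer something non-interactive, a kubectl patch one-liner should achieve the same (it only merges in the type field):
kubectl patch -n ingress-nginx service ingress-nginx -p '{"spec": {"type": "LoadBalancer"}}'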
Additionally, you can change the app-lb service to NodePort, since it doesn't need to be exposed outside of the cluster - the ingress controller will take care of that.
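The same change as a patch, assuming app-lb lives in the chris-example namespace as your output suggests:
kubectl patch -n chris-example service app-lb -p '{"spec": {"type": "NodePort"}}'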
It's easier to think about the Ingress object as a ConfigMap rather than as a Service.
MetalLB takes the configuration you provided in the ConfigMap and waits for an IP request API call. When it gets one, it provides an IP from the CIDR range you specified.
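That "IP request" is simply a Service with type: LoadBalancer. A minimal sketch (the name, selector and port here are made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: example-lb       # hypothetical name, for illustration only
spec:
  type: LoadBalancer     # this is what triggers MetalLB to allocate an IP from its pool
  selector:
    app: example         # hypothetical pod selector
  ports:
  - port: 80
    targetPort: 80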
In a similar way, the ingress controller (NGINX in your case) takes the configuration described in the Ingress object and uses it to route traffic to the desired place in the cluster.
Then the ingress-nginx service is exposed outside of the cluster with the assigned IP.
Inbound traffic is directed by the ingress controller (NGINX), based on the rules described in the Ingress object, to a service in front of your application.
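For completeness, a sketch of what your app-ingress presumably looks like, using the networking.k8s.io/v1beta1 API version from your controller logs; the backend service port is an assumption:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  namespace: chris-example
spec:
  rules:
  - host: example.com            # host from your kubectl get ingress output
    http:
      paths:
      - path: /
        backend:
          serviceName: app-lb    # the service in front of your app
          servicePort: 80        # assumed port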
Inbound
traffic
++ +---------+
|| |ConfigMap|
|| +--+------+
|| |
|| | CIDR range to provision
|| v
|| +--+----------+
|| |MetalLB | +-------+
|| |Load balancer| |Ingress|
|| +-+-----------+ +---+---+
|| | |
|| | External IP assigned |Rules described in spec
|| | to service |
|| v v
|| +--+--------------------+ +---+------------------+
|| | | | Ingress Controller |
|---->+ ingress-nginx service +----->+ (NGINX pod) |
+---->| +----->+ |
+-----------------------+ +----------------------+
||
VV
+-----------------+
| Backend service |
| (app-lb) |
| |
+-----------------+
||
VV
+--------------------+
| Backend pod |
| (httpbin) |
| |
+--------------------+