Hello, I have an ingress controller running and I deployed an Ingress for Kafka (deployed through Strimzi), but the Ingress is showing multiple IPs in the ADDRESS column instead of one. I'd like to know why this happens and what I can do to fix it, because from what I've seen in tutorials, when you have an Ingress the IP shown in the address is the same as the one on the ingress controller service (in my case it should be 172.24.20.195). Here are the ingress controller components:
root@di-admin-general:/home/lda# kubectl get all -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/default-http-backend-598b7d7dbd-ggghv 1/1 Running 2 6d3h 192.168.129.71 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-4rdxd 1/1 Running 2 6d3h 172.24.20.8 pr-k8s-fe-fastdata-worker-02 <none> <none>
pod/nginx-ingress-controller-g6d2f 1/1 Running 2 6d3h 172.24.20.242 pr-k8s-fe-fastdata-worker-01 <none> <none>
pod/nginx-ingress-controller-r995l 1/1 Running 2 6d3h 172.24.20.38 pr-k8s-fe-fastdata-worker-03 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/default-http-backend ClusterIP 192.168.42.107 <none> 80/TCP 6d3h app=default-http-backend
service/nginx-ingress-controller LoadBalancer 192.168.113.157 172.24.20.195 80:32641/TCP,443:32434/TCP 163m workloadID_nginx-ingress-controller=true
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
daemonset.apps/nginx-ingress-controller 3 3 3 3 3 <none> 6d3h nginx-ingress-controller rancher/nginx-ingress-controller:nginx-0.35.0-rancher2 app=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/default-http-backend 1/1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/default-http-backend-598b7d7dbd 1 1 1 6d3h default-http-backend rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1 app=default-http-backend,pod-template-hash=598b7d7dbd
root@di-admin-general:/home/lda#
And here is the Kafka part:
root@di-admin-general:/home/lda# kubectl get ingress -n kafkanew -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
kafka-ludo-kafka-0 <none> broker-0.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-1 <none> broker-1.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-2 <none> broker-2.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
kafka-ludo-kafka-bootstrap <none> bootstrap.172.24.20.195.nip.io 172.24.20.242,172.24.20.38,172.24.20.8 80, 443 5d22h
You can see that there are three IPs (172.24.20.242, 172.24.20.38, 172.24.20.8) instead of just the one I expected, which I think should be 172.24.20.195. Can anyone explain this? The Strimzi YAML used to expose the Ingress comes from this guide: https://developers.redhat.com/blog/2019/06/12/accessing-apache-kafka-in-strimzi-part-5-ingress/ Thank you for your help.
Your external IP is the one you see in kubectl get svc, in your case 172.24.20.195. The other IPs shown in kubectl get ingress are the ingress controller pod IPs (you can match them against the three controller pods listed in your output above), which are internal to your cluster.
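If you want the Ingress ADDRESS column to report the LoadBalancer IP (172.24.20.195) instead of the pod IPs, the ingress-nginx controller supports a --publish-service flag that makes it publish its Service's address into the ingress status. A minimal sketch, assuming the DaemonSet, Service, and namespace names from the output above (the exact args layout in the Rancher-packaged controller may differ, so verify against your actual DaemonSet spec before applying):

```yaml
# Sketch only: add the --publish-service flag to the controller container
# args in the nginx-ingress-controller DaemonSet. The namespace/service
# name below match the kubectl output in the question; adjust if yours differ.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            - --default-backend-service=ingress-nginx/default-http-backend
            - --publish-service=ingress-nginx/nginx-ingress-controller
```

After the controller pods restart, kubectl get ingress -n kafkanew should show 172.24.20.195 in the ADDRESS column rather than the individual pod IPs.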