Service not accessible from outside the VMware servers

7/16/2019

We have deployed an application to a Kubernetes cluster configured on local VMware servers on-prem. I have created a default ingress rule; however, I'm still not able to access the service from other machines. I can access it locally using the "curl" command.

I have re-installed the Nginx ingress controller and configured a default ingress resource, but I am still not able to access the service from outside.

[root@uat-amk8smaster01 ~]# kubectl -n stackstorm get svc dd-stackstorm-st2web
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
dd-stackstorm-st2web   NodePort   10.101.23.22   <none>        80:32714/TCP   16h
[root@uat-amk8smaster01 ~]#


[root@uat-amk8smaster01 ~]# kubectl -n stackstorm get ingress
NAME                  HOSTS   ADDRESS   PORTS   AGE
st2-ingress-default   *                 80      15h
[root@uat-amk8smaster01 ~]#


# cat st2-default-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    name: st2-ingress-default
  name: st2-ingress-default
  namespace: stackstorm
spec:
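  # Default backend: requests that match no other ingress rule are sent here.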
  backend:
    serviceName: dd-stackstorm-st2web
    servicePort: 80
#

The webpage should open when we try to access it using IP:32714.

-- ratnakar
kubernetes
kubernetes-ingress

3 Answers

7/16/2019

If you want to route traffic to your service via ingress, the flow should be the following:

Ingress --> Ingress controller service --> Ingress controller --> dd-stackstorm-st2web service --> dd-stackstorm-st2web pod

Apparently, you are trying to expose your dd-stackstorm-st2web service via NodePort and reach it while bypassing the ingress.

My assumption is that you don't have the ingress controller service exposed.
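
For example, assuming a default ingress-nginx install in the ingress-nginx namespace (the namespace and service name vary by install method, so treat them as placeholders), a minimal sketch of checking and exposing the controller service:

# Check whether the controller service exists and how it is exposed
kubectl -n ingress-nginx get svc

# If it is ClusterIP-only, switch it to NodePort so it is reachable from
# outside the cluster (the service may be named ingress-nginx-controller
# depending on how it was installed)
kubectl -n ingress-nginx patch svc ingress-nginx -p '{"spec": {"type": "NodePort"}}'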

Still, if you want to access the service directly through the NodePort:

curl http://<node-external-ip>:32714

To find the node's external IP:

kubectl get nodes -o wide
-- A_Suh
Source: StackOverflow

7/16/2019

My advice is to check the status of the ingress using kubectl describe ingress st2-ingress-default and see if it has any events; a failing liveness or readiness probe on the backend pods is a common reason you can't connect.

You can also review the Nginx controller pod logs to see whether your traffic is reaching the cluster at all.
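
For example (the ingress-nginx namespace and label below are assumptions based on a default controller install; adjust them to match yours):

# Look for events on the ingress, e.g. complaints about the backend
kubectl -n stackstorm describe ingress st2-ingress-default

# Tail the controller logs to see whether requests reach the cluster at all
kubectl -n ingress-nginx logs -l app.kubernetes.io/name=ingress-nginx --tail=50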

-- wolmi
Source: StackOverflow

7/16/2019

If Kubernetes is running on-premises, you have to deploy an ingress controller yourself. For the Ingress resource to work, the cluster must have an ingress controller running. See this page: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
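
A quick sanity check (the label below assumes ingress-nginx; other controllers use different labels):

# If this returns no pods, no ingress-nginx controller is running and
# Ingress resources will have no effect
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx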

-- EAT
Source: StackOverflow