Accessing Kubernetes services through one IP (Master Node)

12/14/2018

I have a Kubernetes cluster with one master node and two worker nodes, running on CentOS 7 machines (on-premise environment). Is there a way to access all deployed services (built-in and my microservice) on Kubernetes through the master node's IP?

I am using the flannel network plugin. My service runs on NodePort 30011. I can access the service through the worker nodes' IPs and the NodePort (192.23.12.X1:30011 and 192.23.12.X2:30011), but I cannot access the same service through the master node (192.23.19.21:30011).

Here are my deployment and service YAML files.

deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: am-profile
  labels:
    app: am-profile
spec:
  replicas: 1
  selector:
    matchLabels:
      app: am-profile
  template:
    metadata:
      labels:
        app: am-profile
    spec:
      containers:
      - name: am-profile
        image: 192.23.12.160:8083/am-setting:1.0.0
        ports:
        - containerPort: 8081

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: am-profile
  labels:
    app: am-profile
spec:
  type: NodePort
  ports:
  - targetPort: 8081
    port: 8081
    nodePort: 30011
  selector:
    app: am-profile

I want to access this service as http://master-node:30011/hello. Any help is appreciated.

Here is the iptables-save output:

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/subscriber-profile-service:" -m tcp --dport 30002 -j KUBE-MARK-MASQ 
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/subscriber-profile-service:" -m tcp --dport 30002 -j KUBE-SVC-IUSISESM6NEI4T53 
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.107.113.5/32 -p tcp -m comment --comment "default/subscriber-profile-service: cluster IP" -m tcp --dport 8082 -j KUBE-MARK-MASQ 
-A KUBE-SERVICES -d 10.107.113.5/32 -p tcp -m comment --comment "default/subscriber-profile-service: cluster IP" -m tcp --dport 8082 -j KUBE-SVC-IUSISESM6NEI4T53
-- Balarama
devops
flannel
kubernetes

1 Answer

12/14/2018

If the Kubernetes cluster has no network issues, you can access a NodePort service through any node of the cluster, including the master node(s).

By default, kube-proxy creates iptables rules that forward traffic from NodeIP:NodePort to a specific pod IP and port. You can inspect the existing iptables rules by running the following command on each node:

$ sudo iptables-save   
# you may need to install iptables package to use this command
# yum -y install iptables

-A KUBE-NODEPORTS ... -j KUBE-SVC-... # the NodePort rule for the service
-A KUBE-SVC-... -j KUBE-SEP-... # links to the endpoint rules and load balancing
-A KUBE-SEP-... ... -j DNAT --to-destination <pod-ip:port> # the final destination for traffic that comes to the NodePort
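To follow this chain for a concrete service, the output can be filtered by the NodePort; 30011 is the port from the question, and KUBE-SVC-XXXX below is a placeholder for whatever chain name your own output shows:

```shell
# Find the NodePort rule for port 30011 and note the KUBE-SVC-... chain it jumps to
sudo iptables-save | grep -- '--dport 30011'

# Then follow that chain to its KUBE-SEP-... endpoint rules
# (replace KUBE-SVC-XXXX with the chain name printed above)
sudo iptables-save | grep 'KUBE-SVC-XXXX'
```

If the first grep prints nothing on the master node, kube-proxy is not programming the NodePort rules there, which would explain the behaviour you see.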

If all the mentioned rules are in place, check connectivity from the master node to the pod directly:

master-node$> curl http://<pod-ip>:<port>/path-if-needed/
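The pod IP for that check can be looked up with kubectl first; the label below matches the am-profile deployment from the question, and 8081 is its containerPort:

```shell
# List the pod(s) for the deployment; the IP column is what <pod-ip> refers to
kubectl get pods -l app=am-profile -o wide

# With the deployment's containerPort 8081, the direct check becomes:
# curl http://<pod-ip>:8081/hello
```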

If that check fails with a connection error, verify the following:

  • Are there custom firewall or firewalld rules that could drop the traffic?
  • Does the cloud VPC security allow traffic between nodes?
  • Is the networking solution (flannel, Calico, etc.) installed and working properly?
  • Is SELinux enabled?
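On CentOS 7 those checks might look like the following sketch; the NodePort and flannel come from the question, and 8472/udp is the VXLAN port flannel uses by default:

```shell
# firewalld: if running, the NodePort (and flannel's VXLAN port) must be open
sudo firewall-cmd --state
sudo firewall-cmd --permanent --add-port=30011/tcp
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload

# SELinux: report the current mode (Enforcing/Permissive/Disabled)
getenforce

# flannel: every node, including the master, should have a running flannel pod
kubectl -n kube-system get pods -o wide | grep flannel
```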
-- VAS
Source: StackOverflow