Kubernetes external-IP doesn't work on all nodes

6/10/2018

I have created a Kubernetes cluster with kubeadm, consisting of 3 nodes. Their IP addresses and hostnames are 10.10.10.146/24 (k8s1, master), 10.10.10.135/24 (k8s2), and 10.10.10.170/24 (k8s3).

Now I create an nginx deployment with 3 pods and a service for it, using this YAML file:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-app
        image: nginx:1.14.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-srv
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  externalIPs: 
    - 10.10.100.1

In the end, one of the pods was scheduled to k8s2 and the other two to k8s3.

Then, if I route 10.10.100.0/24 to k8s2, only one pod works. If I route it to k8s3, only two pods work. And if I route it to k8s1, no pods work at all.

How can I make all pods reachable through the external IP from outside, just like they are through the cluster IP from inside, no matter which node I route the external-IP subnet to? Or is that not possible, and do I need something else such as a Kubernetes Ingress?

-- 王立巍
kubernetes

1 Answer

6/11/2018

There are several options to expose your service outside the cluster:

The first option is to use the NodePort type of Kubernetes Service. This way, the Service opens a port on the network interface of every node in the cluster, and all traffic coming to that port is forwarded to the service. By default, the port range for this kind of service is limited to 30000–32767.

Here is an example of Service NodePort configuration:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
    nodePort: 30080
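
Applied to your case, the same idea would look roughly like this (a sketch of your nginx-app-srv Service switched to NodePort; the nodePort value 30080 is just an example from the allowed range):

# Sketch: the nginx-app-srv Service from the question exposed as NodePort.
# 30080 is an arbitrary example port from the 30000-32767 range.
apiVersion: v1
kind: Service
metadata:
  name: nginx-app-srv
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080

With this, nginx should be reachable on port 30080 of any node's IP (10.10.10.146, 10.10.10.135 or 10.10.10.170), no matter which node the pods are scheduled to.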

The second option is available if you are running your Kubernetes cluster in a cloud such as AWS, GCP, or Azure. If you create a LoadBalancer type of service, Kubernetes will provision a new load balancer in the cloud and forward all traffic from that load balancer to the service. The downside of this approach is that each service creates a separate load balancer, which will cost you additional money.

Here is an example of Service LoadBalancer configuration:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
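
After the cloud provider provisions the load balancer, its address is written back into the Service status, something like this (the IP below is just a placeholder):

# Example of the status the cloud provider fills in (placeholder address):
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10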

The third option is to use an Ingress object. An Ingress controller must be running in the cluster; it configures the networking according to the content of the Ingress objects you create. An Ingress can route requests to different services depending on the DNS name and the URI path. You can also use it on a bare-metal Kubernetes cluster, but in that case you are responsible for routing network traffic to the Ingress controller, for example with firewall or routing rules.

Here are a couple of examples of Ingress configuration:

# redirect all traffic to a service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80


# redirect traffic to a service for specified URI path
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
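
And since Ingress rules can also match on host name, here is a sketch of routing by DNS name (the host and service names are just placeholders):

# redirect traffic to different services depending on the requested host name
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress-hosts
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - backend:
          serviceName: foo-svc
          servicePort: 80
  - host: bar.example.com
    http:
      paths:
      - backend:
          serviceName: bar-svc
          servicePort: 80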
-- VAS
Source: StackOverflow