How can I expose a service to other pods in kubernetes?

11/25/2019

I have a simple deployment

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

And here is what my cluster looks like. Pretty simple.

$kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE                      NOMINATED NODE   READINESS GATES
my-shell-95cb5df57-cdj4z            1/1     Running   0          23m   10.60.1.32   aks-nodepool-19248108-0   <none>           <none>
nginx-deployment-76bf4969df-58d66   1/1     Running   0          36m   10.60.1.10   aks-nodepool-19248108-0   <none>           <none>
nginx-deployment-76bf4969df-jfkq7   1/1     Running   0          36m   10.60.1.21   aks-nodepool-19248108-0   <none>           <none>
$kubectl get services -o wide
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE     SELECTOR
internal-ingress   LoadBalancer   10.0.0.194   10.60.1.35    80:30157/TCP   5m28s   app=nginx-deployment
kubernetes         ClusterIP      10.0.0.1     <none>        443/TCP        147m    <none>
$kubectl get rs -o wide
NAME                          DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES        SELECTOR
my-shell-95cb5df57            1         1         1       23m   my-shell     ubuntu        pod-template-hash=95cb5df57,run=my-shell
nginx-deployment-76bf4969df   2         2         2       37m   nginx        nginx:1.7.9   app=nginx,pod-template-hash=76bf4969df

I see I have 2 pods with my nginx app. I want to be able to send a request from any other new pod to either one of them. If one crashes, I still want to be able to send this request. In the past I used a load balancer for this. The problem with load balancers is that they expose a public IP, and in this specific scenario I don't want a public IP anymore. I want this service to be invoked by other pods directly, without a public IP.

I tried to use an internal load balancer.

apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "my-subnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.60.1.45
  ports:
  - port: 80
  selector:
    app: nginx-deployment

The problem is that it does not get an IP in my 10.60.0.0/16 network as described here: https://docs.microsoft.com/en-us/azure/aks/internal-lb#specify-a-different-subnet
Instead, the EXTERNAL-IP stays at this never-ending <pending>.

kubectl get services -o wide
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE    SELECTOR
internal-ingress   LoadBalancer   10.0.0.230   <pending>     80:30638/TCP   15s    app=nginx-deployment
kubernetes         ClusterIP      10.0.0.1     <none>        443/TCP        136m   <none>

What am I missing? How do I troubleshoot this? Is it even possible to have pod-to-service communication?

-- ddreian
azure
azure-kubernetes
docker
kubernetes

2 Answers

11/26/2019

Use a ClusterIP Service (the default type), which creates only a cluster-internal IP and no public IP:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80

Then you can access the Service (and thus the Pods behind it) from any other Pod in the same namespace by using the Service name as the DNS name:

curl nginx-service
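
For example, a quick way to test this from inside the cluster is a throwaway Pod (busybox and its wget are just one convenient choice; any image with an HTTP client will do):

kubectl run test-client --image=busybox --rm -it --restart=Never -- wget -qO- nginx-service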

If the Pod from which you want to access the Service is in a different namespace, you have to use the fully qualified domain name of the Service:

curl nginx-service.my-namespace.svc.cluster.local
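
For example, to verify this from a different namespace (other-ns below stands for any existing namespace other than the Service's, and my-namespace for the namespace the Service actually lives in):

kubectl run test-client -n other-ns --image=busybox --rm -it --restart=Never -- wget -qO- nginx-service.my-namespace.svc.cluster.local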
-- weibeld
Source: StackOverflow

11/26/2019

From the message you provide, it seems you want to use a specific private IP address from the same subnet that the AKS cluster uses. The likely cause of the problem is that the IP address you want to use is already assigned within that subnet, which means you cannot use it.

Troubleshooting

Go to the VNet that your AKS cluster uses and check whether the IP address is already in use in that subnet.
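
Two quick checks can help. On the Kubernetes side, the Service's events usually say why the external IP stays pending; on the Azure side, the CLI can tell you whether an address is still free (the resource group and VNet names below are placeholders for your own):

kubectl describe service internal-ingress

az network vnet check-ip-address --resource-group my-resource-group --name my-vnet --ip-address 10.60.1.45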

Solution

Choose an IP address from the AKS subnet that is not already assigned. Alternatively, do not specify one at all and let AKS assign the load balancer IP dynamically. In that case, change your YAML file like below:

apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx # must match the Pod labels (app: nginx), not the Deployment name
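
After applying the manifest, it can take a minute or two for AKS to provision the internal load balancer; watching the Service shows when a private IP from the subnet replaces <pending> (the file name here is just an example):

kubectl apply -f internal-ingress.yaml
kubectl get service internal-ingress --watch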
-- Charles Xu
Source: StackOverflow