Istio direct Pod to Pod communication

4/1/2019

I have a problem with direct communication to a Pod from another Pod deployed with Istio. I actually need it to make Hazelcast discovery work with Istio, but I'll try to generalize the issue here.

Let's take a sample hello world service deployed on Kubernetes. The service replies to HTTP requests on port 8000.

$ kubectl create deployment hello-deployment --image=crccheck/hello-world

The created Pod has an internal IP assigned:

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE                                                  NOMINATED NODE
hello-deployment-84d876dfd-s6r5w   1/1     Running   0          8m    10.20.3.32   gke-rafal-test-istio-1-0-default-pool-91f437a3-cf5d   <none>

In the job curl.yaml, we can use the Pod IP directly.

apiVersion: batch/v1
kind: Job
metadata:
  name: curl
spec:
  template:
    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        command: ["curl",  "10.20.3.32:8000"]
      restartPolicy: Never
  backoffLimit: 4

Running the job without Istio works fine.

$ kubectl apply -f curl.yaml
$ kubectl logs pod/curl-pptlm
...
Hello World
...

However, when I try to do the same with Istio, it does not work. The HTTP request gets blocked by Envoy.

$ kubectl apply -f <(istioctl kube-inject -f curl.yaml)
$ kubectl logs pod/curl-s2bj6 curl
...
curl: (7) Failed to connect to 10.20.3.32 port 8000: Connection refused
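
For debugging, the sidecar's listener and cluster configuration can be dumped with istioctl proxy-config (this assumes the injected pod from my run above is still around, which it should be, since the istio-proxy container keeps running):

$ istioctl proxy-config listeners curl-s2bj6
$ istioctl proxy-config clusters curl-s2bj6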

I've played with Service Entries, MESH_INTERNAL, and MESH_EXTERNAL, but with no success. How can I bypass Envoy and make a direct call to a Pod?
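
For illustration, a ServiceEntry of the kind I experimented with (the host name is just a placeholder, since the API requires one; the address is the Pod IP from above):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: hello-pod
spec:
  hosts:
  - hello.pod.local        # placeholder name, required by the API
  addresses:
  - 10.20.3.32/32          # the Pod IP
  location: MESH_INTERNAL
  ports:
  - number: 8000
    name: http
    protocol: HTTP
  resolution: NONE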


EDIT: The output of istioctl kube-inject -f curl.yaml.

$ istioctl kube-inject -f curl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: curl
spec:
  backoffLimit: 4
  template:
    metadata:
      annotations:
        sidecar.istio.io/status: '{"version":"dbf2d95ff300e5043b4032ed912ac004974947cdd058b08bade744c15916ba6a","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
      creationTimestamp: null
    spec:
      containers:
      - command:
        - curl
        - 10.20.3.32:8000
        image: byrnedo/alpine-curl
        name: curl
        resources: {}
      - args:
        - proxy
        - sidecar
        - --domain
        - $(POD_NAMESPACE).svc.cluster.local
        - --configPath
        - /etc/istio/proxy
        - --binaryPath
        - /usr/local/bin/envoy
        - --serviceCluster
        - curl.default
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --discoveryAddress
        - istio-pilot.istio-system:15010
        - --zipkinAddress
        - zipkin.istio-system:9411
        - --connectTimeout
        - 10s
        - --proxyAdminPort
        - "15000"
        - --concurrency
        - "2"
        - --controlPlaneAuthPolicy
        - NONE
        - --statusPort
        - "15020"
        - --applicationPorts
        - ""
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: ISTIO_META_CONFIG_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: ISTIO_META_INTERCEPTION_MODE
          value: REDIRECT
        image: docker.io/istio/proxyv2:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
          initialDelaySeconds: 1
          periodSeconds: 2
        resources:
          limits:
            cpu: "2"
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsUser: 1337
        volumeMounts:
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
      initContainers:
      - args:
        - -p
        - "15001"
        - -u
        - "1337"
        - -m
        - REDIRECT
        - -i
        - '*'
        - -x
        - ""
        - -b
        - ""
        - -d
        - "15020"
        image: docker.io/istio/proxy_init:1.1.1
        imagePullPolicy: IfNotPresent
        name: istio-init
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
      restartPolicy: Never
      volumes:
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-certs
        secret:
          optional: true
          secretName: istio.default
status: {}
---
-- Rafał Leszko
istio
kubernetes

2 Answers

5/21/2019

Make sure that you have configured an ingress "Gateway", and after doing that you need to configure a "VirtualService". See the link below for a simple example.

https://istio.io/docs/tasks/traffic-management/ingress/#configuring-ingress-using-an-istio-gateway
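
A minimal sketch of the two resources (the host hello.example.com and the Service name hello are placeholders, since the question defines no Service; adapt them to your deployment):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hello-gateway
spec:
  selector:
    istio: ingressgateway    # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hello.example.com"    # placeholder host
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - "hello.example.com"      # must match a host of the gateway
  gateways:
  - hello-gateway
  http:
  - route:
    - destination:
        host: hello          # assumes a Kubernetes Service named "hello"
        port:
          number: 8000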

Once you have deployed the gateway along with the virtual service, you should be able to curl your service from outside the cluster via an external IP.
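
For example (assuming the Gateway sketched above and that the istio-ingressgateway service got a LoadBalancer IP):

$ export INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl -H "Host: hello.example.com" http://$INGRESS_IP/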

But if you want to check traffic from INSIDE the cluster, you will need to use Istio's mirroring API to mirror the service (pod) from one pod to another, and THEN run your command (kubectl apply -f curl.yaml) to see the traffic.

See the link below for a mirroring example:

https://istio.io/docs/tasks/traffic-management/mirroring/
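
A mirroring rule is just an extra mirror field on a VirtualService route. A sketch, assuming a Service named hello with v1/v2 subsets already defined in a DestinationRule (none of this exists in the question; it only shows the shape):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - hello                  # assumes a Service named "hello"
  http:
  - route:
    - destination:
        host: hello
        subset: v1         # live traffic
      weight: 100
    mirror:
      host: hello
      subset: v2           # fire-and-forget copy of each request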

Hope this helps.

-- dghant1024
Source: StackOverflow

6/15/2019

When a pod with an Istio sidecar is started, the following things happen:

  1. An init container changes the iptables rules so that all outgoing TCP traffic is routed to the sidecar container (istio-proxy) on port 15001.

  2. The containers of the pod are started in parallel (curl and istio-proxy).

If your curl container is executed before istio-proxy listens on port 15001, you get the error.
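
If you cannot control the start order, one workaround is to let the application container poll the sidecar's readiness endpoint before running its real command. A sketch only: the status port 15020 is taken from the injected spec in the question, and sh is assumed to be present in the image.

    spec:
      containers:
      - name: curl
        image: byrnedo/alpine-curl
        # wait until the sidecar reports ready, then run the actual request
        command: ["sh", "-c", "until curl -s http://localhost:15020/healthz/ready; do sleep 1; done; curl 10.20.3.32:8000"]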

In my case, I started the container with a sleep command, exec'd into it, and the curl worked.

$ kubectl apply -f <(istioctl kube-inject -f curl-pod.yaml)

$ k exec -it -n noistio curl -c curl bash
[root@curl /]# curl 172.16.249.198:8000
<xmp>
Hello World


                                       ##         .
                                 ## ## ##        ==
                              ## ## ## ## ##    ===
                           /""""""""""""""""\___/ ===
                      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
                           \______ o          _,/
                            \      \       _,'
                             `'--.._\..--''
</xmp>
[root@curl /]# 

curl-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: curl
spec:
  containers:
  - name: curl
    image: centos
    command: ["sleep",  "3600"]
-- christian
Source: StackOverflow