Envoy Pod-to-Pod communication within a Service in Kubernetes

1/30/2019

Is it possible to send an HTTP REST request to another Kubernetes Pod that belongs to the same Service when Envoy is configured?

Important: I have another question here that directed me to ask again with Envoy-specific tags.

E.g. Service name = UserService, 2 Pods (replicas = 2)

Pod 1 --> Pod 2 // using the Pod IP, not the load-balanced Service hostname
Pod 2 --> Pod 1

The connection is a REST GET to 1.2.3.4:7079/user/1

The host and port values are taken from kubectl get ep.

Both Pod IPs work successfully from outside the Pods, but when I kubectl exec -it into a Pod and make the request via curl, it returns a 404 Not Found for the endpoint.
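
A sketch of the checks described above, using the example values from this question (the Pod name user-service-pod-2 is a placeholder, not a real cluster object):

# look up the Pod IPs and port that back the Service
kubectl get ep UserService

# from a cluster node, outside the Pods: both succeed
ping 1.2.3.4
curl -v http://1.2.3.4:7079/user/1

# from inside the other Pod: ping succeeds, but the REST call returns 404
kubectl exec -it user-service-pod-2 -- ping -c 1 1.2.3.4
kubectl exec -it user-service-pod-2 -- curl -v http://1.2.3.4:7079/user/1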

Q: What I would like to know is whether it is possible to make a request to another Kubernetes Pod that is in the same Service? Answered: this is definitely possible.

Q: Why am I able to ping 1.2.3.4 successfully, but not hit the REST API?

Q: Is it possible to directly request a Pod IP from another Pod when Envoy is configured?

Please let me know what config files or output are needed to make progress, as I am a complete beginner with Kubernetes. Thanks.

Below are my config files.

values.yml

replicaCount: 1

image:
  repository: "docker.hosted/app"
  tag: "0.1.0"
  pullPolicy: Always
  pullSecret: "a_secret"

service:
  name: http
  type: NodePort
  externalPort: 7079
  internalPort: 7079

ingress:
  enabled: false

deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "app.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:

            - name: MY_POD_IP
              valueFrom:
               fieldRef:
                fieldPath: status.podIP
            - name: MY_POD_PORT
              value: "{{ .Values.service.internalPort }}"
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /actuator/alive
              port: {{ .Values.service.internalPort }}
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /actuator/ready
              port: {{ .Values.service.internalPort }}
          initialDelaySeconds: 60
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}

service.yml

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}

(Screenshots of the command output were attached here, showing the requests executed from the k8s master and executed from inside a Pod of the same microservice.)

EDIT 2: output from 'kubectl get -o yaml deployment'

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-01-29T20:34:36Z
  generation: 1
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25447023"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/msg-messaging-room
  uid: 4b283304-2405-11e9-abb9-000c29c7d15c
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: msg-messaging-room
      release: msg-messaging-room
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: msg-messaging-room
        release: msg-messaging-room
    spec:
      containers:
      - env:
        - name: KAFKA_HOST
          value: confluent-kafka-cp-kafka-headless
        - name: KAFKA_PORT
          value: "9092"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: MY_POD_PORT
          value: "7079"
        image: msg-messaging-room:0.0.22
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/alive
            port: 7079
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: msg-messaging-room
        ports:
        - containerPort: 7079
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/ready
            port: 7079
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-01-29T20:35:43Z
    lastUpdateTime: 2019-01-29T20:35:43Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-29T20:34:36Z
    lastUpdateTime: 2019-01-29T20:36:01Z
    message: ReplicaSet "msg-messaging-room-6f49b5df59" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

output from 'kubectl get -o yaml svc $the_service'

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-01-29T20:34:36Z
  labels:
    app: msg-messaging-room
    chart: msg-messaging-room-0.0.22
    heritage: Tiller
    release: msg-messaging-room
  name: msg-messaging-room
  namespace: default
  resourceVersion: "25446807"
  selfLink: /api/v1/namespaces/default/services/msg-messaging-room
  uid: 4b24bd84-2405-11e9-abb9-000c29c7d15c
spec:
  clusterIP: 1.2.3.172.201
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31849
    port: 7079
    protocol: TCP
    targetPort: 7079
  selector:
    app: msg-messaging-room
    release: msg-messaging-room
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
-- M_K
envoyproxy
istio
kubernetes
load-balancing
spring-boot

2 Answers

4/10/2020

For the Pod-to-Pod part:

Adding another Service (headless) will allow you to access another Pod via curl while still having Istio enabled.

For example, adding:

kind: Service
metadata:
  name: {{ template "app.fullname" . }}-headless
  labels:
  ... [same as other service]
spec:
  clusterIP: None
  ... [same as other service]

A headless Service exposes the Pods themselves as endpoints, rather than its own clusterIP.

If you don't need load balancing you can just use the headless Service; if you want both, use the first Service for external traffic and the headless one for Pod-to-Pod communication.
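
A fuller sketch of what that headless Service might look like for this chart, reusing the values from the question's service.yml (the -headless name suffix is an assumption, not part of the original chart):

apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.fullname" . }}-headless
  labels:
    app: {{ template "app.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  clusterIP: None   # headless: no virtual IP, DNS returns the Pod IPs directly
  ports:
    - port: {{ .Values.service.internalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: {{ .Values.service.name }}
  selector:
    app: {{ template "app.name" . }}
    release: {{ .Release.Name }}

With that in place, a Pod can reach its peers either by Pod IP or via the DNS records the headless Service creates, for example (the release name is taken from the question's output; <pod-ip> is a placeholder):

nslookup msg-messaging-room-headless.default.svc.cluster.local
curl http://<pod-ip>:7079/user/1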

-- char
Source: StackOverflow

2/20/2019

What I posted on another question was: I disabled Istio injection before installing the service and then re-enabled it after installing the service, and now it's all working fine. The commands that worked for me were:

(The commands were posted as a screenshot and are not reproduced here.)
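
A sketch of what disabling and re-enabling automatic Istio sidecar injection typically looks like with Helm v2; the default namespace and the release/chart names are assumptions, not the exact commands from the screenshot:

# turn off automatic sidecar injection for the namespace
kubectl label namespace default istio-injection-

# install the chart (Helm v2 / Tiller syntax, as used in the question)
helm install --name msg-messaging-room ./msg-messaging-room

# turn automatic injection back on
kubectl label namespace default istio-injection=enabled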

-- M_K
Source: StackOverflow