Why isn't the Istio "Authentication Policy" example page working as expected?

8/15/2018

The article in question: https://istio.io/docs/tasks/security/authn-policy/. Specifically, when I follow the instructions in the Setup section, I can't connect to any of the httpbin services residing in the foo and bar namespaces, but the one in the legacy namespace works fine. I suspect something is wrong with the sidecar proxy that was installed.
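For context, the Setup section boils down to commands along these lines (a sketch based on the task page; the samples/ paths assume the working directory is the Istio release archive):

> kubectl create ns foo
> kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
> kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
> kubectl create ns bar
> kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
> kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
> kubectl create ns legacy
> kubectl apply -f samples/httpbin/httpbin.yaml -n legacy
> kubectl apply -f samples/sleep/sleep.yaml -n legacy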

Here is the YAML of the httpbin pod (after being injected with the istioctl kube-inject --includeIPRanges "10.32.0.0/16" command). I use --includeIPRanges so that the pod can still reach external IPs (for debugging purposes, e.g. to install the dnsutils package).

apiVersion: v1
kind: Pod
metadata:
  annotations:
    sidecar.istio.io/inject: "true"
    sidecar.istio.io/status: '{"version":"4120ea817406fd7ed43b7ecf3f2e22abe453c44d3919389dcaff79b210c4cd86","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
  creationTimestamp: 2018-08-15T11:40:59Z
  generateName: httpbin-8b9cf99f5-
  labels:
    app: httpbin
    pod-template-hash: "465795591"
    version: v1
  name: httpbin-8b9cf99f5-9c47z
  namespace: foo
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: httpbin-8b9cf99f5
    uid: 1450d75d-a080-11e8-aece-42010a940168
  resourceVersion: "65722138"
  selfLink: /api/v1/namespaces/foo/pods/httpbin-8b9cf99f5-9c47z
  uid: 1454b68d-a080-11e8-aece-42010a940168
spec:
  containers:
  - image: docker.io/citizenstig/httpbin
    imagePullPolicy: IfNotPresent
    name: httpbin
    ports:
    - containerPort: 8000
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-pkpvf
      readOnly: true
  - args:
    - proxy
    - sidecar
    - --configPath
    - /etc/istio/proxy
    - --binaryPath
    - /usr/local/bin/envoy
    - --serviceCluster
    - httpbin
    - --drainDuration
    - 45s
    - --parentShutdownDuration
    - 1m0s
    - --discoveryAddress
    - istio-pilot.istio-system:15007
    - --discoveryRefreshDelay
    - 1s
    - --zipkinAddress
    - zipkin.istio-system:9411
    - --connectTimeout
    - 10s
    - --statsdUdpAddress
    - istio-statsd-prom-bridge.istio-system.istio-system:9125
    - --proxyAdminPort
    - "15000"
    - --controlPlaneAuthPolicy
    - NONE
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: REDIRECT
    image: docker.io/istio/proxyv2:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-proxy
    resources:
      requests:
        cpu: 10m
    securityContext:
      privileged: false
      readOnlyRootFilesystem: true
      runAsUser: 1337
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  dnsPolicy: ClusterFirst
  initContainers:
  - args:
    - -p
    - "15001"
    - -u
    - "1337"
    - -m
    - REDIRECT
    - -i
    - 10.32.0.0/16
    - -x
    - ""
    - -b
    - 8000,
    - -d
    - ""
    image: docker.io/istio/proxy_init:1.0.0
    imagePullPolicy: IfNotPresent
    name: istio-init
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
  nodeName: gke-tvlk-data-dev-default-medium-pool-46397778-q2sb
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-pkpvf
    secret:
      defaultMode: 420
      secretName: default-token-pkpvf
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:41:01Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:44:28Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2018-08-15T11:40:59Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://758e130a4c31a15c1b8bc1e1f72bd7739d5fa1103132861eea9ae1a6ae1f080e
    image: citizenstig/httpbin:latest
    imageID: docker-pullable://citizenstig/httpbin@sha256:b81c818ccb8668575eb3771de2f72f8a5530b515365842ad374db76ad8bcf875
    lastState: {}
    name: httpbin
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2018-08-15T11:41:01Z
  - containerID: docker://9c78eac46a99457f628493975f5b0c5bbffa1dac96dab5521d2efe4143219575
    image: istio/proxyv2:1.0.0
    imageID: docker-pullable://istio/proxyv2@sha256:77915a0b8c88cce11f04caf88c9ee30300d5ba1fe13146ad5ece9abf8826204c
    lastState:
      terminated:
        containerID: docker://52299a80a0fa8949578397357861a9066ab0148ac8771058b83e4c59e422a029
        exitCode: 255
        finishedAt: 2018-08-15T11:44:27Z
        reason: Error
        startedAt: 2018-08-15T11:41:02Z
    name: istio-proxy
    ready: true
    restartCount: 1
    state:
      running:
        startedAt: 2018-08-15T11:44:28Z
  hostIP: 10.32.96.27
  initContainerStatuses:
  - containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
    image: istio/proxy_init:1.0.0
    imageID: docker-pullable://istio/proxy_init@sha256:345c40053b53b7cc70d12fb94379e5aa0befd979a99db80833cde671bd1f9fad
    lastState: {}
    name: istio-init
    ready: true
    restartCount: 0
    state:
      terminated:
        containerID: docker://f267bb44b70d2d383ce3f9943ab4e917bb0a42ecfe17fe0ed294bde4d8284c58
        exitCode: 0
        finishedAt: 2018-08-15T11:41:00Z
        reason: Completed
        startedAt: 2018-08-15T11:41:00Z
  phase: Running
  podIP: 10.32.19.61
  qosClass: Burstable
  startTime: 2018-08-15T11:40:59Z
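Note the istio-proxy container status above: restartCount: 1, with a lastState termination showing exitCode: 255. Commands along these lines (using the pod name from the output above) surface the restart count and the logs of the crashed proxy:

> kubectl get pod httpbin-8b9cf99f5-9c47z -n foo -o jsonpath='{.status.containerStatuses[?(@.name=="istio-proxy")].restartCount}'
> kubectl logs httpbin-8b9cf99f5-9c47z -c istio-proxy -n foo --previous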

Here is an example command where I get the error (sleep.legacy -> httpbin.foo); curl prints 000 and exits with code 7, i.e. it failed to even connect:

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"

000
command terminated with exit code 7
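Since curl cannot even connect, a quick sanity check is that the target Service exists and its name resolves from the legacy namespace (a sketch; nslookup may first need to be installed in the sleep container, e.g. via the dnsutils package mentioned above):

> kubectl get svc httpbin -n foo
> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- nslookup httpbin.foo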

Here is an example command where I get a success status (sleep.legacy -> httpbin.legacy):

> kubectl exec $(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name}) -c sleep -n legacy -- curl http://httpbin.legacy:8000/ip -s -o /dev/null -w "%{http_code}\n"

200

I have followed the instructions to verify that there is no mTLS policy, mesh policy, or TLS-enabling destination rule defined:

> kubectl get policies.authentication.istio.io --all-namespaces
No resources found.
> kubectl get meshpolicies.authentication.istio.io
No resources found.
> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"
host: istio-policy.istio-system.svc.cluster.local
host: istio-telemetry.istio-system.svc.cluster.local
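Those two hosts are the defaults for the Istio control plane itself. To double-check that neither destination rule turns on mTLS, the same output can also be grepped for a TLS block (a sketch):

> kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep -A1 "tls:"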
-- Agung Pratama
istio
kubernetes

1 Answer

8/15/2018

Never mind, I think I found the cause: a misconfiguration on my part. If you look at the statsd address in the pod spec above, it is set to the unresolvable hostname istio-statsd-prom-bridge.istio-system.istio-system:9125 (note the duplicated .istio-system suffix; it should be istio-statsd-prom-bridge.istio-system:9125). I noticed this after seeing the istio-proxy container being restarted/crashing multiple times.
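In Istio 1.0, istioctl kube-inject picks that address up from the mesh configuration, so a fix along these lines should work (a sketch, assuming the bad value sits in the statsdUdpAddress field of the istio configmap in istio-system):

> kubectl -n istio-system get configmap istio -o yaml | grep statsdUdpAddress
> kubectl -n istio-system edit configmap istio

After correcting the value, re-running kube-inject and redeploying the affected pods picks up the fixed address.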

-- Agung Pratama
Source: StackOverflow