Istio allowing all outbound traffic

10/20/2018

So, putting everything here in detail for clarification. My service consists of the following resources in a dedicated namespace (not using a ServiceEntry):

  1. Deployment (1)
  2. ConfigMap (1)
  3. Service
  4. VirtualService
  5. Gateway

Istio injection is enabled in the namespace, and when I create/run the deployment the pod comes up with two containers (the application plus the injected sidecar), as it should. Now, as stated in the subject, I want to allow all outgoing traffic for the deployment, because my service needs to reach the following:

  1. Vault, running on port 8200
  2. a Spring config server, running over plain HTTP
  3. external dependencies and other services that are not part of the VPC/Kubernetes cluster

With the following deployment file, outgoing connections are not opened. The only thing that works is a plain HTTPS request on port 443: curl https://google.com succeeds, but curl http://google.com gets no response. The application logs also show that the connection to Vault is not being established.

I have tried almost every combination in the deployment, but none of them seems to work. Am I missing something, or am I doing this the wrong way? I would really appreciate any help with this. :)
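
For reference, this is roughly how the symptom can be reproduced from inside the application container; the pod name is taken from the describe output further down, and the Vault URL is only illustrative:

# HTTPS on 443 works:
kubectl -n temp-namespace exec my-application-service-deployment-fb897c6d6-9ztnx \
  -c my-application-service-pod -- curl -sv https://google.com

# Plain HTTP gets no response:
kubectl -n temp-namespace exec my-application-service-deployment-fb897c6d6-9ztnx \
  -c my-application-service-pod -- curl -sv http://google.com

# Vault on 8200 (address is hypothetical) does not connect either:
kubectl -n temp-namespace exec my-application-service-deployment-fb897c6d6-9ztnx \
  -c my-application-service-pod -- curl -sv http://vault.example.internal:8200/v1/sys/health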

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-application-service
  name: my-application-service-deployment
  namespace: temp-namespace
  annotations:
    traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: my-application-service-env-variables
        image: image.from.dockerhub:latest
        name: my-application-service-pod
        ports:
        - containerPort: 8080
          name: myappsvc
        resources:
          limits:
            cpu: 700m
            memory: 1.8Gi
          requests:
            cpu: 500m
            memory: 1.7Gi

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-application-service-ingress
  namespace: temp-namespace
spec:
  hosts:
  - my-application.mydomain.com
  gateways:
  - http-gateway
  http:
  - route:
    - destination:
        host: my-application-service
        port:
          number: 80


kind: Service
apiVersion: v1
metadata:
  name: my-application-service
  namespace: temp-namespace
spec:
  selector:
    app: my-application-service-deployment
  ports:
  - port: 80
    targetPort: myappsvc
    protocol: TCP


apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: temp-namespace
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.mydomain.com"

Namespace with Istio injection enabled:

Name:         temp-namespace
Labels:       istio-injection=enabled
Annotations:  <none>
Status:       Active

No resource quota.

No resource limits. 

Describing the pod shows that Istio and the sidecar are working:

Name:           my-application-service-deployment-fb897c6d6-9ztnx
Namespace:      temp-namespace
Node:           ip-172-31-231-93.eu-west-1.compute.internal/172.31.231.93
Start Time:     Sun, 21 Oct 2018 14:40:26 +0500
Labels:         app=my-application-service-deployment
                pod-template-hash=964537282
Annotations:    sidecar.istio.io/status={"version":"2e0c897425ef3bd2729ec5f9aead7c0566c10ab326454e8e9e2b451404aee9a5","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs...
Status:         Running
IP:             100.115.0.4
Controlled By:  ReplicaSet/my-application-service-deployment-fb897c6d6
Init Containers:
  istio-init:
    Container ID:  docker://a47003a092ec7d3dc3b1d155bca0ec53f00e545ad1b70e1809ad812e6f9aad47
    Image:         docker.io/istio/proxy_init:1.0.2
    Image ID:      docker-pullable://istio/proxy_init@sha256:e16a0746f46cd45a9f63c27b9e09daff5432e33a2d80c8cc0956d7d63e2f9185
    Port:          <none>
    Host Port:     <none>
    Args:
      -p
      15001
      -u
      1337
      -m
      REDIRECT
      -i
      *
      -x

      -b
      8080,
      -d

    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 21 Oct 2018 14:40:26 +0500
      Finished:     Sun, 21 Oct 2018 14:40:26 +0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  my-application-service-pod:
    Container ID:   docker://1a30a837f359d8790fb72e6b8fda040e121fe5f7b1f5ca47a5f3732810fd4f39
    Image:          image.from.dockerhub:latest
    Image ID:       docker-pullable://848569320300.dkr.ecr.eu-west-1.amazonaws.com/k8_api_env@sha256:98abee8d955cb981636fe7a81843312e6d364a6eabd0c3dd6b3ff66373a61359
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 21 Oct 2018 14:40:28 +0500
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     700m
      memory:  1932735283200m
    Requests:
      cpu:     500m
      memory:  1825361100800m
    Environment Variables from:
      my-application-service-env-variables  ConfigMap  Optional: false
    Environment:
      vault.token:  <set to the key 'vault_token' in secret 'vault.token'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rc8kc (ro)
  istio-proxy:
    Container ID:  docker://3ae851e8ded8496893e5b70fc4f2671155af41c43e64814779935ea6354a8225
    Image:         docker.io/istio/proxyv2:1.0.2
    Image ID:      docker-pullable://istio/proxyv2@sha256:54e206530ba6ca9b3820254454e01b7592e9f986d27a5640b6c03704b3b68332
    Port:          <none>
    Host Port:     <none>
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      my-application-service-deployment
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --statsdUdpAddress
      istio-statsd-prom-bridge.istio-system:9125
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Sun, 21 Oct 2018 14:40:28 +0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      POD_NAME:                      my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
      POD_NAMESPACE:                 temp-namespace (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           my-application-service-deployment-fb897c6d6-9ztnx (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-rc8kc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rc8kc
    Optional:    false
  istio-envoy:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  Memory
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.default
    Optional:    true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                  Message
  ----    ------                 ----  ----                                                  -------
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "istio-certs"
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "default-token-rc8kc"
  Normal  SuccessfulMountVolume  3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  MountVolume.SetUp succeeded for volume "istio-envoy"
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "docker.io/istio/proxy_init:1.0.2" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Scheduled              3m    default-scheduler                                     Successfully assigned my-application-service-deployment-fb897c6d6-9ztnx to ip-172-31-231-93.eu-west-1.compute.internal
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "image.from.dockerhub:latest" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container
  Normal  Pulled                 3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Container image "docker.io/istio/proxyv2:1.0.2" already present on machine
  Normal  Created                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Created container
  Normal  Started                3m    kubelet, ip-172-31-231-93.eu-west-1.compute.internal  Started container
-- Ahsan Naseem
istio
kubernetes

1 Answer

10/21/2018

The issue was that I had added the sidecar annotation to the Deployment's metadata instead of the pod template; adding it to the pod template resolved the issue. Got help from here:

https://github.com/istio/istio/issues/9304
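
This also matches the describe output above: istio-init was started with -i * (intercept all outbound IP ranges) and an empty -x, meaning the exclusion annotation placed on the Deployment's own metadata never reached the injected sidecar. A minimal sketch of the corrected placement, assuming the same names as above and showing only the relevant fields:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-application-service-deployment
  namespace: temp-namespace
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-application-service-deployment
      annotations:
        # The injector reads traffic annotations from the pod template,
        # not from the Deployment's own metadata.
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "0.0.0.0/0"
    spec:
      containers:
      - name: my-application-service-pod
        image: image.from.dockerhub:latest
        ports:
        - containerPort: 8080
          name: myappsvc

After redeploying, the istio-init args should show the excluded range (e.g. -x 0.0.0.0/0), and plain HTTP and Vault connections from the pod should go through.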

-- Ahsan Naseem
Source: StackOverflow