Kubernetes pod-level restricted access to other EC2 instances from AWS EKS nodes

1/3/2019

I have an Elasticsearch database running on an EC2 instance. The backend services that connect to it run on AWS EKS nodes.

To let the backend Kubernetes pods access the Elasticsearch database, I added the allowed security groups to the EKS nodes, and it works fine.

But my question is: all the other pods (not just the backend ones) running on the same node can also reach the Elasticsearch database, because access is granted by the underlying node's security groups. Is there a more secure way to handle this?

-- wudpecker
amazon-eks
amazon-web-services
aws-eks
aws-security-group
kubernetes

1 Answer

1/4/2019

In this situation you could additionally use Kubernetes NetworkPolicies to define rules that specify which pods are allowed to send traffic to the Elasticsearch database.

For instance, start by creating a default policy that denies all egress traffic for every pod in the namespace, like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  # An empty podSelector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  # Listing Egress with no egress rules denies all outgoing traffic
  - Egress
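
Assuming the manifest above is saved as default-deny.yaml (the filename is just an illustration), it can be applied to the namespace the backend pods run in:

kubectl apply -f default-deny.yaml -n default

Because the manifest does not set a namespace in its metadata, the policy is created in whichever namespace you target with -n.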

and then allow outgoing traffic from the specific pods (those labeled role: db) to the CIDR 10.0.0.0/24 on TCP port 5978:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  # Select only the pods labeled role: db
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Egress
  egress:
  # Allow these pods to reach 10.0.0.0/24 on TCP 5978; all other egress stays denied
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
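
For this policy to select the backend pods, they need to carry the role: db label; as a quick sketch (the pod name my-backend-pod is hypothetical):

kubectl label pod my-backend-pod role=db -n default

Also note that NetworkPolicies are only enforced when the cluster's network plugin supports them; on EKS this typically means installing a provider such as Calico, since the stock AWS VPC CNI did not enforce NetworkPolicies at the time of writing.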

Please consult the official Kubernetes documentation for more information on NetworkPolicies.

-- Nepomucen
Source: StackOverflow