An init container running kubectl get pod is used to check the ready status of another pod. After an Egress NetworkPolicy was turned on, the init container can no longer reach the Kubernetes API: Unable to connect to the server: dial tcp 10.96.0.1:443: i/o timeout. The CNI is Calico.
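For context, the setup is roughly like the following init container sketch; the target pod name, image, and polling loop are assumptions, since the exact command is not shown in the question:

initContainers:
- name: wait-for-other-pod
  image: bitnami/kubectl      # placeholder image; any image with kubectl works
  command:
  - sh
  - -c
  - |
    # Poll the other pod's ready flag via the API server (10.96.0.1:443 behind the kubernetes service).
    # The pod's service account needs RBAC permission to get pods.
    until kubectl get pod other-pod \
        -o jsonpath='{.status.containerStatuses[0].ready}' | grep -q true; do
      sleep 2
    done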
Several rules were tried, but none of them work (service and master host IPs, different CIDR masks):
...
egress:
- to:
  - ipBlock:
      cidr: 10.96.0.1/32
  ports:
  - protocol: TCP
    port: 443
...
or using a namespaceSelector (tried with the default and kube-system namespaces):
...
egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: default
  ports:
  - protocol: TCP
    port: 443
...
It looks like the ipBlock rules simply don't work, and the namespaceSelector rules don't work because the Kubernetes API server is not an ordinary pod. Can this be configured somehow? Kubernetes is 1.9.5, Calico is 3.1.1.
The problem still exists with GKE 1.13.7-gke.8 and Calico 3.2.7.
The only workaround I could come up with so far is the following:
podSelector:
  matchLabels:
    white: listed
egress:
- to:
  - ipBlock:
      cidr: 0.0.0.0/0
This will allow accessing the API server, along with every other IP address on the internet :-/
You can combine this with a deny-all-non-whitelisted-traffic policy for the namespace, so that egress is denied for all other pods.
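As a rough sketch of that combination (the policy names, the namespace, and the white: listed label are placeholders, not values from the question):

# Deny all egress for every pod in the namespace by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: my-namespace    # placeholder
spec:
  podSelector: {}
  policyTypes:
  - Egress
---
# Re-allow egress only for pods labelled white: listed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-whitelisted-egress
  namespace: my-namespace    # placeholder
spec:
  podSelector:
    matchLabels:
      white: listed
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0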
We aren't on GCP, but the same should apply. We query AWS for the CIDR of our master nodes and use that value in the Helm charts that create the NetworkPolicy for Kubernetes API access. In our case the masters are part of an auto-scaling group, so we need the whole CIDR; in your case a single IP might be enough.
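A minimal sketch of that lookup, assuming the masters live in a known subnet and the chart exposes an apiServerCidr value (the subnet ID, chart path, and value name are all placeholders, not from the original answer):

# Look up the masters' subnet CIDR in AWS ...
CIDR=$(aws ec2 describe-subnets \
  --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[0].CidrBlock' --output text)

# ... and pass it to the chart that renders the NetworkPolicy.
helm upgrade --install api-access ./charts/network-policy \
  --set apiServerCidr="$CIDR"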
You need to get the real IP of the master using kubectl get endpoints --namespace default kubernetes and create an egress policy that allows it.
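For example (a sketch; the jsonpath assumes a single master, adjust for HA setups):

# Print the endpoints of the API server; the IP(s) shown go into the policy below.
kubectl get endpoints --namespace default kubernetes

# Or extract just the first address for scripting.
MASTER_IP=$(kubectl get endpoints --namespace default kubernetes \
  -o jsonpath='{.subsets[0].addresses[0].ip}')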
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-apiserver
  namespace: test
spec:
  policyTypes:
  - Egress
  podSelector: {}
  egress:
  - ports:
    - port: 443
      protocol: TCP
    to:
    - ipBlock:
        cidr: x.x.x.x/32
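To sanity-check the policy, something like the following works as a sketch (the file name, test image, and use of /healthz are assumptions; x.x.x.x stands for the master IP found above):

kubectl apply -f allow-apiserver.yaml

# Spin up a throwaway pod in the namespace and hit the API server directly.
kubectl run apiserver-check --rm -it --restart=Never \
  --namespace test --image=curlimages/curl -- \
  curl -sk https://x.x.x.x:443/healthz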