So I'm trying to create a pod on a Kubernetes cluster. Here is the yml file from which I am creating the pod.
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod2
spec:
  containers:
  - name: task-pv-container2
    image: <<image_name>>
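For reference, I create the pod from this manifest with kubectl (the filename below is just illustrative):

kubectl create -f task-pv-pod2.yaml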
The pod hangs at ContainerCreating. Here is the output of kubectl describe pod.
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
10s 10s 1 default-scheduler Normal Scheduled Successfully assigned task-pv-pod2 to ip-10-205-234-170.ec2.internal
8s 8s 1 kubelet, ip-10-205-234-170.ec2.internal Warning FailedSync Error syncing pod, skipping: failed to "SetupNetwork" for "task-pv-pod2_default" with SetupNetworkError: "NetworkPlugin cni failed to set up pod \"task-pv-pod2_default\" network: client: etcd cluster is unavailable or misconfigured; error #0: x509: cannot validate certificate for 10.205.234.170 because it doesn't contain any IP SANs\n; error #1: x509: cannot validate certificate for 10.205.235.160 because it doesn't contain any IP SANs\n; error #2: x509: cannot validate certificate for 10.205.234.162 because it doesn't contain any IP SANs\n"
7s 6s 2 kubelet, ip-10-205-234-170.ec2.internal Warning FailedSync Error syncing pod, skipping: failed to "TeardownNetwork" for "task-pv-pod2_default" with TeardownNetworkError: "NetworkPlugin cni failed to teardown pod \"task-pv-pod2_default\" network: client: etcd cluster is unavailable or misconfigured; error #0: x509: cannot validate certificate for 10.205.234.170 because it doesn't contain any IP SANs\n; error #1: x509: cannot validate certificate for 10.205.235.160 because it doesn't contain any IP SANs\n; error #2: x509: cannot validate certificate for 10.205.234.162 because it doesn't contain any IP SANs\n"
Does anyone know what might be causing this? In order for Kubernetes to work with AWS as a cloud provider, I had to set a proxy variable in the hyperkube container.
It seems your etcd certificates are not trusted for the name (or IP) you are accessing them on. I suggest you check your cluster health with kubectl get cs
and modify the way Kubernetes talks to etcd if needed.
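As a concrete starting point, you can verify whether the etcd server certificate actually contains IP SANs. The certificate path below is an assumption; adjust it to wherever your etcd certs live on the nodes.

# Check control plane component health as seen by the API server
kubectl get cs

# Inspect the SANs on the etcd server certificate
# (path is an assumption, adjust to your cluster's etcd cert location)
openssl x509 -in /etc/etcd/ssl/server.crt -noout -text | grep -A1 "Subject Alternative Name"

If the output lists only DNS names and no "IP Address:" entries, either regenerate the etcd certificates with the node IPs included in the SANs, or change the CNI plugin's etcd endpoints (e.g. in its ConfigMap, if your CNI plugin uses one) to the hostnames the certificate was issued for instead of raw IPs.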