Today, while trying to run my pod, I hit this error, which shows up in the describe events:
# kubectl describe pod monitor-prometheus-alertmanager-c94f7b6b7-tg6vc -n monitoring
Name:           monitor-prometheus-alertmanager-c94f7b6b7-tg6vc
Namespace:      monitoring
Priority:       0
Node:           kube-worker-vm2/192.168.1.36
Start Time:     Sun, 09 May 2021 20:42:57 +0100
Labels:         app=prometheus
                chart=prometheus-13.8.0
                component=alertmanager
                heritage=Helm
                pod-template-hash=c94f7b6b7
                release=monitor
Annotations:    cni.projectcalico.org/podIP: 192.168.222.51/32
                cni.projectcalico.org/podIPs: 192.168.222.51/32
Status:         Running
IP:             192.168.222.51
IPs:
  IP:           192.168.222.51
Controlled By:  ReplicaSet/monitor-prometheus-alertmanager-c94f7b6b7
Containers:
  prometheus-alertmanager:
    Container ID:  docker://0ce55357c5f32c6c66cdec3fe0aaaa06811a0a392d0329c989ac6f15426891ad
    Image:         prom/alertmanager:v0.21.0
    Image ID:      docker-pullable://prom/alertmanager@sha256:24a5204b418e8fa0214cfb628486749003b039c279c56b5bddb5b10cd100d926
    Port:          9093/TCP
    Host Port:     0/TCP
    Args:
      --config.file=/etc/config/alertmanager.yml
      --storage.path=/data
      --cluster.advertise-address=[$(POD_IP)]:6783
      --web.external-url=http://localhost:9093
    State:          Running
      Started:      Sun, 09 May 2021 20:52:33 +0100
    Ready:          False
    Restart Count:  0
    Readiness:      http-get http://:9093/-/ready delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:
      POD_IP:   (v1:status.podIP)
    Mounts:
      /data from storage-volume (rw)
      /etc/config from config-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro)
  prometheus-alertmanager-configmap-reload:
    Container ID:  docker://eb86ea355b820ddc578333f357666156dc5c5a3a53c63220ca00b98ffada5531
    Image:         jimmidyson/configmap-reload:v0.4.0
    Image ID:      docker-pullable://jimmidyson/configmap-reload@sha256:17d34fd73f9e8a78ba7da269d96822ce8972391c2838e08d92a990136adb8e4a
    Port:          <none>
    Host Port:     <none>
    Args:
      --volume-dir=/etc/config
      --webhook-url=http://127.0.0.1:9093/-/reload
    State:          Running
      Started:      Sun, 09 May 2021 20:44:59 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/config from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from monitor-prometheus-alertmanager-token-kspg6 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      monitor-prometheus-alertmanager
    Optional:  false
  storage-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  prometheus-pv-claim
    ReadOnly:   false
  monitor-prometheus-alertmanager-token-kspg6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  monitor-prometheus-alertmanager-token-kspg6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  boardType=x86vm
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  9m54s                  default-scheduler  Successfully assigned monitoring/monitor-prometheus-alertmanager-c94f7b6b7-tg6vc to kube-worker-vm2
  Normal   Pulled     7m53s                  kubelet            Container image "jimmidyson/configmap-reload:v0.4.0" already present on machine
  Normal   Created    7m52s                  kubelet            Created container prometheus-alertmanager-configmap-reload
  Normal   Started    7m52s                  kubelet            Started container prometheus-alertmanager-configmap-reload
  Warning  Failed     6m27s (x2 over 7m53s)  kubelet            Failed to pull image "prom/alertmanager:v0.21.0": rpc error: code = Unknown desc = context canceled
  Warning  Failed     5m47s (x3 over 7m53s)  kubelet            Error: ErrImagePull
  Warning  Failed     5m47s                  kubelet            Failed to pull image "prom/alertmanager:v0.21.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   BackOff    5m11s (x6 over 7m51s)  kubelet            Back-off pulling image "prom/alertmanager:v0.21.0"
  Warning  Failed     5m11s (x6 over 7m51s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    4m56s (x4 over 9m47s)  kubelet            Pulling image "prom/alertmanager:v0.21.0"
  Normal   Pulled     19s                    kubelet            Successfully pulled image "prom/alertmanager:v0.21.0" in 4m36.445692759s
Then I checked connectivity: ping to google.com worked, so I wanted to verify https://registry-1.docker.io/v2/ and tried to ping docker.io, but I get no ping replies at all. What is causing this?
osboxes@kube-worker-vm2:~$ ping google.com
PING google.com (142.250.200.14) 56(84) bytes of data.
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=10 ttl=117 time=35.8 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=11 ttl=117 time=11.9 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=12 ttl=117 time=9.16 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=13 ttl=117 time=11.2 ms
64 bytes from lhr48s29-in-f14.1e100.net (142.250.200.14): icmp_seq=14 ttl=117 time=12.1 ms
^C
--- google.com ping statistics ---
14 packets transmitted, 5 received, 64% packet loss, time 13203ms
rtt min/avg/max/mdev = 9.163/16.080/35.886/9.959 ms
osboxes@kube-worker-vm2:~$ ping docker.io
PING docker.io (35.169.217.170) 56(84) bytes of data.
Because docker.io does not respond to pings, from anywhere: the host blocks ICMP, so a silent ping tells you nothing about whether the registry is reachable.
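Since ICMP is blocked, the meaningful test is whether the node can open a TCP connection to the port the image pull actually uses (443). A minimal sketch, assuming `nc` (netcat) is available on the node:

```shell
#!/bin/sh
# Sketch: a blocked ping does not mean the host is down.
# Probe the actual service port (HTTPS/443) instead of relying on ICMP.
if nc -z -w 5 registry-1.docker.io 443; then
  echo "TCP 443 reachable - image pulls should work"
else
  echo "TCP 443 blocked or timing out - check DNS, firewall, or proxy on the node"
fi
```

The same idea applies to any host that drops ICMP: test the port your workload talks to, not the host's willingness to answer echo requests.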