I am getting "Back-off restarting failed container", and the event description is "Container image mongo:3.4.20 already present on the machine".
I have removed every container named mongo on that node, and deleted all pods, services, deployments, and replication controllers, but I keep getting the same error. I also tried labeling another node with a different name and using that label in the YAML, but I still got the same error.
I used the YAML below to create the Deployment. The pods are selected with the label app=mongodb, and an 8 GB AWS disk is attached through a PersistentVolumeClaim.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - image: mongo:3.4.20
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - mountPath: "/data/db"
          name: db-storage
      volumes:
      - name: db-storage
        persistentVolumeClaim:
          claimName: db-storage
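
For reference, the claim it mounts was created with something like the following (a minimal sketch; the storageClassName and access mode are assumptions, since only the 8 GB size was mentioned):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-storage
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # assumption: the default AWS EBS storage class
  resources:
    requests:
      storage: 8Gi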
Why does it keep failing and reporting "Container image already present on the machine"? Is some cache involved?
As addressed in the comments, "already present on the machine" is not an error message. It is a pod event, emitted only for debugging and tracing, to give you an idea of which steps the kubelet is taking during pod setup. The actual failure is the back-off itself: the container starts, exits, and is restarted repeatedly.
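
To find the real reason for the back-off, inspect the full event list and the container's own logs. A sketch, assuming your pod name starts with mongo-deployment (substitute the actual name from the first command):

# List the pods created by the Deployment, then dump all events for one of them
kubectl get pods -l app=mongodb
kubectl describe pod <mongo-pod-name>

# The container's own output usually holds the real error,
# e.g. mongod failing to write to /data/db
kubectl logs <mongo-pod-name>
kubectl logs <mongo-pod-name> --previous   # logs from the last crashed attempt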