I am new to Kubernetes. I am using Google Container Registry to store my private images, and I created a secret named grc-puller-key.
When I run the command: kubectl apply -f deployment.yaml
I get unexpected behavior. Sometimes the pods spin up successfully, sometimes they don't. When they fail, I describe the pod and see that I have no credentials to pull the image, but sometimes it does work.
Here is my deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portfolio
  labels:
    app: portfolio
spec:
  selector:
    matchLabels:
      app: portfolio
  template:
    metadata:
      labels:
        app: portfolio
    spec:
      containers:
      - name: portfolio
        image: gcr.io/phuong-devops/portfolio:v1
        ports:
        - containerPort: 3000
          protocol: TCP
      imagePullSecrets:
      - name: grc-puller-key
I am sure that the secret named grc-puller-key was created. I am providing a screenshot below:
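For reference, a docker-registry secret for GCR is typically created along these lines; this is a sketch, not the exact command I used, and the JSON key file name is a placeholder:

kubectl create secret docker-registry grc-puller-key \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-puller-key.json)"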
I think one thing from the official Kubernetes documentation could explain this situation to a certain extent. My guess is that when the deployment worked, it used a local copy of the required image. Take a look at this section:
In your deployment there isn't any imagePullPolicy explicitly defined, so Kubernetes uses the default one:
The default pull policy is IfNotPresent, which causes the Kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:
- set the imagePullPolicy of the container to Always.
- omit the imagePullPolicy and use :latest as the tag for the image to use.
- omit the imagePullPolicy and the tag for the image to use.
- enable the AlwaysPullImages admission controller.
Note that you should avoid using the :latest tag; see Best Practices for Configuration for more information.
Most probably when your deployment worked it used a local image. You can easily check this in the events of the Pod by running:
kubectl describe pods your-pod
If you don't see the event pulling image "image-name", it means the pod was created without needing to pull an image, so it used the local one. If the image cannot be pulled, your Pod will enter the ImagePullBackOff state and you'll see events describing what exactly happened.
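To see which policy a pod actually ended up with, and to list its events directly, something like the following should work (the pod name is a placeholder):

kubectl get pod your-pod -o jsonpath='{.spec.containers[0].imagePullPolicy}'
kubectl get events --field-selector involvedObject.name=your-pod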
This is actually the only thing that comes to my mind at the moment that could explain such behaviour. You can easily test it by setting any of the above and checking whether the pattern changes, i.e. whether the deployment fails consistently when there is some issue with accessing the image registry.
Thanks for your help. It turned out that when I changed my service account permission to StorageAdmin, everything worked. But I don't know why it leads to such behavior. If I am not authorized to access my container registry, it should have failed all the time, right? But in fact, it did not.
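For anyone hitting the same thing: GCR images are stored in Cloud Storage buckets, so the nodes' service account needs read access to them. A binding along these lines is usually enough to pull images; the project and service-account email are placeholders, and the narrower roles/storage.objectViewer is typically sufficient, so StorageAdmin is broader than strictly needed:

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-node-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"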