I am new to Kubernetes. I have an issue with the pods. When I run the command
kubectl get pods
I get this result:
NAME                   READY     STATUS             RESTARTS   AGE
mysql-apim-db-1viwg    1/1       Running            1          20h
mysql-govdb-qioee      1/1       Running            1          20h
mysql-userdb-l8q8c     1/1       Running            0          20h
wso2am-default-813fy   0/1       ImagePullBackOff   0          20h
Due to an issue with the "wso2am-default-813fy" pod, I need to restart it. Any suggestions?
First try to see what's wrong with the pod:
kubectl logs -p <your_pod>
In my case it was a problem with the YAML file.
So, I needed to correct the configuration file and replace it:
kubectl replace --force -f <yml_file_describing_pod>
If you don't have the YAML file:
kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
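If the logs are empty, which is common with "ImagePullBackOff" because the container never actually started, describing the pod shows the pull error in its events. A minimal sketch using the pod name from the question (namespace assumed to be the default one):
kubectl describe pod wso2am-default-813fy
# The "Events" section at the bottom explains why the pull fails,
# e.g. a wrong image name/tag or missing registry credentials.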
Usually in case of "ImagePullBackOff" the pull is retried after a few seconds/minutes. If you want to try again manually, you can delete the old pod and recreate it. The one-line command to delete and recreate the pod would be:
kubectl replace --force -f <yml_file_describing_pod>

$ kubectl replace --force -f <resource-file>
If all goes well, you should see something like:
<resource-type> <resource-name> deleted
<resource-type> <resource-name> replaced
Details of this can be found in the Kubernetes documentation, on the "manage-deployment" and kubectl cheat sheet pages at the time of writing.
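For example, for the bare Pod from the question the output should look roughly like this (exact formatting varies with the kubectl version):
pod "wso2am-default-813fy" deleted
pod "wso2am-default-813fy" replaced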
If the Pod is part of a Deployment or ReplicaSet, deleting it will restart the Pod and, potentially, place it onto another node:
$ kubectl delete po $POD_NAME
Replace it if it's an individual Pod:
$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -
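As a concrete sketch with the failing pod from the question, assuming it sits in the default namespace and adding --force so the Pod is actually deleted and re-created rather than updated in place:
# Pod managed by a Deployment/ReplicaSet: the controller recreates it after deletion
$ kubectl delete po wso2am-default-813fy

# Standalone Pod: pipe its live definition back through a forced replace
$ kubectl get po -n default wso2am-default-813fy -o yaml | kubectl replace --force -f -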
Try deleting the pod; it will try to pull the image again.
kubectl delete pod <pod_name> -n <namespace_name>
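For the pod in the question that might look like the following, assuming it lives in the default namespace and is managed by a controller that will recreate it:
kubectl delete pod wso2am-default-813fy -n default
# Watch the replacement pod start and pull the image again
kubectl get pods -n default -w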