I have a dockerized application. When I run it through docker-compose up, it runs fine and its image appears in docker images. But when I try to start a minikube cluster with --vm-driver=none, the cluster gives an error and does not start. However, when I quit my docker application and start the minikube cluster again, the cluster starts successfully. But then I can't find the image of the docker application I just ran. Instead I find images like these:
k8s.gcr.io/coredns                         1.2.2     367cdc8433a4   5 weeks ago    39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64      v1.10.0   0dab2435c100   5 weeks ago    122MB
k8s.gcr.io/kube-proxy-amd64                v1.10.0   bfc21aadc7d3   6 months ago   97MB
k8s.gcr.io/kube-controller-manager-amd64   v1.10.0   ad86dbed1555   6 months ago   148MB
k8s.gcr.io/kube-apiserver-amd64            v1.10.0   af20925d51a3   6 months ago   225MB
k8s.gcr.io/kube-scheduler-amd64            v1.10.0   704ba848e69a   6 months ago   50.4MB
Is this expected behavior? If so, what is the reason? This is the command I used to start the cluster:
minikube start --vm-driver=none
Update: I am working in an Ubuntu VM.
It is not expected behavior for Minikube to delete your docker images. I tried to reproduce your issue. I had a few docker images on my Ubuntu VM:
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   e445ab08b2be   13 days ago   126MB
busybox      latest   db8ee88ad75f   2 weeks ago   1.22MB
perl         latest   bbac4a88d400   3 weeks ago   889MB
alpine       latest   b7b28af77ffe   3 weeks ago   5.58MB
Then I tried to run minikube:
$ sudo minikube start --vm-driver=none
minikube v1.2.0 on linux (amd64)
Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
...
⌛ Verifying: apiserver proxy etcd scheduler controller dns
Done! kubectl is now configured to use "minikube"
I still have all my docker images, and minikube is working as expected:
$ kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4vd2q   1/1     Running   8          21d
coredns-5c98db65d4-xjx22   1/1     Running   8          21d
etcd-minikube              1/1     Running   5          21d
...
$ sudo docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   e445ab08b2be   13 days ago   126MB
busybox      latest   db8ee88ad75f   2 weeks ago   1.22MB
perl         latest   bbac4a88d400   3 weeks ago   889MB
alpine       latest   b7b28af77ffe   3 weeks ago   5.58MB
After exiting minikube, I still had all my docker images.
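If you want to run the same check on your own machine, one way (a sketch, assuming docker and minikube are installed and you are using the none driver) is to snapshot the host's image IDs before and after starting the cluster and diff the two sets:

```shell
# Snapshot the host Docker image IDs before starting minikube.
# The none driver reuses the host Docker daemon, so pre-existing
# images should still be listed afterwards.
before=$(sudo docker images -q | sort -u)

sudo minikube start --vm-driver=none

after=$(sudo docker images -q | sort -u)

# comm -23 prints IDs present before but missing after;
# empty output means no pre-existing image was removed.
# (New k8s.gcr.io images will be added, but that is expected.)
comm -23 <(echo "$before") <(echo "$after")
```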
As you mentioned in the original post, you used minikube start --vm-driver=none. If you run minikube start without sudo, you will receive an error like:
$ minikube start --vm-driver=none
minikube v1.2.0 on linux (amd64)
Unable to load config: open /home/$user/.minikube/profiles/minikube/config.json: permission denied
Or, if you try to stop minikube without sudo:
$ minikube stop
Unable to stop VM: open /home/$user/.minikube/machines/minikube/config.json: permission denied
Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
https://github.com/kubernetes/minikube/issues/new
Please try using sudo with the minikube commands. Let me know if that helps. If not, please provide your error message.
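A related pitfall with the none driver is that running minikube under sudo can leave the config directories owned by root, so later non-sudo minikube or kubectl commands hit exactly the "permission denied" errors shown above. A possible fix (a sketch along the lines of the advice in the minikube none-driver docs; the paths are an assumption and may differ on your system) is to move the files into your home directory and take ownership:

```shell
# Assumption: the config dirs were created under /root by sudo.
# Move them into your home directory and take ownership so that
# non-sudo minikube/kubectl commands can read them.
sudo mv /root/.kube /root/.minikube "$HOME"
sudo chown -R "$USER" "$HOME/.kube" "$HOME/.minikube"
```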
From the Minikube documentation:
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the none driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
- Decreased security
- Decreased reliability
- Data loss