Why does my MongoDB deployment break minikube (E0413, empty IP)?

4/13/2020

I get an error when I try to start a minikube instance for the second time.

When I first run `minikube delete && minikube start`, everything works fine. But after a `minikube stop`, minikube won't start again:

  minikube v1.9.1 on Ubuntu 19.10
✨  Using the docker driver based on existing profile
  Starting control plane node m01 in cluster minikube
  Pulling base image ...
  Restarting existing docker container for "minikube" ...
  StartHost failed, but will try again: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
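
To be explicit, the sequence that triggers this is:

minikube delete && minikube start   # fresh profile: works
minikube stop
minikube start                      # fails as shown above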

A run of `minikube status` states:

E0413 11:05:10.272990   14864 status.go:233] kubeconfig endpoint: empty IP
m01
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured

WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`

Following this instruction, `minikube update-context` led to:

  update config: empty ip
  minikube is exiting due to an error. If the above message is not useful, open an issue:
  https://github.com/kubernetes/minikube/issues/new/choose
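
The "empty IP" seems to refer to the cluster server entry in my kubeconfig, which can be inspected with a standard kubectl command:

kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'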

I once got another error which said I should disconnect my VPN (I am not running a VPN connection) and referred to a GitHub page about a VPN error on macOS, but I could not reproduce it.

EDIT: If I do not deploy anything to minikube, a restart is possible. But after I deploy some pods and stop minikube, it won't start again.

EDIT 2: I still have not found the problem. I've uninstalled OpenVPN, but nothing changed:

  minikube v1.9.1 on Ubuntu 19.10
✨  Using the docker driver based on existing profile
  Starting control plane node m01 in cluster minikube
  Pulling base image ...
  Restarting existing docker container for "minikube" ...
  StartHost failed, but will try again: provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Updating the running docker "minikube" container ...
❌  [SSH_AUTH_FAILURE] Failed to start docker container. "minikube start" may fix it. provision: Temporary Error: provisioning: error getting ssh client: Error dialing tcp via ssh client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
  Suggestion: Your host is failing to route packets to the minikube VM. If you have VPN software, try turning it off or configuring it so that it does not re-route traffic to the VM IP. If not, check your VM environment routing options.
  Documentation: https://minikube.sigs.k8s.io/docs/reference/networking/vpn/
⁉️   Related issue: https://github.com/kubernetes/minikube/issues/3930
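
Since with the docker driver the node is just a container, it should still be reachable for debugging without SSH (even while the provisioner cannot authenticate):

docker exec -it minikube bash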

EDIT 3: This problem seems to be caused by my MongoDB deployment. If I deploy it and then stop minikube, minikube will not start again and fails with the errors above. However, when I deploy any other pods, restarting works as expected.
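
The minimal reproduction, starting from a clean profile, is:

minikube delete && minikube start
kubectl apply -f mongo-deployment.yml   # the manifest below
minikube stop
minikube start                          # fails with the SSH handshake error

With any other deployment in place of the MongoDB one, the last step succeeds.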

mongo-deployment.yml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb-standalone
    spec:
      containers:
      - name: mongodb-standalone
        image: mongo:latest
        env:
          - name: MONGO_INITDB_ROOT_USERNAME_FILE
            value: /etc/mongo-credentials/admin/MONGO_ROOT_USERNAME
          - name: MONGO_INITDB_ROOT_PASSWORD_FILE
            value: /etc/mongo-credentials/admin/MONGO_ROOT_PASSWORD
        volumeMounts:
          - name: mongo-credentials
            mountPath: /etc/mongo-credentials
            readOnly: true
          - name: mongodb-scripts
            mountPath: /docker-entrypoint-initdb.d
            readOnly: true
          - name: mongodb-conf
            mountPath: /config
            readOnly: true
          - name: mongodb-data
            mountPath: /data/db
      nodeSelector:
        kubernetes.io/hostname: minikube
      volumes:
        - name: mongo-credentials
          secret:
            secretName: mongo-credentials
            items:
            - key: MONGO_ROOT_USERNAME
              path: admin/MONGO_ROOT_USERNAME
              mode: 0444
            - key: MONGO_ROOT_PASSWORD
              path: admin/MONGO_ROOT_PASSWORD
              mode: 0444
            - key: MONGO_USERNAME
              path: MONGO_USERNAME
              mode: 0444
            - key: MONGO_PASSWORD
              path: MONGO_PASSWORD
              mode: 0444
            - key: MONGO_USERS_LIST
              path: MONGO_USERS_LIST
              mode: 0444
        - name: mongodb-scripts
          configMap:
            name: mongodb-standalone
            items:
              - key: ensure-users.js
                path: ensure-users.js
        - name: mongodb-conf
          configMap:
            name: mongodb-standalone
            items:
              - key: mongo.conf
                path: mongo.conf
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-standalone
---
apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  clusterIP: None
  selector:
    app: database
---
apiVersion: v1
kind: Secret
metadata:
  name: mongo-credentials
type: Opaque
data:
  MONGO_ROOT_USERNAME: YWRtaW4K
  MONGO_ROOT_PASSWORD: cGFzc3dvcmQK
  MONGO_USERNAME: c2Vsc2Nhbm5lcgo=
  MONGO_PASSWORD: cGFzc3dvcmQK
  MONGO_USERS_LIST: c2Vsc2Nhbm5lcjpkYkFkbWluLHJlYWRXcml0ZTpwYXNzd29yZAo=
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mongodb-standalone
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-standalone
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: mongodb-standalone
  local:
    path: /home/docker
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
            - minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-standalone
spec:
  storageClassName: mongodb-standalone
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-standalone
data:
  mongo.conf: |
    storage:
      dbPath: /data/db
  ensure-users.js: |
    const targetDbStr = 'selscanner';
    const rootUser = cat('/etc/mongo-credentials/admin/MONGO_ROOT_USERNAME');
    const rootPass = cat('/etc/mongo-credentials/admin/MONGO_ROOT_PASSWORD');
    const usersStr = cat('/etc/mongo-credentials/MONGO_USERS_LIST');

    // auth against admin
    const adminDb = db.getSiblingDB('admin');
    adminDb.auth(rootUser, rootPass);
    print('Successfully authenticated admin user');

    // we'll create the users here
    const targetDb = db.getSiblingDB(targetDbStr);

    // user-defined roles should be stored in the admin db
    const customRoles = adminDb
      .getRoles({rolesInfo: 1, showBuiltinRoles: false})
      .map(role => role.role)
      .filter(Boolean);

    // parse the list of users, and create each user as needed
    usersStr
      .trim()
      .split(';')
      .map(s => s.split(':'))
      .forEach(user => {
        const username = user[0];
        const rolesStr = user[1];
        const password = user[2];

        if (!rolesStr || !password) {
          return;
        }

        const roles = rolesStr.split(',');
        const userDoc = {
          user: username,
          pwd: password,
        };

        userDoc.roles = roles.map(role => {
          if (!~customRoles.indexOf(role)) {
            // is this a user defined role?
            return role; // no, it is built-in, just use the role name
          }
          return {role: role, db: 'admin'}; // yes, user-defined, specify the long format
        });

        try {
          targetDb.createUser(userDoc);
        } catch (err) {
          if (!~err.message.toLowerCase().indexOf('duplicate')) {
            // if not a duplicate user
            throw err; // rethrow
          }
        }
      });
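
For reference, the values in the mongo-credentials Secret are plain base64; MONGO_USERS_LIST holds username:roles:password entries separated by ';', which is the format ensure-users.js parses. I produced the values with something like:

echo 'admin' | base64                                   # YWRtaW4K
echo 'selscanner:dbAdmin,readWrite:password' | base64   # c2Vsc2Nhbm5lcjpkYkFkbWluLHJlYWRXcml0ZTpwYXNzd29yZAo=

(echo without -n appends a trailing newline, so the decoded values end in a newline character.)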
-- lumiemp
kubectl
kubernetes
kubernetes-deployment
minikube

0 Answers