kubectl exec fails after a long timeout

4/21/2018

I have set up a small cluster (one physical machine as the master and two VMs as the nodes). Now I have created an NFS directory to share as a persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs # reference name
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.57.1
    path: "/mnt/shardisk"

and a claim that references it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
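
Since neither the volume nor the claim sets a storageClassName, the 50Mi request should bind to the 100Mi NFS volume by matching access modes and capacity. A quick way to confirm the binding (sketch, assuming the default namespace):

kubectl get pv nfs        # STATUS should become Bound, CLAIM default/test-pvc
kubectl get pvc test-pvc  # STATUS Bound, VOLUME nfs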

and finally a simple pod to use it:

kind: Pod
apiVersion: v1
metadata:
  name: nginx-nfs
spec:
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: test-pvc
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
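
The three manifests can then be applied and checked; the file names below are placeholders for whatever the manifests above were saved as:

kubectl apply -f pv.yaml -f pvc.yaml -f pod.yaml   # hypothetical file names
kubectl get pod nginx-nfs -o wide                  # shows the node and the pod IP
kubectl describe pod nginx-nfs                     # events show whether the NFS mount succeeded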

Now I have created the cluster from the physical machine and joined it from the VMs. I have used Calico for the network services (because flannel fails to start; if someone knows why, it would be wonderful to fix that).
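
One common reason flannel fails to start is that the cluster was initialised without the pod CIDR its default manifest expects. A sketch of the kubeadm sequence under that assumption (kube-flannel.yml stands for whatever flannel manifest is being used):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR expected by flannel's default manifest
kubectl apply -f kube-flannel.yml                    # the flannel manifest for the release in use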

Now if I run kubectl describe pod everything looks fine, and the same goes for kubectl logs nginx-nfs, but if I run kubectl exec -it nginx-nfs /bin/bash

everything freezes for a very long time and after that I get this:

Error from server: error dialing backend: dial tcp 10.0.2.15:10250: getsockopt: connection timed out
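
For exec, the API server has to open a connection to the kubelet on the target node at port 10250, using the address the node advertises as its InternalIP; 10.0.2.15 is the default NAT address many hypervisors (e.g. VirtualBox) hand to guests, so it is typically not reachable from the master. A quick way to see which addresses the nodes advertise:

kubectl get nodes -o wide   # INTERNAL-IP is the address the API server dials on port 10250
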
-- Cristian Monti
cluster-computing
docker
kubeadm
kubectl
kubernetes

1 Answer

4/24/2018

I have "solve" it, i use kubernetes in 2 different lan and so the admin.conf have an ip that no match the current ip and it will not work, i have solve it creating same vm internal to the host and nat a static ip on it

-- Cristian Monti
Source: StackOverflow