I am following the "Running Kubernetes locally via Docker" guide, but I am unable to get the master to start normally.
Step One: Run etcd
docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
The etcd container appears to start normally. I don't see any errors in docker logs, and I end up with an etcd process listening on port 4001.
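Before moving on to the master, it can be worth probing etcd directly rather than relying on the log output. A small sketch, assuming curl is installed (etcd 2.x answers a /version request on its client port):

```shell
# Probe etcd's client port directly; etcd 2.x responds to /version with a
# version string. If this prints the fallback message instead, the master
# has no chance of starting, since the apiserver needs etcd.
ETCD_VERSION=$(curl -s --max-time 2 http://127.0.0.1:4001/version 2>/dev/null || true)
RESULT="${ETCD_VERSION:-etcd not reachable on 4001}"
echo "$RESULT"
```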
Step Two: Run the master
docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
I believe this is where my issues begin. Below is the output from docker logs:
W1021 13:23:04.093281 1 server.go:259] failed to set oom_score_adj to -900: write /proc/self/oom_score_adj: permission denied
W1021 13:23:04.093426 1 server.go:462] Could not load kubeconfig file /var/lib/kubelet/kubeconfig: stat /var/lib/kubelet/kubeconfig: no such file or directory. Trying auth path instead.
W1021 13:23:04.093445 1 server.go:424] Could not load kubernetes auth path /var/lib/kubelet/kubernetes_auth: stat /var/lib/kubelet/kubernetes_auth: no such file or directory. Continuing with defaults.
I1021 13:23:04.093503 1 server.go:271] Using root directory: /var/lib/kubelet
I1021 13:23:04.093519 1 plugins.go:69] No cloud provider specified.
I1021 13:23:04.093526 1 server.go:290] Successfully initialized cloud provider: "" from the config file: ""
I1021 13:23:05.126191 1 docker.go:289] Connecting to docker on unix:///var/run/docker.sock
I1021 13:23:05.126396 1 server.go:651] Adding manifest file: /etc/kubernetes/manifests
I1021 13:23:05.126409 1 file.go:47] Watching path "/etc/kubernetes/manifests"
I1021 13:23:05.126416 1 server.go:661] Watching apiserver
E1021 13:23:05.127148 1 reflector.go:136] Failed to list *api.Pod: Get http://localhost:8080/api/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused
E1021 13:23:05.127295 1 reflector.go:136] Failed to list *api.Service: Get http://localhost:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused
E1021 13:23:05.127336 1 reflector.go:136] Failed to list *api.Node: Get http://localhost:8080/api/v1/nodes?fieldSelector=metadata.name%3D127.0.0.1: dial tcp 127.0.0.1:8080: connection refused
I1021 13:23:05.343848 1 plugins.go:56] Registering credential provider: .dockercfg
W1021 13:23:05.394268 1 container_manager_linux.go:96] Memory limit 0 for container /docker-daemon is too small, reset it to 157286400
I1021 13:23:05.394284 1 container_manager_linux.go:100] Configure resource-only container /docker-daemon with memory limit: 157286400
I1021 13:23:05.395019 1 plugins.go:180] Loaded volume plugin "kubernetes.io/aws-ebs"
I1021 13:23:05.395040 1 plugins.go:180] Loaded volume plugin "kubernetes.io/empty-dir"
I1021 13:23:05.395052 1 plugins.go:180] Loaded volume plugin "empty"
I1021 13:23:05.395068 1 plugins.go:180] Loaded volume plugin "kubernetes.io/gce-pd"
I1021 13:23:05.395080 1 plugins.go:180] Loaded volume plugin "gce-pd"
I1021 13:23:05.395098 1 plugins.go:180] Loaded volume plugin "kubernetes.io/git-repo"
I1021 13:23:05.395112 1 plugins.go:180] Loaded volume plugin "git"
I1021 13:23:05.395124 1 plugins.go:180] Loaded volume plugin "kubernetes.io/host-path"
I1021 13:23:05.395136 1 plugins.go:180] Loaded volume plugin "kubernetes.io/nfs"
I1021 13:23:05.395147 1 plugins.go:180] Loaded volume plugin "kubernetes.io/secret"
I1021 13:23:05.395156 1 plugins.go:180] Loaded volume plugin "kubernetes.io/iscsi"
I1021 13:23:05.395166 1 plugins.go:180] Loaded volume plugin "kubernetes.io/glusterfs"
I1021 13:23:05.395178 1 plugins.go:180] Loaded volume plugin "kubernetes.io/persistent-claim"
I1021 13:23:05.395194 1 plugins.go:180] Loaded volume plugin "kubernetes.io/rbd"
I1021 13:23:05.395274 1 server.go:623] Started kubelet
I1021 13:23:05.395296 1 server.go:63] Starting to listen on 0.0.0.0:10250
I1021 13:23:05.395507 1 server.go:82] Starting to listen read-only on 0.0.0.0:10255
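The "connection refused" lines mean nothing is answering on localhost:8080 yet; some of that is expected early on, since the kubelet still has to pull the images and start the apiserver pod from the manifest. One way to check whether the apiserver ever comes up is to probe its insecure port directly. A sketch, assuming curl is installed:

```shell
# Probe the apiserver's insecure port; a healthy apiserver answers /healthz
# with "ok". If this keeps printing the fallback message minutes after the
# kubelet started, the apiserver pod never came up.
STATUS=$(curl -s --max-time 2 http://127.0.0.1:8080/healthz 2>/dev/null || true)
RESULT="${STATUS:-apiserver not reachable on 8080}"
echo "$RESULT"
```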
Step Three: Run the service proxy
docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
The docker logs from this step contained similar errors to what I saw in Step Two.
I1021 13:32:03.177004 1 server.go:88] Running in resource-only container "/kube-proxy"
I1021 13:32:03.177432 1 proxier.go:121] Setting proxy IP to 192.168.19.200 and initializing iptables
E1021 13:32:03.195731 1 api.go:108] Unable to load services: Get http://127.0.0.1:8080/api/v1/services: dial tcp 127.0.0.1:8080: connection refused
E1021 13:32:03.195924 1 api.go:180] Unable to load endpoints: Get http://127.0.0.1:8080/api/v1/endpoints: dial tcp 127.0.0.1:8080: connection refused
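The proxy fails for the same reason the kubelet does: nothing is bound to port 8080 on the host. Since everything here runs with --net=host, the expected listeners can be checked from the host directly. A sketch, assuming ss (or netstat as a fallback) is available:

```shell
# List listening TCP sockets and filter for the ports this setup should open:
# 4001 (etcd), 8080 (apiserver), 10250/10255 (kubelet).
LISTENERS=$(ss -ltn 2>/dev/null || netstat -ltn 2>/dev/null || true)
MATCHES=$(echo "$LISTENERS" | grep -E ':(4001|8080|10250|10255)' || true)
RESULT="${MATCHES:-none of the expected ports are listening}"
echo "$RESULT"
```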
docker ps
output:
CONTAINER ID   IMAGE                                        COMMAND                  CREATED             STATUS             PORTS   NAMES
576d15c22537   gcr.io/google_containers/hyperkube:v0.21.2   "/hyperkube proxy --m"   About an hour ago   Up About an hour           high_pasteur
a98637c9d523   gcr.io/google_containers/hyperkube:v0.21.2   "/hyperkube kubelet -"   About an hour ago   Up 34 minutes              drunk_jones
618afb1de613   gcr.io/google_containers/etcd:2.0.9          "/usr/local/bin/etcd "   2 hours ago         Up 2 hours                 high_yonath
The first error in Step Two's logs led me to believe the problem may have something to do with iptables.
iptables -L
output:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination
docker exec -ti a98637c9d523 cat /etc/kubernetes/manifests/master.json
output:
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "k8s-master" },
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "controller-manager",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [ "/hyperkube", "controller-manager", "--master=127.0.0.1:8080", "--v=2" ]
      },
      {
        "name": "apiserver",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [ "/hyperkube", "apiserver", "--portal-net=10.0.0.1/24", "--address=127.0.0.1", "--etcd-servers=http://127.0.0.1:4001", "--cluster-name=kubernetes", "--v=2" ]
      },
      {
        "name": "scheduler",
        "image": "gcr.io/google_containers/hyperkube:v1.0.6",
        "command": [ "/hyperkube", "scheduler", "--master=127.0.0.1:8080", "--v=2" ]
      }
    ]
  }
}
Docker version 1.8.3
Kernel version 4.2.3
Any insight would be greatly appreciated.
Can you downgrade Docker to version 1.7.2 first? I did exactly what you did above with Docker 1.7.2, and everything works:
$ curl 127.0.0.1:8080/
{
"paths": [
"/api",
"/api/v1",
"/api/v1beta3",
"/healthz",
"/healthz/ping",
"/logs/",
"/metrics",
"/resetMetrics",
"/swagger-ui/",
"/swaggerapi/",
"/ui/",
"/version"
]
}
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0141e596414c gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube proxy -- 15 minutes ago Up 15 minutes nostalgic_nobel
10634ce798e9 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube schedule 16 minutes ago Up 16 minutes k8s_scheduler.b725e775_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_43562383
5618a39eb11d gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube apiserve 16 minutes ago Up 16 minutes k8s_apiserver.70750283_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_e5d145be
25f336102b26 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube controll 16 minutes ago Up 16 minutes k8s_controller-manager.aad1ee8f_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_fe538b9b
7f1391840920 gcr.io/google_containers/pause:0.8.0 "/pause" 17 minutes ago Up 17 minutes k8s_POD.e4cc795_k8s-master-127.0.0.1_default_9b44830745c166dfc6d027b0fc2df36d_26fd84fd
a11715435f45 gcr.io/google_containers/hyperkube:v0.21.2 "/hyperkube kubelet 17 minutes ago Up 17 minutes jovial_hodgkin
a882a1a4b917 gcr.io/google_containers/etcd:2.0.9 "/usr/local/bin/etcd 18 minutes ago Up 18 minutes adoring_hodgkin
There are a couple of known issues with Docker 1.8.3, especially docker#17190. We had to work around that issue through kubernetes#16052, but those changes were not cherry-picked into the Kubernetes 1.0 release. From the output you posted above, I noticed that there is no pause container. Could you also run docker ps -a to check whether some containers are dead, and copy & paste the output of docker logs <dead-container> here?
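A small loop that automates that check. This is a sketch, assuming the docker CLI is on the PATH (the --filter and --format flags used here exist in Docker 1.8):

```shell
# Find containers that exited and dump the tail of each one's logs,
# so the failing container's error message is easy to spot.
if command -v docker >/dev/null 2>&1; then
  for id in $(docker ps -a --filter status=exited --format '{{.ID}}'); do
    echo "--- docker logs $id (last 20 lines) ---"
    docker logs "$id" 2>&1 | tail -n 20
  done
  RESULT="scanned exited containers"
else
  RESULT="docker CLI not available"
fi
echo "$RESULT"
```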
I will file an issue to make sure the Kubernetes 1.1 release works fine with Docker 1.8.3. Thanks!