I went through both daemonset doesn't create any pods and DaemonSet doesn't create any pods: v1.1.2 before asking this question. Here is my problem:
Kubernetes cluster is running on CoreOS
NAME=CoreOS
ID=coreos
VERSION=1185.3.0
VERSION_ID=1185.3.0
BUILD_ID=2016-11-01-0605
PRETTY_NAME="CoreOS 1185.3.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"
I referred to the https://coreos.com/kubernetes/docs/latest/getting-started.html guide and created 3 etcd, 2 masters and 42 nodes. All applications are running in the cluster without issues.
I got a requirement to set up logging with fluentd-elasticsearch, so I downloaded the yaml files from https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch and deployed the fluentd daemonset.
kubectl create -f fluentd-es-ds.yaml
I could see the daemonset got created, but no pods were created.
kubectl --namespace=kube-system get ds -o wide
NAME               DESIRED   CURRENT   NODE-SELECTOR                               AGE   CONTAINER(S)   IMAGE(S)                                              SELECTOR
fluentd-es-v1.22   0         0         alpha.kubernetes.io/fluentd-ds-ready=true   4h    fluentd-es     gcr.io/google_containers/fluentd-elasticsearch:1.22   k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
kubectl --namespace=kube-system describe ds fluentd-es-v1.22
Name: fluentd-es-v1.22
Image(s): gcr.io/google_containers/fluentd-elasticsearch:1.22
Selector: k8s-app=fluentd-es,kubernetes.io/cluster-service=true,version=v1.22
Node-Selector: alpha.kubernetes.io/fluentd-ds-ready=true
Labels: k8s-app=fluentd-es
kubernetes.io/cluster-service=true
version=v1.22
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Misscheduled: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
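A Desired count of 0 with no events usually means the DaemonSet controller found no nodes matching the pod template's nodeSelector. One way to check this (a diagnostic sketch to run against your own cluster) is to list node labels and then query with the same selector the DaemonSet uses:

```shell
# List all nodes with their labels, to see whether any node carries
# the alpha.kubernetes.io/fluentd-ds-ready=true label.
kubectl get nodes --show-labels

# Ask directly for nodes matching the DaemonSet's nodeSelector;
# an empty result explains why Desired is 0.
kubectl get nodes -l alpha.kubernetes.io/fluentd-ds-ready=true
```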
I verified the details below according to the comments on the SO questions above.
kubectl api-versions
apps/v1alpha1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1beta1
autoscaling/v1
batch/v1
batch/v2alpha1
certificates.k8s.io/v1alpha1
extensions/v1beta1
policy/v1alpha1
rbac.authorization.k8s.io/v1alpha1
storage.k8s.io/v1beta1
v1
I could see the logs below in one kube-controller-manager after a restart.
I0116 20:48:25.367335 1 controllermanager.go:326] Starting extensions/v1beta1 apis
I0116 20:48:25.367368 1 controllermanager.go:328] Starting horizontal pod controller.
I0116 20:48:25.367795 1 controllermanager.go:343] Starting daemon set controller
I0116 20:48:25.367969 1 horizontal.go:127] Starting HPA Controller
I0116 20:48:25.369795 1 controllermanager.go:350] Starting job controller
I0116 20:48:25.370106 1 daemoncontroller.go:236] Starting Daemon Sets controller manager
I0116 20:48:25.371637 1 controllermanager.go:357] Starting deployment controller
I0116 20:48:25.374243 1 controllermanager.go:364] Starting ReplicaSet controller
The other one has the log below.
I0116 23:16:23.033707 1 leaderelection.go:295] lock is held by {master.host.name} and has not yet expired
Am I missing something? I'd appreciate your help in figuring out the issue.
I found the solution after studying https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml.
There is a nodeSelector set to alpha.kubernetes.io/fluentd-ds-ready: "true".
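The relevant part of the manifest, trimmed down to the selector (paraphrased from the upstream file), looks like this:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
spec:
  template:
    spec:
      # Pods are only scheduled onto nodes carrying this label.
      nodeSelector:
        alpha.kubernetes.io/fluentd-ds-ready: "true"
```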
But the nodes don't have that label. What I did was add the label, as below, to one node to check whether it works.
kubectl label nodes {node_name} alpha.kubernetes.io/fluentd-ds-ready="true"
After that, I could see the fluentd pod start to run.
kubectl --namespace=kube-system get pods
NAME                     READY     STATUS    RESTARTS   AGE
fluentd-es-v1.22-x1rid   1/1       Running   0          6m
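To roll this out beyond the single test node, kubectl can label every node in one go (a sketch, assuming you want a fluentd pod on all nodes; `--all` is a standard kubectl flag that selects every resource of the given type):

```shell
# Label every node so the DaemonSet schedules a fluentd-es pod on each.
kubectl label nodes --all alpha.kubernetes.io/fluentd-ds-ready="true"

# Verify that one fluentd-es pod comes up per node.
kubectl --namespace=kube-system get pods -o wide
```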
Thanks.