I am setting up a Kubernetes cluster using Ansible. To set up the master I have written the playbook shown below.
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install Pod network
      become: yes
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt
The problem is that when I run the playbook, it doesn't wait for the initialization to complete, i.e. for the "kubeadm init" command to return, and runs each task one after another. The initialization takes time, and only once it has completed is the file /etc/kubernetes/admin.conf created. Since Ansible doesn't wait for it to complete, it exits with an error in task 3 saying that /etc/kubernetes/admin.conf is not found.
If I run the following playbook, Ansible does wait until the initialization is complete, i.e. control blocks until "kubeadm init" returns.
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
How can I make Ansible wait for the "kubeadm init" command to complete, and only then start the next task?
When I use the command module instead of the shell module, it appears to wait: it has no trouble making the copy of admin.conf. My next step after creating .kube/config is to apply the flannel overlay, and that also works. Here's what my tasks look like.
- name: Initialize the Kubernetes cluster using kubeadm
  command: kubeadm init --config /etc/kubernetes/kubeadminit.yaml

- name: create .kube in root home
  file:
    path: /root/.kube
    state: directory

- name: copy kubernetes admin.conf to root home dir
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /root/.kube/config
    remote_src: yes
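For completeness, the flannel step mentioned above might look like the sketch below under the command module; the task name and the use of the KUBECONFIG environment variable are my assumptions (the manifest URL is the one from the question):

```yaml
- name: apply flannel pod network overlay
  command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
  environment:
    # point kubectl at the config copied in the previous task
    KUBECONFIG: /root/.kube/config
```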
There are three ways:

1. Sleep for some time after the kubeadm init ansible task:

- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16

    - name: sleep for 20 seconds
      wait_for:
        timeout: 20
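If a fixed delay is all that's needed, the builtin pause module is an equivalent alternative to wait_for with a timeout; a minimal sketch (the task name is my own):

```yaml
- name: give kubeadm init time to settle
  pause:
    seconds: 20
```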
2. Retry kubeadm init until its output reports success:

- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      register: result
      until: result.stdout.find("Your Kubernetes master has initialized successfully!") != -1
      retries: 1
      delay: 20

NOTE: Here we retry kubeadm init until we get the string "Your Kubernetes master has initialized successfully!" in the output.
3. Check that /etc/kubernetes/admin.conf exists after executing the kubeadm init ansible task:

- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Check admin.conf file exists.
      stat:
        path: /etc/kubernetes/admin.conf
      register: k8s_conf

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu
      when: k8s_conf.stat.exists
NOTE: Here we execute the admin.conf copy only when the k8s config file /etc/kubernetes/admin.conf exists.
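A variant of this idea is to have wait_for block until the file appears instead of doing a one-shot stat check; a sketch (the 120-second timeout is my assumption):

```yaml
- name: wait for admin.conf to be created
  wait_for:
    path: /etc/kubernetes/admin.conf
    timeout: 120
```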
Hope this helps.