Ansible shell module executing kubectl fails: connection to localhost refused

4/15/2018

I have an Ansible playbook that I am using to build a Kubernetes cluster. If I run kubectl via the shell module:

- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied
  shell: "kubectl apply -f kube-apiserver-to-kubelet.yaml"
  delegate_to: controller1
  run_once: true

I get the following error, "The connection to the server localhost:8080 was refused - did you specify the right host or port?":

fatal: [controller1 -> 10.240.0.11]: FAILED! => {"changed": true, "cmd": "kubectl apply -f kube-apiserver-to-kubelet.yaml", "delta": "0:00:00.116446", "end": "2018-04-15 21:42:51.023786", "msg": "non-zero return code", "rc": 1, "start": "2018-04-15 21:42:50.907340", "stderr": "The connection to the server localhost:8080 was refused - did you specify the right host or port?", "stderr_lines": ["The connection to the server localhost:8080 was refused - did you specify the right host or port?"], "stdout": "", "stdout_lines": []}

However, if I log into controller1 (I use a bastion host as a proxy to the nodes) and execute the command manually, it runs without issue:

kubectl apply -f kube-apiserver-to-kubelet.yaml
clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" configured
clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" configured

Why does this work on the node directly but not via Ansible, and what do I need to do to make it run without failing?

-- amb85
ansible
kubectl
kubernetes

1 Answer

4/15/2018

kubectl is likely not picking up your kubeconfig when invoked from Ansible (localhost:8080 is the default kubectl falls back to when it finds no configuration). When Ansible runs a shell task it executes over a non-interactive SSH session that does not source your login profile, so anything your interactive shell sets up (KUBECONFIG, for example) may be missing.

I would either use the builtin "k8s_raw" module or pass the --kubeconfig flag to kubectl in your shell command.
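For example, here is a minimal sketch of both options. The kubeconfig path used below ($HOME/.kube/config) is an assumption; substitute whatever path kubectl uses when you run it interactively on controller1:

# Option 1: point kubectl at the kubeconfig explicitly in the shell task.
- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied
  shell: "kubectl --kubeconfig $HOME/.kube/config apply -f kube-apiserver-to-kubelet.yaml"
  delegate_to: controller1
  run_once: true

# Option 2: use the k8s_raw module (Ansible 2.5+, requires the openshift
# Python library on the host the task runs on) instead of shelling out.
- name: Ensure kube-apiserver-to-kubelet ClusterRole is applied
  k8s_raw:
    state: present
    src: kube-apiserver-to-kubelet.yaml
    # assumed path; a hard-coded absolute path works just as well
    kubeconfig: "{{ ansible_env.HOME }}/.kube/config"
  delegate_to: controller1
  run_once: true

Setting KUBECONFIG through the task's environment: keyword would achieve the same effect as --kubeconfig, if you prefer to keep the command itself unchanged.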

-- Trondh
Source: StackOverflow