I manage and configure servers with Ansible. After generating a join command with kubeadm, I want to keep that command only in the controller machine's RAM: saving the secret join command to the controller's local storage is a problem for my job, and for other reasons Ansible Vault is not an option I can work with.
Is there any way to capture the join command and pass it to worker nodes without saving it locally on the controller machine? A short-lived token is fine as long as I can keep joining new nodes to the cluster.
Any secure approach that avoids saving the join command or token to local storage, while still letting new nodes join after a long period of time, would work for me.
I create small clusters with Ansible and ran into this issue as well.
My first solution was exactly what you say you don't want to do: register the join command and save it to a file (see How to grab last two lines from ansible (register stdout) initialization of kubernetes cluster). I moved away from it for simplicity, not security: it was a pain because I had to change the permissions on the join-command file after it was copied to the Ansible server so that the user running the playbooks could read it, and if I used it for a second cluster the join command would change, so I'd lose the old one and couldn't add nodes to the previous cluster anymore. Anyway.
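For what it's worth, that first approach can also be kept entirely in memory, which sounds like what you're after: register the join command on the master and read it from hostvars on the workers, so nothing is ever written to disk. A rough sketch, assuming inventory groups named master and workers (the group names and TTL are just examples):

- hosts: master
  become: true
  tasks:
    - name: Generate a short-lived join command on the control plane
      command: kubeadm token create --ttl 1h --print-join-command
      register: join_command   # lives only in Ansible's in-memory facts

- hosts: workers
  become: true
  tasks:
    - name: Join each worker using the fact registered on the master
      command: "{{ hostvars[groups['master'][0]].join_command.stdout }}"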
My second solution, which I liked better, is this:
I created a YAML init file for my nodes that includes a long-term token (not sure whether a long-lived token would be an issue for you) created on the master. So when I bring a node up, I have Ansible copy the init file in first and then run kubeadm join with it.
Ansible snippets:
- name: Create the kubernetes init yaml file for worker node
  template:
    src: kubeadminitworker.yml
    dest: /etc/kubernetes/kubeadminitworker.yaml

- name: Join the node to cluster
  command: kubeadm join --config /etc/kubernetes/kubeadminitworker.yaml
  register: join_output

- debug:
    var: join_output.stdout
kubeadminitworker.yml:
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  file:
    kubeConfigPath: /etc/kubernetes/discovery.yml
  timeout: 5m0s
  tlsBootstrapToken: <token string removed for post>
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    cloud-provider: external
Where the token string matches what's on the master.
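Note that the join config points at /etc/kubernetes/discovery.yml, so a discovery kubeconfig has to land on the worker as well. I don't have that task handy, but it would presumably look much the same (the template name here is just a placeholder):

- name: Copy the discovery kubeconfig referenced by the join config
  template:
    src: discovery.yml   # placeholder: a kubeconfig with the cluster CA and API server address
    dest: /etc/kubernetes/discovery.yml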
I also used an init file, which included my long-term token, when creating the master with Ansible.
master init for reference:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: <token string removed for post>
  ttl: 0s
  usages:
  - signing
  - authentication
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
useHyperKubeImage: false
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "172.16.0.0/16"
etcd:
  local:
    imageRepository: "k8s.gcr.io"
dns:
  type: "CoreDNS"
  imageRepository: "k8s.gcr.io"
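The master-side tasks mirror the worker ones: copy the init file in, then run kubeadm init against it. Roughly (the file names are guesses):

- name: Create the kubernetes init yaml file for master
  template:
    src: kubeadminitmaster.yml   # guessed name for the config above
    dest: /etc/kubernetes/kubeadminitmaster.yaml

- name: Initialize the control plane with the config file
  command: kubeadm init --config /etc/kubernetes/kubeadminitmaster.yaml
  register: init_output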
I did this a while ago, but I believe I just ran the token create command on an existing cluster, copied the token string into my two init files, and then deleted the token from the existing cluster. So far so good...
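Expressed as tasks (I actually did it by hand back then, and the copy into the init files was manual), that token bookkeeping would look roughly like this:

- name: Mint the long-lived token on the existing cluster
  command: kubeadm token create --ttl 0s
  register: bootstrap_token
  # paste bootstrap_token.stdout into both init files, then revoke it

- name: Remove the token from the existing cluster again
  command: "kubeadm token delete {{ bootstrap_token.stdout }}"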