Following https://coreos.com/kubernetes/docs/latest/getting-started.html, I wanted to generate the TLS assets for my Kubernetes cluster. My plan was to push those keys via cloud-config through the AWS API when creating the EC2 instances, but that won't work because I won't know the public and private IPs of those instances in advance.
I thought about shipping only the CA cert to the instances via cloud-config and then generating the remaining assets there with a script run from a systemd unit file. My biggest concern with this is that I don't want to put a CA root cert into a cloud-config.
Does anyone have a solution to this situation?
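For reference, the cloud-config/systemd shape of the approach I considered could look roughly like this (the unit name, script path, and ordering are my own assumptions, and it does not solve the concern about shipping CA material in the cloud-config):

```yaml
#cloud-config
coreos:
  units:
    - name: generate-tls-assets.service
      command: start
      content: |
        [Unit]
        Description=Generate this node's TLS assets at boot
        Before=kubelet.service

        [Service]
        Type=oneshot
        # Hypothetical script that reads this instance's IPs from the EC2
        # metadata service and generates/signs the node certificates
        ExecStart=/opt/bin/generate-tls-assets.sh
```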
Following how kube-aws does it, I set my api-server conf like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kubernetes.mydomain.de
IP.1 = 10.3.0.1
Compared to the "minimal config file" from the guide, I added the last two entries:
DNS.5 = kubernetes.mydomain.de
IP.1 = 10.3.0.1
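With that conf in place, the cert itself can be generated with the openssl commands from the CoreOS guide. A sketch, assuming the conf above is saved as openssl.cnf; the filenames and CNs are my own choices:

```shell
# Write the api-server conf from above to disk
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kubernetes.mydomain.de
IP.1 = 10.3.0.1
EOF

# Cluster CA (generated once, up front, as in the guide)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# API server key + CSR, then sign it so the SANs from v3_req are applied
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
```

The important part is passing `-extensions v3_req -extfile openssl.cnf` to the signing step, otherwise the SAN entries are silently dropped from the final cert.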
The worker conf looks like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.*.cluster.internal
The trick here is to set the SAN to the wildcard *.*.cluster.internal. This way all the workers can verify with that one cert on the internal network and I don't have to bake each worker's specific IP address into a cert.
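Generating the shared worker cert works the same way as for the api-server. A sketch, assuming the worker conf above is saved as worker-openssl.cnf; the one-off CA and all filenames here are my own choices for illustration (in practice you would re-use the cluster CA):

```shell
# Write the worker conf from above to disk
cat > worker-openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.*.cluster.internal
EOF

# One-off CA for demonstration (re-use the real cluster CA instead)
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"

# Worker key + CSR, then sign it so the wildcard SAN from v3_req is applied
openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr \
  -subj "/CN=kube-worker" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

# The SAN section of the result should show DNS:*.*.cluster.internal
openssl x509 -in worker.pem -noout -text | grep -A1 "Subject Alternative Name"
```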