The certificates in my Kubernetes cluster have expired. What are the steps to redeploy the certificates? After redeployment, pod health is affected. How do I overcome this?
[mdupaguntla@iacap067 K8S_HA_Setup_Post_RPM_Installation_With_RBAC]$ sudo kubectl logs elasticsearch-logging-0
+ export NODE_NAME=elasticsearch-logging-0
+ NODE_NAME=elasticsearch-logging-0
+ export NODE_MASTER=true
+ NODE_MASTER=true
+ export NODE_DATA=true
+ NODE_DATA=true
+ export HTTP_PORT=9200
+ HTTP_PORT=9200
+ export TRANSPORT_PORT=9300
+ TRANSPORT_PORT=9300
+ export MINIMUM_MASTER_NODES=2
+ MINIMUM_MASTER_NODES=2
+ chown -R elasticsearch:elasticsearch /data
+ ./bin/elasticsearch_logging_discovery
F0323 07:18:25.043962 8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized
goroutine 1 [running]:
k8s.io/kubernetes/vendor/github.com/golang/glog.stacks(0xc4202b1200, 0xc42020a000, 0x77, 0x85)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:766 +0xcf
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).output(0x1a38100, 0xc400000003, 0xc4200ba2c0, 0x1994cf4, 0x22, 0x4e, 0x0)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:717 +0x322
k8s.io/kubernetes/vendor/github.com/golang/glog.(*loggingT).printf(0x1a38100, 0x3, 0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:655 +0x14c
k8s.io/kubernetes/vendor/github.com/golang/glog.Fatalf(0x121acfe, 0x1e, 0xc4206aff50, 0x2, 0x2)
/go/src/k8s.io/kubernetes/vendor/github.com/golang/glog/glog.go:1145 +0x67
main.main()
/go/src/k8s.io/kubernetes/cluster/addons/fluentd-elasticsearch/es-image/elasticsearch_logging_dis
...
F0323 07:18:25.043962 8 elasticsearch_logging_discovery.go:78] kube-system namespace doesn't exist: Unauthorized
It seems you must have regenerated the private key for the certificates, rather than just issuing new certs from CSRs generated with the cluster's existing keys.
If that is true, then you will need to do (at least) one of the following two things:
Dig the old private key files out of a backup, generate a CSR from them, re-issue the API certificates, and chalk this up to a valuable lesson: never delete private keys without careful thought.
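That first option can be sketched with openssl. The file names below are illustrative (on a kubeadm cluster the real files live under /etc/kubernetes/pki); for demonstration this first creates a stand-in CA and server key, then shows the actual recovery steps: a CSR from the existing key, signed by the cluster CA.

```shell
# Stand-in CA and server key, only so the sketch is self-contained.
# In a real recovery you would use the cluster's ca.crt/ca.key and the
# recovered apiserver key instead.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
openssl genrsa -out apiserver.key 2048   # stands in for the key you recovered

# The recovery itself: CSR from the EXISTING key, re-signed by the cluster CA
openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr
openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 365 -out apiserver.crt

# Sanity checks before swapping the cert into place
openssl verify -CAfile ca.crt apiserver.crt
openssl x509 -in apiserver.crt -noout -subject -enddate
```

Because the private key is unchanged, every token and kubeconfig already signed against it keeps working once the new certificate is in place.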
Or:
Delete all the ServiceAccounts named in any Pod's serviceAccountName, for every namespace, followed by a deletion of those Pods themselves to get their volumeMounts rebound. Additional information is in the Kubernetes admin guide.
If all goes well, the ServiceAccountController will recreate those ServiceAccount secrets, allowing those Pods to start back up, and you are back in business.
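The second option can be sketched as a kubectl loop. This is a sketch, not a script to paste into production: it deletes every Pod in every namespace, which is disruptive, and it assumes the Pods are managed by controllers (Deployments, StatefulSets, etc.) that will recreate them.

```shell
# For every namespace: delete the ServiceAccounts referenced by Pods,
# then delete the Pods so their secret volumeMounts get rebound.
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  # ServiceAccounts actually named in some Pod's serviceAccountName
  sas=$(kubectl -n "$ns" get pods \
        -o jsonpath='{.items[*].spec.serviceAccountName}' | tr ' ' '\n' | sort -u)
  for sa in $sas; do
    # The ServiceAccountController recreates the SA (and its token secret)
    kubectl -n "$ns" delete serviceaccount "$sa"
  done
  # Recreated Pods mount the freshly minted token secrets
  kubectl -n "$ns" delete pods --all
done
```

Deleting the "default" ServiceAccount is safe in this scenario; the controller recreates it automatically with a token signed by the new key.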
The concrete steps to manage the X.509 certificates for a cluster are too numerous to fit into a single answer, but that is the high-level overview of what needs to happen.
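For future renewals, if the cluster was built with kubeadm (an assumption; on older releases these commands lived under "kubeadm alpha certs"), renewing from the existing CA and keys avoids this whole problem:

```shell
# See which certificates are expired or close to expiry
kubeadm certs check-expiration

# Re-issue all leaf certificates from the existing CA under /etc/kubernetes/pki
kubeadm certs renew all

# Force the kubelet to recreate the control-plane static Pods so they
# pick up the renewed certificates (moving the manifests out and back
# is one common way to do this)
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.off
sleep 20
mv /etc/kubernetes/manifests.off /etc/kubernetes/manifests

# Refresh your client credentials, since admin.conf was re-issued too
cp /etc/kubernetes/admin.conf ~/.kube/config
```

Because the CA and private keys are untouched, existing ServiceAccount tokens stay valid and Pods (like the elasticsearch-logging one above) keep authenticating, so the Unauthorized failures never appear.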