Can't delete the kube-controller-manager deployment in Kubernetes

1/13/2017

I am having a bit of a problem. I deleted a pod of a replication controller and now I want to recreate it. I tried:

kubectl create -f kube-controller-manager.yaml
Error from server: error when creating "kube-controller-manager.yaml": deployments.extensions "kube-controller-manager" already exists

So I figured I'd delete the deployment first:

kubectl delete deployment kube-controller-manager --namespace=kube-system -v=8

This loops for a while, repeating this response:

GET https://k8s-k8s.westfield.io:443/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager
I0112 17:33:53.334288   44607 round_trippers.go:303] Request Headers:
I0112 17:33:53.334301   44607 round_trippers.go:306]     Accept: application/json, */*
I0112 17:33:53.334310   44607 round_trippers.go:306]     User-Agent: kubectl/v1.4.7 (darwin/amd64) kubernetes/92b4f97
I0112 17:33:53.369422   44607 round_trippers.go:321] Response Status: 200 OK in 35 milliseconds
I0112 17:33:53.369445   44607 round_trippers.go:324] Response Headers:
I0112 17:33:53.369450   44607 round_trippers.go:327]     Content-Type: application/json
I0112 17:33:53.369454   44607 round_trippers.go:327]     Date: Fri, 13 Jan 2017 01:33:53 GMT
I0112 17:33:53.369457   44607 round_trippers.go:327]     Content-Length: 1688
I0112 17:33:53.369518   44607 request.go:908] Response Body: {"kind":"Deployment","apiVersion":"extensions/v1beta1","metadata":{"name":"kube-controller-manager","namespace":"kube-system","selfLink":"/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager","uid":"830c83d0-d860-11e6-80d5-066fd61aec22","resourceVersion":"197967","generation":5,"creationTimestamp":"2017-01-12T00:46:10Z","labels":{"k8s-app":"kube-controller-manager"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":0,"selector":{"matchLabels":{"k8s-app":"kube-controller-manager"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"k8s-app":"kube-controller-manager"}},"spec":{"volumes":[{"name":"secrets","secret":{"secretName":"kube-controller-manager","defaultMode":420}},{"name":"ssl-host","hostPath":{"path":"/usr/share/ca-certificates"}}],"containers":[{"name":"kube-controller-manager","image":"quay.io/coreos/hyperkube:v1.4.7_coreos.0","command":["./hyperkube","controller-manager","--root-ca-file=/etc/kubernetes/secrets/ca.crt","--service-account-private-key-file=/etc/kubernetes/secrets/service-account.key","--leader-elect=true","--cloud-provider=aws","--configure-cloud-routes=false"],"resources":{},"volumeMounts":[{"name":"secrets","readOnly":true,"mountPath":"/etc/kubernetes/secrets"},{"name":"ssl-host","readOnly":true,"mountPath":"/etc/ssl/certs"}],"terminationMessagePath":"/dev/termination-log","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"Default","securityContext":{}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":1,"maxSurge":1}},"revisionHistoryLimit":0,"paused":true},"status":{"observedGeneration":3}}
I0112 17:33:54.335302   44607 round_trippers.go:296] GET https://k8s-k8s.westfield.io:443/apis/extensions/v1beta1/namespaces/kube-system/deployments/kube-controller-manager

It then fails, reporting that it timed out waiting for an API response.

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:49:33Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7+coreos.0", GitCommit:"0581d1a5c618b404bd4766544bec479aedef763e", GitTreeState:"clean", BuildDate:"2016-12-12T19:04:11Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

I originally had client version 1.5.2 and downgraded to see if that would help. It didn't.

-- user3081519
kubernetes

2 Answers

11/30/2018

You can delete a replication controller using the following command:

kubectl delete rc kube-controller-manager
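
Note: since the resource in the question lives in the kube-system namespace (and the error message mentions a Deployment rather than a replication controller), you likely need the namespace flag as well; a sketch, assuming the same names as in the question:

kubectl delete rc kube-controller-manager --namespace=kube-system
kubectl delete deployment kube-controller-manager --namespace=kube-system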
-- Steephen
Source: StackOverflow

1/18/2017

A replication controller defines what a pod looks like and how many replicas should exist in your cluster. The controller-manager's job is to make sure enough replicas are healthy and running; if not, it asks the scheduler to place new pods onto hosts.
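
The reconcile idea above can be sketched in a few lines of Python. This is an illustrative toy, not the real controller-manager code; the function names (`reconcile`, `create`, `delete`) are made up for the example:

```python
# Toy reconcile loop: compare the desired replica count with the pods
# actually running, and create or delete pods to close the gap.
def reconcile(desired, running, create, delete):
    """Return the new pod list after one reconcile pass."""
    pods = list(running)
    while len(pods) < desired:      # too few replicas: create a new pod
        pods.append(create())
    while len(pods) > desired:      # too many replicas: remove the extras
        delete(pods.pop())
    return pods

# Usage: desired=3, one pod was deleted, so one replacement gets created.
counter = {"n": 0}
def create():
    counter["n"] += 1
    return "pod-%d" % counter["n"]

result = reconcile(3, ["pod-a", "pod-b"], create, lambda p: None)
print(result)  # ['pod-a', 'pod-b', 'pod-1']
```

This is why deleting a pod managed by a replication controller doesn't stick: the next reconcile pass sees one replica too few and spins up a replacement.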

If you delete a pod, then a new one should get spun up automatically. You would just have to run: kubectl delete po <podname>

It's interesting that you are trying to delete the controller-manager. Typically, after creating it, you shouldn't have to touch it.

-- Steve Sloka
Source: StackOverflow