How to update Kubernetes API version in deployment script or using --runtime-config

12/11/2016

I need to be able to use the batch/v2alpha1 and apps/v1alpha1 API groups on k8s. Currently, I'm running a cluster with 1.5.0-beta.1 installed. I would prefer to do this in the deployment script, but all I can find are the fields

"apiVersionDefault": "2016-03-30",
"apiVersionStorage": "2015-06-15",

And nowhere can I find anything about what dates to use to update those. There are also instructions in the Kubernetes docs which explain how to use the --runtime-config flag on the kube-apiserver, so following those, I ssh'd into the master, found the kube-apiserver manifest file, and edited it to look like this:

apiVersion: "v1" kind: "Pod" metadata: name: "kube-apiserver" namespace: "kube-system" labels: tier: control-plane component: kube-apiserver spec: hostNetwork: true containers: - name: "kube-apiserver" image: "gcr.io/google_containers/hyperkube-amd64:v1.5.0-beta.1" command: - "/hyperkube" - "apiserver" - "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota" - "--address=0.0.0.0" - "--allow-privileged" - "--insecure-port=8080" - "--secure-port=443" - "--cloud-provider=azure" - "--cloud-config=/etc/kubernetes/azure.json" - "--service-cluster-ip-range=10.0.0.0/16" - "--etcd-servers=http://127.0.0.1:4001" - "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt" - "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key" - "--client-ca-file=/etc/kubernetes/certs/ca.crt" - "--service-account-key-file=/etc/kubernetes/certs/apiserver.key" - "--v=4" - "--runtime-config=batch/v2alpha1,apps/v1alpha1" volumeMounts: - name: "etc-kubernetes" mountPath: "/etc/kubernetes" - name: "var-lib-kubelet" mountPath: "/var/lib/kubelet" volumes: - name: "etc-kubernetes" hostPath: path: "/etc/kubernetes" - name: "var-lib-kubelet" hostPath: path: "/var/lib/kubelet"

That pretty much nuked my cluster, so I'm at a complete loss now. I'm going to have to rebuild the cluster, so I'd prefer to add this in the deployment template, but really any help would be appreciated.

-- josibake
azure
kubernetes

1 Answer

2/9/2018

ACS-Engine clusters let you override most of the options you might want - see this document for the cluster definitions. I don't think a post-deployment script exists because, apart from K8s version upgrades, there are no common options you'd want to change on the apiserver and the other k8s components after a deployment. For upgrades there are scripts included in ACS-Engine, and other options exist for various cloud providers and flavors of Kubernetes (e.g. Tectonic has a mechanism for auto-upgrades).
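As an aside, the "apiVersionDefault" / "apiVersionStorage" dates you found are, as far as I can tell, Azure Resource Manager API versions for the underlying Azure resources, not Kubernetes API versions, so changing them won't enable new API groups. What you want is the kubernetesConfig section of the cluster definition instead. A minimal sketch of the relevant fragment, assuming an ACS-Engine release that supports apiServerConfig (the field name may differ in older versions, and required sections such as masterProfile and agentPoolProfiles are omitted here):

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "apiServerConfig": {
          "--runtime-config": "batch/v2alpha1,apps/v1alpha1"
        }
      }
    }
  }
}

After editing the cluster definition, regenerate the templates with acs-engine and redeploy; the generated apiserver manifest should then carry the flag.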

To override the values after deployment of an ACS-Engine-deployed K8s cluster, you can manually update the manifests here:

/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
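For the --runtime-config change specifically, it's just a matter of adding the flag to the existing command list in the apiserver manifest; the kubelet watches that directory and restarts the static pod when the file changes. A sketch of the relevant fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (all of the other existing flags stay as they are):

spec:
  containers:
    - name: "kube-apiserver"
      command:
        - "/hyperkube"
        - "apiserver"
        # ...existing flags unchanged...
        - "--runtime-config=batch/v2alpha1,apps/v1alpha1"

Once the apiserver comes back up, kubectl api-versions should list the groups you enabled (assuming they exist in that Kubernetes version).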

Also update the kubelet settings here (e.g. to change the version of Kubernetes): /etc/default/kubelet

Of course, you'll want to kubectl drain each node before making these changes, reboot the node, and once the node comes back online and is running properly, kubectl uncordon it.
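A sketch of that flow for a single node (the node name here is hypothetical; use kubectl get nodes to find yours):

# move workloads off the node and mark it unschedulable
kubectl drain k8s-agent-0 --ignore-daemonsets
# ...edit the manifests / kubelet settings and reboot the node...
# once it is Ready again, allow scheduling on it
kubectl uncordon k8s-agent-0

Repeat node by node so the cluster keeps serving while you roll through the changes.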

Hard to say why your cluster was nuked without more information. In general, if you are making lots of changes to API versions and configurations, you are probably better off with a new cluster.

-- David Tesar
Source: StackOverflow