To add a new environment variable to my aws-node daemonset, to disable SNAT on EKS nodes in a private subnet, I tried to patch the daemonset with kubectl. I don't want to use kubectl edit because I want to add this variable with Ansible later.
$ kubectl patch daemonset -n kube-system aws-node --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/1/env/4", "value": {"name": "AWS_VPC_K8S_CNI_EXTERNALSNAT", "value": "true" } }]'
The "" is invalid
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.8-eks-7c34c0", GitCommit:"7c34c0d2f2d0f11f397d55a46945193a0e22d8f3", GitTreeState:"clean", BuildDate:"2019-03-01T22:49:39Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get daemonset -n kube-system aws-node -ojson
{
    "apiVersion": "extensions/v1beta1",
    "kind": "DaemonSet",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"k8s-app\":\"aws-node\"},\"name\":\"aws-node\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"k8s-app\":\"aws-node\"}},\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"aws-node\"}},\"spec\":{\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"beta.kubernetes.io/os\",\"operator\":\"In\",\"values\":[\"linux\"]},{\"key\":\"beta.kubernetes.io/arch\",\"operator\":\"In\",\"values\":[\"amd64\"]}]}]}}},\"containers\":[{\"env\":[{\"name\":\"AWS_VPC_K8S_CNI_LOGLEVEL\",\"value\":\"DEBUG\"},{\"name\":\"MY_NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"WATCH_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.3.2\",\"imagePullPolicy\":\"Always\",\"name\":\"aws-node\",\"ports\":[{\"containerPort\":61678,\"name\":\"metrics\"}],\"resources\":{\"requests\":{\"cpu\":\"10m\"}},\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/host/opt/cni/bin\",\"name\":\"cni-bin-dir\"},{\"mountPath\":\"/host/etc/cni/net.d\",\"name\":\"cni-net-dir\"},{\"mountPath\":\"/host/var/log\",\"name\":\"log-dir\"},{\"mountPath\":\"/var/run/docker.sock\",\"name\":\"dockersock\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"aws-node\",\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/opt/cni/bin\"},\"name\":\"cni-bin-dir\"},{\"hostPath\":{\"path\":\"/etc/cni/net.d\"},\"name\":\"cni-net-dir\"},{\"hostPath\":{\"path\":\"/var/log\"},\"name\":\"log-dir\"},{\"hostPath\":{\"path\":\"/var/run/docker.sock\"},\"name\":\"dockersock\"}]}},\"updateStrategy\":{\"type\":\"RollingUpdate\"}}}\n"
        },
        "creationTimestamp": "2019-05-15T06:16:57Z",
        "generation": 3,
        "labels": {
            "k8s-app": "aws-node"
        },
        "name": "aws-node",
        "namespace": "kube-system",
        "resourceVersion": "527483",
        "selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/aws-node",
        "uid": "0ae27eda-76d9-11e9-a0b4-02731f2710d4"
    },
    "spec": {
        "revisionHistoryLimit": 10,
        "selector": {
            "matchLabels": {
                "k8s-app": "aws-node"
            }
        },
        "template": {
            "metadata": {
                "annotations": {
                    "scheduler.alpha.kubernetes.io/critical-pod": ""
                },
                "creationTimestamp": null,
                "labels": {
                    "k8s-app": "aws-node"
                }
            },
            "spec": {
                "affinity": {
                    "nodeAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": {
                            "nodeSelectorTerms": [
                                {
                                    "matchExpressions": [
                                        {
                                            "key": "beta.kubernetes.io/os",
                                            "operator": "In",
                                            "values": [
                                                "linux"
                                            ]
                                        },
                                        {
                                            "key": "beta.kubernetes.io/arch",
                                            "operator": "In",
                                            "values": [
                                                "amd64"
                                            ]
                                        }
                                    ]
                                }
                            ]
                        }
                    }
                },
                "containers": [
                    {
                        "env": [
                            {
                                "name": "AWS_VPC_K8S_CNI_LOGLEVEL",
                                "value": "DEBUG"
                            },
                            {
                                "name": "MY_NODE_NAME",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "spec.nodeName"
                                    }
                                }
                            },
                            {
                                "name": "WATCH_NAMESPACE",
                                "valueFrom": {
                                    "fieldRef": {
                                        "apiVersion": "v1",
                                        "fieldPath": "metadata.namespace"
                                    }
                                }
                            }
                        ],
                        "image": "602401143452.dkr.ecr.eu-central-1.amazonaws.com/amazon-k8s-cni:v1.3.2",
                        "imagePullPolicy": "Always",
                        "name": "aws-node",
                        "ports": [
                            {
                                "containerPort": 61678,
                                "hostPort": 61678,
                                "name": "metrics",
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "10m"
                            }
                        },
                        "securityContext": {
                            "privileged": true
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
                        "volumeMounts": [
                            {
                                "mountPath": "/host/opt/cni/bin",
                                "name": "cni-bin-dir"
                            },
                            {
                                "mountPath": "/host/etc/cni/net.d",
                                "name": "cni-net-dir"
                            },
                            {
                                "mountPath": "/host/var/log",
                                "name": "log-dir"
                            },
                            {
                                "mountPath": "/var/run/docker.sock",
                                "name": "dockersock"
                            }
                        ]
                    }
                ],
                "dnsPolicy": "ClusterFirst",
                "hostNetwork": true,
                "restartPolicy": "Always",
                "schedulerName": "default-scheduler",
                "securityContext": {},
                "serviceAccount": "aws-node",
                "serviceAccountName": "aws-node",
                "terminationGracePeriodSeconds": 30,
                "tolerations": [
                    {
                        "operator": "Exists"
                    }
                ],
                "volumes": [
                    {
                        "hostPath": {
                            "path": "/opt/cni/bin",
                            "type": ""
                        },
                        "name": "cni-bin-dir"
                    },
                    {
                        "hostPath": {
                            "path": "/etc/cni/net.d",
                            "type": ""
                        },
                        "name": "cni-net-dir"
                    },
                    {
                        "hostPath": {
                            "path": "/var/log",
                            "type": ""
                        },
                        "name": "log-dir"
                    },
                    {
                        "hostPath": {
                            "path": "/var/run/docker.sock",
                            "type": ""
                        },
                        "name": "dockersock"
                    }
                ]
            }
        },
        "templateGeneration": 3,
        "updateStrategy": {
            "rollingUpdate": {
                "maxUnavailable": 1
            },
            "type": "RollingUpdate"
        }
    },
    "status": {
        "currentNumberScheduled": 4,
        "desiredNumberScheduled": 4,
        "numberMisscheduled": 0,
        "numberReady": 0,
        "numberUnavailable": 4,
        "observedGeneration": 3,
        "updatedNumberScheduled": 4
    }
}
I expect the patch to add a new environment variable that disables SNAT for EKS workers in a private subnet.
The error is in the container array index. It should be 0, not 1, because aws-node is the first (and only) container in the pod spec. With containers/0 the command becomes:
kubectl patch daemonset -n kube-system aws-node --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/env/4", "value": {"name": "AWS_VPC_K8S_CNI_EXTERNALSNAT", "value": "true" } }]'
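A note on the env index: JSON Patch (RFC 6902) only allows an add at an index up to the current length of the array, so if the live object has fewer env entries than the index (the spec above shows only three), the patch is rejected. The special index - always appends to the end of the array and sidesteps the counting entirely. A variant of the same command using it (a sketch, not verified on this cluster):
kubectl patch daemonset -n kube-system aws-node --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/env/-", "value": {"name": "AWS_VPC_K8S_CNI_EXTERNALSNAT", "value": "true" } }]'
The append form also carries over cleanly to automation later, e.g. an Ansible task that shells out to kubectl or uses the kubernetes.core collection.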
It worked for me. Thanks for the command. It was pretty close.
I'm actually using it to enable CNI custom networking in EKS. If anyone needs it:
kubectl patch daemonset -n kube-system aws-node --type='json' -p='[{"op":"add","path":"/spec/template/spec/containers/0/env/0", "value":{"name":"AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG","value":"true"}}]'
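To confirm the variable actually landed on the container (assuming aws-node is container 0, as in the spec above), you can list the rendered env names afterwards:
kubectl get daemonset -n kube-system aws-node -o jsonpath='{.spec.template.spec.containers[0].env[*].name}'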
You might need to use an older kubectl version. Your kubectl client (v1.14.1) might be too new for the Kubernetes API server (v1.11.8) that you are trying to reach. The version skew policy in the docs says
kubectl is supported within one minor version (older or newer) of kube-apiserver.
I would try using the same version on both the client and the server. To download a specific kubectl version:
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.11.8/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
and then try your command again.
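As a quick sanity check before retrying, the downloaded binary can report its own version (note the explicit ./ since it is not on your PATH yet):
$ ./kubectl version --client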