Is there a way to add node labels when deploying worker nodes in EKS? I do not see an option in the CloudFormation template available for worker nodes.
The only option I see right now is to use the kubectl label command to add labels, which happens post cluster setup. However, we need complete automation, which means applications are deployed automatically after the cluster is deployed, and labels help achieve that segregation.
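For context, the manual approach would be something like the following, where the node name and label are just placeholders:

kubectl label nodes ip-10-0-0-1.ec2.internal tier=development

This works, but it can only run after the nodes have joined the cluster, which breaks the full automation we're after.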
With the new EKS-optimized AMIs (amazon-eks-node-vXX) and the CloudFormation template refactor provided by AWS, it is now possible to add node labels simply by passing arguments to the BootstrapArguments parameter of the [amazon-eks-nodegroup.yaml][1] CloudFormation template. For example: --kubelet-extra-args --node-labels=my-key=my-value. For more details, see the AWS announcement: Improvements for Amazon EKS Worker Node Provisioning.
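As a sketch, this is roughly how the parameter would be passed when creating the node-group stack with the AWS CLI (the stack name, cluster name, and template path are placeholders, and the real template requires more parameters than shown here):

# Hypothetical stack creation; BootstrapArguments is the relevant part
aws cloudformation create-stack \
  --stack-name my-eks-workers \
  --template-body file://amazon-eks-nodegroup.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
      ParameterKey=ClusterName,ParameterValue=my-cluster \
      ParameterKey=BootstrapArguments,ParameterValue="--kubelet-extra-args --node-labels=my-key=my-value"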
You'll need to add the config in user_data and use the --node-labels option for the kubelet. Here's an example user_data which includes node labels:
NodeLaunchConfig:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    AssociatePublicIpAddress: 'true'
    IamInstanceProfile: !Ref NodeInstanceProfile
    ImageId: !Ref NodeImageId
    InstanceType: !Ref NodeInstanceType
    KeyName: !Ref KeyName
    SecurityGroups:
      - !Ref NodeSecurityGroup
    UserData:
      Fn::Base64:
        Fn::Join: [
          "",
          [
            "#!/bin/bash -xe\n",
            "CA_CERTIFICATE_DIRECTORY=/etc/kubernetes/pki", "\n",
            "CA_CERTIFICATE_FILE_PATH=$CA_CERTIFICATE_DIRECTORY/ca.crt", "\n",
            "MODEL_DIRECTORY_PATH=~/.aws/eks", "\n",
            "MODEL_FILE_PATH=$MODEL_DIRECTORY_PATH/eks-2017-11-01.normal.json", "\n",
            "mkdir -p $CA_CERTIFICATE_DIRECTORY", "\n",
            "mkdir -p $MODEL_DIRECTORY_PATH", "\n",
            "curl -o $MODEL_FILE_PATH https://s3-us-west-2.amazonaws.com/amazon-eks/1.10.3/2018-06-05/eks-2017-11-01.normal.json", "\n",
            "aws configure add-model --service-model file://$MODEL_FILE_PATH --service-name eks", "\n",
            "aws eks describe-cluster --region=", { Ref: "AWS::Region" }, " --name=", { Ref: ClusterName }, " --query 'cluster.{certificateAuthorityData: certificateAuthority.data, endpoint: endpoint}' > /tmp/describe_cluster_result.json", "\n",
            "cat /tmp/describe_cluster_result.json | grep certificateAuthorityData | awk '{print $2}' | sed 's/[,\"]//g' | base64 -d > $CA_CERTIFICATE_FILE_PATH", "\n",
            "MASTER_ENDPOINT=$(cat /tmp/describe_cluster_result.json | grep endpoint | awk '{print $2}' | sed 's/[,\"]//g')", "\n",
            "INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)", "\n",
            "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /var/lib/kubelet/kubeconfig", "\n",
            "sed -i s,CLUSTER_NAME,", { Ref: ClusterName }, ",g /var/lib/kubelet/kubeconfig", "\n",
            "sed -i s,REGION,", { Ref: "AWS::Region" }, ",g /etc/systemd/system/kubelet.service", "\n",
            "sed -i s,MAX_PODS,", { "Fn::FindInMap": [ MaxPodsPerNode, { Ref: NodeInstanceType }, MaxPods ] }, ",g /etc/systemd/system/kubelet.service", "\n",
            "sed -i s,MASTER_ENDPOINT,$MASTER_ENDPOINT,g /etc/systemd/system/kubelet.service", "\n",
            "sed -i s,INTERNAL_IP,$INTERNAL_IP,g /etc/systemd/system/kubelet.service", "\n",
            "DNS_CLUSTER_IP=10.100.0.10", "\n",
            "if [[ $INTERNAL_IP == 10.* ]] ; then DNS_CLUSTER_IP=172.20.0.10; fi", "\n",
            "sed -i s,DNS_CLUSTER_IP,$DNS_CLUSTER_IP,g /etc/systemd/system/kubelet.service", "\n",
            "sed -i s,CERTIFICATE_AUTHORITY_FILE,$CA_CERTIFICATE_FILE_PATH,g /var/lib/kubelet/kubeconfig", "\n",
            "sed -i s,CLIENT_CA_FILE,$CA_CERTIFICATE_FILE_PATH,g /etc/systemd/system/kubelet.service", "\n",
            "sed -i '/--node-ip/ a \\ \\ --node-labels tier=development \\\\' /etc/systemd/system/kubelet.service", "\n",
            "systemctl daemon-reload", "\n",
            "systemctl restart kubelet", "\n",
            "/opt/aws/bin/cfn-signal -e $? ",
            " --stack ", { Ref: "AWS::StackName" },
            " --resource NodeGroup ",
            " --region ", { Ref: "AWS::Region" }, "\n"
          ]
        ]
The relevant line is:
"sed -i s,INTERNAL_IP/a,--node-labels tier=development,g /etc/systemd/system/kubelet.service" , "\n"
WARNING: I haven't tested this, but I do something similar and it works fine.
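Once a node has joined, you can check that the label was applied (this assumes the tier=development label from the example above):

kubectl get nodes -l tier=development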
If you are using eksctl you can add labels to the node groups, like so:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-cluster
  region: eu-north-1

nodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: m5.xlarge
    desiredCapacity: 10
    privateNetworking: true
  - name: ng-2-builders
    labels: { role: builders }
    instanceType: m5.2xlarge
    desiredCapacity: 2
    privateNetworking: true
See https://eksctl.io/usage/managing-nodegroups/ for more info.
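Assuming the config above is saved as cluster.yaml (the filename is arbitrary), the labels are applied when the cluster or the node groups are created:

# Create the whole cluster, node groups and labels included
eksctl create cluster -f cluster.yaml

# Or add just the node groups to an existing cluster
eksctl create nodegroup --config-file=cluster.yaml

# Verify the labels landed on the nodes
kubectl get nodes -l role=workers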
I've managed to get it working with the following sed expression:
sed -i '/--node-ip/ a \ \ --node-labels group=node \\' /etc/systemd/system/kubelet.service
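For reference, the append (a) command inserts a new continuation line after the line matching --node-ip, so the ExecStart block of /etc/systemd/system/kubelet.service ends up looking roughly like this (the IP value is illustrative):

  --node-ip=10.0.0.42 \
  --node-labels group=node \

The trailing \\ in the sed expression is what produces the literal backslash that systemd needs to continue the line.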