According to the official kops documentation, in order for your nodes to get access to AWS ECR, the following flag needs to be added to the cluster spec:
iam:
  allowContainerRegistry: true
  legacy: false
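For context, as I understand it this block sits under spec in the cluster manifest that kops edit cluster opens; the surrounding fields below are just a sketch, and the cluster name is hypothetical:

```yaml
# Sketch of where the iam block lives in the kops cluster spec.
# Most fields omitted; the keys under spec.iam are as documented.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: mycluster.example.com   # hypothetical cluster name
spec:
  iam:
    allowContainerRegistry: true
    legacy: false
```

After saving the edited spec, the docs say to perform a cluster update, e.g. kops update cluster mycluster.example.com --yes.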
Then, after updating the cluster, the following permissions should be added to your EC2 instances:
{
  "Sid": "kopsK8sECR",
  "Effect": "Allow",
  "Action": [
    "ecr:BatchCheckLayerAvailability",
    "ecr:BatchGetImage",
    "ecr:DescribeRepositories",
    "ecr:GetAuthorizationToken",
    "ecr:GetDownloadUrlForLayer",
    "ecr:GetRepositoryPolicy",
    "ecr:ListImages"
  ],
  "Resource": [
    "*"
  ]
}
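Note that the JSON above is a single statement, not a complete IAM policy document; a full document wraps it in a Version/Statement envelope. A minimal sketch (the variable names are my own):

```python
import json

# The ECR statement quoted from the kops docs, embedded in a complete
# IAM policy document (Version + Statement envelope).
ecr_statement = {
    "Sid": "kopsK8sECR",
    "Effect": "Allow",
    "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:DescribeRepositories",
        "ecr:GetAuthorizationToken",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:ListImages",
    ],
    "Resource": ["*"],
}

policy_document = {
    "Version": "2012-10-17",
    "Statement": [ecr_statement],
}

print(json.dumps(policy_document, indent=2))
```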
However, I just created a cluster on AWS using kops, and my nodes already have those permissions, without me doing any additional configuration. Is this normal?
$ kops version
Version 1.8.0 (git-5099bc5)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
According to the kops documentation, those roles are created for you and assigned to the right ec2 instances:
Two IAM roles are created for the cluster: one for the masters, and one for the nodes.
The Strict IAM flag by default will not grant nodes access to the AWS EC2 Container Registry (ECR), as can be seen by the above example policy documents. To grant access to ECR, update your Cluster Spec with the following and then perform a cluster update:
iam:
  allowContainerRegistry: true
  legacy: false
As far as I understand it, if you add allowContainerRegistry: true, kops will add those permissions to the automatically created IAM role.