Is there a way to deploy a helm chart from CodeDeploy?

12/9/2019

Has anyone figured out a good system to deploy Helm (3) charts to EKS via CodeDeploy? My searches haven't turned up anything exactly on point, and I want to check before rolling my own.

Research so far:

So it seems like my best chance is to start with the final option: create a Helm 3 layer of my own, have CodeBuild generate artifacts such as the Helm chart and kubeconfig, modify the Helm lambda in the Quickstart to consume them, and then initiate the helm upgrade from that lambda within CodeDeploy. Is that a sound strategy?

This task seems like a very obvious one. Kubernetes is a big deal. Helm is a big deal. CI/CD is a big deal. So it seems like there's a significant population of AWS users who might want this. But there's no clear best practice to follow.

-- Iain Bryson
amazon-web-services
aws-code-deploy
continuous-integration
eks
kubernetes-helm

2 Answers

12/12/2019

I agree with you, this is a gap. CodeDeploy's deployment integrations are tightly scoped; it can only deploy to:

  • EC2 Instance
  • On Premises Server
  • ECS (Rolling & Blue/Green)
  • Lambda

There is no EKS deployment option as of yet.

In the absence of native integration, anything you do to achieve the requirement will be a 'hack' at best. Looking at the CodeDeploy architecture, it is not well suited to even such hacks. I would instead advise using CodeBuild and running the helm commands yourself in the buildspec. See this answer [1] for connecting CodeBuild to EKS. There are other similar options, like CodePipeline + Jenkins, but the idea is the same.

[1] Getting "Unable to recognize \"hello-k8s.yml\": Unauthorized" error when running kubectl apply -f hello-k8s.yml in CodeBuild phase

-- shariqmaws
Source: StackOverflow

12/21/2019

Here's what I ended up doing. In order to deploy with a lambda function, I needed layers for kubectl and helm. The AWS EKS Quickstart has a good kubectl layer, but its helm layer is not Helm 3, so I made my own:

docker build ./lambdas/layers/helm -t makehelm:latest
pushd lambdas/layers/helm
mkdir -p lambda/bin
docker run -v $PWD/lambda/bin:/out makehelm:latest cp -R /usr/local/bin/helm /out/
zip -r lambda.zip lambda

lambdas/layers/helm contains the following Dockerfile:

FROM amazonlinux:2

# Build tools, openssl, and which are needed by the Helm install script
RUN yum update -y && \
    yum install -y openssl openssl-devel which && \
    yum groupinstall -y "Development Tools"

# Fetch and run the official Helm 3 install script
RUN curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
RUN chmod 700 get_helm.sh
RUN ./get_helm.sh

# Sanity check: this is the binary the layer will ship
RUN ls -als /usr/local/bin/helm

The next step is to produce the Helm chart as an artifact of the CodeBuild pipeline (my chart template lives inside the product source repo):

. . .
    post_build: {
        commands: [
. . .
            'docker push "${PRODUCT_REPOSITORY_URI}:${CODEBUILD_RESOLVED_SOURCE_VERSION}"',
            './scripts/make-helm-deployment-values.sh > product-chart/values-dev.yaml',
            './scripts/make-aws-deployment-values.sh > product-chart/templates/aws-resources-configmap.yaml',
            'cat product-chart/values-dev.yaml',
            'cat product-chart/templates/aws-resources-configmap.yaml',
            'zip -r facts_machine_chart.zip product-chart/',
. . .
        ]
    }
},
artifacts: {
    'base-directory': '.',
    files: ['facts_machine_chart.zip'],
},
. . .

The make-* scripts are there to flow parameters derived from the CloudFormation template into the code running in EKS (i.e. CF template -> CodeBuild environment variable -> script generating EKS ConfigMaps from the environment -> ConfigMaps used in the chart). I use this for things like CloudFront ARNs and such.
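The first link in that chain is just environment variables set on the CodeBuild project from the stack. A sketch (the construct names here are placeholders for whatever your stack defines):

const buildProject = new codebuild.PipelineProject(this, 'Build', {
    // values the make-* scripts read from the environment; `productRepository`
    // and `distribution` stand in for the stack's actual constructs
    environmentVariables: {
        PRODUCT_REPOSITORY_URI: { value: productRepository.repositoryUri },
        CLOUDFRONT_DISTRIBUTION_ID: { value: distribution.distributionId },
    },
    // ...buildSpec with the post_build commands shown above...
});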

Next, define the lambda, and add the proper permissions and environment:

        const helmLayer = new LayerVersion(this, 'helmLayer', {
            code: Code.fromAsset(path.join(__dirname, '../lambdas/layers/helm/lambda')),
            compatibleRuntimes: [Runtime.PYTHON_3_7],
            description: 'helm support',
            layerVersionName: 'helmLayer'
        });

        const deployFunction = new Function(this, 'deployFunction', {
            runtime: Runtime.PYTHON_3_7,
            handler: 'index.handler',
            code: Code.fromAsset(__dirname + '/../lambdas/deploy'),
            timeout: cdk.Duration.seconds(300),
            layers: [kubectlLayer, helmLayer]  // kubectlLayer comes from the EKS Quickstart
        });
// lambda created above is passed in via `props.deployLambda`
// helmChartArtifact is the CDK construct matching the BuildProps artifact declaration of the chart zipfile
        props.deployLambda.addEnvironment('EKS_CLUSTER_ROLE_ARN', props.clusterDeveloperRole.roleArn);
        props.deployLambda.addEnvironment('EKS_CLUSTER_ARN', props.cluster.clusterArn);
        props.deployLambda.addEnvironment('EKS_CLUSTER_ENDPOINT', props.cluster.clusterEndpoint);
        const deployAction = new codepipeline_actions.LambdaInvokeAction({
            actionName: 'Deploy',
            lambda: props.deployLambda,
            inputs: [helmChartArtifact]
        });

        pipeline.addStage({
            stageName: 'Deploy',
            actions: [deployAction],
        });

        const kubeConfigSecret = secretsmanager.Secret.fromSecretArn(this, 'ProductDevSecret', 'arn:aws:secretsmanager:us-west-2:947675402426:secret:dev/product/kubeconfig-2XgYxq');
        kubeConfigSecret.grantRead(props.deployLambda.role as iam.IRole);
        // Must be admin to deploy in our case...
        // props.deployLambda.addToRolePolicy(props.clusterDeveloperPolicyStatement);
        props.deployLambda.addToRolePolicy(props.clusterAdminPolicyStatement);
        props.deployLambda.addToRolePolicy(new iam.PolicyStatement({
            effect: iam.Effect.ALLOW,
            actions: [
                'eks:DescribeCluster'
            ],
            resources: [props.cluster.clusterArn]
        }));

I haven't found a more elegant way to do this, so I'm (manually) taking the kubeconfig I generate after the template is deployed, stashing it in Secrets Manager, and using that to authenticate the lambda with the cluster. I'd love a more elegant solution.
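The one-off stash can be as simple as a small script against the aws-sdk (a sketch; the region, secret name, and kubeconfig path reflect my setup):

import { SecretsManager } from 'aws-sdk';
import { readFileSync } from 'fs';
import * as os from 'os';

// One-off: push the locally generated kubeconfig into Secrets Manager so the
// deploy lambda can read it at run time.
const sm = new SecretsManager({ region: 'us-west-2' });
sm.createSecret({
    Name: 'dev/product/kubeconfig',
    SecretString: readFileSync(`${os.homedir()}/.kube/config`, 'utf8'),
}).promise()
    .then(() => console.log('kubeconfig stored'))
    .catch(err => console.error(err));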

Finally, the lambda itself. It mostly works like the EKS Quickstart lambda, with the following guts (the helper functions it calls are sketched at the end of the listing):

        # `client` here is a boto3 Secrets Manager client created outside the handler
        secret_string = client.get_secret_value(SecretId='dev/product/kubeconfig')['SecretString']

        if not os.path.exists('/tmp/.kube'):
            os.mkdir('/tmp/.kube')
        kubeconfig_filename = "/tmp/.kube/config"
        with open(kubeconfig_filename, "w") as text_file:
            text_file.write(secret_string)
        os.environ["KUBECONFIG"] = kubeconfig_filename

        # Extract the Job ID
        job_id = event['CodePipeline.job']['id']

        # Extract the Job Data
        job_data = event['CodePipeline.job']['data']

        # Debug: list the layer contents unpacked under /opt
        for currentpath, folders, files in os.walk('/opt'):
            for file in sorted(files):
                print(os.path.join(currentpath, file))

        # with open(kubeconfig_filename, 'r') as fin:
        #    print(fin.read())
        # run_command("cat {}".format(kubeconfig_filename))

        print(json.dumps(event))

        # Get the list of artifacts passed to the function
        artifacts = job_data['inputArtifacts']

        # Get the artifact details
        artifact_data = find_artifact(artifacts, 'ProductChart')
        # Get an S3 client to access the artifact with
        s3 = setup_s3_client(job_data)
        # Download the chart zip out of the artifact (the helm commands below
        # assume the chart ends up extracted under /tmp/product-chart/)
        template = get_template(s3, artifact_data)

        run_command('kubectl version')
        run_command('helm status product')
        run_command('helm lint /tmp/product-chart/')
        run_command('kubectl delete job db-migrate', True)
        # TODO: could be upgrade or install, based on the status above, if we really want full automation
        run_command('helm upgrade product /tmp/product-chart/ -f /tmp/product-chart/values-dev.yaml')

        put_job_success(job_id, 'success')
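
# Not shown above: run_command, find_artifact, setup_s3_client, get_template
# and put_job_success come from the EKS Quickstart / CodePipeline sample
# lambdas. As a sketch, run_command can be as simple as the following (the
# signature and the layer locations added to PATH are my assumptions, not the
# samples' exact code):
import subprocess

def run_command(command, ignore_errors=False):
    # layer binaries are unpacked under /opt; the exact subdirectories depend
    # on how the layer zips were laid out (hence the os.walk('/opt') debugging
    # above)
    env = dict(os.environ)
    env['PATH'] += ':/opt/bin:/opt/kubectl:/opt/awscli'
    try:
        output = subprocess.check_output(command.split(), stderr=subprocess.STDOUT, env=env)
        print(output.decode('utf-8'))
    except subprocess.CalledProcessError as e:
        print(e.output.decode('utf-8'))
        if not ignore_errors:
            raise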
-- Iain Bryson
Source: StackOverflow