How do I roll out a new version of a container in a pod on Kubernetes using Terraform?

12/17/2018

While I've been learning about Kubernetes and Terraform, I've been building a Node.js microservices example.

It's all been going well so far, and with a few commands I can provision a Kubernetes cluster and deploy a couple of Node.js microservices to it.

The full example is available on GitHub: https://github.com/ashleydavis/nodejs-microservices-example

You can see the full setup for the cluster and the pods in this file: https://github.com/ashleydavis/nodejs-microservices-example/blob/master/scripts/infrastructure/kubernetes/kubernetes.tf

For example, one of the pods is defined like this:

resource "kubernetes_pod" "web" {
  metadata {
    name = "nodejs-micro-example-web"

    labels {
      name = "nodejs-micro-example-web"
    }
  }

  spec {
    container {
      image = "${var.docker_registry_name}.azurecr.io/web:${var.version}"
      name  = "nodejs-micro-example-web"
    }
  }
}

It all works great for the initial rollout, but I'm unable to get the system to update when I change the code and build new versions of the Docker images.

When I do this, I update the "version" variable that you can see in the previous snippet.

When I subsequently run terraform apply, it fails with the following error saying that the pod already exists:

kubernetes_pod.web: pods "nodejs-micro-example-web" already exists
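
For reference, "version" is just an ordinary Terraform input variable that I set from the build; a simplified sketch of the declarations looks something like this (the real file in the repo may differ slightly):

variable "docker_registry_name" {
  description = "Name of the Azure Container Registry that hosts the Docker images"
}

variable "version" {
  description = "Tag of the Docker images to deploy, e.g. a build number"
}

Bumping the version then just means re-running terraform apply with the new tag, e.g. terraform apply -var="version=2" (or setting the variable in any of the usual ways).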

So my question is: how do I use Kubernetes and Terraform to roll out code updates (i.e. updated Docker images) so that new pods are deployed to the cluster and the old pods are cleaned up?

-- Ashley Davis
docker
kubernetes
microservices
terraform

2 Answers

1/16/2019

To answer my own question: I'm now using a Kubernetes Deployment in my Terraform script to provision the pod, and this works well.

The full code example is on GitHub.

This is the configuration:

resource "kubernetes_deployment" "web" {
  metadata {
    name = "web"

    labels {
      test = "web"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels {
        test = "web"
      }
    }

    template {
      metadata {
        labels {
          test = "web"
    }
  }

  spec {
    container {
      image = "${var.docker_registry_name}.azurecr.io/web:${var.version}"
      name  = "web"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}
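
With this in place, bumping the "version" variable and re-running terraform apply updates the Deployment's pod template, and Kubernetes rolls out pods with the new image while cleaning up the old ones.

If you want more control over how that rollout happens, the provider also accepts a strategy block inside spec. Here's a sketch of what that could look like (not part of the working example above, and assuming a provider version that supports the strategy block):

  # Goes inside the Deployment's "spec" block, alongside replicas and template.
  # Brings a new pod up before an old one is taken down.
  strategy {
    type = "RollingUpdate"

    rolling_update {
      max_surge       = "1"
      max_unavailable = "0"
    }
  }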
-- Ashley Davis
Source: StackOverflow

12/17/2018

It's the following line that is incorrect:

    name = "nodejs-micro-example-web"

because a Pod's name is unique within its namespace.

You almost never want to deploy a standalone Pod, because Kubernetes considers Pods ephemeral. That's ordinarily not a problem, because Pods are usually created under the supervision of a Deployment or ReplicationController (or a few other controllers, but you get the idea). In your case, if (or rather, when) that Pod falls over, Kubernetes will not restart it, and that outcome negates a lot of the value Kubernetes brings to the situation.
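
If you really do need a standalone Pod managed by Terraform, one workaround for the name collision (assuming your provider version supports it) is to let Kubernetes generate the name instead of fixing it:

  metadata {
    # generate_name asks Kubernetes to append a random suffix, avoiding the
    # "already exists" collision; it still doesn't give you supervision or
    # rolling updates, so a Deployment remains the better answer.
    generate_name = "nodejs-micro-example-web-"
  }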

-- mdaniel
Source: StackOverflow