I see CI/CD systems like GitLab CI/CD, BitBucket Pipelines, and CodeFresh CI/CD. These look good, but I'm wondering: why not build my container images either on localhost (then upload the image using `rsync`/`scp`) or build directly on the cluster, and then deploy via `kubectl`? This would circumvent using a build service and container registry (which I don't want to pay for). For small teams, this seems viable. I realize it's not as nice as using a build service, but aside from that, why not run deployments this way?
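Concretely, the manual workflow I have in mind would be something like this (image and host names are just for illustration):

```
# Build locally, copy the image tarball to each node, and load it there.
docker build -t myapp:v1 .
docker save myapp:v1 | ssh user@node1 'docker load'
# ...repeat for every node, then point the Deployment at the local image
# (which would also need imagePullPolicy: Never or IfNotPresent).
kubectl set image deployment/myapp myapp=myapp:v1
```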
If you use the standard registry setup, you don’t need to give people ssh access to the nodes, and the nodes become fairly replaceable: if a node dies (and even cloud-hosted nodes sometimes need to be replaced), its replacement can pull everything it needs from a Docker registry, so you don’t need to do any manual work to bring a new one up.
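For example (all names here are placeholders), a Deployment that refers to its image by a registry path can be rescheduled onto any node, and that node pulls the image on its own:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels: { app: myapp }
  template:
    metadata:
      labels: { app: myapp }
    spec:
      containers:
      - name: myapp
        # Pulled from the registry by whichever node runs the pod.
        image: registry.example.com/myapp:v1
EOF
```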
At a very minimum I’d set up (or pay for) a Docker registry (or a hosted service: Docker Hub, quay.io, Google’s GCR, Amazon’s ECR, ...) and write a build script that runs `docker build` to produce an image, `docker push`es it somewhere appropriate, updates a Deployment object, and `kubectl apply`s it. You don’t necessarily need a CI system (though you probably want one; again, you can buy one in the cloud).
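As a sketch (the registry name, image name, and manifest layout are all assumptions, not anything specific to your project), such a build script can be quite small:

```
#!/bin/sh
# Minimal build-and-deploy script: build, push, then apply an updated Deployment.
set -e

REGISTRY=registry.example.com
IMAGE="$REGISTRY/myapp"
TAG="$(git rev-parse --short HEAD)"   # tag each build uniquely

docker build -t "$IMAGE:$TAG" .
docker push "$IMAGE:$TAG"

# Substitute the new tag into the Deployment manifest and apply it.
sed "s|IMAGE_TAG|$TAG|" deployment.yaml.tmpl > deployment.yaml
kubectl apply -f deployment.yaml
```

This assumes a `deployment.yaml.tmpl` with an `IMAGE_TAG` placeholder in the container image field; tagging each build with the git commit also makes rolling back a matter of re-applying an older manifest.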
On a non-technical level: if your deployment process involves hand-running a sequence of commands, especially in a small company, it will become a maintainability problem (you personally will spend a great deal of time running the same commands over and over, and you will be the one who has to fix it when it doesn’t work).