Deploy a C++ application to the Google Cloud Platform Kubernetes Engine

3/18/2018

As far as my understanding goes, the Kubernetes Engine is meant for deploying applications that can be load balanced; for example, an application which unhashes a string. If pod-a is under high load, work would be offloaded to pod-b. Correct me if I am wrong here, since if this is false, my following question will not make sense.


After exploring it for a few hours, I can't seem to figure out how to deploy a C++ application to the Kubernetes cluster. How would I do so?

What I tried:

I tried to follow the guide Interactive Tutorial - Deploying an App; however, I couldn't understand how I would get my C++ app into an image that could be deployed.

What the C++ application is:

At the moment it proxies TCP traffic to another host designated by the client's hostname. It is pretty much a reverse proxy; however, this is NOT an HTTP application.

-- DeprecatedLuke
c++
dockerfile
google-cloud-platform
google-kubernetes-engine
kubernetes

1 Answer

3/19/2018

Is Kubernetes the right choice?


Kubernetes is really useful for load balancing workloads, providing high availability in case of failure, speeding up test processes, increasing safety during production rollouts through different deployment strategies, and improving security through segregation.

However, not every kind of workload can take advantage of all the features Kubernetes introduces.

  • For example, if your application is built in such a way that it needs a stable amount of RAM and CPU, its code is likewise really stable, and you need merely one replica, then maybe Kubernetes and containers are not the best choice (even though you could perfectly well use them), and you should rather run everything on a big monolithic server/virtual machine.

But if your application needs to be deployed across different cloud providers, or should run for merely some hours every day, then it may well make use of those features. If you are willing to add a layer, make sure that you need the features it introduces; otherwise it will merely be overhead.

Note that Kubernetes is not capable of splitting your workload on its own. Therefore, regarding what you mean by "If pod-a is on high load, it would be offloaded to pod-b": yes, it is likely possible, but you have to instruct Kubernetes to do so.
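
As a minimal sketch of what "instructing it" looks like, assuming your application is already running as a Deployment named hello (the name matches the kubectl run example later in this answer; port 9000 is a placeholder): you scale to several replicas and put a Service in front of them, and the Service spreads incoming connections across the pods.

# Run several identical replicas of the (hypothetical) "hello" deployment
kubectl scale deployment hello --replicas=3

# A Service in front of the pods distributes new TCP connections
# across the replicas
kubectl expose deployment hello --port=9000 --target-port=9000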

Kubernetes takes care of running your PODs, making sure they are scheduled on nodes where enough memory and CPU are available according to your specification; you can also set up autoscaling procedures to support high-workload periods, or even scale the cluster itself. Your application should be designed to support a divide-and-conquer pattern; otherwise you will likely end up with three nodes, one pod running on one node, two nodes idle, and overhead that you could have avoided.

  • If your C++ application POD unhashes strings and a single request could consume all the resources of a node, Kubernetes will not split the initial workload and will not create more PODs for you, scheduling them across the cluster! Of course you can achieve something similar (see the autoscaling sketch below), but it will not come for free, and you will likely need to modify your C++ code.
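
For completeness, here is a minimal sketch of the autoscaling mentioned above; the deployment name hello and the thresholds are assumptions for illustration. Note that autoscaling adds whole replicas that each serve separate requests; it still does not split a single expensive request.

# Hypothetical example: keep between 1 and 5 replicas of the "hello"
# deployment, adding pods when average CPU usage exceeds 70%
kubectl autoscale deployment hello --cpu-percent=70 --min=1 --max=5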

You can certainly take advantage of Kubernetes, and running your application on it is pretty easy, but you may have to modify something in the architecture to take full advantage of those features.


Deploy the C++ application

The process to deploy your application to Kubernetes is pretty standard: develop it locally, create a Docker image with all the libraries and components you need, test it locally, push it to the registry, and create the deployment in Kubernetes.

Let's say that you have all the resources needed to run your application, plus your executable file, in a local folder. Create the Dockerfile.

Here is an example; modify it to implement your application. I have included it merely to show the syntax:

# Download base image, Ubuntu 16.04 (Xenial Xerus)
FROM ubuntu:16.04

# Update software repository
RUN apt-get update

# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor

# Define the environment variable
ENV nginx_vhost /etc/nginx/sites-available/default
[...]

# Enable php-fpm on the nginx virtualhost configuration
COPY default ${nginx_vhost}
[...]
RUN chown -R www-data:www-data /var/www/html

# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]

# Configure services and port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
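
Since the example above packages nginx/php rather than a C++ binary, here is a minimal sketch of what a Dockerfile for your C++ TCP proxy might look like instead, using a multi-stage build (available since Docker 17.05); the source file, binary name, compiler flags, and port are all placeholders:

# Build stage: compile the proxy (main.cpp and the flags are hypothetical)
FROM gcc:7 AS build
WORKDIR /src
COPY . .
RUN g++ -O2 -std=c++14 -o tcp-proxy main.cpp -lpthread

# Runtime stage: ship only the compiled binary to keep the image small
FROM ubuntu:16.04
COPY --from=build /src/tcp-proxy /usr/local/bin/tcp-proxy
EXPOSE 9000
CMD ["/usr/local/bin/tcp-proxy"]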

Build, push, and deploy it by running:

export PROJECT_ID="$(gcloud config get-value project -q)"
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
gcloud docker -- push gcr.io/${PROJECT_ID}/hello-app:v1
kubectl run hello --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port [port number if needed]
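
Since your application proxies raw TCP rather than HTTP, note that you can still expose it externally: on Google Kubernetes Engine, a Service of type LoadBalancer forwards plain TCP connections, not only HTTP. A sketch, assuming the hello deployment created above listens on port 9000 (the port is a placeholder):

# Expose the deployment through a GCP TCP load balancer
kubectl expose deployment hello --type=LoadBalancer --port=9000 --target-port=9000

# Retrieve the external IP once it has been provisioned
kubectl get service hello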

More information is here.

-- GalloCedrone
Source: StackOverflow