Use SSD or local SSD in GKE cluster

1/8/2018

I would like to have Kubernetes use the local SSD in my Google Kubernetes engine cluster without using alpha features. Is there a way to do this?

Thanks in advance for any suggestions or your help.

-- Milkakuh
google-cloud-platform
google-kubernetes-engine
kubernetes

3 Answers

3/5/2018

If you're looking to run all of Docker on SSDs in your Kubernetes cluster, this is how I did it on my node pool (Ubuntu nodes):

On your server:

# stop docker
sudo service docker stop

# format and mount disk
sudo mkfs.ext4 -m 0 -F -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo rm -rf /var/lib/docker
sudo mkdir -p /var/lib/docker
sudo mount -o discard,defaults /dev/sdb /var/lib/docker
sudo chmod 711 /var/lib/docker

# backup and edit fstab
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /var/lib/docker ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab

# start docker 
sudo service docker start

As mentioned by others, you might want to look into the Local SSD option provided by GKE first. The reason that option didn't cut it for me was that my nodes needed a single 4 TB SSD, and as I understand it, local SSDs come in a fixed size (375 GB each).

-- Steven De Coeyer
Source: StackOverflow

1/17/2018

You can use local SSDs with your Kubernetes nodes as explained in the documentation. To create a new cluster with local SSDs:

  • Visit the Kubernetes Engine menu in GCP Console.

  • Click Create cluster.

  • Configure your cluster as desired. Then, from the Local SSD disks (per node) field, enter the desired number of SSDs as an absolute number.

  • Click Create.

To create a node pool with local SSD disks in an existing cluster:

  • Visit the Kubernetes Engine menu in GCP Console.

  • Select the desired cluster.

  • Click Edit.

  • From the Node pools menu, click Add node pool. Configure the node pool as desired. Then, from the Local SSD disks (per node) field, enter the desired number of SSDs as an absolute number. Click Save.
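The console steps above can also be done from the command line. A minimal sketch using gcloud (cluster, pool, and zone names are hypothetical; at the time this feature was beta, so the `beta` command group may be required):

```shell
# Create a new cluster whose default node pool has one local SSD per node
gcloud beta container clusters create my-cluster \
    --zone us-central1-a \
    --local-ssd-count 1

# Add a node pool with local SSDs to an existing cluster
# (local SSDs cannot be added to an existing node pool)
gcloud beta container node-pools create my-ssd-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --local-ssd-count 1
```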

Be aware of the disadvantages/limitations of local SSD storage in Kubernetes, as explained in the documentation:

  • Because local SSDs are physically attached to the node's host virtual machine instance, any data stored on them exists only on that node. Since the data is local, you should ensure that your application is resilient to this data being unavailable.

  • A Pod that writes to a local SSD might lose access to the data stored on the disk if the Pod is rescheduled away from that node. Additionally, upgrading a node causes the data to be erased.

  • You cannot add local SSDs to an existing node pool.

The points above are very important if you want high availability in your Kubernetes deployment.
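On GKE, local SSDs are automatically formatted and mounted on each node under /mnt/disks/ssd0 (ssd1, ssd2, and so on for additional disks). A Pod can consume one through a hostPath volume. A minimal sketch, with hypothetical Pod and container names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-example        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: local-ssd
      mountPath: /cache    # path seen inside the container
  volumes:
  - name: local-ssd
    hostPath:
      path: /mnt/disks/ssd0   # where GKE mounts the first local SSD
```

Note the caveats above still apply: a Pod rescheduled to a different node will not see the same data.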

Kubernetes local SSD storage is ephemeral and presents some problems for non-trivial applications when running in containers. In Kubernetes, when a container crashes, kubelet will restart it, but the files in it will be lost because the container starts with a clean state. Also, when running containers together in a Pod it is often necessary that those containers share files.

You can use the Kubernetes Volume abstraction to solve the problems above, as explained in the documentation.
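For example, an emptyDir volume survives container restarts within a Pod and lets containers in the same Pod share files. A minimal sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-example   # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 5 && cat /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}   # survives container restarts; removed when the Pod is deleted
```

The emptyDir lives for the lifetime of the Pod, so data written by one container is visible to the other, but it is still lost when the Pod itself is deleted or rescheduled.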

-- JMD
Source: StackOverflow

1/10/2018

https://cloud.google.com/kubernetes-engine/docs/concepts/local-ssd explains how to use local SSDs on your nodes in Google Kubernetes Engine. Based on the gcloud commands, the feature appears to be beta (not alpha) so I don't think you need to rely on any alpha features to take advantage of it.

-- Robert Bailey
Source: StackOverflow