Can't mount NFS persistent volume in Kubernetes because the NFS client is not installed on the slave nodes

11/17/2015

After I manually install the NFS client package on each node, it works. But in GKE, slave nodes can be scaled in and out, and after a new slave node is created I lose the NFS client package again.
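Roughly, what I run by hand on each node is something like this (assuming a Debian-based node image and that nfs-common is the right package name):

    # install the NFS client on a slave node (Debian-based image assumed)
    sudo apt-get update
    sudo apt-get install -y nfs-common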

Is there any way to install a software package when Kubernetes spins up a new slave node?

-- user200778
google-kubernetes-engine
kubernetes

3 Answers

11/17/2015

There isn't currently a way to automatically run a command when a new GKE node is provisioned.

-- Robert Bailey
Source: StackOverflow

11/17/2015

Please also see https://github.com/kubernetes/kubernetes/issues/16741, where we're discussing NFS and pretty much exactly this problem (amongst others).

-- Prashanth B
Source: StackOverflow

11/17/2015

Starting last week, new GKE clusters should be created on 1.1.1 by default, and the nfs-common package is installed on all 1.1.1 clusters. (For existing clusters, you'll need to wait until the hosted master is upgraded, then initiate a node upgrade.)
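To check where your cluster stands, something along these lines should work (the node and cluster names below are placeholders, and the exact gcloud invocation may vary with your SDK version):

    # list nodes with their kubelet versions
    kubectl get nodes

    # spot-check one node for the NFS client package
    gcloud compute ssh gke-cluster-1-node-1 --command "dpkg -s nfs-common"

    # once the hosted master is on 1.1.1, initiate a node upgrade
    gcloud container clusters upgrade my-cluster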

See https://github.com/kubernetes/kubernetes/blob/release-1.1/examples/nfs/README.md for a larger example.
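For a rough idea of the shape of the objects in that example, a simplified NFS PersistentVolume and claim might look like the following; the server address, export path, and size are placeholders, and the linked README is the authoritative version:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs-pv
    spec:
      capacity:
        storage: 1Gi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.2      # placeholder NFS server address
        path: /exports        # placeholder export path
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-pvc
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi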

-- Zach Loafman
Source: StackOverflow