I can't find how to do this in the docs. After draining the node with --ignore-daemonsets --force, pods not managed by a ReplicationController, ReplicaSet, Job, DaemonSet, or StatefulSet are lost. How should I move such pods prior to issuing the drain command? I want to preserve the local data on these pods.
A good practice is to always start a Pod as a Deployment with spec.replicas: 1. It's very easy, as the Deployment's spec.template literally takes in your Pod spec, and quite convenient, as the Deployment will make sure your Pod is always running.
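For instance, a minimal single-replica Deployment could look like the sketch below (my-app and my-image are placeholder names, not anything from your setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1            # a single replica, matching the single-Pod use case
  selector:
    matchLabels:
      app: my-app
  template:              # your former Pod spec goes here, under spec.template
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image:latest
```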
Then, assuming you'll only have 1 replica of your Pod, you can simply use a PersistentVolumeClaim and attach it to the Pod as a volume; you do not need a StatefulSet in that case. Your data will be stored in the PVC, and whenever your Pod is moved across nodes for whatever reason, it will reattach the volume automatically without losing any data.
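A matching PVC might look like this; the name, size, and default storage class are assumptions to adapt to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce    # mountable by one node at a time; fine for a single replica
  resources:
    requests:
      storage: 1Gi
```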
Now, if it's too late for you and your Pod doesn't have a volume pointing to a PVC, you can still get ready for next time by implementing the Deployment/PVC approach above, and manually copy the data out of your current Pod:
kubectl cp theNamespace/thePod:/the/path /somewhere/on/your/local/computer
Then copy it back into the new Pod:
kubectl cp /somewhere/on/your/local/computer theNamespace/theNewPod:/the/path
This time, just make sure /the/path (to reuse the example above) is actually a volume mapped to a PVC, so you won't have to do this manually again!
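To illustrate, reusing the placeholder names from the sketches above, the relevant part of the Deployment's Pod template would map /the/path to the PVC like this:

```yaml
# ...inside the Deployment's spec.template.spec:
      containers:
        - name: my-app
          image: my-image:latest
          volumeMounts:
            - name: data
              mountPath: /the/path    # writes here now land in the PVC
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data    # the PVC sketched earlier
```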