I apologize for my poor English.
I created a cluster with 1 master node and 1 worker node, and deployed a container (replicas: 4).
Then kubectl get all shows something like the following (other columns omitted):
NAME             NODE
pod/container1   k8s-worker-1.local
pod/container2   k8s-worker-1.local
pod/container3   k8s-worker-1.local
pod/container4   k8s-worker-1.local
Next, I added 1 more worker node to the cluster, but all of the containers stay on worker1.
Ideally, I want 2 of the containers to stop and start up again on worker2, like this:
NAME             NODE
pod/container1   k8s-worker-1.local
pod/container2   k8s-worker-1.local
pod/container3   k8s-worker-2.local
pod/container4   k8s-worker-2.local
Do I need to run some command after adding the additional node?
Scheduling only happens when a pod is created; after that, a pod won't be moved to another node. There are tools out there for deleting (evicting) pods when nodes get too imbalanced, but if you're just starting out I wouldn't go that far for now. If you delete your 4 pods and recreate them (or, as is more common in a real situation, let the Deployment recreate them), they should end up more balanced, though possibly not exactly 2 and 2, since the scheduler isn't exact and spreading pods out is only one of the factors it uses.
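For example, assuming the 4 pods are managed by a Deployment named mydeploy whose pods carry the label app=mydeploy (hypothetical names; substitute your own), either of these should cause the pods to be recreated and rescheduled with both worker nodes available:

    # Rolling restart: the Deployment replaces each pod, and the scheduler
    # places the new pods, now with worker2 as a candidate node.
    kubectl rollout restart deployment mydeploy

    # Or, more bluntly, delete the pods and let the Deployment recreate them.
    # (Adjust the label selector to match your pods.)
    kubectl delete pods -l app=mydeploy

The rolling restart is usually the gentler option, since the Deployment brings up new pods before tearing down the old ones.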