In a test Kubernetes cluster (1.2.5) I verified that one can easily overload the cluster by creating a conflicting pair of a ReplicationController (first) and a Deployment (afterwards), e.g.:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx2
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
fighting against:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
So if several team members work on the same Kubernetes cluster, a single mistake like this can easily bring down all services on the cluster.
What tools or best practices protect against this kind of operational error?
There has been some discussion on this, but no good solutions yet.
Outside of Kubernetes, you would probably have to write a script that describes all ReplicationControllers/ReplicaSets/Deployments and makes sure that the new one you are adding does not have an overlapping selector before allowing the kubectl create call.
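The core of such a script is the overlap check itself. A minimal sketch in Python, assuming equality-based label selectors only (the names `selectors_overlap` and `find_conflicts` are illustrative, and fetching the live objects, e.g. via `kubectl get rc,deployments -o json`, is left out):

```python
def selectors_overlap(a, b):
    """Two equality-based selectors can both match the same pod unless
    they demand different values for some shared label key."""
    return all(a[k] == b[k] for k in a.keys() & b.keys())

def find_conflicts(existing, new_selector):
    """Return names of existing controllers whose selector overlaps
    with the selector of the object about to be created."""
    return [name for name, sel in existing.items()
            if selectors_overlap(sel, new_selector)]

# Example with the two objects above: the RC selects app=nginx, and the
# Deployment's pod template also carries app=nginx, so they conflict.
existing = {"nginx2": {"app": "nginx"}}
print(find_conflicts(existing, {"app": "nginx"}))   # conflict detected
print(find_conflicts(existing, {"app": "redis"}))   # no conflict
```

Note that selectors with disjoint key sets (e.g. `app: nginx` vs. `tier: db`) are reported as overlapping, since a pod labeled with both would match both controllers.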
Inside Kubernetes, some other possible solutions are under discussion; you can add your thoughts to https://github.com/kubernetes/kubernetes/issues/2210