I'm trying to pass the same label to two Deployments. Both deployments have different images and environment variables; I'm using the same label so I can group the metrics together.
But the deployment is failing. Can someone please point me to a workaround, or is it failing because of the API version I'm using?
Deployment1:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stg-postgres-exporter-pgauth
  namespace: prometheus-exporters
spec:
  replicas: 1
  template:
    metadata:
      labels:
        db: foo
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9187"
        prometheus.io/job_name: "postgres-exporter"
    spec:
      containers:
      - name: stg-rds-exporter
        image: wrouesnel/postgres_exporter:v0.8.0
        ....
Deployment2:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stg-rds-exporter-pgauth
  namespace: prometheus-exporters
spec:
  replicas: 1
  template:
    metadata:
      labels:
        db: foo
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9042"
        prometheus.io/job_name: "rds-exporter"
        prometheus.io/path: "/basic"
    spec:
      containers:
      - name: stg-rds-exporter-pgauth
        image: hbermu/rds_exporter:latest
        ....
Error:
15:27:39 The Deployment "stg-rds-exporter-pgauth" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"db":"foo"}: selector does not match template labels
kubectl version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T23:49:20Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.9-eks-502bfb", GitCommit:"502bfb383169b124d87848f89e17a04b9fc1f6f0", GitTreeState:"clean", BuildDate:"2020-02-07T01:31:02Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes uses labels and selectors to control the replicas of your Deployments; check the example below, available in the Kubernetes docs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
You have the selector matchLabels:
  selector:
    matchLabels:
      app: nginx
And the template labels:
  template:
    metadata:
      labels:
        app: nginx
They have to match, and that's why your deployment is failing: in extensions/v1beta1 the selector defaults to the template labels when the Deployment is first created, so if you change the labels on an existing Deployment without setting .spec.selector explicitly, the stored selector keeps the old labels and no longer matches the new template.
Since Kubernetes uses the selector to control the replicas of your Deployment, I'd recommend adding a second label to your Deployments. That makes each selector unique, while you can still query the entity by the shared label.
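For example, a corrected version of your second Deployment could look like this (a sketch only; the app label key and the apps/v1 API version are my assumptions, and any key that is unique per Deployment works just as well):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stg-rds-exporter-pgauth
  namespace: prometheus-exporters
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stg-rds-exporter-pgauth   # unique per Deployment, so the selectors never overlap
  template:
    metadata:
      labels:
        app: stg-rds-exporter-pgauth # must match .spec.selector
        db: foo                      # shared label, still usable for grouping metrics
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9042"
        prometheus.io/job_name: "rds-exporter"
        prometheus.io/path: "/basic"
    spec:
      containers:
      - name: stg-rds-exporter-pgauth
        image: hbermu/rds_exporter:latest
The selector only has to match a subset of the template labels, so db: foo can stay on both Deployments without making their selectors overlap.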
.spec.selector is a required field that specifies a label selector for the Pods targeted by this Deployment. .spec.selector must match .spec.template.metadata.labels, or it will be rejected by the API. In API version apps/v1, .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set, so they must be set explicitly. Also note that .spec.selector is immutable after creation of the Deployment in apps/v1.
You should not create other Pods whose labels match this selector, either directly, by creating another Deployment, or by creating another controller such as a ReplicaSet or a ReplicationController. If you do so, the first Deployment thinks that it created these other Pods. Kubernetes does not stop you from doing this.
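With a unique selector per Deployment and the shared db label kept on both Pod templates, you can still group the exporters in a single query, for example (assuming the label value from your manifests):
kubectl get pods -n prometheus-exporters -l db=foo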