I have a Kubernetes Job that is invalid. I am debugging it, and I extracted it to a YAML file, where I can see this:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: 2020-03-19T21:40:11Z
  labels:
    app: vault-unseal-app
    job-name: vault-unseal-vault-unseal-1584654000
  name: vault-unseal-vault-unseal-1584654000
  namespace: infrastructure
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: vault-unseal-vault-unseal
    uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a
  resourceVersion: "163413544"
  selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000
  uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      app: vault-unseal-app
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vault-unseal-app
        job-name: vault-unseal-vault-unseal-1584654000
    spec:
      containers:
      - env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        - name: VAULT_SKIP_VERIFY
          value: "1"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: vault_token
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_0
          valueFrom:
            secretKeyRef:
              key: unseal_key_0
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_1
          valueFrom:
            secretKeyRef:
              key: unseal_key_1
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_2
          valueFrom:
            secretKeyRef:
              key: unseal_key_2
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_3
          valueFrom:
            secretKeyRef:
              key: unseal_key_3
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_4
          valueFrom:
            secretKeyRef:
              key: unseal_key_4
              name: vault-unseal-vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        name: vault-unseal
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        nodePool: ci
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  conditions:
  - lastProbeTime: 2020-03-19T21:49:11Z
    lastTransitionTime: 2020-03-19T21:49:11Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 1
  startTime: 2020-03-19T21:40:11Z
```
When I run `kubectl create -f my_file.yaml`, I am getting this error:

```
The Job "vault-unseal-vault-unseal-1584654000" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"controller-uid":"35262878-07bb-11eb-9b2c-0abca2a23428", "app":"vault-unseal-app"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: `selector` not auto-generated
```
Can someone suggest how to fix this?
Update:

After testing removal of `.spec.selector`, I am getting this error:

```
error: jobs.batch "vault-unseal-vault-unseal-1584654000" is invalid
```

This is how my config looks without `.spec.selector`:
```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"batch/v1","kind":"Job","metadata":{"annotations":{},"creationTimestamp":"2020-03-19T21:40:11Z","labels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4","job-name":"vault-unseal-vault-unseal-1584654000"},"name":"vault-unseal-vault-unseal-1584654000","namespace":"infrastructure","ownerReferences":[{"apiVersion":"batch/v1beta1","blockOwnerDeletion":true,"controller":true,"kind":"CronJob","name":"vault-unseal-vault-unseal","uid":"c9965fdb-4fbb-11e9-80d7-061cf1426d5a"}],"resourceVersion":"163427805","selfLink":"/apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000","uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4"},"spec":{"backoffLimit":20,"completions":1,"parallelism":1,"selector":{"matchLabels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"controller-uid":"35e63c20-6a2a-11ea-b577-069afd6d30d4","job-name":"vault-unseal-vault-unseal-1584654000"}},"spec":{"containers":[{"env":[{"name":"VAULT_ADDR","value":"http://vault-vault:8200"},{"name":"VAULT_SKIP_VERIFY","value":"1"},{"name":"VAULT_TOKEN","valueFrom":{"secretKeyRef":{"key":"vault_token","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_0","valueFrom":{"secretKeyRef":{"key":"unseal_key_0","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_1","valueFrom":{"secretKeyRef":{"key":"unseal_key_1","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_2","valueFrom":{"secretKeyRef":{"key":"unseal_key_2","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_3","valueFrom":{"secretKeyRef":{"key":"unseal_key_3","name":"vault-unseal-vault-unseal"}}},{"name":"VAULT_UNSEAL_KEY_4","valueFrom":{"secretKeyRef":{"key":"unseal_key_4","name":"vault-unseal-vault-unseal"}}}],"image":"blockloop/vault-unseal","imagePullPolicy":"Always","name":"vault-unseal","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","nodeSelector":{"nodePool":"devs"},"restartPolicy":"OnFailure","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":5}}},"status":{"conditions":[{"lastProbeTime":"2020-03-19T21:49:11Z","lastTransitionTime":"2020-03-19T21:49:11Z","message":"Job has reached the specified backoff limit","reason":"BackoffLimitExceeded","status":"True","type":"Failed"}],"failed":1,"startTime":"2020-03-19T21:40:11Z"}}
  creationTimestamp: 2020-03-19T21:40:11Z
  labels:
    controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
    job-name: vault-unseal-vault-unseal-1584654000
  name: vault-unseal-vault-unseal-1584654000
  namespace: infrastructure
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: vault-unseal-vault-unseal
    uid: c9965fdb-4fbb-11e9-80d7-061cf1426d5a
  resourceVersion: "163442526"
  selfLink: /apis/batch/v1/namespaces/infrastructure/jobs/vault-unseal-vault-unseal-1584654000
  uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
spec:
  backoffLimit: 100
  completions: 1
  parallelism: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 35e63c20-6a2a-11ea-b577-069afd6d30d4
        job-name: vault-unseal-vault-unseal-1584654000
    spec:
      containers:
      - env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        - name: VAULT_SKIP_VERIFY
          value: "1"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              key: vault_token
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_0
          valueFrom:
            secretKeyRef:
              key: unseal_key_0
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_1
          valueFrom:
            secretKeyRef:
              key: unseal_key_1
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_2
          valueFrom:
            secretKeyRef:
              key: unseal_key_2
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_3
          valueFrom:
            secretKeyRef:
              key: unseal_key_3
              name: vault-unseal-vault-unseal
        - name: VAULT_UNSEAL_KEY_4
          valueFrom:
            secretKeyRef:
              key: unseal_key_4
              name: vault-unseal-vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        name: vault-unseal
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        nodePool: devs
      restartPolicy: OnFailure
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 5
status:
  conditions:
  - lastProbeTime: 2020-03-19T21:49:11Z
    lastTransitionTime: 2020-03-19T21:49:11Z
    message: Job has reached the specified backoff limit
    reason: BackoffLimitExceeded
    status: "True"
    type: Failed
  failed: 1
  startTime: 2020-03-19T21:40:11Z
```
It looks like you are not using the `selector` that the system generates for you automatically by default. Bear in mind that the recommended option when creating a Job is NOT to fill in `selector`, as doing so makes it more likely that you create duplicate labels+selectors. You should therefore use the auto-generated ones, which ensure uniqueness and free you from manual management.

The official docs explain this in more detail with an example. Please note the parts below:
> Normally, when you create a Job object, you do not specify `.spec.selector`. The system defaulting logic adds this field when the Job is created. It picks a selector value that will not overlap with any other jobs.
and:
> You need to specify `manualSelector: true` in the new Job since you are not using the selector that the system normally generates for you automatically.
If you want to use a manual selector, you need to set `.spec.manualSelector: true` in the Job's spec. This way the API server will not generate the selector and labels automatically, and you will be able to set them yourself.
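
A minimal sketch of what that looks like (the Job name and label value here are illustrative; any label pair that is unique among your Jobs works):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-unseal-manual          # illustrative name
  namespace: infrastructure
spec:
  manualSelector: true               # opt out of the auto-generated selector
  selector:
    matchLabels:
      app: vault-unseal-app          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: vault-unseal-app
    spec:
      containers:
      - name: vault-unseal
        image: blockloop/vault-unseal
      restartPolicy: OnFailure
```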
EDIT:

Remember that `spec.completions`, `spec.selector` and `spec.template` are immutable fields and are not allowed to be updated. In order to make changes there, you need to create a new Job.
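
Since this Job was spawned by a CronJob, one convenient way to create a fresh run instead of editing the old object is `kubectl create job --from` (a sketch; the new Job name is up to you):

```
kubectl -n infrastructure create job vault-unseal-manual-run \
  --from=cronjob/vault-unseal-vault-unseal
```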
The official docs regarding Writing a Job spec will help you understand what should and what shouldn't be put into the Job spec. Notice that despite:

> In addition to required fields for a Pod, a pod template in a Job must specify appropriate labels (see pod selector) and an appropriate restart policy.

it is still advised not to specify the pod selector / labels yourself, as I explained earlier, in order to avoid duplicate labels+selectors.
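
Putting it all together, here is a sketch of your manifest stripped down for `kubectl create -f`: server-populated metadata (`uid`, `resourceVersion`, `selfLink`, `creationTimestamp`, `ownerReferences`), the auto-generated `selector` and `controller-uid` labels, and the whole `status` block are removed, and the new name is illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: vault-unseal-manual-1584654000   # fresh name to avoid clashing with the old Job
  namespace: infrastructure
spec:
  backoffLimit: 0
  completions: 1
  parallelism: 1
  template:
    spec:
      containers:
      - name: vault-unseal
        image: blockloop/vault-unseal
        imagePullPolicy: Always
        env:
        - name: VAULT_ADDR
          value: http://vault-vault:8200
        - name: VAULT_SKIP_VERIFY
          value: "1"
        - name: VAULT_TOKEN
          valueFrom:
            secretKeyRef:
              name: vault-unseal-vault-unseal
              key: vault_token
        # ...VAULT_UNSEAL_KEY_0 through VAULT_UNSEAL_KEY_4 as in your original manifest...
      nodeSelector:
        nodePool: ci
      restartPolicy: OnFailure
      terminationGracePeriodSeconds: 5
```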