I wrote a Job and I always get an init error. I have noticed that if I remove the related command, everything goes fine and I do not get any init error. My question is: how can I debug commands that need to run in the Job? I use kubectl describe pod, but all I can see there is an exit status code 2.
apiVersion: batch/v1
kind: Job
metadata:
  name: database-import
spec:
  template:
    spec:
      initContainers:
        - name: download-dump
          image: google/cloud-sdk:alpine
          command: ##### ERROR HERE!!!
            - bash
            - -c
            - "gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz"
          volumeMounts:
            - name: application-default-credentials
              mountPath: "/secrets/"
              readOnly: true
            - name: data
              mountPath: "/data/"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /secrets/application_default_credentials.json
      containers:
        - name: database-import
          image: postgres:9.6-alpine
          command:
            - bash
            - -c
            - "gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch "
          env:
            - name: PGPASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: "/data/"
      volumes:
        - name: application-default-credentials
          secret:
            secretName: application-default-credentials
        - name: data
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4
And this is the output of kubectl describe job:
Name:           database-import
Namespace:      sbg
Selector:       controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
Labels:         app.kubernetes.io/managed-by=tilt
Annotations:    <none>
Parallelism:    1
Completions:    1
Start Time:     Wed, 23 Oct 2019 15:11:40 +0200
Pods Statuses:  1 Running / 0 Succeeded / 3 Failed
Pod Template:
  Labels:  app.kubernetes.io/managed-by=tilt
           controller-uid=a70d74a2-f596-11e9-a7fe-025000000001
           job-name=database-import
  Init Containers:
   download-dump:
    Image:      google/cloud-sdk:alpine
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -c
      gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /secrets/application_default_credentials.json
    Mounts:
      /data/ from data (rw)
      /secrets/ from application-default-credentials (ro)
  Containers:
   database-import:
    Image:      postgres:9.6-alpine
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -c
      gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch
    Environment:
      PGPASSWORD:  password
    Mounts:
      /data/ from data (rw)
  Volumes:
   application-default-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  application-default-credentials-464thb4k85
    Optional:    false
   data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
Events:
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  2m5s  job-controller  Created pod: database-import-9tsjw
  Normal  SuccessfulCreate  119s  job-controller  Created pod: database-import-g68ld
  Normal  SuccessfulCreate  109s  job-controller  Created pod: database-import-8cx6v
  Normal  SuccessfulCreate  69s   job-controller  Created pod: database-import-tnjnh
You can check the logs using:
kubectl logs <pod name>
where the pod name belongs to a completed or still-running Job pod. The logs give you a much better idea of the error, and that makes it easy to debug a Job running on Kubernetes.
If your cluster runs on GKE and you have Stackdriver monitoring enabled, you can use it for debugging as well.
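For example, you can list the pods that belong to this Job via the job-name label (the pod name below is taken from the events in your describe output; yours will differ) and then pull their logs:

kubectl get pods -l job-name=database-import
kubectl logs database-import-9tsjw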
The command to see the log of an init container run in a Job is:
kubectl logs -f <pod name> -c <init container name>
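For this Job, using one of the pod names from the events above, that would be:

kubectl logs -f database-import-9tsjw -c download-dump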
Init:Error -> the init container has failed to execute.
That is because there are some errors in the command section of your initContainers. The Kubernetes documentation on init containers describes how the YAML should be prepared.
I have fixed your YAML file:
apiVersion: batch/v1
kind: Job
metadata:
  name: database-import
spec:
  template:
    spec:
      containers:
        - name: database-import
          image: postgres:9.6-alpine
          command:
            - bash
            - "-c"
            - "gunzip -c /data/spryker-stage.gz | psql -h postgres -Uusername -W spy_ch "
          env:
            - name: PGPASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: "/data/"
      initContainers:
        - name: download-dump
          image: google/cloud-sdk:alpine
          command:
            - /bin/bash
            - "-c"
            - "gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /secrets/application_default_credentials.json
          volumeMounts:
            - name: application-default-credentials
              mountPath: "/secrets/"
              readOnly: true
            - name: data
              mountPath: "/data/"
      volumes:
        - name: application-default-credentials
          secret:
            secretName: application-default-credentials
        - name: data
          emptyDir: {}
      restartPolicy: Never
  backoffLimit: 4
Result after running kubectl apply -f job.yaml:
job.batch/database-import created
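To watch the Job come up and check the init container, you could then run something like the following (standard kubectl commands; the pod names will differ in your cluster):

kubectl get pods -l job-name=database-import -w
kubectl logs -f job/database-import -c download-dump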
Let me know if it works now.
EDIT
Use kubectl describe job <name of your job> and add the results to your question; then we will see why it is not working.
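If the logs still do not explain the failure, a common debugging trick (a sketch, not part of the fix above) is to temporarily swap the init container's command for a sleep, so the container stays alive and you can exec into it and run the failing command by hand:

command:
  - /bin/bash
  - "-c"
  - "sleep 3600"  # temporary: keep the init container alive for debugging

kubectl exec -it <pod name> -c download-dump -- bash
# inside the container, run the original command manually and inspect its output:
gsutil cp gs://webshop-254812-sbg-data-input/pg/spryker-stg.gz /data/spryker-stage.gz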