With the following Deployment manifest (managed via kustomize):
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: postfix
  name: postfix
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: postfix
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: postfix
    spec:
      containers:
        - name: postfix
          image: fk-postfix-placeholder
          command: ["postfix"]
          imagePullPolicy: Always
          env:
            - name: SMTP_DOMAIN
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: SMTP_DOMAIN
            - name: SMTP_USER
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: SMTP_USER
            - name: SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: SMTP_PASSWORD
          ports:
            - containerPort: 25
          resources:
            requests:
              cpu: 1m
              memory: 50Mi
          volumeMounts:
            - mountPath: /var/lib/postfix
              name: my-storage
              subPath: postfix
            - mountPath: /dev/shm
              name: dshm
      restartPolicy: Always
      volumes:
        - name: my-storage
          persistentVolumeClaim:
            claimName: my-storage-claim-postfix
        - name: dshm
          emptyDir:
            medium: Memory
My pod does not come up:
$k get pods
NAME                       READY   STATUS             RESTARTS   AGE
...
postfix-5555558c74-2kcv5   0/1     CrashLoopBackOff   16         18m
...
When I inspect it, I see the following in the events:
$k describe pods/postfix-5555558c74-2kcv5
...
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  28m                   default-scheduler  Successfully assigned my-test/postfix-5555558c74-2kcv5 to gke-xxxxxxxxx-us-central1-default-pool-xxxxxx61-29fq
Normal   Pulled     28m (x2 over 28m)     kubelet            Successfully pulled image "registry.gitlab.com/octomy/postfix:1.0.0-test-development"
Normal   Pulling    27m (x3 over 28m)     kubelet            Pulling image "registry.gitlab.com/octomy/postfix:1.0.0-test-development"
Normal   Created    27m (x3 over 28m)     kubelet            Created container postfix
Normal   Started    27m (x3 over 28m)     kubelet            Started container postfix
Warning  BackOff    3m8s (x125 over 28m)  kubelet            Back-off restarting failed container
But when I try to figure out what happened from the logs, this is what I see:
$k logs pods/postfix-5555558c74-2kcv5
Yes, literally nothing; the logs are completely empty. When I run the same image in a local container using docker-compose, the log is chock full of output.
So I want to know: what should be my next step in debugging this?
The answer is obvious once you spot it. The important part is: what does it mean when a normally log-happy container exits without a single line of log output?
The answer is: when a chatty container does not output any logs, the container never actually started. This could have many causes, but in my case the reason was that I had specified a custom command, like so:
command: ["postfix"]
There is no executable inside the container called "postfix", so the container simply terminated silently with exit code 1.
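You can confirm this even without any log output by looking at the container's last termination state. For example, with the pod name from above (the jsonpath expression and the --previous flag are standard kubectl features):
$k describe pods/postfix-5555558c74-2kcv5 | grep -A 5 "Last State"
$k get pods/postfix-5555558c74-2kcv5 -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
$k logs pods/postfix-5555558c74-2kcv5 --previous
The first two show the exit code of the last attempt (1 in my case), and the third prints the logs of the previous, crashed container instance, which here is just as empty.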
The reason why I had set a command in the first place was that I misunderstood the difference in nomenclature between Docker and Kubernetes. In Docker, an entrypoint is the executable to run and command is whatever arguments you pass to it. Both can be specified inside the image itself and both can be overridden at run time (for example on the command line or in docker-compose.yaml).
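For illustration, this is roughly what such an override looks like in docker-compose.yaml (the entrypoint script path here is made up; only the pattern matters):
services:
  postfix:
    image: registry.gitlab.com/octomy/postfix:1.0.0-test-development
    # entrypoint overrides the image's ENTRYPOINT: the executable to run
    entrypoint: ["/docker-entrypoint.sh"]   # placeholder path
    # command overrides the image's CMD: the arguments passed to the entrypoint
    command: ["postfix"]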
In Kubernetes, a command is the executable to run and args are the arguments you pass to it.
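In other words, Kubernetes command corresponds to Docker's ENTRYPOINT and Kubernetes args corresponds to Docker's CMD. Sketched as a container spec fragment (the executable path is again just a placeholder):
containers:
  - name: postfix
    image: fk-postfix-placeholder
    # command ~ Docker ENTRYPOINT: the executable to run
    command: ["/docker-entrypoint.sh"]   # placeholder path
    # args ~ Docker CMD: the arguments passed to it
    args: ["postfix"]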
I wanted to override the command specified inside the Docker image, so I naively put command: ["postfix"] in my Kubernetes YAML, but of course in Kubernetes this overrides the executable rather than the arguments. The fix was to put args: ["postfix"] instead, as shown below.
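The relevant fragment of the Deployment then becomes (only this one line changes; the rest of the container spec stays as above):
      containers:
        - name: postfix
          image: fk-postfix-placeholder
          args: ["postfix"]
          imagePullPolicy: Always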
<rant>
IMHO this is an area of potential improvement in the Kubernetes tooling. There should at least be some kind of warning such as "the specified command does not seem to exist in the container" in the event log or appended to the empty log output.
</rant>