I am preparing a Helm chart for Pilosa. After installing the chart (or while creating the deployment), the Pilosa pod enters a CrashLoopBackOff.
This is the rendered YAML file for the k8s deployment.
# Source: pilosa/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: RELEASE-NAME-pilosa
  labels:
    helm.sh/chart: pilosa-0.1.0
    app.kubernetes.io/name: pilosa
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: pilosa
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pilosa
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      imagePullSecrets:
        - name: my-cr-secret
      serviceAccountName: default
      securityContext:
        {}
      initContainers:
        - command:
            - /bin/sh
            - -c
            - |
              sysctl -w net.ipv4.tcp_keepalive_time=600
              sysctl -w net.ipv4.tcp_keepalive_intvl=60
              sysctl -w net.ipv4.tcp_keepalive_probes=3
          image: busybox
          name: init-sysctl
          securityContext:
            privileged: true
      containers:
        - name: pilosa
          securityContext:
            {}
          image: "mycr.azurecr.io/pilosa:v1.4.0"
          imagePullPolicy: IfNotPresent
          command:
            - server
            - --data-dir
            - /data
            - --max-writes-per-request
            - "20000"
            - --bind
            - http://pilosa:10101
            - --cluster.coordinator=true
            - --gossip.seeds=pilosa:14000
            - --handler.allowed-origins="*"
          ports:
            - name: http
              containerPort: 10101
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          volumeMounts:
            - name: "pilosa-pv-storage"
              mountPath: /data
          resources:
            {}
      volumes:
        - name: pilosa-pv-storage
          persistentVolumeClaim:
            claimName: pilosa-pv-claim
When I checked the reason for this, I found:
$ kubectl describe pod pilosa-57cb7b8764-knsmw
.
.
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  48s                default-scheduler  Successfully assigned default/pilosa-57cb7b8764-knsmw to 10.0.10.3
  Normal   Pulling    47s                kubelet            Pulling image "busybox"
  Normal   Pulled     45s                kubelet            Successfully pulled image "busybox"
  Normal   Created    45s                kubelet            Created container init-sysctl
  Normal   Started    45s                kubelet            Started container init-sysctl
  Normal   Pulling    45s                kubelet            Pulling image "mycr.azurecr.io/pilosa:v1.2.0"
  Normal   Pulled     15s                kubelet            Successfully pulled image "mycr.azurecr.io/pilosa:v1.2.0"
  Normal   Created    14s (x2 over 15s)  kubelet            Created container pilosa
  Warning  Failed     14s (x2 over 15s)  kubelet            Error: failed to start container "pilosa": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"server\": executable file not found in $PATH": unknown
  Normal   Pulled     14s                kubelet            Container image "mycr.azurecr.io/pilosa:v1.2.0" already present on machine
  Warning  BackOff    10s                kubelet            Back-off restarting failed container
That means the problem is that it cannot run the server command:
Error: failed to start container "pilosa": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"server\": executable file not found in $PATH": unknown
But that command is available in Pilosa, as documented here: https://www.pilosa.com/docs/latest/installation/
Can anyone help me find a solution to this?
The issue here is that Kubernetes is overriding the ENTRYPOINT in the Pilosa Docker image. The server command is actually a subcommand of pilosa, which works because of how the Pilosa Dockerfile defines the command:
ENTRYPOINT ["/pilosa"]
CMD ["server", "--data-dir", "/data", "--bind", "http://0.0.0.0:10101"]
Because you are using the command: declaration, it overrides both the ENTRYPOINT and the CMD when invoking the container.
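To make the mapping concrete: in a Kubernetes container spec, command: corresponds to Docker's ENTRYPOINT and args: corresponds to CMD. So with your current spec, the runtime treats the first list element as the binary itself and effectively tries to run something like:
# What the kubelet effectively asks the runtime to execute (no /pilosa in front):
server --data-dir /data --max-writes-per-request 20000 --bind http://pilosa:10101 --cluster.coordinator=true --gossip.seeds=pilosa:14000 --handler.allowed-origins="*"
There is no server executable on the image's $PATH, only /pilosa, hence the "executable file not found" error.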
I think the simple solution is to replace command: with args:, and I believe k8s will then no longer override the ENTRYPOINT. Or you could instead add /pilosa to the front of the command.
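For example, the relevant part of the container spec could become (a sketch based on your rendered YAML, with only command: changed to args:):
containers:
  - name: pilosa
    image: "mycr.azurecr.io/pilosa:v1.4.0"
    imagePullPolicy: IfNotPresent
    # args: maps to Docker's CMD, so the image's ENTRYPOINT ["/pilosa"]
    # is preserved and "server" is passed to it as a subcommand
    args:
      - server
      - --data-dir
      - /data
      - --max-writes-per-request
      - "20000"
      - --bind
      - http://pilosa:10101
      - --cluster.coordinator=true
      - --gossip.seeds=pilosa:14000
      - --handler.allowed-origins="*"
Or, if you prefer to keep command:, spell out the binary:
    # command: replaces ENTRYPOINT, so the binary itself must come first
    command:
      - /pilosa
      - server
      - --data-dir
      - /data
      # ...remaining flags as before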
You may also take a look at this Pilosa helm chart, which is unmaintained but might work for you. Note that it uses a StatefulSet instead of a Deployment, which should fit Pilosa better: https://github.com/pilosa/helm
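After upgrading the release, you can check that the pod starts and that Pilosa is actually running; for instance (these commands assume the labels and name from your rendered manifest):
kubectl get pods -l app.kubernetes.io/name=pilosa
kubectl logs deploy/RELEASE-NAME-pilosa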