I have written the nifi.properties into a Kubernetes ConfigMap. When I deploy NiFi (as a StatefulSet) I want the deployed NiFi to use this nifi.properties file. To do so I added a volume for the ConfigMap and mounted it in the container. The associated statefulset.yaml looks like this:
...
containers:
  - name: 'myName'
    image: 'apache/nifi:latest'
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
      - name: http-2
        containerPort: 1337
        protocol: TCP
    volumeMounts:
      - name: 'nifi-config'
        mountPath: /opt/nifi/nifi-1.6.0/conf/nifi.properties
volumes:
  - name: 'nifi-config'
    configMap:
      name: 'nifi-config'
...
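For reference, the `nifi-config` ConfigMap used above might look like the following sketch (the key name and property values are illustrative; the key must match whatever the mount expects):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nifi-config
data:
  nifi.properties: |
    # contents of your nifi.properties go here, for example:
    nifi.web.http.port=8080
```

Equivalently, `kubectl create configmap nifi-config --from-file=nifi.properties` builds the same object from the file on disk.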
This doesn't work. I think it is because NiFi is already running and the nifi.properties file is locked by the service. The pod cannot be created; I get the error: ...Device or resource is busy. I also tried this with the bootstrap.conf file, which works, but I don't think changes there are picked up by the NiFi service, because it would have to be restarted.
I already had the same issue with NiFi deployed on plain Docker, where I worked around it by stopping the container, copying the files in and starting the container again; not very pretty, but it worked.
Using environment variables to change values in NiFi as stated here is also not an option, because the parameters that can be changed that way are very limited.
This problem doesn't occur for NiFi only. I think there are many situations where someone wants to change the configuration of a system running within Kubernetes, so I hope there is a solution for this issue.
I solved this with the help of this helm file, but changed it a bit. It is actually nearly the same as the answer pepov has given, but as stated in my comment, I got a CrashLoopBackOff with that. This also had nothing to do with the image version, because I used my own image, which is based on NiFi 1.6.0 and also contains some custom processors.
So my solution is to use the postStart handler of Kubernetes. The problem is that it is not guaranteed that this handler is called before the ENTRYPOINT (see). In that case the pod would crash and restart, eventually getting it right; so far I haven't hit this problem, so it seems to be good enough for now.
I mount the content of the ConfigMap into a dedicated folder and copy it into the associated NiFi folder in the postStart handler.
So here is the statefulset.yaml:
...
containers:
  - name: 'myName'
    image: 'apache/nifi:latest'
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
      - name: http-2
        containerPort: 1337
        protocol: TCP
    volumeMounts:
      - name: 'nifi-config'
        mountPath: /opt/nifi/nifi-1.6.0/kubeconfig
    lifecycle:
      postStart:
        exec:
          command:
            - bash
            - -c
            - |
              cp -a /opt/nifi/nifi-1.6.0/kubeconfig/. /opt/nifi/nifi-1.6.0/conf
volumes:
  - name: 'nifi-config'
    configMap:
      name: 'nifi-config'
...
There are two problems with the above setup: the postStart handler is not guaranteed to run before the container's ENTRYPOINT, and the ConfigMap content cannot simply be mounted over nifi.properties directly, because files mounted from a ConfigMap are read-only while NiFi modifies nifi.properties at startup.
To work around the second issue you can simply mount the ConfigMap item as a separate file (nifi.properties.tmp) and copy it to its destination by wrapping the container entrypoint with a custom command.
...
containers:
  - name: 'myName'
    image: 'apache/nifi:latest'
    ports:
      - name: http
        containerPort: 8080
        protocol: TCP
      - name: http-2
        containerPort: 1337
        protocol: TCP
    volumeMounts:
      - name: 'nifi-config'
        mountPath: /opt/nifi/nifi-1.6.0/conf/nifi.properties.tmp
        subPath: nifi.properties
    command:
      - bash
      - -c
      - |
        cat "${NIFI_HOME}/conf/nifi.properties.tmp" > "${NIFI_HOME}/conf/nifi.properties"
        exec "${NIFI_BASE_DIR}/scripts/start.sh"
        # or you can do the property edits yourself and skip the helper script:
        # exec bin/nifi.sh run
volumes:
  - name: 'nifi-config'
    configMap:
      name: 'nifi-config'
...
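The copy-then-exec pattern in the wrapped entrypoint can be exercised locally outside Kubernetes; a minimal sketch with illustrative paths and property values (not the real NiFi layout):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Simulate the entrypoint wrapper: a read-only "mounted" template file
# is copied over a writable config file before the service would start.
workdir="$(mktemp -d)"
mkdir -p "$workdir/conf"

# The file as it would arrive from the ConfigMap mount (read-only).
printf 'nifi.web.http.port=8080\n' > "$workdir/conf/nifi.properties.tmp"
chmod 444 "$workdir/conf/nifi.properties.tmp"

# A stale file shipped with the image, to be overwritten at startup.
printf 'nifi.web.http.port=9999\n' > "$workdir/conf/nifi.properties"

# The wrapper step: cat writes through the existing destination file,
# so the destination stays writable even though the source is not.
cat "$workdir/conf/nifi.properties.tmp" > "$workdir/conf/nifi.properties"

new_value="$(grep '^nifi.web.http.port=' "$workdir/conf/nifi.properties")"
echo "$new_value"
rm -rf "$workdir"
```

Using `cat src > dest` rather than `cp -p` is deliberate here: copying with permissions from a read-only ConfigMap source would leave the destination read-only for NiFi's own startup edits.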