I'm using this Splunk image on Kubernetes (testing locally with minikube).
After applying the code below I'm facing the following error:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
My Splunk deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk
  labels:
    app: splunk-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-app
        tier: splunk
        track: stable
    spec:
      volumes:
        - name: configmap-inputs
          configMap:
            name: splunk-config
      containers:
        - name: splunk-client
          image: splunk/splunk:latest
          imagePullPolicy: Always
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license --answer-yes
            - name: SPLUNK_USER
              value: root
            - name: SPLUNK_PASSWORD
              value: changeme
            - name: SPLUNK_FORWARD_SERVER
              value: splunk-receiver:9997
          ports:
            - name: incoming-logs
              containerPort: 514
          volumeMounts:
            - name: configmap-inputs
              mountPath: /opt/splunk/etc/system/local/inputs.conf
              subPath: "inputs.conf"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: splunk-config
data:
  inputs.conf: |
    [monitor:///opt/splunk/var/log/syslog-logs]
    disabled = 0
    index = my-index
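For context, I apply both manifests and check the pod like this (assuming they're saved together in splunk.yaml):
kubectl apply -f splunk.yaml
kubectl get pods -l tier=splunk
kubectl logs -l tier=splunk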
I also tried adding these environment variables, with no success:
- name: SPLUNK_HOME
  value: /opt/splunk
- name: SPLUNK_ETC
  value: /opt/splunk/etc
I've tested the image with the following Docker Compose configuration, and it ran successfully:
version: '3.2'
services:
  splunk-forwarder:
    hostname: splunk-client
    image: splunk/splunk:latest
    environment:
      SPLUNK_START_ARGS: --accept-license --answer-yes
      SPLUNK_USER: root
      SPLUNK_PASSWORD: changeme
    ports:
      - "8089:8089"
      - "9997:9997"
I saw this on the Splunk forum, but the answer did not help in my case.
Any ideas?
Edit #1:
Minikube version: upgraded from v0.33.1 to v1.2.0.
Full error log:
$ kubectl logs -l tier=splunk
splunk_common : Set first run fact -------------------------------------- 0.04s
splunk_common : Set privilege escalation user --------------------------- 0.04s
splunk_common : Set current version fact -------------------------------- 0.04s
splunk_common : Set splunk install fact --------------------------------- 0.04s
splunk_common : Set docker fact ----------------------------------------- 0.04s
Execute pre-setup playbooks --------------------------------------------- 0.04s
splunk_common : Setting upgrade fact ------------------------------------ 0.04s
splunk_common : Set target version fact --------------------------------- 0.04s
Determine captaincy ----------------------------------------------------- 0.04s
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
Edit #2: Added the ConfigMap to the code above (it was removed from the original question for brevity). It turned out to be the cause of the failure.
Based on the direction pointed out by @Amit-Kumar-Gupta, I'll also try to give a full solution.
So, this PR change made it so that containers cannot write to secret, configMap, downwardAPI and projected volumes, since the runtime now mounts them read-only. The change landed in v1.9.4 and can lead to issues for various applications which chown or otherwise manipulate their configs.
When Splunk boots, it registers all the config files in various locations on the filesystem under ${SPLUNK_HOME}, which in our case is /opt/splunk.
The error in my question reflects that Splunk failed to manipulate all the relevant files in the /opt/splunk/etc directory because of this change in the mounting mechanism.
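A quick way to confirm the read-only behaviour while the container is still up (a sketch; the pod name is a placeholder for whatever kubectl get pods shows for your deployment):
# Try to append to the ConfigMap-backed file from inside the container.
kubectl exec -it <splunk-pod> -- sh -c 'echo "# test" >> /opt/splunk/etc/system/local/inputs.conf'
# On Kubernetes >= v1.9.4 this fails with a "Read-only file system" error.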
Now for the solution.
Instead of mounting the configuration file directly inside the /opt/splunk/etc directory, we'll use the following setup: we'll start the container with a default.yml file, which will be mounted at /tmp/defaults/default.yml.
First, we'll generate the default.yml file with:
docker run splunk/splunk:latest create-defaults > ./default.yml
Then we'll go to the splunk: block in that file and add a conf: sub-block under it:
splunk:
  conf:
    inputs:
      directory: /opt/splunk/etc/system/local
      content:
        monitor:///opt/splunk/var/log/syslog-logs:
          disabled: 0
          index: syslog-index
    outputs:
      directory: /opt/splunk/etc/system/local
      content:
        tcpout:splunk-receiver:
          server: splunk-receiver:9997
This setup will generate two files with a .conf suffix (remember that the sub-block starts with conf:), owned by the correct Splunk user and group.
The inputs: section will produce an inputs.conf file with the following content:
[monitor:///opt/splunk/var/log/syslog-logs]
disabled = 0
index = syslog-index
In a similar way, the outputs: block will produce an outputs.conf with the following content:
[tcpout:splunk-receiver]
server = splunk-receiver:9997
This replaces passing the forward server directly as an environment variable, as I did in the original code:
SPLUNK_FORWARD_SERVER: splunk-receiver:9997
Now everything is up and running (:
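One wiring note: the forwarder.yaml below mounts the edited default.yml from a ConfigMap named splunk-forwarder-config, which we'll create from the file (assuming it's in the current directory):
kubectl create configmap splunk-forwarder-config --from-file=default.yml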
Full setup of the forwarder.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: splunk-forwarder
  labels:
    app: splunk-forwarder-app
    tier: splunk
spec:
  selector:
    matchLabels:
      app: splunk-forwarder-app
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: splunk-forwarder-app
        tier: splunk
        track: stable
    spec:
      volumes:
        - name: configmap-forwarder
          configMap:
            name: splunk-forwarder-config
      containers:
        - name: splunk-forwarder
          image: splunk/splunk:latest
          imagePullPolicy: Always
          env:
            - name: SPLUNK_START_ARGS
              value: --accept-license --answer-yes
            - name: SPLUNK_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: splunk-secret
                  key: password
          volumeMounts:
            - name: configmap-forwarder
              mountPath: /tmp/defaults/default.yml
              subPath: "default.yml"
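The SPLUNK_PASSWORD value is read from a Secret named splunk-secret, which isn't shown above; a minimal sketch of creating it (pick your own password):
kubectl create secret generic splunk-secret --from-literal=password='<your-splunk-password>'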
For further reading:
https://splunk.github.io/docker-splunk/ADVANCED.html
https://github.com/splunk/docker-splunk/blob/develop/docs/ADVANCED.md
https://splunk.github.io/splunk-ansible/ADVANCED.html#inventory-script
There are two questions here: (1) why you're seeing that error message, and (2) how to achieve the desired behaviour you're trying to express through your Deployment and ConfigMap. Unfortunately, I don't believe there's a "cloud-native" way to achieve what you want, but I can explain (1), why it's hard to do (2), and point you to something that might give you a workaround.
The error message:
ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?
does not necessarily imply that you've set those environment variables incorrectly; it implies that Splunk is looking for a file in that location and can't read it, and it's hinting that maybe you've put the file somewhere else but forgot to tell Splunk (via the $SPLUNK_HOME or $SPLUNK_ETC environment variables) to look there.
The reason it can't read /opt/splunk/etc/splunk-launch.conf is that, by default, the /opt/splunk directory would be populated with tons of subdirectories and files holding various configurations, but because you're mounting a volume at /opt/splunk/etc/system/local/inputs.conf, nothing can be written to /opt/splunk.
If you simply don't mount that volume, or mount it somewhere else (e.g. /foo/inputs.conf), the Deployment will start fine. Of course, the problem is that it won't know anything about your inputs.conf, and it'll use the default /opt/splunk/etc/system/local/inputs.conf it writes there.
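For illustration only, mounting it out of Splunk's way would look like this in the Deployment's volumeMounts (the /foo path is just a placeholder):
volumeMounts:
  - name: configmap-inputs
    mountPath: /foo/inputs.conf
    subPath: "inputs.conf"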
I assume what you want is to allow Splunk to generate all the directories and files it likes, while you only set the contents of that one file. While there's a lot of nuance in how Kubernetes deals with volume mounts, in particular those coming from ConfigMaps, and in particular when using subPath, at the end of the day I don't think there's a clean way to do what you want.
I did an Internet search for "splunk kubernetes inputs.conf", and this was my first result: https://www.splunk.com/blog/2019/02/11/deploy-splunk-enterprise-on-kubernetes-splunk-connect-for-kubernetes-and-splunk-insights-for-containers-beta-part-2.html. It's from official splunk.com, and it advises running things like kubectl cp and kubectl exec to:
"Exec" into the master pod, and run ... commands, to copy (configuration) into the (target) directory and chown to splunk user.
🤷‍♂️
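Roughly, that workaround would look something like this sketch (the pod name is a placeholder, and the commands are my paraphrase of the post, not a verbatim copy):
# Copy a locally edited inputs.conf into the running pod.
kubectl cp ./inputs.conf <splunk-pod>:/opt/splunk/etc/system/local/inputs.conf
# Fix ownership so Splunk can manage the file.
kubectl exec <splunk-pod> -- chown splunk:splunk /opt/splunk/etc/system/local/inputs.conf
# Restart Splunk inside the container to pick up the change.
kubectl exec <splunk-pod> -- /opt/splunk/bin/splunk restart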