$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Looking at the SonarQube Helm chart:
requirements.yaml
dependencies:
  - name: sonarqube
    version: 0.5.0
    repository: https://kubernetes-charts.storage.googleapis.com/
Trying to install the latest version of the Java plugin:
values.yaml
plugins:
  install:
    - "http://central.maven.org/maven2/org/sonarsource/java/sonar-java-plugin/5.3.0.13828/sonar-java-plugin-5.3.0.13828.jar"
However, I am getting an error on the init container:
$ kubectl logs sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins
sh: /opt/sonarqube/extensions/plugins/install_plugins.sh: Permission denied
$ kubectl describe po sonarqube-sonarqube-7b5dfd84cf-sglk5
Name:           sonarqube-sonarqube-7b5dfd84cf-sglk5
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Thu, 19 Apr 2018 15:22:04 -0500
Labels:         app=sonarqube
                pod-template-hash=3618984079
                release=sonarqube
Annotations:    <none>
Status:         Pending
IP:             10.1.0.250
Controlled By:  ReplicaSet/sonarqube-sonarqube-7b5dfd84cf
Init Containers:
  install-plugins:
    Container ID:  docker://b090f52b95d36e03b8af86de5a6729cec8590807fe23e27689b01e5506604463
    Image:         joosthofman/wget:1.0
    Image ID:      docker-pullable://joosthofman/wget@sha256:74ef45d9683b66b158a0acaf0b0d22f3c2a6e006c3ca25edbc6cf69b6ace8294
    Port:          <none>
    Command:
      sh
      -c
      /opt/sonarqube/extensions/plugins/install_plugins.sh
    State:       Waiting
      Reason:    CrashLoopBackOff
Is there a way to exec into the init container?
My attempt:
$ kubectl exec -it sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins sh
error: unable to upgrade connection: container not found ("install-plugins")
Update
With @WarrenStrange's suggestion:
$ kubectl get pods
NAME                                    READY     STATUS    RESTARTS   AGE
sonarqube-postgresql-59975458c6-mtfjj   1/1       Running   0          11m
sonarqube-sonarqube-685bd67b8c-nmj2t    1/1       Running   0          11m
$ kubectl get pods sonarqube-sonarqube-685bd67b8c-nmj2t -o yaml
...
initContainers:
- command:
  - sh
  - -c
  - 'mkdir -p /opt/sonarqube/extensions/plugins/ && cp /tmp/scripts/install_plugins.sh
    /opt/sonarqube/extensions/plugins/install_plugins.sh && chmod 0775 /opt/sonarqube/extensions/plugins/install_plugins.sh
    && /opt/sonarqube/extensions/plugins/install_plugins.sh '
  image: joosthofman/wget:1.0
  imagePullPolicy: IfNotPresent
  name: install-plugins
  resources: {}
  terminationMessagePath: /dev/termination-log
  terminationMessagePolicy: File
  volumeMounts:
  - mountPath: /opt/sonarqube/extensions
    name: sonarqube
    subPath: extensions
  - mountPath: /tmp/scripts/
    name: install-plugins
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: default-token-89d9n
    readOnly: true
...
Create a new pod manifest extracted from the init container spec, replace the command with sleep 6000, and apply it. You can then exec into the running container and run the original commands by hand, which lets you poke around.
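A sketch of such a throwaway debug pod, built from the init container spec shown above. The volumes section is illustrative: the PVC and ConfigMap names are assumptions based on the chart's naming convention, so substitute the real ones from the pod's -o yaml output.

```yaml
# debug-pod.yaml -- same image and mounts as the install-plugins init
# container, but the command just sleeps so you can exec in.
apiVersion: v1
kind: Pod
metadata:
  name: install-plugins-debug
spec:
  restartPolicy: Never
  containers:
  - name: install-plugins-debug
    image: joosthofman/wget:1.0
    command: ["sh", "-c", "sleep 6000"]
    volumeMounts:
    - mountPath: /opt/sonarqube/extensions
      name: sonarqube
      subPath: extensions
    - mountPath: /tmp/scripts/
      name: install-plugins
  volumes:
  - name: sonarqube
    persistentVolumeClaim:
      claimName: sonarqube-sonarqube          # assumption: check your release
  - name: install-plugins
    configMap:
      name: sonarqube-sonarqube-install-plugins   # assumption: check your release
```

Then kubectl apply -f debug-pod.yaml and kubectl exec -it install-plugins-debug sh.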
Given this:
$ kubectl logs sonarqube-sonarqube-7b5dfd84cf-sglk5 -c install-plugins
sh: /opt/sonarqube/extensions/plugins/install_plugins.sh: Permission denied
It seems that Permission denied
is the actual error that happened inside the install-plugins
container. You can diagnose the joosthofman/wget:1.0
image by spinning it up locally with docker run -it --rm joosthofman/wget:1.0 sh.
This gives you a shell in the container so you can check out its runtime contents.
I did it myself and, for one thing, the script it is trying to execute does not even exist inside the image:
/ # ls -l /opt/sonarqube/extensions/plugins/install_plugins.sh
ls: /opt/sonarqube/extensions/plugins/install_plugins.sh: No such file or directory
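For context, "Permission denied" is what sh reports when it is told to run a script file whose execute bit is not set, which is presumably why the chart's init command runs chmod 0775 before executing the script. A minimal local repro (paths are illustrative, not from the chart):

```shell
# Create a demo script, deliberately without the execute bit.
mkdir -p /tmp/plugins-demo
cat > /tmp/plugins-demo/install_plugins.sh <<'EOF'
#!/bin/sh
echo "installing plugins"
EOF

# Without the execute bit, sh -c <path> fails with "Permission denied".
sh -c /tmp/plugins-demo/install_plugins.sh 2>&1 || echo "failed without exec bit"

# chmod 0775, as the init container's command does, fixes it.
chmod 0775 /tmp/plugins-demo/install_plugins.sh
sh -c /tmp/plugins-demo/install_plugins.sh
```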
The reason kubectl exec fails is that the container no longer exists: it crashed and is in CrashLoopBackOff, so there is nothing running to exec into.
One of the things I do with init containers (assuming you have the source) is add a sleep 600 on failure in the entrypoint, at least while debugging. This keeps the container alive so you can exec into it and look for the cause of the failure.
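A sketch of that pattern (debug_entrypoint and DEBUG_SLEEP are hypothetical names, not part of the chart):

```shell
#!/bin/sh
# Hypothetical wrapper: run the real init command; if it fails, sleep so
# the container stays alive long enough to `kubectl exec` into it.
debug_entrypoint() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "init command failed with status $status; sleeping for debug" >&2
        sleep "${DEBUG_SLEEP:-600}"   # keep the container up; tune as needed
    fi
    return "$status"
}

# Example: the wrapped command fails, the container lingers for the debug
# window, then the failure is still propagated.
DEBUG_SLEEP=1
debug_entrypoint false || echo "exited non-zero after the debug window"
```

Preserving the original exit status matters: once you are done debugging, the init container should still report failure so the pod does not proceed with a broken setup.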