I want to install SonarQube on Kubernetes with the corresponding stable Helm chart. This worked on the first attempts. But then I noticed that LDAP doesn't work, so I modified the values.yaml to install plugins as described in the chart:
plugins:
  install:
    - "https://github.com/SonarSource/sonar-ldap/releases/download/2.2-RC3/sonar-ldap-plugin-2.2.0.601.jar"
Since the pods didn't get updated, I tried re-installing the chart:
helm delete --purge sonarqube
helm install stable/sonarqube --namespace sonarqube --name sonarqube -f values.yaml
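For reference, a plain upgrade with the changed values should also roll out the plugin change (just a sketch; I used delete and install instead):
helm upgrade sonarqube stable/sonarqube -f values.yaml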
The problem is that now the main SonarQube pod doesn't get created any more, as can be seen in the helm install output:
NAME: sonarqube
LAST DEPLOYED: Wed Sep 25 16:04:25 2019
NAMESPACE: sonarqube2
STATUS: DEPLOYED
RESOURCES:
==> v1/Secret
NAME TYPE DATA AGE
sonarqube-postgresql Opaque 1 0s
==> v1/ConfigMap
NAME DATA AGE
sonarqube-sonarqube-config 0 0s
sonarqube-sonarqube-copy-plugins 1 0s
sonarqube-sonarqube-install-plugins 1 0s
sonarqube-sonarqube-tests 1 0s
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
sonarqube-postgresql Pending nfs-client 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sonarqube-postgresql ClusterIP 10.14.45.251 <none> 5432/TCP 0s
sonarqube-sonarqube ClusterIP 10.14.38.122 <none> 9000/TCP 0s
==> v1beta1/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sonarqube-postgresql 1 1 1 0 0s
sonarqube-sonarqube 1 0 0 0 0s
==> v1beta1/Ingress
NAME HOSTS ADDRESS PORTS AGE
sonarqube-sonarqube sonarqube-test.mycluster.internal 80, 443 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
sonarqube-postgresql-b78f87cd7-ht845 0/1 Pending 0 0s
NOTES:
1. Get the application URL by running these commands:
http://sonarqube-test.mycluster.internal
Also, kubectl get pod shows just the PostgreSQL pod after some minutes:
NAME READY STATUS RESTARTS AGE
sonarqube-postgresql-b78f87cd7-ht845 1/1 Running 0 6m
On the first runs, there was additionally a second pod containing SonarQube itself. As you can imagine, the application is not reachable on sonarqube-test.mycluster.internal; it returns a 503 error.
Why doesn't the SonarQube pod exist any more?
I see no reason for this and have already tried cleaning up everything multiple times: removing the Helm release, removing the entire namespace, and reducing my values.yaml to a minimum. I also used just helm install stable/sonarqube without any values.yaml; the SonarQube pod is still missing.
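For completeness, this is how the missing pod can be investigated further (a sketch; the names and namespace are taken from the helm output above):
kubectl -n sonarqube2 describe deployment sonarqube-sonarqube
kubectl -n sonarqube2 get events --sort-by=.metadata.creationTimestamp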
All nodes are running Kubernetes 1.11.3, so we meet the Kubernetes 1.6+ requirement from SonarQube's chart.
The values.yaml file:
replicaCount: 1
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: true
  hosts:
    - name: sonarqube-test.mycluster.internal
      path: /
  tls:
    - hosts:
        - sonarqube-test.mycluster.internal
persistence:
  storageClass: nfs-client
  size: 10Gi
postgresql:
  enabled: true
I tried the same values.yaml with correspondingly adjusted hostnames on our production cluster (the problem from this question is on our test cluster) and it works as expected.
The relevant excerpt from the helm install output is the second line here:
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
sonarqube-postgresql-6765fd498b-gnd8w 0/1 ContainerCreating 0 0s
sonarqube-sonarqube-6c9cc8869c-45tmk 0/1 Init:0/1 0 0s
Differences from prod to test are:
I had a similar problem, caused by our default PodSecurityPolicy (PSP), which restricts privileged containers. SonarQube has a privileged init container that sets sysctl parameters required by Elasticsearch. Looking at the events, I saw the following:
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
8m 1h 21 sonarqube-xxx ReplicaSet Warning FailedCreate replicaset-controller Error creating: pods "sonarqube-xxx-" is forbidden: unable to validate against any pod security policy: [spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]
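To check which policies exist and whether the pods' service account may use a privileged one, the following helps (a sketch; "privileged" is a placeholder PSP name, and using the default service account in the sonarqube2 namespace is an assumption):
kubectl get psp
kubectl auth can-i use podsecuritypolicy/privileged --as=system:serviceaccount:sonarqube2:default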
The other environment currently doesn't have PSPs enabled, since we're still evaluating them. This explains why the chart behaves inconsistently across the two clusters. Since it's a test system, I simply removed the PSP. As a long-term solution, I want to open a pull request that adds an additional values.yaml parameter to disable the privileged init container.
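What I have in mind is roughly the following (a sketch only; the key name is an assumption and not part of the stable chart at the time of writing):
elasticsearch:
  configureNode: false   # would skip the privileged init container that sets the sysctl for Elasticsearch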
We already set the sysctl parameter using Ansible on our clusters, and our goal is to have no privileged containers for security reasons. If you're fine with privileged containers, you could also create a PSP that allows them. Find more details in the Kubernetes docs: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
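For that approach, a privileged PSP similar to the example in the linked docs would work (sketch; it still has to be granted to the SonarQube pods' service account via RBAC):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - '*'
  volumes:
    - '*'
  hostNetwork: true
  hostIPC: true
  hostPID: true
  hostPorts:
    - min: 0
      max: 65535
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny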