I am trying to set up Jenkins with the stable helm chart, but the Jenkins pod always remains in Init status, and `kubectl describe` on the pod shows no errors. I am not able to debug it while it's stuck in Init.
I have already created a PV and a PVC and assigned the PVC in my values file (a sketch of the claim follows the values below). Below is my configuration:
```
master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  imagePullPolicy: "IfNotPresent"
  lifecycle:
  numExecutors: 0
  customJenkinsLabels: []
  useSecurity: true
  enableXmlConfig: true
  securityRealm: |-
    <securityRealm class="hudson.security.LegacySecurityRealm"/>
  authorizationStrategy: |-
    <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
      <denyAnonymousReadAccess>true</denyAnonymousReadAccess>
    </authorizationStrategy>
  hostNetworking: false
  adminUser: "admin"
  adminPassword: "admin"
  rollingUpdate: {}
  resources:
    requests:
      cpu: "50m"
      memory: "256Mi"
    limits:
      cpu: "2000m"
      memory: "2048Mi"
  usePodSecurityContext: true
  servicePort: 8080
  targetPort: 8080
  serviceType: NodePort
  serviceAnnotations: {}
  deploymentLabels: {}
  serviceLabels: {}
  podLabels: {}
  nodePort: 32323
  healthProbes: true
  healthProbesLivenessTimeout: 5
  healthProbesReadinessTimeout: 5
  healthProbeLivenessPeriodSeconds: 10
  healthProbeReadinessPeriodSeconds: 10
  healthProbeLivenessFailureThreshold: 5
  healthProbeReadinessFailureThreshold: 3
  healthProbeLivenessInitialDelay: 90
  healthProbeReadinessInitialDelay: 60
  slaveListenerPort: 50000
  slaveHostPort:
  disabledAgentProtocols:
    - JNLP-connect
    - JNLP2-connect
  csrf:
    defaultCrumbIssuer:
      enabled: true
      proxyCompatability: true
  cli: false
  slaveListenerServiceType: "ClusterIP"
  slaveListenerServiceAnnotations: {}
  slaveKubernetesNamespace:
  loadBalancerSourceRanges:
    - 0.0.0.0/0
  extraPorts:
  installPlugins:
    - kubernetes:1.18.1
    - workflow-job:2.33
    - workflow-aggregator:2.6
    - credentials-binding:1.19
    - git:3.11.0
    - blueocean:1.18.1
    - kubernetes-cd:2.0.0
  enableRawHtmlMarkupFormatter: false
  scriptApproval:
  initScripts:
  jobs: {}
  JCasC:
    enabled: false
    pluginVersion: "1.27"
    supportPluginVersion: "1.18"
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
  customInitContainers: []
  sidecars:
    configAutoReload:
      enabled: false
      image: shadwell/k8s-sidecar:0.0.2
      imagePullPolicy: IfNotPresent
      resources: {}
      sshTcpPort: 1044
      folder: "/var/jenkins_home/casc_configs"
  nodeSelector: {}
  tolerations: []
  podAnnotations: {}
  customConfigMap: false
  overwriteConfig: false
  overwriteJobs: false
  ingress:
    enabled: false
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations: {}
    hostName:
    tls:
  backendconfig:
    enabled: false
    apiVersion: "extensions/v1beta1"
    name:
    labels: {}
    annotations: {}
    spec: {}
  route:
    enabled: false
    labels: {}
    annotations: {}
  additionalConfig: {}
  hostAliases: []
  prometheus:
    enabled: false
    serviceMonitorAdditionalLabels: {}
    scrapeInterval: 60s
    scrapeEndpoint: /prometheus
    alertingRulesAdditionalLabels: {}
    alertingrules: []

agent:
  enabled: true
  image: "jenkins/jnlp-slave"
  tag: "3.27-1"
  customJenkinsLabels: []
  imagePullSecretName:
  componentName: "jenkins-slave"
  privileged: false
  resources:
    requests:
      cpu: "200m"
      memory: "256Mi"
    limits:
      cpu: "200m"
      memory: "256Mi"
  alwaysPullImage: false
  podRetention: "Never"
  envVars:
  volumes:
  nodeSelector: {}
  command:
  args:
  sideContainerName: "jnlp"
  TTYEnabled: false
  containerCap: 10
  podName: "default"
  idleMinutes: 0
  yamlTemplate:

persistence:
  enabled: true
  existingClaim: jenkins-pvc
  storageClass:
  annotations: {}
  accessMode: "ReadWriteOnce"
  size: "2Gi"
  volumes:
  mounts:

networkPolicy:
  enabled: false
  apiVersion: networking.k8s.io/v1

rbac:
  create: true

serviceAccount:
  create: true
  name:
  annotations: {}

serviceAccountAgent:
  create: false
  name:
  annotations: {}

backup:
  enabled: false
  componentName: "backup"
  schedule: "0 2 * * *"
  annotations:
    iam.amazonaws.com/role: "jenkins"
  image:
    repository: "nuvo/kube-tasks"
    tag: "0.1.2"
  extraArgs: []
  existingSecret: {}
  env:
    - name: "AWS_REGION"
      value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  destination: "s3://nuvo-jenkins-data/backup"

checkDeprecation: true
```
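For reference, here is a minimal sketch of the claim that `existingClaim: jenkins-pvc` points at, mirroring the `persistence` values above. The storage class is left unset here and must match however your PV was provisioned:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```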
We recently had this issue while trying to run Jenkins using helm. The pod couldn't initialize because of an error that occurred while Jenkins was trying to configure itself and pull plugin updates down from jenkins.io. You can find these log messages using a command similar to the following:

```
kubectl logs solemn-quoll-jenkins-abc78900-xxx -c copy-default-config
```

Replace `solemn-quoll-jenkins-abc78900-xxx` above with whatever name helm assigns to your Jenkins pod. The issue was in the `copy-default-config` init container, so the `-c` option allows you to peek at the logs of that container within the Jenkins pod.
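If it isn't obvious which init container failed, you can first list every init container and its state straight from the pod status. This is plain kubectl (nothing specific to the Jenkins chart), using the same placeholder pod name:

```
kubectl get pod solemn-quoll-jenkins-abc78900-xxx \
  -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'
```

Any container stuck waiting or terminated with a non-zero exit code is the one whose logs are worth reading.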
In our case it was an HTTP proxy issue: the `copy-default-config` container was failing because it could not connect to https://updates.jenkins.io/ to download updates for plugins. You can test whether plugin updates are the problem by going into your values.yaml file and commenting out all the plugins under the `installPlugins:` heading.
For example:
```
installPlugins:
  #- kubernetes:1.18.1
  #- workflow-job:2.33
  #- workflow-aggregator:2.6
  #- credentials-binding:1.19
  #- git:3.11.0
  #- blueocean:1.18.1
  #- kubernetes-cd:2.0.0
```
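If the `copy-default-config` logs do confirm a proxy problem, the fix is to make the proxy visible to that init container, since its plugin download script uses curl, which honors the standard `http_proxy`/`https_proxy` environment variables. Later versions of the stable/jenkins chart expose `master.initContainerEnv` for this; check the values.yaml shipped with your chart version to confirm the key exists before relying on it. The proxy host and port below are placeholders:

```
master:
  initContainerEnv:
    # Assumed proxy endpoint; replace with your actual proxy.
    - name: "http_proxy"
      value: "http://proxy.example.com:3128"
    - name: "https_proxy"
      value: "http://proxy.example.com:3128"
```

After editing values.yaml, roll the change out with `helm upgrade --install <release-name> stable/jenkins -f values.yaml` and watch the pod re-run its init containers.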