I need to deploy GitLab with Helm on Kubernetes, but I have a problem: the PVCs are stuck in Pending. I see
volume.alpha.kubernetes.io/storage-class: default
in the PVC description, even though I set gitlabDataStorageClass: gluster-heketi
in values.yaml. A simple nginx example from https://github.com/gluster/gluster-kubernetes/blob/master/docs/examples/hello_world/README.md deploys fine. Yes, I am using the distributed storage GlusterFS (https://github.com/gluster/gluster-kubernetes).
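For reference, I deploy the chart roughly like this (Helm 2 syntax; the chart path here is just a placeholder for my local checkout):

# helm install --name gitlab1 -f values.yaml ./kubernetes-gitlab-demo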
# kubectl get pvc
NAME                  STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
gitlab1-gitlab-data   Pending                                                                                         19s
gitlab1-gitlab-etc    Pending                                                                                         19s
gitlab1-postgresql    Pending                                                                                         19s
gitlab1-redis         Pending                                                                                         19s
gluster1              Bound     pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe   5Gi        RWO            gluster-heketi   43m
The structure of one of the pending claims:
# kubectl get pvc gitlab1-gitlab-data -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
  creationTimestamp: 2018-05-29T19:43:18Z
  finalizers:
  - kubernetes.io/pvc-protection
  name: gitlab1-gitlab-data
  namespace: default
  resourceVersion: "263950"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gitlab1-gitlab-data
  uid: 8958d4f5-6378-11e8-8f10-4ccc6a60fcbe
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
status:
  phase: Pending
In the describe output I see:
# kubectl describe pvc gitlab1-gitlab-data
Name: gitlab1-gitlab-data
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: volume.alpha.kubernetes.io/storage-class=default
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ---                ----                         -------
  Normal  FailedBinding  2m (x43 over 12m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
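Just to double-check that the claim really has nothing in spec.storageClassName (only the alpha annotation shown above), the following, consistent with the YAML above, prints an empty string:

# kubectl get pvc gitlab1-gitlab-data -o jsonpath='{.spec.storageClassName}'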
My values.yaml file:
# Default values for kubernetes-gitlab-demo.
# This is a YAML-formatted file.
# Required variables
# baseDomain is the top-most part of the domain. Subdomains will be generated
# for gitlab, mattermost, registry, and prometheus.
# Recommended to set up an A record on the DNS to *.your-domain.com to point to
# the baseIP
# e.g. *.your-domain.com. A 300 baseIP
baseDomain: my-domain.com
# legoEmail is a valid email address used by Let's Encrypt. It does not have to
# be at the baseDomain.
legoEmail: my@mail.com
# Optional variables
# baseIP is an externally provisioned static IP address to use instead of the provisioned one.
#baseIP: 95.165.135.109
nameOverride: gitlab
# `ce` or `ee`
gitlab: ce
gitlabCEImage: gitlab/gitlab-ce:10.6.2-ce.0
gitlabEEImage: gitlab/gitlab-ee:10.6.2-ee.0
postgresPassword: NDl1ZjNtenMxcWR6NXZnbw==
initialSharedRunnersRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
mattermostAppSecret: NDl1ZjNtenMxcWR6NXZnbw==
mattermostAppUID: aadas
redisImage: redis:3.2.10
redisDedicatedStorage: true
redisStorageSize: 5Gi
redisAccessMode: ReadWriteOnce
postgresImage: postgres:9.6.5
# If you disable postgresDedicatedStorage, you should consider bumping up gitlabRailsStorageSize
postgresDedicatedStorage: true
postgresAccessMode: ReadWriteOnce
postgresStorageSize: 30Gi
gitlabDataAccessMode: ReadWriteOnce
#gitlabDataStorageSize: 30Gi
gitlabRegistryAccessMode: ReadWriteOnce
#gitlabRegistryStorageSize: 30Gi
gitlabConfigAccessMode: ReadWriteOnce
#gitlabConfigStorageSize: 1Gi
gitlabRunnerImage: gitlab/gitlab-runner:alpine-v10.6.0
# Valid values for provider are `gke` for Google Container Engine. Leaving it blank (or any other value) will disable fast disk options.
#provider: gke
# Gitlab pages
# The following 3 lines are needed to enable gitlab pages.
# pagesExternalScheme: http
# pagesExternalDomain: your-pages-domain.com
# pagesTlsSecret: gitlab-pages-tls # An optional reference to a tls secret to use in pages
## Storage Class Options
## If defined, volume.beta.kubernetes.io/storage-class: <storageClass>
## If not defined, but provider is gke, will use SSDs
## Otherwise default: volume.alpha.kubernetes.io/storage-class: default
gitlabConfigStorageClass: gluster-heketi
gitlabDataStorageClass: gluster-heketi
gitlabRegistryStorageClass: gluster-heketi
postgresStorageClass: gluster-heketi
redisStorageClass: gluster-heketi
healthCheckToken: 'SXBAQichEJasbtDSygrD'
# Optional, for GitLab EE images only
#gitlabEELicense: base64-encoded-license
# Additional omnibus configuration,
# see https://docs.gitlab.com/omnibus/settings/configuration.html
# for possible configuration options
#omnibusConfigRuby: |
# gitlab_rails['smtp_enable'] = true
# gitlab_rails['smtp_address'] = "smtp.example.org"
gitlab-runner:
  checkInterval: 1
  # runnerRegistrationToken must equal initialSharedRunnersRegistrationToken
  runnerRegistrationToken: "tQtCbx5UZy_ByS7FyzUH"
  # resources:
  #   limits:
  #     memory: 500Mi
  #     cpu: 600m
  #   requests:
  #     memory: 500Mi
  #     cpu: 600m
  runners:
    privileged: true
    ## Build Container specific configuration
    ##
    # builds:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
    ## Service Container specific configuration
    ##
    # services:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
    ## Helper Container specific configuration
    ##
    # helpers:
    #   cpuLimit: 200m
    #   memoryLimit: 256Mi
    #   cpuRequests: 100m
    #   memoryRequests: 128Mi
You can see that the StorageClass does exist:
# kubectl get sc
NAME             PROVISIONER               AGE
gluster-heketi   kubernetes.io/glusterfs   48m
Without a link to the actual Helm chart you used, it's impossible for anyone to troubleshoot why the go-template isn't correctly consuming your values.yaml.
I see volume.alpha.kubernetes.io/storage-class: default in the PVC description, but I set the value gitlabDataStorageClass: gluster-heketi in values.yaml
I can appreciate that you set whatever you wanted in values.yaml, but as long as that StorageClass doesn't match any existing StorageClass, I'm not sure what positive outcome you expect. You can certainly try creating a StorageClass named default containing the same values as your gluster-heketi SC, or update the PVC to use the correct SC.
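As a rough sketch of the first option (the resturl below is a placeholder; copy the real parameters from your working gluster-heketi SC, e.g. via kubectl get sc gluster-heketi -o yaml):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: default
provisioner: kubernetes.io/glusterfs
parameters:
  # placeholder -- copy the real parameters (resturl, restauthenabled, etc.) from gluster-heketi
  resturl: "http://heketi-service:8080"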
To be honest, this may be a bug in the Helm chart, but until it is fixed (and/or you share the link to the chart so we can tell you how to adjust your YAML), you will need to work around it manually if you want your GitLab to deploy.
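For the manual workaround, keep in mind that spec.storageClassName on an existing PVC is immutable, so "fixing" a Pending claim in practice means deleting it and recreating it against the existing SC. Something like this for one of the claims (name and size are taken from your kubectl output above; verify them against the chart before recreating):

# kubectl delete pvc gitlab1-gitlab-data
# cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab1-gitlab-data
spec:
  storageClassName: gluster-heketi
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF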