I am new to Kubernetes. I have set up a Kubernetes cluster on two machines, and when I deploy pods using a StatefulSet, Kubernetes is not creating the PVCs.
I am doing a POC for installing a Redis cluster on the Kubernetes cluster, so I downloaded a StatefulSet from the URL below. [https://medium.com/zero-to/setup-persistence-redis-cluster-in-kubertenes-7d5b7ffdbd98]
This StatefulSet was working fine with minikube, but when I deploy it on the Kubernetes cluster (which I created with 2 machines) it gives the error below:
root@xen-727:/usr/local/bin# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-cluster-0 0/1 Pending 0 13m
root@xen-727:/usr/local/bin# kubectl describe pod redis-cluster-0
Name: redis-cluster-0
Namespace: default
Node: /
Labels: app=redis-cluster
controller-revision-hash=redis-cluster-b5b75cc79
statefulset.kubernetes.io/pod-name=redis-cluster-0
Annotations: <none>
Status: Pending
IP:
Controllers: <none>
Containers:
redis-cluster:
Image: tiroshanm/kubernetes-redis-cluster:latest
Ports: 6379/TCP, 16379/TCP
Command:
/usr/local/bin/redis-server
Args:
/redis-conf/redis.conf
Liveness: exec [sh -c redis-cli -h $(hostname) ping] delay=20s timeout=1s period=3s #success=1 #failure=3
Readiness: exec [sh -c redis-cli -h $(hostname) ping] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-h22jv (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: data-redis-cluster-0
ReadOnly: false
default-token-h22jv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-h22jv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready=:Exists:NoExecute for 300s
node.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
15m 14m 4 default-scheduler Warning FailedScheduling pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
root@xen-727:/usr/local/bin# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
data-redis-cluster-0 Pending slow 15m
root@xen-727:/usr/local/bin# kubectl get pv
No resources found.
I created one storage class:
root@xen-727:/usr/local/bin# kubectl get sc
NAME TYPE
slow (default) kubernetes.io/gce-pd
But after searching a lot, it seems that Kubernetes is not using this storage class to create a PV.
Storage class code:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
Below is my complete code:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  template:
    metadata:
      labels:
        app: redis-cluster
      annotations:
    spec:
      containers:
      - name: redis-cluster
        image: tiroshanm/kubernetes-redis-cluster:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/usr/local/bin/redis-server"]
        args: ["/redis-conf/redis.conf"]
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        volumeMounts:
        - name: data
          mountPath: /data
          readOnly: false
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
      annotations:
        volume.alpha.kubernetes.io/storage-class: anything
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Mi
Expected output: it should create 6 Redis nodes (pods), with 6 PVCs and 6 PVs.
You need to create the storage that you are requesting with the PersistentVolumeClaim. Examples of volume types are available here.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory). Claims can request specific size and access modes (e.g., can be mounted once read/write or many times read-only).
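Putting the two objects side by side may make the relationship clearer. The sketch below is illustrative only: it uses a hostPath-backed PV (really only suitable for single-node testing) and hypothetical names, and the PVC opts out of any default StorageClass so it binds to the statically created PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv              # hypothetical name
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/example-pv  # directory on the node that backs this volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc             # hypothetical name
spec:
  storageClassName: ""          # empty string: skip the default class, bind to a static PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Once both exist, the claim should move from Pending to Bound, and a pod can mount it by name.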
If you are on GCE, you can use a gcePersistentDisk.
A gcePersistentDisk volume mounts a Google Compute Engine (GCE) Persistent Disk into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a PD are preserved and the volume is merely unmounted. This means that a PD can be pre-populated with data, and that data can be “handed off” between Pods.
You need to use the gcloud command to create a disk inside GCE:
gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
And then use it inside a Pod, like in the example below:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    # This GCE PD must already exist.
    gcePersistentDisk:
      pdName: my-data-disk
      fsType: ext4
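Since your StatefulSet requests its storage through volumeClaimTemplates rather than mounting a disk directly, you could instead create a PV that points at the pre-created disk so the generated claim can bind to it. This is only a sketch, assuming the claim requests the "slow" class shown in your kubectl get pvc output; you would need one such PV (and one disk) per replica:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv-0              # hypothetical name; create one PV per replica
spec:
  capacity:
    storage: 500Gi              # matches the 500GB disk created with gcloud above
  accessModes:
    - ReadWriteOnce
  storageClassName: slow        # must match the class requested by the claim
  gcePersistentDisk:
    pdName: my-data-disk        # the disk created with the gcloud command above
    fsType: ext4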
If you prefer, you can set up your own NFS server and use it inside Kubernetes; an example of how to set it up is available here.
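For example, a minimal NFS-backed PV could look like the sketch below; the server address and export path are hypothetical placeholders for your own NFS setup:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-0                     # hypothetical name
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany                  # NFS volumes can be mounted by many pods
  nfs:
    server: nfs-server.example.com   # hypothetical NFS server address
    path: /exports/redis             # hypothetical exported directory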
You can also check the documentation on how to use volumes on AWS.
Hope this will be enough to help you.