Ignite stateful cluster setup issue with Kubernetes on AWS

2/16/2019

I am trying to create an Ignite cluster on AWS, but I have run into some trouble: my pods are not coming up. I followed the official Ignite documentation for stateful Kubernetes deployments on AWS EKS, but while creating the cluster I see that pod creation hangs.

I created two separate StorageClass files, one for persistence storage and one for WAL storage (separate disk for WAL).
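For context, this is roughly the order in which I applied things (the file names are just my own shorthand; the namespace, service account and XML config come from the Ignite guide I followed):

    # rough apply order (file names are placeholders for my local files)
    kubectl create namespace angus-ignite-dev
    kubectl apply -f ignite-service-account.yaml -n angus-ignite-dev   # service account "ignite" used by the pods
    kubectl apply -f ignite-persistence-storage-class.yaml
    kubectl apply -f ignite-wal-storage-class.yaml
    kubectl apply -f ignite-service.yaml -n angus-ignite-dev           # headless service "ignite"
    kubectl apply -f ignite-statefulset.yaml -n angus-ignite-dev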

I see that the storages are created:

    ➜  ignite-configs kubectl get sc
    NAME                               PROVISIONER             AGE
    ignite-persistence-storage-class   kubernetes.io/aws-ebs   3h
    ignite-wal-storage-class           kubernetes.io/aws-ebs   3h
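kubectl get sc only shows the provisioner, so to double-check that the type and zones parameters were applied as I intended, I assume I can simply dump the objects back out:

    # confirm the parameters actually stored on the storage classes
    kubectl get sc ignite-persistence-storage-class -o yaml
    kubectl get sc ignite-wal-storage-class -o yaml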

    ➜  ignite-configs kubectl describe pod ignite-0
    Name:               ignite-0
    Namespace:          angus-ignite-dev
    Priority:           0
    PriorityClassName:  <none>
    Node:               <none>
    Labels:             app=ignite
                        controller-revision-hash=ignite-76f754f985
                        statefulset.kubernetes.io/pod-name=ignite-0
    Annotations:        <none>
    Status:             Pending
    IP:
    Controlled By:      StatefulSet/ignite
    Containers:
      ignite:
        Image:       apacheignite/ignite:2.6.0
        Ports:       11211/TCP, 47100/TCP, 47500/TCP, 49112/TCP, 10800/TCP, 8080/TCP, 10900/TCP
        Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
        Environment:
          OPTION_LIBS:   ignite-kubernetes,ignite-rest-http
          CONFIG_URI:    https://s3.amazonaws.com/009my-bucket009/example-kube-persistence-and-wal.xml
          IGNITE_QUIET:  false
          JVM_OPTS:      -Djava.net.preferIPv4Stack=true
        Mounts:
          /persistence from ignite-persistence (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from ignite-token-h6m7b (ro)
          /wal from ignite-wal (rw)
    Conditions:
      Type           Status
      PodScheduled   False
    Volumes:
      ignite-persistence:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  ignite-persistence-ignite-0
        ReadOnly:   false
      ignite-wal:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  ignite-wal-ignite-0
        ReadOnly:   false
      ignite-token-h6m7b:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  ignite-token-h6m7b
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type     Reason            Age                       From               Message
      ----     ------            ----                      ----               -------
      Warning  FailedScheduling  2m23s (x12033 over 107m)  default-scheduler  pod has unbound PersistentVolumeClaims (repeated 15 times)
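The scheduler only says the claims are unbound, so my plan to dig further (a sketch of the commands I intend to run, not output I have captured) is to look at the PVCs and their provisioning events directly:

    # are the claims created from volumeClaimTemplates Pending or Bound?
    kubectl get pvc -n angus-ignite-dev

    # the events on a claim usually say why dynamic provisioning failed
    kubectl describe pvc ignite-persistence-ignite-0 -n angus-ignite-dev
    kubectl describe pvc ignite-wal-ignite-0 -n angus-ignite-dev

    # were any EBS-backed PersistentVolumes created at all?
    kubectl get pv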


Here are my Ignite config files:

ignite-persistence-storage-class.yaml
=====================================

    #Amazon AWS Configuration
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ignite-persistence-storage-class  #StorageClass name
      namespace: angus-ignite-dev             #Ignite namespace
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2 #Volume type io1, gp2, sc1, st1. Default: gp2
      zones: us-west-2

And here is my storage class yaml file for the WAL:

ignite-wal-storage-class.yaml
=============================

    #Amazon AWS Configuration
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ignite-wal-storage-class  #StorageClass name
      namespace: angus-ignite-dev     #Ignite namespace
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2 #Volume type io1, gp2, sc1, st1. Default: gp2
      zones: us-west-2
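One thing I am unsure about here: as far as I know a StorageClass is cluster-scoped, so the namespace field is probably ignored, and I believe the kubernetes.io/aws-ebs provisioner expects the zones parameter to list availability zone names (for example us-west-2a) rather than the region. A variant I am considering trying looks like the following (the zone names are assumptions for my cluster, and the worker nodes would need to run in those zones for the volumes to be usable):

    #Amazon AWS Configuration (variant I plan to try, not what is currently applied)
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: ignite-wal-storage-class  #StorageClass name; cluster-scoped, so no namespace
    provisioner: kubernetes.io/aws-ebs
    parameters:
      type: gp2
      zones: us-west-2a, us-west-2b  #availability zones, not the region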

Here is my Ignite StatefulSet yaml file:

I also have some doubts regarding PVs and PVCs: do I have to declare them explicitly somewhere, or are they created for me? (See the sketch after the StatefulSet below.)

    apiVersion: apps/v1beta2
    kind: StatefulSet
    metadata:
      name: ignite
      namespace: angus-ignite-dev
    spec:
      selector:
        matchLabels:
          app: ignite
      serviceName: ignite
      replicas: 2
      template:
        metadata:
          labels:
            app: ignite
        spec:
          serviceAccountName: ignite
          containers:
          - name: ignite
            image: apacheignite/ignite:2.6.0
            env:
            - name: OPTION_LIBS
              value: ignite-kubernetes,ignite-rest-http
            - name: CONFIG_URI
              value: https://s3.amazonaws.com/009my-bucket009/example-kube-persistence-and-wal.xml
            - name: IGNITE_QUIET
              value: "false"
            - name: JVM_OPTS
              value: "-Djava.net.preferIPv4Stack=true"
            ports:
            - containerPort: 11211 # JDBC port number.
            - containerPort: 47100 # Communication SPI port number.
            - containerPort: 47500 # Discovery SPI port number.
            - containerPort: 49112 # JMX port number.
            - containerPort: 10800 # SQL port number.
            - containerPort: 8080  # REST port number.
            - containerPort: 10900 # Thin clients port number.
            volumeMounts:
            - mountPath: "/wal"
              name: ignite-wal
            - mountPath: "/persistence"
              name: ignite-persistence
      volumeClaimTemplates:
      - metadata:
          name: ignite-persistence
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "ignite-persistence-storage-class"
          resources:
            requests:
              storage: "100Gi"
      - metadata:
          name: ignite-wal
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: "ignite-wal-storage-class"
          resources:
            requests:
              storage: "100Gi"
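Regarding my PV/PVC doubt above, my understanding is that I do not declare them by hand: the StatefulSet controller creates one PVC per pod from each volumeClaimTemplate (named <template>-<pod>, which matches the ignite-persistence-ignite-0 and ignite-wal-ignite-0 claims in the describe output), and the StorageClass is then supposed to dynamically provision a matching EBS-backed PV when each claim is bound. For my own understanding, what the controller generates for pod ignite-0 from the first template should be roughly equivalent to this hand-written claim (illustration only, not something I apply myself):

    # roughly the claim the StatefulSet generates for pod ignite-0 (illustration only)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ignite-persistence-ignite-0
      namespace: angus-ignite-dev
      labels:
        app: ignite
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: ignite-persistence-storage-class
      resources:
        requests:
          storage: 100Gi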
-- Tuhin Subhra Mandal
amazon-eks
ignite
kubernetes

0 Answers