How to connect my application to a Kubernetes MySQL StatefulSet

5/10/2020

I have deployed a MySQL StatefulSet following https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/ and all 3 MySQL pods are running fine. I have written an application in Golang that reads its MySQL connection settings from a config.toml file when connecting to the MySQL server on my local machine. The config.toml file contains these variables, which are used when my application runs locally:

MySQLServer = "127.0.0.1"
Port = "3306"
MySQLDatabase = "hss_lte_db"
User = "hss"
Password = "hss" 
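
For context, here is a minimal sketch of how a config like this can be loaded and turned into a MySQL connection string in Go. It assumes the github.com/BurntSushi/toml and github.com/go-sql-driver/mysql packages, which are not confirmed above, so treat it as illustrative only:

package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
	_ "github.com/go-sql-driver/mysql"
)

// Config mirrors the keys in the config.toml shown above.
type Config struct {
	MySQLServer   string
	Port          string
	MySQLDatabase string
	User          string
	Password      string
}

func main() {
	var cfg Config
	// Read config.toml from the working directory (e.g. /gmlcapp in the container).
	if _, err := toml.DecodeFile("config.toml", &cfg); err != nil {
		log.Fatal(err)
	}

	// go-sql-driver/mysql expects a DSN of the form user:password@tcp(host:port)/dbname.
	dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s",
		cfg.User, cfg.Password, cfg.MySQLServer, cfg.Port, cfg.MySQLDatabase)

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal("cannot ping MySQL server: ", err)
	}
	log.Println("connected to", cfg.MySQLServer)
}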

Now I would like to deploy my application in my Kubernetes cluster so that it connects to the MySQL StatefulSet service. I created my deployment as shown below, but the pod shows Error and CrashLoopBackOff. I need help with how to connect my application to the MySQL StatefulSet service. I am also not sure whether the MySQLServer connection string in the ConfigMap is right.

apiVersion: v1 
data:
  config.toml: |
   MySQLServer = "mysql-0.mysql,mysql-1.mysql,mysql-2.mysql"
   Port = "3306"
   MySQLDatabase = "hss_lte_db"
   User = "root"
   Password = ""

   GMLCAddressPort = ":8000"
   NRFIPAddr = "192.168.31.115"
   NRFPort = "30005"

kind: ConfigMap
metadata:
  name: vol-config-gmlcapi
  namespace: default


---
apiVersion: apps/v1 
kind: Deployment
metadata:
  name: gmlc-instance
  namespace: default
spec:
  selector:
    matchLabels:
      app: gmlc-instance
  replicas: 1 
  template:
    metadata:
      labels:
        app: gmlc-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
      - name: gmlc-instance
        image: abrahaa1/gmlcapi:1.0.0
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: configs
          mountPath: /gmlcapp/config.toml
          subPath: config.toml
        volumeMounts:
        - name: gmlc-configs
          mountPath: /gmlcapp/profile.json
          subPath: profile.json
      volumes:
      - name: configs 
        configMap:
          name: vol-config-gmlcapi
      - name: gmlc-configs
        configMap:
          name: vol-config-profile

I have made some variable name changes to the deployment, so the updated deployment is as above, but it still does not connect. The description of the pod is as follows:

ubuntu@k8s-master:~/gmlc$ kubectl describe pod gmlc-instance-5898989874-s5s5j -n default
Name:         gmlc-instance-5898989874-s5s5j
Namespace:    default
Priority:     0
Node:         k8s-worker-node2/192.168.31.151
Start Time:   Sun, 10 May 2020 19:50:09 +0300
Labels:       app=gmlc-instance
              pod-template-hash=5898989874
              version=1.0
Annotations:  <none>
Status:       Running
IP:           10.244.1.120
IPs:
  IP:           10.244.1.120
Controlled By:  ReplicaSet/gmlc-instance-5898989874
Containers:
  gmlc-instance:
    Container ID:   docker://b756e67a39b7397e24fe394a8b17bc6de14893329903d3eace4ffde86c335213
    Image:          abrahaa1/gmlcapi:1.0.0
    Image ID:       docker-pullable://abrahaa1/gmlcapi@sha256:e0c8ac2a3db3cde5015ea4030c2099126b79bb2472a9ade42576f7ed1975b73c
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 10 May 2020 19:50:33 +0300
      Finished:     Sun, 10 May 2020 19:50:33 +0300
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 10 May 2020 19:50:17 +0300
      Finished:     Sun, 10 May 2020 19:50:17 +0300
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /gmlcapp/profile.json from gmlc-configs (rw,path="profile.json")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-prqdp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vol-config-gmlcapi
    Optional:  false
  gmlc-configs:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      vol-config-profile
    Optional:  false
  default-token-prqdp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-prqdp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age               From                       Message
  ----     ------   ----              ----                       -------
  Normal   Pulling  9s (x3 over 28s)  kubelet, k8s-worker-node2  Pulling image "abrahaa1/gmlcapi:1.0.0"
  Normal   Pulled   7s (x3 over 27s)  kubelet, k8s-worker-node2  Successfully pulled image "abrahaa1/gmlcapi:1.0.0"
  Normal   Created  7s (x3 over 26s)  kubelet, k8s-worker-node2  Created container gmlc-instance
  Normal   Started  6s (x3 over 26s)  kubelet, k8s-worker-node2  Started container gmlc-instance
  Warning  BackOff  6s (x3 over 21s)  kubelet, k8s-worker-node2  Back-off restarting failed container

I am still not able to connect.

Logs output:

ubuntu@k8s-master:~/gmlc$ kubectl logs gmlc-instance-5898989874-s5s5j -n default
2020/05/10 18:13:21 open config.toml: no such file or directory

It looks like the missing config.toml file is the problem, and my application needs this file to run.

I have 2 files (config.toml and profile.json) that have to be in the /gmlcapp/ directory for the application to run. Because profile.json is too large to add to the deployment as above, I have created its ConfigMap separately. This is the ConfigMap output:

ubuntu@k8s-master:~/gmlc$ kubectl get configmaps
NAME                 DATA   AGE
mysql                2      4d3h
vol-config-gmlcapi   1      97m
vol-config-profile   1      7h56m
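
For reference, a ConfigMap like vol-config-profile can be created straight from the file rather than pasting its contents into YAML (assuming profile.json sits in the current directory):

kubectl create configmap vol-config-profile --from-file=profile.json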

Also, these are the logs when I comment out the vol-config-profile mount in the deployment:

ubuntu@k8s-master:~/gmlc$ kubectl logs gmlc-instance-b4ddd459f-fd8nr -n default
root:@tcp(mysql-0.mysql,mysql-1.mysql,mysql-2.mysql:3306)/hss_lte_db
2020/05/10 18:39:43 GMLC cannot ping MySQL sever
2020/05/10 18:39:43 Cannot read json file
panic: Cannot read json file

goroutine 1 [running]:
log.Panic(0xc00003dda8, 0x1, 0x1)
    /usr/local/go/src/log/log.go:351 +0xac
gmlc-kube/handler.init.0()
    /app/handler/init.go:43 +0x5e9
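
A side note on the failed ping above: the printed DSN passes the whole comma-separated list "mysql-0.mysql,mysql-1.mysql,mysql-2.mysql" as a single host, which the Go MySQL driver cannot resolve. The StatefulSet tutorial linked at the top creates a headless Service named mysql (giving each pod a stable DNS name such as mysql-0.mysql) plus a mysql-read Service for load-balanced reads, so the MySQLServer value most likely needs to be a single name, for example:

MySQLServer = "mysql-0.mysql"    # the primary; use "mysql-read" for read-only traffic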
-- tom
kubernetes
mysql

1 Answer

5/11/2020

I have got it running by changing the volumeMounts in the deployment. The original spec declared volumeMounts twice for the container, so only the second block took effect and config.toml was never mounted (the pod description above shows only profile.json under Mounts); merging both mounts into a single volumeMounts list fixes it.

Solution below:

apiVersion: apps/v1 
kind: Deployment
metadata:
  name: gmlc-instance
  namespace: default
spec:
  selector:
    matchLabels:
      app: gmlc-instance
  replicas: 1 
  template:
    metadata:
      labels:
        app: gmlc-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
      - name: gmlc-instance
        image: abrahaa1/gmlcapi:1.0.0
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8000
        volumeMounts:
        - name: configs
          mountPath: /gmlcapp/config.toml
          subPath: config.toml
          readOnly: true
        - name: gmlc-configs
          mountPath: /gmlcapp/profile.json
          subPath: profile.json
      volumes:
      - name: configs 
        configMap:
          name: vol-config-gmlcapi
      - name: gmlc-configs
        configMap:
          name: vol-config-profile
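
With both files mounted under /gmlcapp/, the application finds config.toml at startup; re-running kubectl describe pod on the new pod should now list both config.toml and profile.json under Mounts.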
-- tom
Source: StackOverflow