Error When Updating the MySQL Version in OpenShift Templates

3/27/2020

I am trying to deploy a MySQL instance on the OpenShift cloud platform. My requirements are:

  1. Version 8.0.19 (latest)
  2. One master and two slave replicas
  3. Persistence

I found the templates for MySQL version 5.7 at location: MySQL-Version5.7

After some changes I have successfully integrated these templates into my source code. They fit my requirements perfectly, except for the MySQL version. I have tried multiple ways to deploy the latest MySQL version using these templates, but ran into errors in every case.

After changing the version value from 5.7 to latest in these templates, only the master replica was deployed, and it failed with this error:

Readiness probe failed: ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111)

After fixing this error, more errors followed, yet the same template works fine without any modification for version 5.7. I must be missing something in the templates. This is a mandatory requirement for me and I am new to this.
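In simplified terms, the change amounts to swapping the version tag, roughly like this (the actual template wraps this in more configuration):

# Before: works fine
image: mysql:5.7

# After: only the master comes up and the readiness probe fails
image: mysql:latest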

How can I deploy the latest MySQL version using these templates?

-- Himanshu
kubernetes
mysql
openshift

2 Answers

3/27/2020

Instead of using this template, I would recommend using a Helm chart. If you are new to Kubernetes, a single misconfiguration can lead to disaster.

A Helm chart is essentially a set of pre-configured templates.

You can check this out : MySQL-Helm

You may be facing these issues because of implementation changes between MySQL versions that affect the readiness and liveness probes.

For an HA MySQL configuration you can check this out: https://github.com/helm/charts/tree/master/incubator/mysqlha
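Installing that chart could look roughly like this (a minimal sketch assuming Helm 3; the repository URL and release name are assumptions, and replicas, image tag, credentials, and persistence are set through the chart's values):

# Add the incubator chart repository (URL is an assumption; the old
# Google Storage incubator repository has been deprecated)
helm repo add incubator https://charts.helm.sh/incubator
helm repo update

# Install the HA chart; override replica count, MySQL image, credentials
# and persistence in your own values file (see the chart's values.yaml)
helm install mysqlha incubator/mysqlha -f my-values.yaml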

Template for the latest MySQL version 8:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:8
          name: mysql
          args:
            # MySQL 8 defaults to caching_sha2_password; fall back to the
            # legacy auth plugin for clients that do not support it yet.
            - "--default-authentication-plugin=mysql_native_password"
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password   # plaintext for demo only; use a Secret in real deployments
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            # Persist the data directory across pod restarts
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim
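The Deployment above mounts a PersistentVolumeClaim named mysql-volumeclaim, which is not shown. A minimal claim could look roughly like this (the storage size and access mode are assumptions, and your cluster may require a storageClassName):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-volumeclaim
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # assumed size; adjust to your needs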
-- Harsh Manvar
Source: StackOverflow

3/27/2020

This is not a Template per se. There might be a problem if you switch to a higher version, because this configuration may be optimized for MySQL 5.7 specifically.

If you really want a template that you can simply upgrade to a higher version in one go, you should consider the advice from @Harsh Manvar and use the MySQL Helm chart.

The error you mentioned is generated by this part of the StatefulSet:

readinessProbe:
  exec:
    # Check we can execute queries over TCP (skip-networking is off).
    command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
  initialDelaySeconds: 5
  periodSeconds: 2
  timeoutSeconds: 1

This means the pod checks whether it can connect to the database by running mysql -h 127.0.0.1 -e 'SELECT 1'. The periodSeconds field specifies that the kubelet should perform a readiness probe every 2 seconds, and the initialDelaySeconds field tells the kubelet to wait 5 seconds before performing the first probe.

You can raise those values to give MySQL 8 more time to become ready.
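For example, a more forgiving probe for MySQL 8 could look like this (the timing values below are assumptions; tune them for your environment):

readinessProbe:
  exec:
    # Same check as before; only the timing is relaxed.
    command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
  initialDelaySeconds: 30   # assumed: give mysqld more time to initialize
  periodSeconds: 10         # assumed: probe less often
  timeoutSeconds: 5         # assumed: allow slower responses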

If you provide more details about the other errors you are encountering, we will try to help further.

Also, if you deployed version 5.7 first and then simply changed the version, this might not work: some resources, such as volumes, were already created for the previous version, and the latest image will not work with them. You should consider running this in a clean namespace or removing the previously created objects.
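For example, starting over in a fresh OpenShift project could look like this (the project name is just a placeholder):

# Create and switch to a clean project (name is a placeholder)
oc new-project mysql-fresh

# Re-apply the templates there and watch the pods come up
oc get pods -w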

Alternatively, to remove the previously created objects, you can follow the Cleaning up steps:

  1. Cancel the SELECT @@server_id loop by pressing Ctrl+C in its terminal, or by running the following from another terminal:

kubectl delete pod mysql-client-loop --now

  2. Delete the StatefulSet. This also begins terminating the Pods.

kubectl delete statefulset mysql

  3. Verify that the Pods disappear. They might take some time to finish terminating.

kubectl get pods -l app=mysql

You’ll know the Pods have terminated when the above returns:

  No resources found.

  4. Delete the ConfigMap, Services, and PersistentVolumeClaims.

kubectl delete configmap,service,pvc -l app=mysql

  5. If you manually provisioned PersistentVolumes, you also need to manually delete them, as well as release the underlying resources. If you used a dynamic provisioner, it automatically deletes the PersistentVolumes when it sees that you deleted the PersistentVolumeClaims. Some dynamic provisioners (such as those for EBS and PD) also release the underlying resources upon deleting the PersistentVolumes.
-- Crou
Source: StackOverflow