I am spinning up a MySQL container on a Kubernetes cluster. The pod YAML is as follows:
---
apiVersion: v1
kind: Pod
metadata:
  name: {{ ca.name }}-db-mysql
  labels:
    k8s-app: {{ ca.name }}-db-mysql
spec:
  containers:
    - name: {{ ca.name }}-db-mysql
      image: "mysql:5.7.23"
      env:
        - { name: "MYSQL_ROOT_PASSWORD", value: "password" }
        - { name: "MYSQL_ROOT_HOST", value: "%" }
      args: ["mysqld", "--default-authentication-plugin=mysql_native_password", "--sql-mode="]
      ports:
        - containerPort: 3306
      volumeMounts:
        - { mountPath: "/var/lib/mysql", name: "{{ ca.name }}-mysql", subPath: "cas/db/rca/mysqldb" }
  restartPolicy: OnFailure
  volumes:
    - name: {{ ca.name }}-mysql
      persistentVolumeClaim:
        claimName: {{ ca.name }}-mysql-volume
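As a quick sanity check that the variables and flags actually reach the container (assuming the rendered pod name is ca1st-orgb-db-mysql, as in the commands further down), something like this can be run against the live pod:

```shell
# Check that the entrypoint actually received the environment variables
kubectl exec ca1st-orgb-db-mysql -- printenv MYSQL_ROOT_HOST MYSQL_ROOT_PASSWORD

# Check which flags PID 1 (the entrypoint/mysqld) was started with;
# /proc/1/cmdline is used because ps may not be installed in the image
kubectl exec ca1st-orgb-db-mysql -- sh -c "tr '\0' ' ' < /proc/1/cmdline"
```

If MYSQL_ROOT_HOST shows up here but the root@'%' account still does not exist, the problem is in the image's init logic rather than in the pod spec.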
Despite using the MYSQL_ROOT_HOST environment variable, the root user's host does not get configured as % in the pod.
I have to update the host setting manually in order to access the MySQL container from outside the cluster:
kubectl exec -it ca1st-orgb-db-mysql -- /bin/bash
mysql -u root
SELECT host FROM mysql.user WHERE User = 'root';
CREATE USER 'root'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
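The same manual fix can also be applied in a single non-interactive step (pod name and the root password value are taken from the manifest above; adjust both for your deployment):

```shell
kubectl exec ca1st-orgb-db-mysql -- mysql -uroot -ppassword -e \
  "CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'password'; \
   GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'; \
   FLUSH PRIVILEGES;"
```

CREATE USER IF NOT EXISTS is available from MySQL 5.7, so the command is safe to re-run.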
Why does it not get configured with the default settings provided via the environment variables? Could someone share their thoughts on this?
I think there is an issue with the MySQL 5.7.23 image. I downgraded MySQL to version 5.6, removed args from the pod configuration, and everything works as expected.
DB Queries:
kubectl run -it --rm --image=mysql:5.7.23 --restart=Never mysql-client -- mysql -h ca1st-orga-db-mysql -pyourpassword
SELECT host FROM mysql.user WHERE User = 'root';
You will notice % along with localhost.
Hope this helps.