Please don't mark this as a duplicate. I have made some changes this time. Believe me, I have tried the other answers and they don't seem to solve my issue. I am unable to link my Tomcat container with my MySQL database container in Kubernetes.
I built my Tomcat image using this Dockerfile:
FROM picoded/tomcat7
COPY data-core-0.0.1-SNAPSHOT.war /usr/local/tomcat/webapps/data-core-0.0.1-SNAPSHOT.war
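For completeness, the image was built and pushed roughly like this (assuming the same tag that the Tomcat deployment below pulls):

$ docker build -t suji165475/vignesh:tomcatserver .
$ docker push suji165475/vignesh:tomcatserver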
mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
mysql-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_DATABASE
          value: data-core
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          # my SQL init script gets copied here from the hostPath of the persistent volume
          mountPath: /docker-entrypoint-initdb.d
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-initdb-pv-claim
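The claim above is backed by a hostPath persistent volume along these lines (a simplified sketch; the path and capacity are illustrative, only the claim name matters here):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-initdb-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /home/user/initdb    # hypothetical node directory holding the init .sql script
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-initdb-pv-claim
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi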
Tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  type: NodePort
  ports:
  - name: myport
    port: 8080
    targetPort: 8080
    nodePort: 30000
  selector:
    app: tomcat
    tier: frontend
Tomcat-Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  labels:
    app: tomcat
spec:
  selector:
    matchLabels:
      app: tomcat
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: tomcat
        tier: frontend
    spec:
      containers:
      # this is the Tomcat image built from the Dockerfile above, with the
      # WAR file (Spring Boot app) copied to the webapps folder
      - image: suji165475/vignesh:tomcatserver
        name: tomcat
        env:
        - name: DB_PORT_3306_TCP_ADDR
          value: mysql   # service name of mysql
        - name: DB_ENV_MYSQL_DATABASE
          value: data-core
        - name: DB_ENV_MYSQL_ROOT_PASSWORD
          value: root
        ports:
        - containerPort: 8080
          name: myport
        volumeMounts:
        - name: tomcat-persistent-storage
          mountPath: /var/data
      volumes:
      - name: tomcat-persistent-storage
        persistentVolumeClaim:
          claimName: tomcat-pv-claim
I have specified all the environment variables in the Tomcat deployment needed to connect the two containers, including the MySQL service name. I also made sure to create persistent volumes and claims for both containers. Yet my WAR file still won't start in Tomcat's manager app.
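For context, the app consumes these variables in its datasource configuration roughly like this (a simplified sketch; the actual property names in the WAR may differ):

spring.datasource.url=jdbc:mysql://${DB_PORT_3306_TCP_ADDR}:3306/${DB_ENV_MYSQL_DATABASE}
spring.datasource.username=root
spring.datasource.password=${DB_ENV_MYSQL_ROOT_PASSWORD}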
Are my YAML files correct, or are there still changes to be made?
NOTE: I am running on a server using a PuTTY terminal.
URL used to access my app in the browser:
206.189.22.155:30000/data-core-0.0.1-SNAPSHOT
Your YAML files are correct. I recreated the whole environment described in the question and got a healthy Tomcat with the application in the Running state.
If someone also wants to test it, the Tomcat manager username/password are:
username="the-manager" password="needs-a-new-password-here"
No SEVERE errors were found in the Tomcat log, and I got this response from the application:
{"text":"Data-core"}
which looks like a correct response. I also got the empty table sequence in the MySQL database data-core.
My guess is that you had some kind of connectivity problem, probably caused by a misbehaving Kubernetes network add-on (Calico/Flannel/etc.).
How to troubleshoot it:
To test connectivity to the MySQL or Tomcat resources, we can exec into their pods and run tests using simple commands:
$ kubectl exec mysql-pod-name -it -- mysql -hlocalhost -uroot -proot data-core --execute="show tables;"
or just run an additional pod to check whether the Service correctly points to the MySQL pod:
$ kubectl run mysql-client --rm -it --image mysql --restart=Never --command -- mysql -hmysql -uroot -proot data-core --execute="show tables;"
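If that fails, it is worth confirming that the Service actually has endpoints and that its name resolves at all (busybox's nslookup is enough for the DNS check):

$ kubectl get endpoints mysql
$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup mysql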
For the Tomcat pod, we can use the following commands to check the user passwords and the application response:
$ kubectl exec -ti tomcat-pod-name -- cat /usr/local/tomcat/conf/tomcat-users.xml
$ kubectl exec -ti tomcat-pod-name -- curl http://localhost:8080/data-core-0.0.1-SNAPSHOT/
or use a separate pod with curl or wget to check whether the Tomcat Service and NodePort work well:
$ kubectl run curl -it --rm --image=appropriate/curl --restart=Never -- curl http://tomcat:8080/data-core-0.0.1-SNAPSHOT/
$ curl http://Cluster.Node.IP:30000/data-core-0.0.1-SNAPSHOT/
By using the IPs of different nodes you can also check cluster connectivity, because a NodePort Service opens the same port on all cluster nodes, and iptables rules on the nodes then forward the traffic to the Pod's IP.
If the pod is located on a different node, the Flannel/Calico/etc. network plugin delivers the packets to the correct node and on to the Pod.
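To list the node IPs for the test above, and to check that the network add-on itself is healthy, you can use:

$ kubectl get nodes -o wide                  # INTERNAL-IP column shows each node's IP
$ kubectl get pods -n kube-system -o wide    # Calico/Flannel pods should all be Running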