I'd like to create a MongoDB stateful deployment that shares my host's local directory /mnt/nfs/data/myproject/production/permastore/mongo (a network file system directory) with all MongoDB pods at /data/db. I'm running my Kubernetes cluster on three virtual machines.
When I don't use a persistent volume claim I can start MongoDB without any problem. But when I start MongoDB with a persistent volume claim, I get this error:
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
Does anyone know why mongo fails to start when /data/db is mounted with a persistent volume, and how do I fix it?
The config files below will not work in your environment because the paths differ; however, you should be able to get the idea behind my setup.
Persistent Volume pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-mongo
  labels:
    type: local
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /mnt/nfs/data/phenex/production/permastore/mongo
  claimRef:
    name: phenex-mongo
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  volumeMode: Filesystem
Persistent Volume Claim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-mongo
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Deployment deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-mongo
          mountPath: /data/db
      volumes:
      - name: phenex-mongo
        persistentVolumeClaim:
          claimName: phenex-mongo
Applying configs
$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f deployment.yaml
Checking cluster state
$ kubectl get deploy,po,pv,pvc --output=wide
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.extensions/mongo   1/1     1            1           38m   mongo        mongo:4.2.0-bionic   run=mongo

NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod/mongo-59f669657d-fpkgv   1/1     Running   0          35m   10.44.0.2   web01   <none>           <none>

NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE    VOLUMEMODE
persistentvolume/phenex-mongo   1Gi        RWO            Retain           Bound    phenex/phenex-mongo   manual                  124m   Filesystem

NAME                                 STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
persistentvolumeclaim/phenex-mongo   Bound    phenex-mongo   1Gi        RWO            manual         122m   Filesystem
Running mongo pod
$ kubectl exec -it mongo-59f669657d-fpkgv mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-08-14T14:25:25.452+0000 E QUERY [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-08-14T14:25:25.453+0000 F - [main] exception: connect failed
2019-08-14T14:25:25.453+0000 E - [main] exiting with code 1
command terminated with exit code 1
Logs
$ kubectl logs mongo-59f669657d-fpkgv
2019-08-14T14:00:32.287+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-59f669657d-fpkgv
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] db version v4.2.0
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] allocator: tcmalloc
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] modules: none
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] build environment:
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distmod: ubuntu1804
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] distarch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] target_arch: x86_64
2019-08-14T14:00:32.291+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
root@mongo-59f669657d-fpkgv:/# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mongodb        1  0.0  2.7 208324 27920 ?        Dsl  14:00   0:00 mongod --bind_ip_all
root          67  0.0  0.2  18496  2060 pts/1    Ss   15:12   0:00 bash
root          81  0.0  0.1  34388  1536 pts/1    R+   15:13   0:00 ps aux
I've found the cause and the solution! In my setup, I was sharing a directory over the network using NFS, so all my cluster nodes (minions) had access to a common directory located at /mnt/nfs/data/.
The reason mongo couldn't start was an invalid Persistent Volume. Namely, I was using the hostPath persistent volume type. This works for single-node testing, or if you manually create the same directory structure on all of your cluster nodes, e.g. /tmp/your_pod_data_dir/ (a sketch of that manual workaround is shown below, for completeness). But if you try to mount an NFS directory as a hostPath, it causes problems, exactly like the one I had!
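As an aside, if you really wanted to stay with hostPath, the manual workaround would look roughly like this. The loop below is purely illustrative (node names come from my /etc/hosts further down, the path from my pv.yaml); it is not the fix I ended up using:

# Hypothetical hostPath workaround (NOT the recommended fix):
# pre-create the data directory on every node that could schedule the pod.
for node in web01 compute01 compute02; do
  ssh "$node" 'sudo mkdir -p /mnt/nfs/data/phenex/production/permastore/mongo'
done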
For directories that are shared over a Network File System, use the NFS persistent volume type (NFS Example)! Below you will find my setup and two solutions.
/etc/hosts - my cluster nodes.
# Cluster nodes
192.168.123.130 master
192.168.123.131 web01
192.168.123.132 compute01
192.168.123.133 compute02
List of exported NFS directories.
[vagrant@master]$ showmount -e
Export list for master:
/nfs/data compute*,web*
/nfs/www compute*,web*
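For reference, an /etc/exports entry on the master that would produce an export list like the one above might look roughly like the following; the exact export options are an assumption, not copied from my server:

# /etc/exports on master (192.168.123.130) - options are illustrative
/nfs/data  compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)
/nfs/www   compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)

After editing /etc/exports, reload the exports with sudo exportfs -ra.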
The first solution shows a deployment that mounts the NFS directory directly via volumes - have a look at the volumes and volumeMounts sections.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        nfs:
          # IP of master node
          server: 192.168.123.130
          path: /nfs/data/phenex/production/permastore/mongo
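One prerequisite worth mentioning: the kubelet performs the NFS mount on the host, so every node that can run the pod needs an NFS client installed. A minimal sketch, assuming CentOS-like nodes (as the vagrant prompts suggest); on Ubuntu/Debian the package would be nfs-common:

# On each worker node (web01, compute01, compute02):
sudo yum install -y nfs-utils            # Ubuntu/Debian: sudo apt-get install -y nfs-common
# Optional sanity check - mount the export by hand once:
sudo mount -t nfs 192.168.123.130:/nfs/data /mnt
ls /mnt && sudo umount /mnt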
The second solution shows a deployment that mounts the NFS directory via a volume claim - have a look at persistentVolumeClaim; the Persistent Volume and Persistent Volume Claim are defined below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        persistentVolumeClaim:
          claimName: phenex-nfs
Persistent Volume - NFS
apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    # IP of master node
    server: 192.168.123.130
    path: /nfs/data
  claimRef:
    name: phenex-nfs
  persistentVolumeReclaimPolicy: Retain
Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
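Note that neither the PV nor the PVC sets a storageClassName here; the PV is reserved for the claim through its claimRef. If the claim ever gets stuck in Pending, a quick check I would run (not part of the original output) is:

$ kubectl describe pv phenex-nfs
$ kubectl describe pvc phenex-nfs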
# Checking cluster state
[vagrant@master ~]$ kubectl get deploy,po,pv,pvc --output=wide
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.extensions/mongo   1/1     1            1           18s   mongo        mongo:4.2.0-bionic   run=mongo

NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod/mongo-65b7d6fb9f-mcmvj   1/1     Running   0          18s   10.44.0.2   web01   <none>           <none>

NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM         STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/phenex-nfs   1Gi        RWO            Retain           Bound    /phenex-nfs                           27s   Filesystem

NAME                               STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/phenex-nfs   Bound    phenex-nfs   1Gi        RWO                           27s   Filesystem
# Attaching to pod and checking network bindings
[vagrant@master ~]$ kubectl exec -it mongo-65b7d6fb9f-mcmvj -- bash
root@mongo-65b7d6fb9f-mcmvj:/$ apt update
root@mongo-65b7d6fb9f-mcmvj:/$ apt install net-tools
root@mongo-65b7d6fb9f-mcmvj:/$ netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      -
# Running mongo client
root@mongo-65b7d6fb9f-mcmvj:/$ mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("45287a0e-7d41-4484-a267-5101bd20fad3") }
MongoDB server version: 4.2.0
Server has startup warnings:
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I CONTROL [initandlisten]
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).
The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.
To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---
>
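A side note on the startup warnings above: the transparent hugepage settings come from the VM hosts, not from the pod. A quick, non-persistent way to follow MongoDB's suggestion on each node would be something like:

# Run on each VM that hosts mongo pods (resets on reboot):
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag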