I am running a cluster of services (MongoDB, PostgreSQL, and a REST server) on Minikube, and the Kubernetes service IPs are not reachable from inside the cluster. If I kubectl exec and shell into the pods themselves, I can access Mongo/Postgres, but only by using the Docker network IP address.
Here are some sample commands that show the problem.
Shell in:
HOST$ kubectl exec -it my-system-mongo-54b8c75798-lptzq /bin/bash
Once in, I connect to mongo using the docker network IP:
MONGO-POD# mongo mongodb://172.17.0.6
Welcome to the MongoDB shell.
> exit
bye
Now I try to use the Kubernetes service name (DNS works, as the name gets resolved to 10.96.154.36, as seen below):
MONGO-POD# mongo mongodb://my-system-mongo
MongoDB shell version v3.6.3
connecting to: mongodb://my-system-mongo
2020-01-03T02:39:55.883+0000 W NETWORK [thread1] Failed to connect to 10.96.154.36:27017 after 5000ms milliseconds, giving up.
2020-01-03T02:39:55.903+0000 E QUERY [thread1] Error: couldn't connect to server my-system-mongo:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:251:13
@(connect):1:6
exception: connect failed
Ping also doesn't work
MONGO-POD# ping my-system-mongo
PING my-system-mongo.default.svc.cluster.local (10.96.154.36) 56(84) bytes of data.
--- my-system-mongo.default.svc.cluster.local ping statistics ---
112 packets transmitted, 0 received, 100% packet loss, time 125365ms
My setup is Minikube 1.6.2 with Kubernetes 1.17 and Helm 3.0.2. Here is my full (Helm-generated) dry-run YAML:
NAME: mysystem-1578018793
LAST DEPLOYED: Thu Jan 2 18:33:13 2020
NAMESPACE: default
STATUS: pending-install
REVISION: 1
HOOKS:
---
# Source: mysystem/templates/tests/test-connection.yaml
apiVersion: v1
kind: Pod
metadata:
  name: "my-system-test-connection"
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    "helm.sh/hook": test-success
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['my-system:']
  restartPolicy: Never
MANIFEST:
---
# Source: mysystem/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-system-configmap
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  _lots_of_key_value_pairs: here-I-shortened-it
---
# Source: mysystem/templates/my-system-mongo-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongo
spec:
  type: ClusterIP
  ports:
    - port: 27107
      targetPort: 27017
      protocol: TCP
      name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: mongo
---
# Source: mysystem/templates/my-system-pg-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: postgres
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: postgres
---
# Source: mysystem/templates/my-system-restsrv-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: rest-server
spec:
  type: NodePort
  ports:
    #- port: 8009
    #  targetPort: 8009
    #  protocol: TCP
    #  name: jpda
    - port: 8080
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: rest-server
---
# Source: mysystem/templates/my-system-mongo-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-mongo
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: mongo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongo
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: my-system-mongo-pod
          securityContext:
            {}
          image: private.hub.net/my-system-mongo:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: my-system-configmap
          ports:
            - name: "mongo"
              containerPort: 27017
              protocol: TCP
          resources:
            {}
---
# Source: mysystem/templates/my-system-pg-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-postgres
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: postgres
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: postgres
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: mysystem
          securityContext:
            {}
          image: private.hub.net/my-system-pg:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: my-system-configmap
          ports:
            - name: postgres
              containerPort: 5432
              protocol: TCP
          resources:
            {}
---
# Source: mysystem/templates/my-system-restsrv-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-rest-server
  labels:
    helm.sh/chart: mysystem-0.1.0
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: rest-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: rest-server
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: rest-server
    spec:
      imagePullSecrets:
        - name: regcred
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: mysystem
          securityContext:
            {}
          image: private.hub.net/my-system-restsrv:latest
          imagePullPolicy: Always
          envFrom:
            - configMapRef:
                name: my-system-configmap
          ports:
            - name: rest-server
              containerPort: 8080
              protocol: TCP
            #- name: "jpda"
            #  containerPort: 8009
            #  protocol: TCP
          resources:
            {}
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mysystem,app.kubernetes.io/instance=mysystem-1578018793" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:80
My best theory (reached in part after working through this) is that kube-proxy is not working properly in Minikube, but I am not sure how to troubleshoot that. When I shell into the Minikube VM and grep through journalctl for "proxy", I get this:
# grep proxy journal.log
Jan 03 02:16:02 minikube sudo[2780]: docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05808666 -0800 /var/lib/minikube/certs/proxy-client.crt
Jan 03 02:16:02 minikube sudo[2784]: docker : TTY=unknown ; PWD=/home/docker ; USER=root ; COMMAND=/bin/touch -d 2020-01-02 18:16:03.05908666 -0800 /var/lib/minikube/certs/proxy-client.key
Jan 03 02:16:15 minikube kubelet[2821]: E0103 02:16:15.423027 2821 reflector.go:156] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:minikube" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503466 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-n78g9" (UniqueName: "kubernetes.io/secret/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy-token-n78g9") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.503965 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-xtables-lock") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.530948 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/50fbf70b-724a-4b76-af7f-5f4b91735c84-lib-modules") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube kubelet[2821]: I0103 02:16:15.538938 2821 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy") pod "kube-proxy-pbs6s" (UID: "50fbf70b-724a-4b76-af7f-5f4b91735c84")
Jan 03 02:16:15 minikube systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/50fbf70b-724a-4b76-af7f-5f4b91735c84/volumes/kubernetes.io~secret/kube-proxy-token-n78g9.
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670527 2821 configmap.go:200] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jan 03 02:16:16 minikube kubelet[2821]: E0103 02:16:16.670670 2821 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\" (\"50fbf70b-724a-4b76-af7f-5f4b91735c84\")" failed. No retries permitted until 2020-01-03 02:16:17.170632812 +0000 UTC m=+13.192986021 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50fbf70b-724a-4b76-af7f-5f4b91735c84-kube-proxy\") pod \"kube-proxy-pbs6s\" (UID: \"50fbf70b-724a-4b76-af7f-5f4b91735c84\") : failed to sync configmap cache: timed out waiting for the condition"
And while that does show some problems, I am not sure how to act on them or how to correct them.
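In case it helps, these are the checks I have been running against kube-proxy itself from the host (this assumes the default kube-system namespace and the k8s-app=kube-proxy label that kubeadm/Minikube use; adjust if yours differ):
HOST$ kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
HOST$ kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=100
HOST$ kubectl get endpoints my-system-mongo
The last command should show which pod IPs, if any, the Service has actually selected.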
UPDATE:
I spotted this when grepping through the journal:
# grep conntrack journal.log
Jan 03 02:16:04 minikube kubelet[2821]: W0103 02:16:04.286682 2821 hostport_manager.go:69] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I'm looking into conntrack now, though the Minikube VM doesn't have yum or apt!
Let's look at the relevant Service:
apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
spec:
  ports:
    - port: 27107  # note typo here, see @aviator's answer
      targetPort: 27017
      protocol: TCP
      name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
In particular, note the selector: it can route traffic to any pod that has these two labels. For example, this is a valid target:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-postgres
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
Since every pod has the same pair of labels, any service can send traffic to any pod; your "MongoDB" service isn't necessarily targeting the actual MongoDB pod. Your deployment specs have the same problem, and I wouldn't be surprised if the kubectl get pods output is a little bit confused.
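To see the overlap concretely, you can list every pod carrying just those two labels and compare that with the endpoints the Service actually picked (a quick check, not part of your manifests):
kubectl get pods -l app.kubernetes.io/name=mysystem,app.kubernetes.io/instance=mysystem-1578018793
kubectl get endpoints my-system-mongo
Pods from all three deployments will show up in the first listing, which is exactly the ambiguity described above.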
The right answer here is to add another label that distinguishes the different parts of your application from each other. The Helm docs recommend
app.kubernetes.io/component: mongodb
This must appear in the labels of the pod spec embedded in the deployments, in the matching deployment selectors, and in the matching service selectors; simply setting it on all related objects, including the deployment and service labels, makes sense.
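As a rough sketch (using the mongodb value from above; the exact value doesn't matter so long as the Service selector and the pod labels agree), the mongo Service and Deployment end up looking something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-system-mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
      name: mongo
  selector:
    app.kubernetes.io/name: mysystem
    app.kubernetes.io/instance: mysystem-1578018793
    app.kubernetes.io/component: mongodb    # new: only mongo pods match now
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-system-mongo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: mysystem
      app.kubernetes.io/instance: mysystem-1578018793
      app.kubernetes.io/component: mongodb  # must match the pod labels below
  template:
    metadata:
      labels:
        app.kubernetes.io/name: mysystem
        app.kubernetes.io/instance: mysystem-1578018793
        app.kubernetes.io/component: mongodb  # the Service selects on this
    spec:
      containers:
        - name: mongo
          image: mongo:3.6                  # stand-in image for the sketch; yours is private.hub.net/my-system-mongo
          ports:
            - containerPort: 27017
Note that spec.selector on an existing Deployment is immutable, so changing matchLabels generally means deleting and recreating the Deployment (or reinstalling the chart) rather than patching it in place.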
You have a typo in your mongodb service definition.
- port: 27107
  targetPort: 27017
Change the port to 27017.
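For clarity, the corrected ports stanza (everything else unchanged) would read:
  ports:
    - port: 27017        # the port the Service exposes; mongo clients dial this by default
      targetPort: 27017  # the containerPort mongod listens on
      protocol: TCP
      name: mongo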