I'm using minikube (single-node Kubernetes). I exposed a service, and when I try to reach it with:
curl $(minikube ip):$NODE_PORT
I get this error:
curl: (7) Failed to connect to 192.168.99.100 port 31539: Connection refused
I also tried running the container directly in Docker and everything worked fine, but in Kubernetes I can't reach the application.
I know there are similar questions on this site, but as far as I can tell, none of them solve my problem.
Does anyone know what's wrong, or what am I missing?
(I should also mention that I am new to Kubernetes.)
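For reference, NODE_PORT above is the Service's nodePort (31539 in the YAML further down); it can be read out with something like this jsonpath query:
NODE_PORT=$(kubectl get svc urbackup-v11 -o jsonpath='{.spec.ports[0].nodePort}')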
Output of the kubectl get svc -n namespace command:
No resources found.
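The Service itself is in the default namespace (see the YAML below), so listing it there should find it, for example:
kubectl get svc urbackup-v11 -n default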
The Service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-10-02T15:25:18Z"
  labels:
    app: urbackup-v11
  name: urbackup-v11
  namespace: default
  resourceVersion: "195336"
  selfLink: /api/v1/namespaces/default/services/urbackup-v11
  uid: a1c18360-a2bb-4de9-a25c-b0ffd45a20b2
spec:
  clusterIP: 10.111.173.217
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31539
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: urbackup-v11
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
Output of the iptables-save command:
# Generated by iptables-save v1.6.1 on Sat Sep 28 22:33:21 2019
*nat
:PREROUTING ACCEPT [21:3442]
:INPUT ACCEPT [16:3200]
:OUTPUT ACCEPT [1510:295823]
:POSTROUTING ACCEPT [1510:295823]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Sat Sep 28 22:33:21 2019
# Generated by iptables-save v1.6.1 on Sat Sep 28 22:33:21 2019
*filter
:INPUT ACCEPT [64079:559675075]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [65202:547155125]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Sat Sep 28 22:33:21 2019
Here is the result of the kubectl -n default get pods -o yaml -l app=urbackup-v11 command:
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2019-10-02T15:22:34Z"
    generateName: urbackup-v11-774ff76465-
    labels:
      app: urbackup-v11
      pod-template-hash: 774ff76465
    name: urbackup-v11-774ff76465-ch42z
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: urbackup-v11-774ff76465
      uid: 6d7ba6c6-5318-4dc1-bfd7-356f85598236
    resourceVersion: "212488"
    selfLink: /api/v1/namespaces/default/pods/urbackup-v11-774ff76465-ch42z
    uid: 22674f75-4507-4405-81d9-d1bb29b5a70b
  spec:
    containers:
    - image: uroni/urbackup-server
      imagePullPolicy: Always
      name: urbackup-server
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: default-token-rggcc
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    nodeName: minikube
    priority: 0
    restartPolicy: Always
    schedulerName: default-scheduler
    securityContext: {}
    serviceAccount: default
    serviceAccountName: default
    terminationGracePeriodSeconds: 30
    tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
    volumes:
    - name: default-token-rggcc
      secret:
        defaultMode: 420
        secretName: default-token-rggcc
  status:
    conditions:
    - lastProbeTime: null
      lastTransitionTime: "2019-10-02T15:22:34Z"
      status: "True"
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: "2019-10-05T09:44:00Z"
      message: 'containers with unready status: [urbackup-server]'
      reason: ContainersNotReady
      status: "False"
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: "2019-10-05T09:44:00Z"
      message: 'containers with unready status: [urbackup-server]'
      reason: ContainersNotReady
      status: "False"
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: "2019-10-02T15:22:34Z"
      status: "True"
      type: PodScheduled
    containerStatuses:
    - containerID: docker://ba3170190d5315e9edaa5e2674d5cd38bff6c8fd5c8025537a7c0ece77a695c7
      image: uroni/urbackup-server:latest
      imageID: docker-pullable://uroni/urbackup-server@sha256:ed18b99ac85147e01dceb2dc45844c5689fb19bbe4c915d7e5b52b6a376db242
      lastState: {}
      name: urbackup-server
      ready: false
      restartCount: 1
      state:
        terminated:
          containerID: docker://ba3170190d5315e9edaa5e2674d5cd38bff6c8fd5c8025537a7c0ece77a695c7
          exitCode: 255
          finishedAt: "2019-10-05T09:43:09Z"
          reason: Error
          startedAt: "2019-10-04T12:11:01Z"
    hostIP: 10.0.2.15
    phase: Running
    qosClass: BestEffort
    startTime: "2019-10-02T15:22:34Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
The ports configuration is missing from your Pod's spec.containers. You need to add it and make sure the containerPort matches your Service's targetPort (see: Exposing pods to the cluster).
It should look something like this:
spec:
  containers:
  - image: uroni/urbackup-server
    imagePullPolicy: Always
    ports:
    - containerPort: 8080
    name: urbackup-server
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-rggcc
      readOnly: true
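Since the Pod's owner is the ReplicaSet urbackup-v11-774ff76465, it is presumably managed by a Deployment named urbackup-v11, so this change would go into that Deployment's Pod template (for example via kubectl -n default edit deployment urbackup-v11). Once the container comes up, you can check that the Service actually has an endpoint and retry the NodePort:
kubectl -n default get endpoints urbackup-v11
curl $(minikube ip):31539
Also note that in your Pod status above the urbackup-server container is not ready (it last exited with code 255), so it is worth checking its logs with kubectl -n default logs urbackup-v11-774ff76465-ch42z; the Service only routes traffic to Pods that are Ready.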