I'm using Docker and Minikube on Windows to learn Kubernetes, but I can't reach my first deployed app.
I ran the following commands:
```
kubectl run testapp --image=saphyra/testapp:latest --port=8080
kubectl expose deployment testapp --type=NodePort
minikube service testapp
```
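For reference, one quick way to sanity-check what those commands created (a sketch, assuming everything is in the `default` namespace and relying on the `run=testapp` label that `kubectl run` applies):

```shell
# List the Deployment, Pod, and Service created above, selected by label
kubectl get deployment,pods,service -l run=testapp -o wide

# Tail the application logs to confirm Tomcat came up on port 8080
kubectl logs -l run=testapp --tail=20
```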
Using the `minikube dashboard` command, I can see that the pod is created and running, and in the logs it says Tomcat started on port 8080, as expected.
So everything seems to be OK.
But how can I call an endpoint of the service (from a browser)? The tutorial I followed (and many other YouTube tutorials) says that at this point I should be able to call the app's endpoints.
What am I missing? How can I reach my app?
**Edit:**
Output of `kubectl describe svc testapp`:

```
Name:                     testapp
Namespace:                default
Labels:                   run=testapp
Annotations:              <none>
Selector:                 run=testapp
Type:                     NodePort
IP:                       10.110.10.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31612/TCP
Endpoints:                172.18.0.4:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```
kube-proxy logs:

```
server_others.go:323] Unknown proxy mode "", assuming iptables proxy
node.go:135] Successfully retrieved node IP: 172.17.0.2
server_others.go:145] Using iptables Proxier.
server.go:571] Version: v1.17.3
conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
conntrack.go:52] Setting nf_conntrack_max to 524288
conntrack.go:127] sysfs is not writable: {Device:sysfs Path:/sys Type:sysfs Opts:[ro nosuid nodev noexec relatime] Freq:0 Pass:0} (mount options are [ro nosuid nodev noexec relatime])
conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
config.go:313] Starting service config controller
shared_informer.go:197] Waiting for caches to sync for service config
config.go:131] Starting endpoints config controller
shared_informer.go:197] Waiting for caches to sync for endpoints config
shared_informer.go:204] Caches are synced for service config
shared_informer.go:204] Caches are synced for endpoints config
```
**EDIT 2**
Output of `kubectl get svc testapp -o yaml`:

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-03-25T17:03:57Z"
  labels:
    run: testapp
  name: testapp
  namespace: default
  resourceVersion: "3454"
  selfLink: /api/v1/namespaces/default/services/testapp
  uid: 048d05df-eaad-4d4b-845f-d98b222fe101
spec:
  clusterIP: 10.110.10.61
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31612
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: testapp
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
```
Output of `kubectl get pod testapp-c565bfccc-xht6j -o yaml`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-03-25T17:03:52Z"
  generateName: testapp-c565bfccc-
  labels:
    pod-template-hash: c565bfccc
    run: testapp
  name: testapp-c565bfccc-xht6j
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: testapp-c565bfccc
    uid: 21e2c9ab-3771-482f-8849-71754aaf5ff6
  resourceVersion: "3491"
  selfLink: /api/v1/namespaces/default/pods/testapp-c565bfccc-xht6j
  uid: c4dfdcb6-d050-4c44-99d1-6881bc39f805
spec:
  containers:
  - image: saphyra/testapp:latest
    imagePullPolicy: Always
    name: testapp
    ports:
    - containerPort: 8080
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-689j9
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: m01
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-689j9
    secret:
      defaultMode: 420
      secretName: default-token-689j9
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-03-25T17:03:52Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-03-25T17:04:12Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-03-25T17:04:12Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-03-25T17:03:52Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://9a3ddc83df779bb4dc4b88718d82f7065d2dc647360fa200b4815ecc260a7ed4
    image: saphyra/testapp:latest
    imageID: docker-pullable://saphyra/testapp@sha256:b328a874297521f35c84a37cde160e23a39d6a12c7184dbe3c88ff0250b05df6
    lastState: {}
    name: testapp
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-03-25T17:04:11Z"
  hostIP: 172.17.0.2
  phase: Running
  podIP: 172.18.0.4
  podIPs:
  - ip: 172.18.0.4
  qosClass: BestEffort
  startTime: "2020-03-25T17:03:52Z"
```
Okay, I finally found it: `minikube start` selected `docker` as the driver, so the whole Kubernetes cluster was started inside a Docker container, and I suspect that container's ports were not exposed to the host.
When I set the `hyperv` driver explicitly, so that Minikube started its own VM, everything worked fine.
However, it is still an open question how to expose Minikube when it runs inside a Docker container.
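For the docker-driver case left open above, two commonly suggested workarounds (a sketch, not verified against every Minikube version) avoid relying on the node IP being reachable from the host:

```shell
# Option 1: have Minikube open a tunnel to the Service and print a URL that
# is reachable from the host. With the docker driver, this command keeps
# running for as long as the tunnel is needed.
minikube service testapp --url

# Option 2: forward a local port directly to the Service, bypassing the
# NodePort entirely. The app is then reachable at http://localhost:8080
# while this command runs.
kubectl port-forward service/testapp 8080:8080
```

Both approaches route traffic through a local process instead of hitting the node's IP, which is why they work even when the "node" is a Docker container with no ports published to the host.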
The Minikube VM is exposed to the host system via a host-only IP address, which can be obtained with the `minikube ip` command. Any Service of type `NodePort` can be accessed over that IP address, on the assigned NodePort.
From the information you provided, your Service is listening on NodePort `31612`. So you can access your app from the browser at `<minikube-ip>:<node-port>`.
For example, if your Minikube IP is `192.168.99.100` and the NodePort is `31612`, the URL is `192.168.99.100:31612`.
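The same steps can be put together as a short shell sketch (the Service name `testapp` and NodePort `31612` come from the question; the IP shown in the comments is only an example):

```shell
# Host-only IP of the Minikube VM (your value may differ, e.g. 192.168.99.100)
NODE_IP="$(minikube ip)"

# NodePort assigned to the Service (31612 in the describe output above)
NODE_PORT="$(kubectl get svc testapp -o jsonpath='{.spec.ports[0].nodePort}')"

# The app is reachable at http://<minikube-ip>:<node-port>
URL="http://${NODE_IP}:${NODE_PORT}"
echo "$URL"
curl "$URL/"
```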