I installed Docker and Kubernetes through Docker for Windows Installer.exe, which installed Docker Desktop 2.1.0.1.
Docker Version -
PS C:\myk8syamls> docker version
Client: Docker Engine - Community
 Version:           19.03.1
 API version:       1.40
 Go version:        go1.12.5
 Git commit:        74b1e89
 Built:             Thu Jul 25 21:17:08 2019
 OS/Arch:           windows/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.1
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.5
  Git commit:       74b1e89
  Built:            Thu Jul 25 21:17:52 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.6
  GitCommit:        894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc:
  Version:          1.0.0-rc8
  GitCommit:        425e105d5a03fabd737a126ad93d62a9eeede87f
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
k8s version -
PS C:\myk8syamls> kubectl.exe version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
After creating the Kubernetes services, I am not able to access them from my local machine.
PS C:\myk8syamls> kubectl.exe get svc
NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
kubernetes            ClusterIP   10.96.0.1       <none>        443/TCP                       101d
nginx-clusterip-svc   ClusterIP   10.96.214.171   <none>        80/TCP                        26m
nginx-nodeport-svc    NodePort    10.101.9.117    <none>        80:30007/TCP,8081:30008/TCP   26m
postgres              NodePort    10.103.103.87   <none>        5432:32345/TCP                101d
I have tried accessing the NodePort service nginx-nodeport-svc by hitting 10.101.9.117:30007 and 10.101.9.117:80, which did not work, and accessing the ClusterIP service nginx-clusterip-svc by hitting 10.96.214.171:80, which did not work either.
How can I access these services from my local machine? This is quite critical for me to resolve, so any help is greatly appreciated.
Edit: following the answer from @rriovall, I did this:
kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-nodeport-expose-svc
and on querying:
PS C:\myk8syamls> kubectl.exe get svc nginx-nodeport-expose-svc
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx-nodeport-expose-svc   NodePort   10.107.212.76   <none>        80:30501/TCP   42s
There is still no external IP, and accessing http://10.107.212.76:30501/ still does not work. I also tried the node's internal IP:
PS C:\myk8syamls> kubectl.exe get nodes -owide
NAME             STATUS   ROLES    AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION     CONTAINER-RUNTIME
docker-desktop   Ready    master   102d   v1.14.3   192.168.65.3   <none>        Docker Desktop   4.9.184-linuxkit   docker://19.3.1
Accessing http://192.168.65.3:30501/ does not work either.
You need to expose the nginx deployment as an external service.
$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer
service "nginx" exposed
It may take several minutes for the EXTERNAL-IP value to appear.
You can then visit http://EXTERNAL-IP/ to see the server being served through network load balancing.
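For reference, here is a minimal sketch of the Service manifest that the expose command above generates; the app: nginx selector is an assumption and has to match the labels on your nginx deployment's pods:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx          # assumed pod label; must match the deployment's pod template
  ports:
  - port: 80            # port the Service (and the external load balancer) listens on
    targetPort: 80      # port the nginx container listens on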
LoadBalancer: This is the default method for many Kubernetes installations in the cloud, and it works great. It supports multiple protocols and multiple ports per service. By default, though, it allocates an IP for every service, and each of those IPs is backed by its own cloud load balancer. That adds cost and overhead that is overkill for essentially every cluster with multiple services, which is almost every cluster these days.
NodePort: This opens a port on every worker node in the cluster that has a pod for the service. When traffic arrives on that port, it is directed to a specific port on the ClusterIP of the service it represents. In a single-node cluster this is very straightforward. In a multi-node cluster the internal routing can get more complicated; in that case you might want to introduce an external load balancer so you can spread traffic across all the nodes and handle failures more easily.
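To make the port mapping concrete, here is a hedged sketch of a NodePort Service along the lines of nginx-nodeport-svc from the question (only the first port mapping is shown, and the app: nginx selector is an assumption about the pod labels):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc
spec:
  type: NodePort
  selector:
    app: nginx          # assumed pod label
  ports:
  - port: 80            # port on the ClusterIP inside the cluster
    targetPort: 80      # container port on the pod
    nodePort: 30007     # port opened on every worker node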
In your case this would work too; you would need to create a Service object that exposes the deployment:
kubectl expose deployment nginx-deployment --type=NodePort --name=nginx-nodeport-svc
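Once that Service exists, the commands already used in the question show the values to plug in:

kubectl get svc nginx-nodeport-svc
kubectl get nodes -o wide

The assigned node port appears in the PORT(S) column and the node's address under INTERNAL-IP; per the NodePort description above, the service should then be reachable at http://<node-ip>:<node-port>.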
For more details, check out this public documentation.