Team,
We currently have a Kubernetes cluster set up with a single master node and a single worker node.
[root@k8s-master ~]# kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-172-31-18-129.ap-south-1.compute.internal   Ready    <none>   15h   v1.15.0
k8s-master                                     Ready    master   15h   v1.15.0
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        15h
nodeport     NodePort    10.104.192.11   <none>        80:30385/TCP   4s
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP          NODE                                           NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-wcsds   1/1     Running   1          15h   10.44.0.1   ip-172-31-18-129.ap-south-1.compute.internal   <none>           <none>
[root@k8s-master ~]# curl -v 172.31.18.129:30385
* Rebuilt URL to: 172.31.18.129:30385/
* Trying 172.31.18.129...
* TCP_NODELAY set
Here I am using the worker node's IP address to access the container from the master node, and the request hangs. However, I am able to access the service from the worker node itself via its ClusterIP; please find the output below:
[root@ip-172-31-18-129 ~]# curl 10.104.192.11
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
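One further check, not captured in the session above, is to run the same NodePort request on the worker node itself. If that succeeds while the identical request from the master hangs, kube-proxy is handling the NodePort correctly and the connection is being dropped somewhere on the path between the two instances:

# run on the worker node (sketch; not part of the original session)
curl -v --connect-timeout 5 http://172.31.18.129:30385/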
What did I expect to happen?
The nginx container should be reachable from the master node via the worker node's IP address and the NodePort (172.31.18.129:30385), but I am unable to do that for some reason.
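Since the request from the master never completes the TCP handshake, two things worth ruling out (a sketch of checks, not output from the actual cluster) are the AWS security group on the worker instance, which must allow inbound TCP 30385 from the master, and the host firewall on RHEL 8:

# on the worker node: confirm kube-proxy has programmed the NodePort rules
iptables-save | grep 30385

# on the worker node: check whether firewalld is running and what it allows
firewall-cmd --state
firewall-cmd --list-all

# if firewalld is blocking the port, it can be opened like this
firewall-cmd --permanent --add-port=30385/tcp
firewall-cmd --reload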
Kubernetes version (use kubectl version):
[root@k8s-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:37:41Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: AWS EC2 instances
OS (e.g: cat /etc/os-release): RHEL 8
Kernel (e.g. uname -a):
[root@k8s-master ~]# uname -a
Linux k8s-master 4.18.0-80.4.2.el8_0.x86_64 #1 SMP Fri Jun 14 13:20:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Any help is appreciated. Thanks in advance!
Check the link below. Since your worker node is hosted in a cloud environment, you need to achieve this through an Ingress resource.
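For illustration only, a minimal Ingress of the kind referred to above might look like the sketch below. It assumes an ingress controller (for example ingress-nginx) is already deployed in the cluster; nginx.example.com is a placeholder host, while the backend service name nodeport and port 80 come from the kubectl get svc output earlier in this issue.

cat <<'EOF' | kubectl apply -f -
# Sketch only: assumes an ingress controller is already running in the cluster.
apiVersion: networking.k8s.io/v1beta1   # Ingress API group available in v1.15
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.example.com             # placeholder host name
    http:
      paths:
      - path: /
        backend:
          serviceName: nodeport         # the Service shown in the output above
          servicePort: 80
EOF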