I am using Kubernetes and running one service. The service is running and shows up in the service list, but I am not able to access it from the public IP of the instance. Below is my deployment file.
apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: NodePort
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: apache-deployment
spec:
  selector:
    matchLabels:
      app: apache
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: mobingi/ubuntu-apache2-php7:7.2
          ports:
            - containerPort: 80
Here is my list of services:
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
apache-service   NodePort    10.106.242.181   <none>        80:31807/TCP   9m5s
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP        11m
But when I check the same service with telnet against the public IP of the cluster node, it does not respond:
telnet public-ip 31807
Any help would be appreciated.
What do you mean by the cluster IP? Do you mean the node that acts as the Kubernetes master? It won't work if you use the master's IP, because by default no deployments are scheduled on masters (they are tainted against regular workloads, partly for security reasons).
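You can check this yourself with kubectl (a quick sketch; <master-node-name> is a placeholder for your actual node name):

# List nodes with their roles and internal/external IPs
kubectl get nodes -o wide

# Check whether the master is tainted against scheduling regular pods
# (control-plane nodes typically carry node-role.kubernetes.io/master:NoSchedule)
kubectl describe node <master-node-name> | grep Taints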
Exposing a service via NodePort means the service listens on that port on every worker node, so you can reach it at any worker node's IP plus the node port. However, if you created the cluster with a cloud provider like AWS, the worker nodes' security groups block that traffic by default. You probably need to edit the worker nodes' security groups to allow inbound traffic on the node port (31807 here, or the whole NodePort range 30000-32767).
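For example, on AWS you could open the node port with the CLI (just a sketch; sg-0123456789abcdef0 stands in for your worker nodes' security group ID, and opening it to 0.0.0.0/0 is only for testing):

# Allow inbound TCP on the NodePort from anywhere (tighten the CIDR for production)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 31807 \
    --cidr 0.0.0.0/0

# Then test against a worker node's public IP, not the master's
curl http://<worker-node-public-ip>:31807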