I'm struggling with a Kubernetes service without a selector. The cluster runs on AWS and was installed with kops. I have a deployment with 3 nginx pods exposing port 80:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngix-dpl             # Name of the deployment object
  labels:
    app: nginx
spec:
  replicas: 3                # Number of instances in the deployment
  selector:                  # Selector identifies pods to be
    matchLabels:             # part of the deployment
      app: nginx             # by matching the label "app"
  template:                  # Template describes pods of the deployment
    metadata:
      labels:                # Defines key-value map
        app: nginx           # Label to be recognized by other objects
    spec:                    # such as deployments or services
      containers:            # Lists all containers in the pod
      - name: nginx-pod      # container name
        image: nginx:1.17.4  # container docker image
        ports:
        - containerPort: 80  # port exposed by the container
After creating the deployment, I noted the pod IP addresses:
$ kubectl get pods -o wide | awk {'print $1" " $3" " $6'} | column -t
NAME                       STATUS   IP
curl                       Running  100.96.6.40
ngix-dpl-7d6b8c8944-8zsgk  Running  100.96.8.53
ngix-dpl-7d6b8c8944-l4gwk  Running  100.96.6.43
ngix-dpl-7d6b8c8944-pffsg  Running  100.96.8.54
and created a service, plus a manual Endpoints object that should serve those IP addresses:
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: dummy-svc
subsets:
- addresses:
  - ip: 100.96.8.53
  - ip: 100.96.6.43
  - ip: 100.96.8.54
  ports:
  - port: 80
    name: http
The service is successfully created:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
dummy-svc    ClusterIP   100.64.222.220   <none>        80/TCP    32m
kubernetes   ClusterIP   100.64.0.1       <none>        443/TCP   5d14h
Unfortunately, my attempt to connect to nginx through the service from another pod in the same namespace fails:
$ curl 100.64.222.220
curl: (7) Failed to connect to 100.64.222.220 port 80: Connection refused
I can successfully connect to the nginx pods directly:
$ curl 100.96.8.53
<!DOCTYPE html>
<html>
<head>
....
I noticed that my service does not show any endpoints, but I'm not sure whether manually created endpoints should appear there at all:
$ kubectl get svc/dummy-svc -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"dummy-svc","namespace":"default"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}]}}
  creationTimestamp: "2019-11-22T08:41:29Z"
  labels:
    app: nginx
  name: dummy-svc
  namespace: default
  resourceVersion: "4406151"
  selfLink: /api/v1/namespaces/default/services/dummy-svc
  uid: e0aa9d01-0d03-11ea-a19c-0a7942f17bf8
spec:
  clusterIP: 100.64.222.220
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I understand that this is not the typical use case for services, and that using a pod selector would make it work. But I want to understand why this configuration does not work. I don't know where to look for the solution. Any hint will be appreciated.
One option is to correct the service definition as below. With a selector, Kubernetes manages the service's endpoints for you:
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
  labels:
    app: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
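With a selector present, the endpoints controller creates and continuously reconciles an Endpoints object with the same name as the Service, so the manual Endpoints manifest is no longer needed (it will be overwritten). Based on the pod IPs from the question, the generated object would look roughly like this sketch (the real object also carries extra fields such as `targetRef` and `nodeName`):

```
apiVersion: v1
kind: Endpoints
metadata:
  name: dummy-svc   # same name as the Service; managed by the control plane
subsets:
- addresses:
  - ip: 100.96.8.53
  - ip: 100.96.6.43
  - ip: 100.96.8.54
  ports:
  - port: 80
    protocol: TCP
```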
It works if you remove the "name" field from the port in your Endpoints configuration. A named port in an Endpoints object only matches a Service port with the same name, and your Service port is unnamed, so no endpoints get associated with the cluster IP and the connection is refused. The Endpoints object should look like this:
apiVersion: v1
kind: Endpoints
metadata:
  name: dummy-svc
subsets:
- addresses:
  - ip: 172.17.0.4
  - ip: 172.17.0.5
  - ip: 172.17.0.6
  ports:
  - port: 80
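If you would rather keep the port name in the Endpoints, the equivalent fix is to name the Service port identically, since Service ports and Endpoints ports are matched by name (an unnamed Service port only matches an unnamed Endpoints port). A sketch of the matching Service:

```
apiVersion: v1
kind: Service
metadata:
  name: dummy-svc
  labels:
    app: nginx
spec:
  ports:
  - name: http      # must match "name: http" on the Endpoints port
    protocol: TCP
    port: 80
    targetPort: 80
```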