I've got a single-node Kubernetes "cluster" built with kubeadm
in AWS.
I have deployed a simple Nginx deployment with this config:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx0-deployment
  labels:
    app: nginx0-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx0
  template:
    metadata:
      labels:
        app: nginx0
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx:latest
        ports:
        - containerPort: 80
          name: backend-http
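For reference, the labels the Deployment actually puts on its Pods come from the Pod template above (so they should come out as app=nginx0); they can be listed with:
{ec2-instance} ~ $ kubectl get pods --show-labels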
I have also created a Service of type LoadBalancer to expose it through an AWS ELB:
kind: Service
apiVersion: v1
metadata:
  name: nginx0-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  selector:
    app: nginx0-deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
This created the ELB and opened the relevant ports in the K8s instance security group.
{ec2-instance} ~ $ kubectl get all
NAME                                     READY   STATUS    RESTARTS   AGE
pod/nginx0-deployment-548f99f47c-ns75w   1/1     Running   0          3m45s

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                     PORT(S)        AGE
service/kubernetes       ClusterIP      10.96.0.1        <none>                          443/TCP        25h
service/nginx0-service   LoadBalancer   10.106.179.191   acfc4150....elb.amazonaws.com   80:30675/TCP   63s

NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx0-deployment   1/1     1            1           3m45s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx0-deployment-548f99f47c   1         1         1       3m45s
However, something is still missing between the ELB and the Pod, because browsing to http://acfc4150....elb.amazonaws.com/
doesn't work: Chrome says ERR_EMPTY_RESPONSE.
I guess it's something to do with the ELB port mapping 80:30675/TCP. I have checked the incoming traffic on the instance and I see packets arriving on port 30675, but nothing goes back, as if the mapping between this port and the Pod's port 80 wasn't set up.
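One check that I think should narrow it down (assuming I understand kube-proxy correctly) is whether the Service has any endpoints at all, since an empty ENDPOINTS column would mean the selector matches no Pods:
{ec2-instance} ~ $ kubectl get endpoints nginx0-service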
Any idea what I should add to my manifests to make it work?
Thanks!
You have the wrong labels: your Service's selector is app: nginx0-deployment (the Deployment's own label), but your Pods are labelled app: nginx0, and Services don't target Deployments, they target Pods. Update your Service to have:

spec:
  selector:
    app: nginx0

instead.
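After re-applying the manifest (nginx0-service.yaml below is just a placeholder for wherever you saved it), the Service should pick up the Pod, which you can confirm from the Endpoints line; the ELB health check against NodePort 30675 should then start passing:

kubectl apply -f nginx0-service.yaml
kubectl describe service nginx0-service
# The Endpoints line should now list the Pod's IP on port 80 instead of <none>.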