I'm trying to deploy a Flask Python API to Kubernetes (EKS). I have the Dockerfile set up, but some strange things are going on.
Dockerfile:
FROM python:3.8
WORKDIR /app
COPY . /app
RUN pip3 install -r requirements.txt
EXPOSE 43594
ENTRYPOINT ["python3"]
CMD ["app.py"]
I build the image by running docker build -t store-api .
When I try running the container and hitting an endpoint, I get socket hang up. However, if I run the image with docker run -d -p 43594:43594 store-api, I can successfully hit the endpoint and get a response.
My hunch is that it's a port-mapping issue.
Now, having said all that, when running the image in a Kubernetes pod I cannot get anything back from the endpoint and get socket hang up again.
My question is, how do I explicitly add port mapping to my Kubernetes deployment/service?
Part of the Deployment.yaml:
spec:
  containers:
  - image: store-api
    name: store-api
    ports:
    - containerPort: 43594
    resources: {}
    volumeMounts:
    - mountPath: /usr/local/bin/wait-for-it.sh
      name: store-api-claim0
    imagePullPolicy: Always
Service.yaml:
spec:
  type: LoadBalancer
  ports:
  - port: 43594
    protocol: TCP
    targetPort: 43594
  selector:
    app: store-api
status:
  loadBalancer: {}
If I port-forward using kubectl port-forward deployment/store-api 43594:43594 and post the request to localhost:43594/, it works fine.
This is a community wiki answer posted for better visibility. Feel free to expand it.
Problem
The output of the kubectl describe service <name_of_the_service> command contains Endpoints: <none>.
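A quick way to confirm this (a sketch; the resource name store-api is assumed, matching the question):

kubectl describe service store-api   # Endpoints: <none> means no Pods are selected
kubectl get endpoints store-api      # lists the Pod IPs backing the Service, if any
kubectl get pods --show-labels       # compare the Pod labels with the Service's spec.selector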
Some theory
From the Kubernetes Glossary:

Service:
An abstract way to expose an application running on a set of Pods as a network service. The set of Pods targeted by a Service is (usually) determined by a selector. If more Pods are added or removed, the set of Pods matching the selector will change. The Service makes sure that network traffic can be directed to the current set of Pods for the workload.

Endpoints:
Endpoints track the IP addresses of Pods with matching selectors.

Selector:
Allows users to filter a list of resources based on labels. Selectors are applied when querying lists of resources to filter them by labels.
Solution
The labels in spec.template.metadata.labels of the Deployment should be the same as those in spec.selector of the Service.
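As a minimal sketch of how the two files should line up (the metadata names and the app: store-api label are assumptions based on the snippets above):

Deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-api
spec:
  selector:
    matchLabels:
      app: store-api           # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: store-api         # the Service selects Pods by these labels
    spec:
      containers:
      - name: store-api
        image: store-api
        ports:
        - containerPort: 43594

Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: store-api
spec:
  type: LoadBalancer
  selector:
    app: store-api             # same key/value pair as spec.template.metadata.labels
  ports:
  - port: 43594
    protocol: TCP
    targetPort: 43594

Once the labels match, kubectl describe service store-api should list Pod IPs under Endpoints instead of <none>.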
Additional information related to this issue can be found on the Kubernetes site:
If the ENDPOINTS column is <none>, you should check that the spec.selector field of your Service actually selects for metadata.labels values on your Pods.