I have a node outside of my Kubernetes cluster running a web service that I need to access from inside a Pod. The documentation mentions using a Service without a Selector here: http://kubernetes.io/docs/user-guide/services/
So I created a service like so:
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "spec": {
        "ports": [
            {
                "protocol": "TCP",
                "port": 8082,
                "targetPort": 8082
            }
        ]
    }
}
Then I created the matching Endpoints object:
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "128.115.198.7" }
            ],
            "ports": [
                { "port": 8082 }
            ]
        }
    ]
}
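To sanity-check that the Service and Endpoints are wired together (names here match the manifests above), something like this should show the external IP listed under the Service's endpoints:

kubectl get endpoints my-service
kubectl describe service my-service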
Test App:
apiVersion: v1
kind: Pod
metadata:
  name: ta-p
spec:
  restartPolicy: Never
  containers:
  - name: ta-c
    image: "centos:7"
    command: ["/bin/bash", "-c", "sleep 100000"]
  nodeSelector:
    node: "kube-minion-1"
Remote into the Pod with:
kubectl exec ta-p -c ta-c -i --tty -- /bin/bash
Whenever I kubectl exec into a container in my Pod and try to ping or curl my-service like so:
curl http://my-service/api/foo
it times out. I have verified that DNS is set up and working correctly. I have also tried curling the cluster IP bound to the service directly:
curl http://10.0.124.106:8082/api/foo
Anyone have any suggestions?
Make sure the port names on the Service and the Endpoints match.
{
    ...
    "ports": [
        {
            "name": "my-service",
            "port": 8082
        }
    ]
    ...
}
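For example, a sketch with the name applied on both objects (the port name "http" here is an arbitrary choice; it just has to be identical on both sides). On the Service:

"spec": {
    "ports": [
        {
            "name": "http",
            "protocol": "TCP",
            "port": 8082,
            "targetPort": 8082
        }
    ]
}

And on the Endpoints:

"subsets": [
    {
        "addresses": [
            { "ip": "128.115.198.7" }
        ],
        "ports": [
            { "name": "http", "port": 8082 }
        ]
    }
]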
Not sure what was going on. It appears my Kube cluster must have been in an awkward state. I restarted the cluster and it is now working...
Note: A better way to solve this now is to use externalName on the Service, which adds a CNAME record to the internal Kubernetes DNS: https://github.com/kubernetes/community/blob/master/contributors/design-proposals/network/service-external-name.md
This feature shipped with Kubernetes 1.4.
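A minimal sketch of such a Service (my-node.example.com is a placeholder hostname; ExternalName expects a DNS name, not a raw IP):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my-node.example.com

Inside a Pod, curl http://my-service:8082/api/foo would then resolve through the CNAME to the external host.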