I have an application that can receive commands from a specific port like so:
echo <command> | nc <hostname> <port>
In this case it opens port 22082, I believe inside its Docker container.
When I place this application into a Kubernetes pod, I need to expose it by creating a Kubernetes Service. Here is my Service:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
    - protocol: TCP
      port: 22282
      targetPort: 22082
  #type: NodePort
  externalIPs:
    - 10.10.30.19
NOTE: I commented out type: NodePort because I haven't been able to expose the port using that method. Whenever I run sudo netstat -nlp | grep 22282, I get nothing.
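For reference, something like this shows what the Service actually exposes (standard kubectl commands, nothing specific to my setup):

# Show the Service, its cluster IP, and any allocated node port
kubectl get svc commander -o wide
# Show the full Service spec, including ports and selector
kubectl describe svc commander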
Using an external IP, I'm able to find the port and connect to it using netcat, but whenever I issue a command over the port, it just hangs.
Normally I should be able to issue a 'help' command and get information about the app. With Kubernetes I can't get that same output.
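For concreteness, the test that hangs looks roughly like this (using the externalIP and service port from the spec above):

echo help | nc 10.10.30.19 22282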
Now, if I set hostNetwork: true in my app YAML (not the Service), I can connect to the port and get my 'help' info.
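Roughly where that setting lives in the pod template (the surrounding fields here are placeholders, not my actual manifest):

spec:
  hostNetwork: true        # pod shares the node's network namespace
  containers:
    - name: commander      # placeholder container name
      ports:
        - containerPort: 22082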
What could be keeping my command from reaching the app when I'm not using the hostNetwork configuration?
Thanks
UPDATE: I noticed this message in the output of sudo iptables --list:
Chain KUBE-SERVICES (1 references)
target     prot opt source               destination
REJECT     tcp  --  anywhere             172.21.155.23        /* default/commander: has no endpoints */ tcp dpt:22282 reject-with icmp-port-unreachable
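That "has no endpoints" message suggests the Service selector isn't matching any pods, which can be checked with something like:

# An empty ENDPOINTS column means the selector matches no pods
kubectl get endpoints commander
# Compare the Service selector against the pod labels
kubectl describe svc commander
kubectl get pods --show-labels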
UPDATE #2: I solved the above error by setting spec.template.metadata.labels.app to commander. I still, however, can't send any command to the app.
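In other words, the pod template labels now match the Service selector; a minimal sketch of the relevant part of the Deployment (other fields omitted):

spec:
  template:
    metadata:
      labels:
        app: commander   # must match the Service's spec.selector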
Thanks to @sfgroups, I discovered that I needed to set an actual nodePort, like so:
apiVersion: v1
kind: Service
metadata:
  name: commander
spec:
  selector:
    app: commander
  ports:
    - protocol: TCP
      port: 22282
      nodePort: 32282
      targetPort: 22082
  type: NodePort
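With that in place, the app is reachable through the node port from outside the cluster, e.g. (where <node-ip> is one of the cluster nodes' addresses):

echo help | nc <node-ip> 32282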
Pretty odd behavior; it makes me wonder what the point of the port field even is!