We are currently working on a platform for managing services via Kubernetes. As part of that, we want to add functionality that allows clients (through our UI) to issue shell commands against pods, using websockets between the two. We are trying to leverage the Kubernetes /exec
API endpoint to open the connection to the pod. The issue is that while the initial setup of the sockets appears to work well, subsequent commands issued from the UI get no response from the pod - almost as if the pod isn't receiving the message.
We currently have a Node.js Express REST service sitting as a middle-man between our UI and Kubernetes. This REST service is responsible for managing two websockets - one from the UI to the service, and another from the service to Kubernetes. The service forwards messages from one socket to the other as interactions happen.
When the UI needs to send a shell command to a pod, it initiates the process of opening the sockets by calling the REST service, which then makes the following Kubernetes API call to open the socket to the pod:
/api/v1/namespaces/namespace-xxxxxx/pods/hello-world-xxxxxxxx/exec?stdin=true&stderr=true&stdout=true&tty=true&command=sh
while Express manages the socket from the REST service to the UI.
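For illustration, a helper that assembles that exec URL might look like the following (buildExecUrl is a hypothetical name of our own, not part of the actual service):

```javascript
// Illustrative sketch only: assembles the exec query string shown above.
function buildExecUrl(apiServer, namespace, pod, command) {
  const params = new URLSearchParams({
    stdin: 'true',   // attach stdin so we can write commands
    stderr: 'true',  // stream stderr back
    stdout: 'true',  // stream stdout back
    tty: 'true',     // allocate a TTY for the interactive shell
    command,         // e.g. 'sh'
  });
  return `${apiServer}/api/v1/namespaces/${namespace}/pods/${pod}/exec?${params}`;
}
```

The REST service then opens a websocket to the resulting URL (wss:// scheme), passing the cluster's auth credentials along with the upgrade request.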
The REST service has a 'pipe' function that holds a handle to each socket and passes data between the two as necessary (debugging statements removed for brevity):
const pipe = (podSocket, clientSocket) => {
  podSocket.onopen = function (event) {
    debug(`podSocket.onopen(event): opened websocket connection at ${new Date()}. Event is ${circularJson.stringify(event)}`);
  };

  podSocket.onmessage = function (event) {
    if (clientSocket.readyState === 1) {
      const text = Buffer.from(event.data, 'binary').toString('utf-8');
      clientSocket.send(text);
    }
  };

  podSocket.onerror = function (err) {
    clientSocket.close(1000);
    podSocket.close();
  };

  podSocket.onclose = function (event) {
    debug(`podSocket.onclose(): instance closed connection at ${new Date()} with code ${event.code}, reason [${event.reason}]`);
  };

  clientSocket.onmessage = function (event) {
    const text = Buffer.from(event.data, 'binary').toString('utf-8');
    if (clientSocket.readyState === 1) {
      if (podSocket.readyState === 3) {
        clientSocket.send(`Connection to instance has been closed`);
      } else {
        podSocket.send(text); // <----------- Issue seems to be here
      }
    }
  };

  clientSocket.onerror = function (err) {
    clientSocket.close(1000);
    podSocket.close(1000);
  };

  clientSocket.onclose = function (event) {
    debug(`clientSocket.onclose(): instance closed connection at ${new Date()} with code ${event.code}, reason [${event.reason}]`);
    podSocket.close(1000);
  };
};
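For reference, our understanding is that the exec endpoint negotiates a streaming subprotocol (channel.k8s.io / v4.channel.k8s.io) that multiplexes stdin/stdout/stderr over the single websocket by prefixing each binary frame with a one-byte channel number (0 = stdin, 1 = stdout, 2 = stderr). A sketch of that framing, with helper names that are our own illustration - whether our outgoing frames actually match this is part of what we are trying to verify:

```javascript
// Hypothetical helpers sketching the channel.k8s.io framing; the channel
// numbers (0=stdin, 1=stdout, 2=stderr) come from the subprotocol, but
// toStdinFrame/fromFrame are illustrative names, not real API calls.
function toStdinFrame(text) {
  // Prefix the payload with channel 0 so the server routes it to stdin.
  return Buffer.concat([Buffer.from([0]), Buffer.from(text, 'utf-8')]);
}

function fromFrame(data) {
  // Split an incoming frame into its channel byte and payload.
  const buf = Buffer.from(data);
  return { channel: buf[0], payload: buf.slice(1).toString('utf-8') };
}
```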
As you can see above, when the REST service detects that a message (a shell command) has been received from the client, it attempts to forward that message to the socket opened to the pod. But we get no response from the pod in its socket's onmessage() callback.
The debugging statements removed above do show that the command to be executed is received by the REST service and sent across the other socket to the pod. I also have debugging output showing that both sockets report a readyState of 1, which means they are both open.
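A small hex-dump helper like the following (our own illustration, not part of the snippet above) can make those debugging statements more precise, since it shows exactly which bytes go out on each socket rather than just the decoded text:

```javascript
// Illustrative debugging helper: renders whatever a socket sends or receives
// as space-separated hex bytes, so any framing differences become visible.
function hexDump(data) {
  const buf = Buffer.isBuffer(data) ? data : Buffer.from(data);
  return [...buf].map((b) => b.toString(16).padStart(2, '0')).join(' ');
}

// e.g. inside clientSocket.onmessage:
//   debug(`forwarding to pod: ${hexDump(text)}`);
```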
This is a small snippet of what we see:
The /app # prompt shows the default directory for the pod, and it comes back as a response from the initial setup of the websocket, so we know that initial connection is correct. But the two subsequent commands show no response at all, until the socket times out and the connections are closed.
What are we missing here? Why are we not able to get the socket tied to the pod to respond to messages sent to it?