I'm currently stuck connecting ClusterIP services in Kubernetes. The main goal is to connect one pod (a microservice) using gRPC and another pod (the client) using Node. I'm using the service name to expose and connect to the microservice `products-microservice`, but I'm getting this error when trying to call the microservice from the client:
"Error: 14 UNAVAILABLE: failed to connect to all addresses",
" at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
" at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
" at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
" at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
" at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"
I checked the Docker image I built: the server binds to the URL '0.0.0.0:50051', but it is still not working, even following the recommendations of this article: https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/
So far I have just one microservice, for products; it contains the logic to manage products and was developed with Node.js and gRPC (locally it works perfectly). I named it xxx-microservice-products-deployment, and its k8s definition looks like this:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-microservice-products-deployment
  labels:
    app: pinebox
    type: microservice
    domain: products
spec:
  template:
    metadata:
      name: pinebox-microservice-products-pod
      labels:
        app: pinebox
        type: microservice
        domain: products
    spec:
      containers:
        - name: pg-container
          image: postgres
          env:
            - name: POSTGRES_USER
              value: testuser
            - name: POSTGRES_PASSWORD
              value: testpass
            - name: POSTGRES_DB
              value: db_development
          ports:
            - containerPort: 5432
        - name: microservice-container
          image: registry.digitalocean.com/pinebox/pinebox-microservices-products:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: microservice
      domain: products
```
Then, in order to connect to it, I create a ClusterIP service that exposes port 50051; its k8s definition looks like this:
```
kind: Service
apiVersion: v1
metadata:
  name: pinebox-products-microservice
spec:
  selector:
    app: pinebox
    type: microservice
    domain: products
  ports:
    - targetPort: 50051
      port: 50051
```
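A quick sanity check that the selector actually matches the microservice pod is to list the service endpoints; if the ENDPOINTS column comes back empty, the service selector and the pod labels don't line up:
```
kubectl get endpoints pinebox-products-microservice
kubectl get pods -l app=pinebox,type=microservice,domain=products -o wide
```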
Now I create a client, also in Node, that contains the API (GET, POST) methods which under the hood make the connection to the microservice. I named the client xxx-api-main-app-deployment, and its k8s definition looks like this:
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pinebox-api-main-app-deployment
  labels:
    app: pinebox
    type: api
    domain: main-app
    role: users-service
spec:
  template:
    metadata:
      name: pinebox-api-main-app-pod
      labels:
        app: pinebox
        type: api
        domain: main-app
        role: products-service
    spec:
      containers:
        - name: pinebox-api-main-app-container
          image: registry.digitalocean.com/pinebox/pinebox-main-app:latest
      imagePullSecrets:
        - name: regcred
  replicas: 1
  selector:
    matchLabels:
      app: pinebox
      type: api
      domain: main-app
      role: products-service
```
I also create a service to expose the API; its k8s definition looks like this:
```
kind: Service
apiVersion: v1
metadata:
  name: pinebox-api-main-app-service
spec:
  selector:
    app: pinebox
    type: api
    domain: main-app
    role: products-service
  type: NodePort
  ports:
    - name: name-of-the-port
      port: 3333
      targetPort: 3333
      nodePort: 30003
```
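(As a side note, with this NodePort the API should be reachable from outside the cluster through any node; a hypothetical smoke test, with `<node-ip>` standing in for a real node address:)
```
curl http://<node-ip>:30003/
```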
Up to this point everything looks good. But when I try to reach the microservice through the service, I get the same `Error: 14 UNAVAILABLE: failed to connect to all addresses` shown above.
"Error: 14 UNAVAILABLE: failed to connect to all addresses",
" at Object.exports.createStatusError (/usr/src/app/node_modules/grpc/src/common.js:91:15)",
" at Object.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:1209:28)",
" at InterceptingListener._callNext (/usr/src/app/node_modules/grpc/src/client_interceptors.js:568:42)",
" at InterceptingListener.onReceiveStatus (/usr/src/app/node_modules/grpc/src/client_interceptors.js:618:8)",
" at callback (/usr/src/app/node_modules/grpc/src/client_interceptors.js:847:24)"
I haven't found anything useful to make it work. Does anyone have any clues?
So, after digging into the issue, I found that the Kubernetes team recommends using linkerd in this case, because kube-proxy balances at the connection level while gRPC (HTTP/2) multiplexes many calls over one long-lived connection, so plain k8s load balancing doesn't work here. I followed this post, https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/, then went to the linkerd guide and followed the installation steps.
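The meshing step itself was essentially the one from that post, injecting the Linkerd proxy into the existing deployments (roughly this, run against my namespace):
```
kubectl get deployments -o yaml | linkerd inject - | kubectl apply -f -
```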
Now I was able to see the linkerd dashboard, but still not able to get the client communicating with the microservice. So I checked whether the port was exposed in the client pod, using these commands:
```
$ kubectl exec -it pod/pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5 --container pinebox-api-main-app-container -- /bin/bash
$ printenv
```
and this was the output:
```
PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP_PORT=50051
KUBERNETES_SERVICE_PORT_HTTPS=443
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_PORT=50051
KUBERNETES_PORT_443_TCP_PORT=443
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT_NAME_OF_THE_PORT=3333
PORT=3000
NODE_VERSION=12.18.2
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PINEBOX_API_MAIN_APP_SERVICE_PORT_3333_TCP_PORT=3333
PINEBOX_PRODUCTS_MICROSERVICE_SERVICE_HOST=10.105.230.111
TERM=xterm
PINEBOX_API_MAIN_APP_SERVICE_PORT=tcp://10.106.81.212:3333
SHLVL=1
PINEBOX_PRODUCTS_MICROSERVICE_PORT=tcp://10.105.230.111:50051
KUBERNETES_SERVICE_PORT=443
PINEBOX_PRODUCTS_MICROSERVICE_PORT_50051_TCP=tcp://10.105.230.111:50051
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PINEBOX_API_MAIN_APP_SERVICE_SERVICE_PORT=3333
KUBERNETES_SERVICE_HOST=10.96.0.1
_=/usr/bin/printenv
root@pinebox-api-main-app-deployment-5fb5d4bf9f-ttwn5:/usr/src/app#
```
So, as you can see, there are env variables containing the service's host and port, so that part is working. I'm not using the IP directly because it won't hold up when I scale the deployment to have more resources. Then I validated that my microservice was running, using:
```
kubectl logs pod/xxx-microservice-products-deployment-78df57c96d-tlvvj -c microservice-container
```
and this was the output:
```
[Nest] 1 - 07/25/2020, 4:23:22 PM [NestFactory] Starting Nest application...
[Nest] 1 - 07/25/2020, 4:23:22 PM [InstanceLoader] PineboxMicroservicesProductsDataAccessModule dependencies initialized +12ms
[Nest] 1 - 07/25/2020, 4:23:22 PM [InstanceLoader] PineboxMicroservicesProductsFeatureShellModule dependencies initialized +0ms
[Nest] 1 - 07/25/2020, 4:23:22 PM [InstanceLoader] AppModule dependencies initialized +0ms
[Nest] 1 - 07/25/2020, 4:23:22 PM [NestMicroservice] Nest microservice successfully started +22ms
[Nest] 1 - 07/25/2020, 4:23:22 PM Microservice Products is listening +15ms
```
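A further check I could run from inside the client pod is a raw TCP dial to the service name, to separate DNS/routing problems from gRPC-level problems; a minimal Node sketch (the file name tcp-check.js is hypothetical, and this is not a gRPC call, just a socket open):
```
// tcp-check.js: plain TCP probe against the ClusterIP service (not a gRPC call)
const net = require('net');

const socket = net.connect(50051, 'pinebox-products-microservice', () => {
  console.log('TCP connection established'); // DNS + routing to the service work
  socket.end();
});
socket.on('error', (err) => console.error('connect failed:', err.message));
```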
Everything in the logs looks good. So then I re-validated which URL and port I'm using in the code:
- Server:
```
const microservicesOptions = {
  transport: Transport.GRPC,
  options: {
    // 0.0.0.0 here means "listen on all interfaces", which is correct for a server
    url: '0.0.0.0:50051',
    credentials: ServerCredentials.createInsecure(),
    package: 'grpc.health.v1',
    protoPath: join(__dirname, 'assets/health.proto'),
  },
};
```
- Client:
```
ClientsModule.register([
  {
    name: 'HERO_PACKAGE',
    transport: Transport.GRPC,
    options: {
      // the client also dials 0.0.0.0:50051 here
      url: '0.0.0.0:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
      // credentials: credentials.createInsecure()
    },
  },
]);
```
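Writing this out, one thing stands out to me: the client dials 0.0.0.0:50051, which effectively means the client pod itself, not the products pod. If in-cluster DNS works the way I understand it, the client registration should presumably point at the service name instead. A sketch of what I mean (same package and proto, service in the default namespace assumed):
```
// hypothetical change: dial the ClusterIP service by its DNS name
ClientsModule.register([
  {
    name: 'HERO_PACKAGE',
    transport: Transport.GRPC,
    options: {
      // the short name resolves within the same namespace;
      // pinebox-products-microservice.default.svc.cluster.local is the full form
      url: 'pinebox-products-microservice:50051',
      package: 'grpc.health.v1',
      protoPath: join(__dirname, 'assets/health.proto'),
    },
  },
]);
```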
Then, I decided to check the logs of the `linkerd-init` container running in the client pod:
```
kubectl logs pod/xxx-api-main-app-deployment-5fb5d4bf9f-ttwn5 -c linkerd-init
```
and the output was this:
```
2020/07/25 16:37:50 Tracing this script execution as 1595695070
2020/07/25 16:37:50 State of iptables rules before run:
2020/07/25 16:37:50 > iptables -t nat -vnL
2020/07/25 16:37:50 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_REDIRECT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Will ignore port(s) 4190 4191 on chain PROXY_INIT_REDIRECT
2020/07/25 16:37:50 Will redirect all INPUT ports to proxy
2020/07/25 16:37:50 > iptables -t nat -F PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 > iptables -t nat -X PROXY_INIT_OUTPUT
2020/07/25 16:37:50 < iptables: No chain/target/match by that name.
2020/07/25 16:37:50 Ignoring uid 2102
2020/07/25 16:37:50 Redirecting all OUTPUT to 4140
2020/07/25 16:37:50 Executing commands:
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp --match multiport --dports 4190,4191 -j RETURN -m comment --comment proxy-init/ignore-port-4190,4191/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_REDIRECT -p tcp -j REDIRECT --to-port 4143 -m comment --comment proxy-init/redirect-all-incoming-to-proxy-port/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PREROUTING -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/install-proxy-init-prerouting/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -N PROXY_INIT_OUTPUT -m comment --comment proxy-init/redirect-common-chain/1595695070
2020/07/25 16:37:50 <
2020/07/25 16:37:50 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -o lo ! -d 127.0.0.1/32 -j PROXY_INIT_REDIRECT -m comment --comment proxy-init/redirect-non-loopback-local-traffic/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -m owner --uid-owner 2102 -j RETURN -m comment --comment proxy-init/ignore-proxy-user-id/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -o lo -j RETURN -m comment --comment proxy-init/ignore-loopback/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A PROXY_INIT_OUTPUT -p tcp -j REDIRECT --to-port 4140 -m comment --comment proxy-init/redirect-all-outgoing-to-proxy-port/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -A OUTPUT -j PROXY_INIT_OUTPUT -m comment --comment proxy-init/install-proxy-init-output/1595695070
2020/07/25 16:37:51 <
2020/07/25 16:37:51 > iptables -t nat -vnL
2020/07/25 16:37:51 < Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 0 0 PROXY_INIT_REDIRECT all -- * * 0.0.0.0/0 0.0.0.0/0 /* proxy-init/install-proxy-init-prerouting/1595695070 */
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
 0 0 PROXY_INIT_OUTPUT all -- * * 0.0.0.0/0 0.0.0.0/0 /* proxy-init/install-proxy-init-output/1595695070 */
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
Chain PROXY_INIT_OUTPUT (1 references)
 pkts bytes target prot opt in out source destination
 0 0 PROXY_INIT_REDIRECT all -- * lo 0.0.0.0/0 !127.0.0.1 owner UID match 2102 /* proxy-init/redirect-non-loopback-local-traffic/1595695070 */
 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 owner UID match 2102 /* proxy-init/ignore-proxy-user-id/1595695070 */
 0 0 RETURN all -- * lo 0.0.0.0/0 0.0.0.0/0 /* proxy-init/ignore-loopback/1595695070 */
 0 0 REDIRECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* proxy-init/redirect-all-outgoing-to-proxy-port/1595695070 */ redir ports 4140
Chain PROXY_INIT_REDIRECT (2 references)
 pkts bytes target prot opt in out source destination
 0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 4190,4191 /* proxy-init/ignore-port-4190,4191/1595695070 */
 0 0 REDIRECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* proxy-init/redirect-all-incoming-to-proxy-port/1595695070 */ redir ports 4143
```
I'm not sure where the problem is. Thanks in advance for your help; hopefully this gives you more context and you can point me in the right direction.
The iptables output from the Linkerd `proxy-init` container looks fine. Did you check the logs of the `linkerd-proxy` container inside the pod? That might help you understand what is happening.

It's also worth trying the `port-forward` test that @KoopaKiller recommends.
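For example, forwarding the service to your machine takes the cluster networking out of the equation; if a local client pointed at localhost:50051 works through the forward, the service and server are fine and the problem is in the client-side addressing or the mesh:
```
kubectl port-forward svc/pinebox-products-microservice 50051:50051
```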