I'm trying to integrate Linkerd with my gRPC service on Kubernetes to solve a load-balancing problem, following this article, but the gRPC service doesn't receive any requests when running with Linkerd, and the gRPC client just freezes without throwing an exception. Both the service and the client are .NET Core apps and use insecure credentials.
I ran some tests: the gRPC server works fine without Linkerd, and Linkerd works fine with an ASP.NET Core Web API.
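For reference, the client is nothing special; here is a minimal sketch of what it does, assuming Grpc.Core with insecure credentials (the Greeter service, the demogrpc:8000 target, and the method names are placeholders, not the actual proto):

// Minimal client sketch, assuming Grpc.Core with ChannelCredentials.Insecure.
// "Greeter", "HelloRequest" and "SayHello" are placeholders for the real generated proto types.
using System;
using Grpc.Core;

class ClientProgram
{
    static void Main()
    {
        // Placeholder target: the Kubernetes service name and gRPC port.
        var channel = new Channel("demogrpc:8000", ChannelCredentials.Insecure);
        var client = new Greeter.GreeterClient(channel);

        // Without Linkerd this returns normally; with Linkerd injected the call
        // hangs and never throws.
        var reply = client.SayHello(new HelloRequest { Name = "test" });
        Console.WriteLine(reply.Message);

        channel.ShutdownAsync().Wait();
    }
}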
I followed the official instructions, Getting Started and Adding Your Service. Here is the generated YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demogrpc
  name: demogrpc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demogrpc
  strategy: {}
  template:
    metadata:
      annotations:
        linkerd.io/created-by: linkerd/cli stable-2.1.0
        linkerd.io/proxy-version: stable-2.1.0
      creationTimestamp: null
      labels:
        app: demogrpc
        linkerd.io/control-plane-ns: linkerd
        linkerd.io/proxy-deployment: demogrpc
    spec:
      containers:
      - env:
        - name: GRPC_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: GRPC_PORT
          value: "8000"
        image: 192.168.99.25:30000/demogrpchost:1.0.9
        imagePullPolicy: Always
        name: demogrpc
        resources: {}
      - env:
        - name: LINKERD2_PROXY_LOG
          value: warn,linkerd2_proxy=info
        - name: LINKERD2_PROXY_BIND_TIMEOUT
          value: 10s
        - name: LINKERD2_PROXY_CONTROL_URL
          value: tcp://linkerd-proxy-api.linkerd.svc.cluster.local:8086
        - name: LINKERD2_PROXY_CONTROL_LISTENER
          value: tcp://0.0.0.0:4190
        - name: LINKERD2_PROXY_METRICS_LISTENER
          value: tcp://0.0.0.0:4191
        - name: LINKERD2_PROXY_OUTBOUND_LISTENER
          value: tcp://127.0.0.1:4140
        - name: LINKERD2_PROXY_INBOUND_LISTENER
          value: tcp://0.0.0.0:4143
        - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
          value: .
        - name: LINKERD2_PROXY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: gcr.io/linkerd-io/proxy:stable-2.1.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /metrics
            port: 4191
          initialDelaySeconds: 10
        name: linkerd-proxy
        ports:
        - containerPort: 4143
          name: linkerd-proxy
        - containerPort: 4191
          name: linkerd-metrics
        readinessProbe:
          httpGet:
            path: /metrics
            port: 4191
          initialDelaySeconds: 10
        resources: {}
        securityContext:
          runAsUser: 2102
        terminationMessagePolicy: FallbackToLogsOnError
      imagePullSecrets:
      - name: kubernetes-registry
      initContainers:
      - args:
        - --incoming-proxy-port
        - "4143"
        - --outgoing-proxy-port
        - "4140"
        - --proxy-uid
        - "2102"
        - --inbound-ports-to-ignore
        - 4190,4191
        image: gcr.io/linkerd-io/proxy-init:stable-2.1.0
        imagePullPolicy: IfNotPresent
        name: linkerd-init
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
          privileged: false
        terminationMessagePolicy: FallbackToLogsOnError
status: {}
And here are the logs from one of the linkerd-proxy containers:
INFO linkerd2_proxy::app::main using controller at Some(Name(NameAddr { name: DnsName(DNSName("linkerd-proxy-api.linkerd.svc.cluster.local")), port: 8086 }))
INFO linkerd2_proxy::app::main routing on V4(127.0.0.1:4140)
INFO linkerd2_proxy::app::main proxying on V4(0.0.0.0:4143) to None
INFO linkerd2_proxy::app::main serving Prometheus metrics on V4(0.0.0.0:4191)
INFO linkerd2_proxy::app::main protocol detection disabled for inbound ports {25, 3306}
INFO linkerd2_proxy::app::main protocol detection disabled for outbound ports {25, 3306}
WARN 10.244.1.137:8000 linkerd2_proxy::proxy::reconnect connect error to Config { target: Target { addr: V4(10.244.1.137:8000), tls: None(InternalTraffic), _p: () }, settings: Http2, _p: () }: Connection refused (os error 111) (address: 127.0.0.1:8000)
How do I make my gRPC service work with Linkerd? Or is there a better way to load-balance gRPC services in Kubernetes?
Setting GRPC_HOST to 127.0.0.1 (instead of the status.podIP fieldRef) allows Linkerd to connect to the gRPC server. The linkerd-proxy sidecar forwards inbound traffic to the other containers in the pod over the loopback address, so when the gRPC service binds to the pod IP rather than 127.0.0.1, the proxy gets the "Connection refused ... 127.0.0.1:8000" error shown in the log above.
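Concretely, that means changing the GRPC_HOST env entry in the deployment from the fieldRef to the literal value "127.0.0.1". For reference, here is a minimal sketch of what the server startup presumably looks like, assuming the demogrpchost image binds its listener to GRPC_HOST:GRPC_PORT with Grpc.Core (the Greeter/GreeterService types are placeholders for the real generated proto classes):

// Minimal server sketch, assuming the image binds its listener to GRPC_HOST:GRPC_PORT.
// "Greeter" and "GreeterService" are placeholders for the real generated proto types.
using System;
using System.Threading;
using Grpc.Core;

class ServerProgram
{
    static void Main()
    {
        // linkerd-init redirects inbound pod traffic to linkerd-proxy (port 4143),
        // and the proxy forwards it to the app over loopback; hence the
        // "Connection refused ... 127.0.0.1:8000" log line when the server is
        // bound to the pod IP instead of 127.0.0.1.
        var host = Environment.GetEnvironmentVariable("GRPC_HOST") ?? "127.0.0.1";
        var port = int.Parse(Environment.GetEnvironmentVariable("GRPC_PORT") ?? "8000");

        var server = new Server
        {
            Services = { Greeter.BindService(new GreeterService()) },
            Ports = { new ServerPort(host, port, ServerCredentials.Insecure) }
        };
        server.Start();
        Console.WriteLine($"gRPC server listening on {host}:{port}");

        // Keep the container process alive.
        new ManualResetEvent(false).WaitOne();
    }
}

Binding only to loopback is fine once the pod is meshed, because all inbound traffic reaches the application through the linkerd-proxy sidecar anyway; the client keeps calling the Kubernetes service name as before.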