I deployed NiFi and the Gloo API Gateway on the same GKE cluster. The external IP exposed by the LoadBalancer works fine from outside the cluster (I can open it in a web browser or reach it with telnet). However, when I telnet to the Gloo API Gateway from the GKE Cloud Shell, the connection is refused.
Based on related causes and solutions, I have already allowed traffic into the cluster by creating a firewall rule:
gcloud compute firewall-rules create my-rule --allow=all
How can I fix this?
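For reference, this is roughly how I am testing (34.xx.xx.xx stands for the masked external IP of the gateway-proxy-v2 service shown below; the commands are just a sketch of the checks described above):

# 34.xx.xx.xx is the (masked) external IP of the gateway-proxy-v2 LoadBalancer
# From my local machine (or a browser) this connects fine:
telnet 34.xx.xx.xx 80
# Running the same command from GKE Cloud Shell, the connection is refused:
telnet 34.xx.xx.xx 80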
kubectl get -n gloo-system service/gateway-proxy-v2 -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"gloo","gateway-proxy-id":"gateway-proxy-v2","gloo":"gateway-proxy"},"name":"gateway-proxy-v2","namespace":"gloo-system"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443}],"selector":{"gateway-proxy":"live","gateway-proxy-id":"gateway-proxy-v2"},"type":"LoadBalancer"}}
  labels:
    app: gloo
    gateway-proxy-id: gateway-proxy-v2
    gloo: gateway-proxy
  name: gateway-proxy-v2
  namespace: gloo-system
spec:
  clusterIP: 10.122.10.215
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30189
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 30741
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    gateway-proxy: live
    gateway-proxy-id: gateway-proxy-v2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 34.xx.xx.xx
kubectl get svc -n gloo-system
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
gateway-proxy-v2   LoadBalancer   10.122.10.215   34.xx.xx.xx   80:30189/TCP,443:30741/TCP   63m
gloo               ClusterIP      10.122.5.253    <none>        9977/TCP                     63m
You can try bumping to Gloo version 1.3.6. Please take a look at https://docs.solo.io/gloo/latest/upgrading/1.0.0/ to track any possible breaking changes.
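If Gloo was installed with Helm, the upgrade could look roughly like the sketch below; the chart repository URL and release name are assumptions based on a standard open-source Gloo install, so adjust them to match your own installation:

# Assumed Helm-based upgrade path for open-source Gloo; verify the chart
# repository and the release name against your actual installation first.
helm repo add gloo https://storage.googleapis.com/solo-public-helm
helm repo update
# Upgrade the existing release in the gloo-system namespace to 1.3.6
helm upgrade gloo gloo/gloo --namespace gloo-system --version 1.3.6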