I have an Azure Kubernetes Service cluster and a VM outside the cluster, in a different virtual network, from which I try to connect to my containerized app Pod, which listens on TCP port 9000. I must not use a Public IP, and this is not an HTTP connection; I need a plain TCP connection. For that I followed the instructions from this link: https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip

I defined a YAML file for helm install:
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
I installed the NGINX ingress controller:
helm install nginx-ingress ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux \
  --set controller.admissionWebhooks.patch.nodeSelector."beta\.kubernetes\.io/os"=linux
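To confirm the controller came up, I check its pods; a quick sketch, assuming the chart's default app.kubernetes.io/name=ingress-nginx label:

kubectl get pods -l app.kubernetes.io/name=ingress-nginx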
After that, the NGINX controller Service exposes ports 80 and 443:
kubectl get services -o wide
NAME                                     TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
nginx-ingress-ingress-nginx-controller   LoadBalancer   10.0.36.81   10.33.27.35   80:31312/TCP,443:30653/TCP
After that I ran helm upgrade to get my TCP port 9000 configured:
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  -f internal-ingress.yaml \
  --set tcp.9000="default/frontarena-ads-aks-test:9000"
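For reference, the same mapping could instead be declared in the values file rather than via --set; a sketch using the ingress-nginx chart's top-level tcp key:

# internal-ingress.yaml
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
tcp:
  "9000": "default/frontarena-ads-aks-test:9000"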
This created the ConfigMap entry automatically, as I can see when I check with "kubectl get configmaps":
apiVersion: v1
data:
  "9000": default/frontarena-ads-aks-test:9000
kind: ConfigMap
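A quick way to dump just that map; a sketch, assuming the name nginx-ingress-ingress-nginx-tcp per the chart's <release>-ingress-nginx-tcp convention, so adjust if yours differs:

kubectl get configmap nginx-ingress-ingress-nginx-tcp -o yaml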
I have also edited my NGINX controller Service to add the port:
spec:
  clusterIP: 10.0.36.81
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 31312
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 30653
    port: 443
    protocol: TCP
    targetPort: https
  - name: 9000-tcp
    nodePort: 30758
    port: 9000
    protocol: TCP
    targetPort: 9000
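With that in place, listing the controller Service again should show the new port alongside 80 and 443, i.e. something like 80:31312/TCP,443:30653/TCP,9000:30758/TCP under PORT(S):

kubectl get service nginx-ingress-ingress-nginx-controller -o wide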
I have my deployed app Pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      restartPolicy: Always
      containers:
      - name: frontarena-ads-aks-test
        image: fa.dev/:test1
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: frontarena-ads-aks-test
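To sanity-check this Service from inside the cluster, I can run a throwaway pod and open a TCP connection to it; a sketch, assuming the busybox image's telnet applet:

kubectl run tcp-test --rm -it --restart=Never --image=busybox -- \
  telnet frontarena-ads-aks-test.default.svc.cluster.local 9000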
I also configured and deployed an Ingress resource in the same default namespace to connect the ingress with the Service above (I suppose it can reach it through the ClusterIP):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ads-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontarena-ads-aks-test
          servicePort: 9000
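Independently of the ingress path, I can confirm the app itself answers on 9000 by port-forwarding straight to the Deployment; a quick sketch:

kubectl port-forward deploy/frontarena-ads-deployment 9000:9000
# then, in a second shell on the same machine:
telnet 127.0.0.1 9000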
Now the issue is the following:
when I target port 9000 on the Ingress Controller's internal IP (or the DNS name configured by our Azure admins) from my VM outside the AKS cluster, in a different virtual network, I get no response at all. This leads me to conclude that the Ingress Controller is not forwarding the connection to my Service, which targets the app running on port 9000 in my Pod.
I can't find the reason why the Ingress Controller won't forward the traffic to my Service targeting port 9000, the port my app Pod listens on.
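For completeness, this is the kind of check I run from the VM; a sketch, where 10.33.27.35 is the internal load balancer IP from above:

nc -vz 10.33.27.35 9000
# or, where nc is not installed:
telnet 10.33.27.35 9000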
Thank you!!!