Need some serious help here! Thanks in advance.
I am trying to deploy a microservice-based Java application. I can reach the frontend service (webapp) in my browser, but it is unable to connect to the backend (auth service), and hence it shows an authentication failure.
The HTML login form points to "/login?referrerURL="
I checked the ingress-nginx logs:
Service "default/auth-srv" does not have any active Endpoint.
Service "default/voice-srv" does not have any active Endpoint.
Service "default/reporting-srv" does not have any active Endpoint.
Service "default/webapp-srv" does not have any active Endpoint.
The ingress-nginx config file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
    - host: <domain_name>
      http:
        paths:
          - path: /auth/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 8080
          - path: /emotion/?(.*)
            backend:
              serviceName: emotion-srv
              servicePort: 8080
          - path: /storage/?(.*)
            backend:
              serviceName: storage-srv
              servicePort: 8080
          - path: /voice/?(.*)
            backend:
              serviceName: voice-srv
              servicePort: 8080
          - path: /backend/?(.*)
            backend:
              serviceName: backend-srv
              servicePort: 8080
          - path: /reporting/?(.*)
            backend:
              serviceName: reporting-srv
              servicePort: 8080
          ## frontend
          - path: /?(.*)
            backend:
              serviceName: webapp-srv
              servicePort: 8080
How is the webapp (frontend) connecting to the auth service internally? Using the endpoint below:
http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/auth
The endpoint above was obtained from the pattern http://name-of-service.namespace.svc.cluster.local
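To check that this name resolves inside the cluster, a quick lookup can be run from a throwaway pod (busybox is just an example image that ships nslookup):
$ kubectl run dns-test --image=busybox -it --rm --restart=Never -- nslookup ingress-nginx-controller.ingress-nginx.svc.cluster.local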
Namespaces:
$ kubectl get namespace
NAME STATUS AGE
default Active 10h
ingress-nginx Active 10h
kube-node-lease Active 10h
kube-public Active 10h
kube-system Active 10h
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.100.99.130 <loadbalancer>.amazonaws.com 80:32794/TCP,443:30053/TCP 10h
ingress-nginx-controller-admission ClusterIP 10.100.230.126 <none> 443/TCP
Webapp (frontend) pod logs:
2020-07-28 20:57:08.139 INFO 1 --- [io-8080-exec-10] com.symtrain.controller.AdminController : Auth Controller User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36
2020-07-28 20:57:08.139 INFO 1 --- [io-8080-exec-10] com.symtrain.controller.AdminController : Auth Controller URL: http://testprod.symtrain.com/index
2020-07-28 20:57:08.139 INFO 1 --- [io-8080-exec-10] com.symtrain.controller.AdminController : Auth Controller flag:::::: Not IE
2020-07-28 20:57:08.139 INFO 1 --- [io-8080-exec-10] com.symtrain.controller.AdminController : Auth Controller URL inside normal return:
Some extra information about the deployments:
$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
auth-depl 2/2 2 2 4h40m
backend-depl 2/2 2 2 4h40m
emotion-depl 2/2 2 2 4h40m
reporting-depl 2/2 2 2 4h40m
storage-depl 2/2 2 2 4h40m
voice-depl 2/2 2 2 4h40m
webapp-depl 2/2 2 2 4h40m
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-srv ClusterIP 10.100.258.118 <none> 8080/TCP 4h41m
backend-srv ClusterIP 10.100.132.251 <none> 8080/TCP 4h41m
emotion-srv ClusterIP 10.100.32.154 <none> 8080/TCP 4h41m
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 10h
reporting-srv ClusterIP 10.100.64.80 <none> 8080/TCP 4h41m
storage-srv ClusterIP 10.100.36.25 <none> 8080/TCP 4h41m
voice-srv ClusterIP 10.100.212.180 <none> 8080/TCP 4h41m
webapp-srv ClusterIP 10.100.21.170 <none> 8080/TCP 4h41m
Endpoints:
$ kubectl get endpoints
NAME ENDPOINTS AGE
auth-srv 192.168.14.60:8080,192.168.44.116:8080 4h53m
backend-srv 192.168.32.14:8080,192.168.37.180:8080 4h53m
emotion-srv 192.168.58.110:8080,192.168.6.148:8080 4h53m
kubernetes 192.168.118.66:443,192.168.82.184:443 10h
reporting-srv 192.168.31.233:8080,192.168.33.218:8080 4h53m
storage-srv 192.168.23.217:8080,192.168.38.48:8080 4h53m
voice-srv 192.168.4.211:8080,192.168.59.186:8080 4h53m
webapp-srv 192.168.31.164:8080,192.168.62.143:8080 4h53m
Auth backend Deployment and Service:
$ kubectl describe deploy auth-depl
Name: auth-depl
Namespace: default
CreationTimestamp: Tue, 28 Jul 2020 16:32:44 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=auth
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=auth
  Containers:
   auth:
    Image:        <my_image_name>
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: auth-depl-787446c4db (2/2 replicas created)
Events: <none>
#####################################
$ kubectl describe svc auth-srv
Name: auth-srv
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=auth
Type: ClusterIP
IP: 10.100.218.108
Port: auth 8080/TCP
TargetPort: 8080/TCP
Endpoints: 192.168.14.60:8080,192.178.44.136:8080
Session Affinity: None
Events: <none>
NOTE: I am altering the IPs here for security purposes.
You are trying to hit /login?referrerURL=, and that path is not explicitly defined in your ingress rules.
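If that path should be handled by the auth service, you could add an explicit rule for it (a sketch only; whether /login belongs to auth-srv is an assumption about your app):
- path: /login/?(.*)
  backend:
    serviceName: auth-srv
    servicePort: 8080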
EDIT:
It's not common to use the ingress service internally within the cluster. The Ingress resource is designed to manage external access to internal services.
Note that this is also a security concern, as you are exposing the auth service (a backend service used by your UI layer) externally.
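Instead, the webapp can reach the auth service directly through its ClusterIP service, using the same DNS pattern you already quoted:
http://auth-srv.default.svc.cluster.local:8080
That keeps auth traffic inside the cluster and doesn't depend on the ingress controller at all.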
A quick solution for me was to delete the service and recreate it. Try this only if all the configs that others have suggested are correct.
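Something like the following, assuming the service manifest lives in a file (the filename here is hypothetical):
$ kubectl delete svc auth-srv
$ kubectl apply -f auth-srv.yaml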
As per the error messages, it could be that the labels you are using inside the Service are the cause. A Service looks up pods based on their pod labels:
$ kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
nginx   1/1     Running   0          16m   app=nginx
The selector's matchLabels must match the labels in the pod template:
selector:
  matchLabels:
    app: nginx
template:
  metadata:
    labels:
      app: nginx
You can test connectivity to the service from a temporary pod:
$ kubectl run bb --image=busybox -it --rm -- wget -O- http://auth-srv:8080
A complete example deployment for reference (controllers/nginx-deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 8080
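For completeness, a matching Service would look like this (a sketch; the name nginx-srv is made up for illustration):
apiVersion: v1
kind: Service
metadata:
  name: nginx-srv
spec:
  selector:
    app: nginx        # must match the pod template labels above
  ports:
    - port: 8080
      targetPort: 8080
If the selector here doesn't match the pod labels, the Service gets no endpoints, which is exactly the error in your ingress logs.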
If the above does not resolve the problem, then you might want to look at network policies. You could create a new ingress NetworkPolicy based on your pod labels to ensure that traffic actually reaches your pods (https://kubernetes.io/docs/concepts/services-networking/network-policies/).
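A minimal sketch of such a policy, assuming you simply want to allow all ingress traffic to the auth pods (the policy name is hypothetical):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-auth
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: auth        # the pod label from your auth deployment
  policyTypes:
    - Ingress
  ingress:
    - {}               # empty rule = allow traffic from all sources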