I have a service running in a cluster in a namespace:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
amundsen-frontend LoadBalancer 10.100.59.220 a563823867e6f11ea82a90a9c116adac-124ae00284b50400.elb.us-west-2.amazonaws.com 80:31866/TCP 70m
And when I list the pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
amundsen-frontend-595b49d856-mkbjj 1/1 Running 0 74m
amundsen-metadata-5df6c6c8d8-nrk9f 1/1 Running 0 74m
amundsen-search-c8b7cd9f6-mspzr 1/1 Running 0 74m
dsci-amundsen-elasticsearch-client-65f858c656-znjfd 1/1 Running 0 74m
dsci-amundsen-elasticsearch-data-0 1/1 Running 0 74m
dsci-amundsen-elasticsearch-master-0 1/1 Running 0 74m
I'm not really sure what to do here. How do I access the URL? Can I port-forward in development? What do I do in production? The front-end pod is one I want to access, and so is the search pod.
This is what's in my charts.yaml for Helm:
frontEnd:
##
## frontEnd.serviceName -- The frontend service name.
##
serviceName: frontend
##
## frontEnd.imageVersion -- The version of the frontend container.
##
imageVersion: 2.0.0
##
## frontEnd.servicePort -- The port the frontend service will be exposed on via the loadbalancer.
##
servicePort: 80
With so little information I'm not sure I can solve your problem, but I'll try to help you find it.
To start, it would help to see your service and pod config:
kubectl get svc amundsen-frontend -o yaml
kubectl get pod amundsen-frontend-595b49d856-mkbjj -o yaml
You can try to reach the frontend from another pod; this will help you figure out whether the problem is in the pod itself or in the load-balancer/ingress layer. To get shell access inside the search pod's container, run:
kubectl exec -it amundsen-search-c8b7cd9f6-mspzr --container <name of container> -- sh
If the pod has only one container, you can omit the --container part of the command above.
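If you're not sure of the container name, one way to list the containers in a pod is a jsonpath query (using your search pod's name from the output above):
kubectl get pod amundsen-search-c8b7cd9f6-mspzr -o jsonpath='{.spec.containers[*].name}'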
Once inside, check whether you are able to reach the frontend with curl. Note that pod names like amundsen-frontend-595b49d856-mkbjj are not resolvable by DNS, so use the service name instead, and use the service port 80 (31866 is the NodePort, which is only exposed on the nodes, not inside the cluster):
curl amundsen-frontend
curl amundsen-frontend:80
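As for development: yes, you can port-forward. A minimal example, forwarding an arbitrary local port 8080 to the frontend service's port 80:
kubectl port-forward svc/amundsen-frontend 8080:80
Then open http://localhost:8080 in your browser. The same works for the search service, assuming it is exposed under a similar service name (e.g. svc/amundsen-search).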
If you are able to establish communication, then look for the problem at the load-balancer or ingress layer. If you have an ingress in front of the service, check its logs to see why requests are timing out. Security groups in AWS are also worth exploring. Is your ingress configured properly?
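For production: since amundsen-frontend is of type LoadBalancer, Kubernetes has already provisioned an AWS ELB for it, and the app should be reachable directly at the hostname shown under EXTERNAL-IP on port 80:
curl http://a563823867e6f11ea82a90a9c116adac-124ae00284b50400.elb.us-west-2.amazonaws.com
If that times out, kubectl describe svc amundsen-frontend will show the service's events, and the security group attached to the ELB is the next place to look.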