I have a Kubernetes cluster with multiple pods running different images.
I want to expose each of those pods so that I can access them from outside the cluster using an external DNS record.
For example, let's say I have 3 pods (pod1, pod2, pod3); I want to be able to access them from outside the cluster like this: pod1.mydomain.com, pod2.mydomain.com, pod3.mydomain.com.
Is there a way to do it?
Thanks
In AWS you can easily expose Pods using an ELB - Kubernetes can automatically create the proper ELBs for you. That means Kubernetes spawns an ELB and attaches it to the right Service using node ports. Once the ELBs are in place, you can use the external-dns plugin mentioned by GarMan, which attaches DNS records to those ELBs through its AWS Route53 integration. So you need to:

1. Create a Service of type LoadBalancer for each pod you want to expose - Kubernetes will provision the ELB for you.
2. Deploy external-dns so it can register Route53 records for those ELBs (a minimal Deployment sketch follows the Service example below).
An example Service would look like this:
apiVersion: v1
kind: Service
metadata:
  name: public-pod1
  namespace: your-deployment
  labels:
    app: pod1
  annotations:
    # Note: with this annotation AWS creates an internal ELB; remove it if the ELB should be internet-facing
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    # Hostname that external-dns will register for this ELB
    external-dns.alpha.kubernetes.io/hostname: pod1.mydomain.com.
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 0.0.0.0/0 # Ingress SG rule for your ELB
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80 # This should match your app's container port
  selector:
    app: pod1
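
Below is a minimal sketch of the external-dns Deployment for the Route53 provider. The namespace, image tag, domain filter and owner id are placeholders you need to adjust, and in a real cluster you also need the ServiceAccount/RBAC objects and Route53 IAM permissions described in the external-dns docs:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system   # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns   # needs RBAC plus IAM permissions for Route53
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.0   # pick a current release
          args:
            - --source=service               # watch Services for the hostname annotation
            - --provider=aws                 # manage records in Route53
            - --domain-filter=mydomain.com   # only touch this hosted zone
            - --policy=upsert-only           # create/update records, never delete
            - --registry=txt
            - --txt-owner-id=my-cluster      # placeholder identifier for this cluster

The --policy=upsert-only flag keeps external-dns from deleting records, which is a safe default while you are testing the setup.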
external-dns (https://github.com/kubernetes-incubator/external-dns) is designed to do exactly this: you annotate your Services with the DNS name you want to give them, and external-dns creates the relevant DNS records for you.
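
If the Service already exists, you can also attach that annotation from the command line instead of editing the manifest (pod1-svc is a placeholder Service name):

kubectl annotate service pod1-svc "external-dns.alpha.kubernetes.io/hostname=pod1.mydomain.com."

external-dns picks the change up on its next sync and creates or updates the corresponding Route53 record.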