I spun up a two-node cluster in AWS and installed Traefik using Helm. The service's external IP is stuck in pending status. I checked several sources but couldn't find anything to resolve the issue. Any help is appreciated.
helm install stable/traefik
ubuntu@ip-172-31-34-78:~$ kubectl get pods -n default
NAME                                      READY   STATUS    RESTARTS   AGE
unhinged-prawn-traefik-67b67f55f4-tnz5w   1/1     Running   0          18m
ubuntu@ip-172-31-34-78:~$ kubectl get services -n default
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kubernetes               ClusterIP      10.96.0.1       <none>        443/TCP                      55m
unhinged-prawn-traefik   LoadBalancer   10.102.38.210   <pending>     80:30680/TCP,443:32404/TCP   18m
ubuntu@ip-172-31-34-78:~$ kubectl describe service unhinged-prawn-traefik
Name:                     unhinged-prawn-traefik
Namespace:                default
Labels:                   app=traefik
                          chart=traefik-1.52.6
                          heritage=Tiller
                          release=unhinged-prawn
Annotations:              <none>
Selector:                 app=traefik,release=unhinged-prawn
Type:                     LoadBalancer
IP:                       10.102.38.210
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  30680/TCP
Endpoints:                10.32.0.6:80
Port:                     https  443/TCP
TargetPort:               httpn/TCP
NodePort:                 https  32404/TCP
Endpoints:                10.32.0.6:8880
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
ubuntu@ip-172-31-34-78:~$ kubectl get svc unhinged-prawn-traefik --namespace default -w
NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
unhinged-prawn-traefik   LoadBalancer   10.102.38.210   <pending>     80:30680/TCP,443:32404/TCP   24m
I'm not sure how you installed your cluster, but fundamentally the kube-controller-manager (the component whose service controller actually calls the AWS API) cannot talk to AWS to create a load balancer to serve traffic for your Service.
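You can usually confirm this by grepping the kube-controller-manager logs for AWS errors. A sketch, assuming a kubeadm-style control plane where the pod name is derived from the node's hostname (substitute your own):

# Find the controller-manager pod, then look for AWS/load-balancer errors
kubectl get pods -n kube-system | grep kube-controller-manager
kubectl logs -n kube-system kube-controller-manager-ip-172-31-34-78 | grep -i -e aws -e loadbalancer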
It could be as simple as your instances missing the required instance profile (IAM role) with the permissions to create a load balancer and routes.
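For reference, a rough sketch of attaching such permissions with the AWS CLI. The role name is a placeholder, and the action list is deliberately broad (e.g. elasticloadbalancing:*); trim it down for production:

# "k8s-node-role" is a placeholder -- use the role behind your instances' profile
aws iam put-role-policy --role-name k8s-node-role --policy-name k8s-cloud-provider \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:CreateSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:ModifyInstanceAttribute",
        "elasticloadbalancing:*"
      ],
      "Resource": "*"
    }]
  }'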
It could also be that you need to add this flag to all your kubelets, your kube-apiserver, and your kube-controller-manager:
--cloud-provider=aws
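Where exactly you set the flag depends on how the cluster was built. Assuming a kubeadm-style install with default paths, it would look roughly like this:

# Control-plane components run as static pods; add --cloud-provider=aws to the
# command: section of each manifest (the kubelet restarts them automatically)
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml

# On every node, pass the flag to the kubelet and restart it
echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' | sudo tee /etc/default/kubelet
sudo systemctl restart kubelet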
It could also be that you are missing these EC2 tags on your instances:
KubernetesCluster=<yourclustername>
kubernetes.io/cluster/kubernetes=owned
k8s.io/role/node=1
Note that you might also need the KubernetesCluster=<yourclustername> tag on the subnet your nodes are in.
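A sketch of adding these tags with the AWS CLI, assuming a cluster named "kubernetes" and placeholder instance/subnet IDs:

# Tag both node instances (substitute your instance IDs and cluster name)
aws ec2 create-tags \
  --resources i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb \
  --tags Key=KubernetesCluster,Value=kubernetes \
         Key=kubernetes.io/cluster/kubernetes,Value=owned \
         Key=k8s.io/role/node,Value=1

# Tag the subnet the nodes live in
aws ec2 create-tags \
  --resources subnet-0cccccccccccccccc \
  --tags Key=KubernetesCluster,Value=kubernetes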
It could also be that your K8s nodes don't have a providerID in their spec, which should look like this:
ProviderID: aws:///<availability-zone>/<instance-id>
# You can add it with kubectl edit node <node-name>
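If you prefer a non-interactive patch, something like this should work, with the node name, availability zone, and instance ID below being placeholders:

# providerID is immutable once set, so this only works on nodes where it's empty
kubectl patch node ip-172-31-34-78 \
  -p '{"spec":{"providerID":"aws:///us-east-1a/i-0aaaaaaaaaaaaaaaa"}}'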
Note that the --cloud-provider flag is being deprecated in favor of the out-of-tree Cloud Controller Manager (for AWS, the cloud-provider-aws project).