In brief, these are the steps I have done:
Launched 2 new t3.small instances in AWS, pre-tagged with key kubernetes.io/cluster/<cluster-name> and value member.
Tagged the new security group with the same tag and opened all the ports mentioned here - https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports
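For reference, the tagging step can be scripted with the AWS CLI; a minimal sketch, where the instance IDs, security group ID and cluster name are hypothetical placeholders:

aws ec2 create-tags \
  --resources i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb sg-0ccccccccccccccc \
  --tags Key=kubernetes.io/cluster/mycluster,Value=member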
Changed the hostname to the output of curl http://169.254.169.254/latest/meta-data/local-hostname and verified it with hostnamectl.
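Roughly equivalent to this sketch, using the same metadata endpoint:

sudo hostnamectl set-hostname "$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"
hostnamectl    # confirm the static hostname matches the metadata value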
Rebooted
Configured the AWS CLI as described in https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
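That boils down to running the interactive setup on each node (keys and region are placeholders; the region here would be ap-south-1):

aws configure    # prompts for access key ID, secret access key, default region and output format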
Created an IAM role with full ("*") permissions and assigned it to the EC2 instances.
Installed kubelet, kubeadm and kubectl using apt-get.
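Following the install guide linked above, that presumably looked roughly like this (a sketch; the repository and key URLs are the ones from that guide at the time):

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl    # prevent accidental upgrades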
Created /etc/default/kubelet with the content KUBELET_EXTRA_ARGS=--cloud-provider=aws.
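In other words (a sketch; the restart is an assumption so kubelet picks up the new flag):

echo 'KUBELET_EXTRA_ARGS=--cloud-provider=aws' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet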
Ran kubeadm init --pod-network-cidr=10.244.0.0/16 on one instance and used its output to run kubeadm join ... on the other node.
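For context, the join command printed by kubeadm init has this general shape (placeholders only; the real token and CA hash come from the init output):

sudo kubeadm init --pod-network-cidr=10.244.0.0/16                # on the master
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>                  # on the worker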
Installed Helm.
Installed the ingress controller with a default backend.
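Judging by the chart=nginx-ingress-1.4.0, heritage=Tiller and release=ingress labels shown below, the install was presumably something like this Helm 2 command (a sketch, not necessarily the exact invocation):

helm install stable/nginx-ingress --name ingress --namespace ingress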
I had previously tried the above steps, but installed the ingress from the instructions on kubernetes.github.io. Both attempts ended up with the same status: EXTERNAL-IP stuck at <pending>.
The current status is:
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                                                   IP              NODE
ingress       ingress-nginx-ingress-controller-77d989fb4d-qz4f5                      10.244.1.13     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
ingress       ingress-nginx-ingress-default-backend-7f7bf55777-dhj75                 10.244.1.12     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   coredns-86c58d9df4-bklt8                                               10.244.1.14     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   coredns-86c58d9df4-ftn8q                                               10.244.1.16     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal                      172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal   172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-flannel-ds-amd64-87k8p                                            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-flannel-ds-amd64-f4wft                                            172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   kube-proxy-79cp2                                                       172.31.3.106    ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system   kube-proxy-sv7md                                                       172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal            172.31.12.119   ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system   tiller-deploy-5b7c66d59c-fgwcp                                         10.244.1.15     ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kubectl get svc --all-namespaces -o wide
NAMESPACE     NAME                                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE   SELECTOR
default       kubernetes                              ClusterIP      10.96.0.1        <none>        443/TCP                      73m   <none>
ingress       ingress-nginx-ingress-controller        LoadBalancer   10.97.167.197    <pending>     80:32722/TCP,443:30374/TCP   59m   app=nginx-ingress,component=controller,release=ingress
ingress       ingress-nginx-ingress-default-backend   ClusterIP      10.109.198.179   <none>        80/TCP                       59m   app=nginx-ingress,component=default-backend,release=ingress
kube-system   kube-dns                                ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP                73m   k8s-app=kube-dns
kube-system   tiller-deploy                           ClusterIP      10.96.216.119    <none>        44134/TCP                    67m   app=helm,name=tiller
kubectl describe service -n ingress ingress-nginx-ingress-controller
Name:                     ingress-nginx-ingress-controller
Namespace:                ingress
Labels:                   app=nginx-ingress
                          chart=nginx-ingress-1.4.0
                          component=controller
                          heritage=Tiller
                          release=ingress
Annotations:              service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector:                 app=nginx-ingress,component=controller,release=ingress
Type:                     LoadBalancer
IP:                       10.104.55.18
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  32318/TCP
Endpoints:                10.244.1.20:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32560/TCP
Endpoints:                10.244.1.20:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
The inline policy attached to the IAM role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}
kubectl get nodes -o wide
NAME                                           STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal   Ready    master   6d19h   v1.13.4   172.31.12.119   XX.XXX.XXX.XX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal    Ready    <none>   6d19h   v1.13.4   172.31.3.106    XX.XXX.XX.XXX   Ubuntu 16.04.5 LTS   4.4.0-1077-aws   docker://18.6.3
Could someone please point out what I am missing here? Everywhere on the internet it says a Classic ELB will be provisioned automatically.
For AWS ELB (type Classic) you have to explicitly specify --cloud-provider=aws in the kube service manifests located in /etc/kubernetes/manifests on the master node:

kube-controller-manager.yaml
kube-apiserver.yaml
Then restart the services:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Add the flag alongside the other command arguments, at the bottom or the top as you wish. The result should look similar to this:
In kube-controller-manager.yaml:
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
In kube-apiserver.yaml:
spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws
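Since these are static pod manifests, kubelet recreates the control plane pods on its own once the files change and kubelet is restarted. After that, the controller manager should provision the Classic ELB; one way to watch for it (sketch):

kubectl get svc -n ingress ingress-nginx-ingress-controller -w    # EXTERNAL-IP should move from <pending> to an ELB hostname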