Exposing a Kubernetes app using an AWS Elastic Load Balancer

1/28/2019

I created an internal AWS application load balancer (ALB), and in the AWS console its state shows as active. Note that I created this ALB using a Jenkins job, and in the job I specified the AWS EC2 instance that is configured as my Kubernetes master.

And I can see the following details after the job completed successfully.

In the AWS console, under Description, I can see the below details -

DNS  internal-myservices-987070943.us-east-1.elb.amazonaws.com
Scheme  internal
Type  application
IP address type  ipv4

Then there is a Listeners tab, under which I see a Listener ID with HTTPS: 443

It also shows Rules with the following 2 rules -

IF Path is /*  THEN Forward to myservices-LB
IF Requests otherwise not routed  THEN Forward to myservices-LB

Also, I see other tabs like Monitoring, Integrated services and Tags.

Now, I have a Kubernetes cluster with the following service created with Type: LoadBalancer (source reference: https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml):

apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: LoadBalancer

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80
          name: hello-kenzan
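
Aside: the extensions/v1beta1 Deployment API shown above was current at the time but has since been removed; on newer clusters the same Deployment would need apps/v1 and an explicit selector. A minimal sketch of the equivalent:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kenzan
  labels:
    app: hello-kenzan
spec:
  selector:            # required in apps/v1; must match the template labels
    matchLabels:
      app: hello-kenzan
      tier: hello-kenzan
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello-kenzan
        tier: hello-kenzan
    spec:
      containers:
      - image: gopikrish81/hello-kenzan:latest
        name: hello-kenzan
        ports:
        - containerPort: 80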

After I created the service with -

kubectl apply -f k8s/manual-deployment.yaml
kubectl get svc

It is showing EXTERNAL-IP as <pending>. But since I have created a LoadBalancer type, why isn't it creating an IP?
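
For reference, the output looks like this (cluster IP, port mapping and age are illustrative):

NAME           TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
hello-kenzan   LoadBalancer   10.96.143.67   <pending>     80:31381/TCP   5m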

FYI, I can access the app using curl <master node>:<nodeport>, and I can also access it through proxy forwarding.

So without the IP created, there is no way for my app to be exposed via DNS, right? Please suggest what I can do to expose my service using the DNS name internal-myservices-987070943.us-east-1.elb.amazonaws.com.

I need the app to be exposed with a DNS name like http://internal-myservices-987070943.us-east-1.elb.amazonaws.com/#

Thanks in advance

UPDATE as of 29/1

I followed the steps from the answer in this post: kube-controller-manager don't start when using "cloud-provider=aws" with kubeadm

1) I modified the file "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" by adding the line below under [Service] -

Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf

And I created this cloud-config.conf as below -

[Global]
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes

I am not sure what this Tag and ID refer to, but when I run the below command I can see output mentioning clusterName as "kubernetes" -

kubeadm config view

Then I executed -

systemctl daemon-reload
systemctl restart kubelet

2) Then, as mentioned in that answer, I added --cloud-provider=aws in both kube-controller-manager.yaml and kube-apiserver.yaml.
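
Concretely, the change in each static pod manifest under /etc/kubernetes/manifests was along these lines (a sketch of the relevant command fragment only; all of the flags kubeadm generated stay as they were):

spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=aws
    - --cloud-config=/etc/kubernetes/cloud-config.conf
    # note: the cloud-config file must also be readable inside the pod,
    # which may require an additional hostPath volume mount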

3) I also added the below annotation in the manual-deployment.yaml of my application -

annotations:
  service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"

https://github.com/kenzanlabs/kubernetes-ci-cd/blob/master/applications/hello-kenzan/k8s/manual-deployment.yaml
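
Note that this annotation belongs in the metadata of the Service object, not the Deployment; a sketch of the placement, assuming the same hello-kenzan Service from above:

apiVersion: v1
kind: Service
metadata:
  name: hello-kenzan
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
  labels:
    app: hello-kenzan
# spec unchanged from the manifest above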

Now, when I deployed using kubectl apply -f k8s/manual-deployment.yaml, the pod itself was not getting created, as I checked with kubectl get po --all-namespaces.

So I reverted step 2 above and deployed again, and now the pod was created successfully. But it still shows <pending> for EXTERNAL-IP when I do kubectl get svc.

I even renamed my master and worker nodes to match the EC2 instance private DNS names, ip-10-118-6-35.ec2.internal and ip-10-118-11-225.ec2.internal, as mentioned in the post below (under the section "Proper Node Names"), and reconfigured the cluster, but still no luck. https://medium.com/jane-ai-engineering-blog/kubernetes-on-aws-6281e3a830fe

Also, my EC2 instances have an IAM role attached, and when I look at its details I can see 8 policies applied to the role. In one of the policies I can see the statement below (there are many other Actions too, which I am not posting here):

{
   "Action": "elasticloadbalancing:*",
   "Resource": "*",
   "Effect": "Allow"
}
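
For comparison, my understanding is that the in-tree AWS cloud provider also needs EC2 describe-style permissions on the node role, not just ELB access; an illustrative (not exhaustive) statement:

{
   "Action": [
      "ec2:DescribeInstances",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSubnets",
      "ec2:DescribeRouteTables"
   ],
   "Resource": "*",
   "Effect": "Allow"
}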

I am clueless as to what other settings I am missing. Please suggest!

UPDATE as of 30/1

I did the below additional steps as mentioned in this blog - https://blog.scottlowe.org/2018/09/28/setting-up-the-kubernetes-aws-cloud-provider/

1) Added the AWS tag "kubernetes.io/cluster/kubernetes" to all of my EC2 instances (master and worker nodes) and also to my security group. The equivalent CLI call is sketched below.
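
The equivalent AWS CLI call would be roughly as below (the worker instance and security group IDs are placeholders; "owned" is the conventional value for this tag key):

aws ec2 create-tags \
  --resources i-02dbf9b3a7d9163e7 i-0xxxxxxxxxxxxxxxx sg-0xxxxxxxxxxxxxxxx \
  --tags Key=kubernetes.io/cluster/kubernetes,Value=owned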

2) I haven't added apiServerExtraArgs, controllerManagerExtraArgs and nodeRegistration manually in a configuration file (that config-file approach is sketched after the cloud-config below). What I did instead was reset the cluster entirely using "sudo kubeadm reset -f" and then add this to the kubeadm conf file on both the master and worker nodes -

Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws --cloud-config=/etc/kubernetes/cloud-config.conf

cloud-config.conf -

[Global]
KubernetesClusterTag=kubernetes.io/cluster/kubernetes
KubernetesClusterID=kubernetes
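
For reference, the config-file approach from the blog that I skipped would look roughly like this under the kubeadm API of that era (a sketch assuming the v1alpha3 types; field names differ across kubeadm versions):

apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
apiServerExtraArgs:
  cloud-provider: aws
controllerManagerExtraArgs:
  cloud-provider: aws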

Then I executed this on both the master and worker nodes -

systemctl daemon-reload
systemctl restart kubelet

3) Now I created the cluster using the below command on the master node -

sudo kubeadm init --pod-network-cidr=192.168.1.0/16 --apiserver-advertise-address=10.118.6.35

4) Then I was able to join the worker node to the cluster successfully and deployed the flannel CNI.

After this, kubectl get nodes showed Ready status.

One important point to note is that there are kube-apiserver.yaml and kube-controller-manager.yaml files in the /etc/kubernetes/manifests path.

When I added --cloud-provider=aws in both of these yaml files, my deployments were not happening and pods were not getting created at all. When I removed the --cloud-provider=aws flag from kube-apiserver.yaml alone, deployments and pods succeeded.

And as requested by Matthew, when I modified the yaml for both kube-apiserver and kube-controller-manager, both of those pods got created again successfully. But since my application pods were not getting created, I removed the flag from kube-apiserver.yaml alone.

Also, I checked the logs with kubectl logs kube-controller-manager-ip-10-118-6-35.ec2.internal -n kube-system

But I don't see any exceptions or abnormalities. I can see this in the last part -

I0130 19:14:17.444485    1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-kenzan", UID:"c........", APIVersion:"apps/v1", ResourceVersion:"16212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-kenzan-56686879-ghrhj

I even tried to add the below annotation to manual-deployment.yaml, but it still shows the same <pending> -

service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"

Update as of 1/31

Finally made some progress!! The issue looks like a tag mismatch. In the key-value mapping for my AWS tags, I had the key as KubernetesCluster and the value as k8s, but in the config file I had mapped the key instead of the value.
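
In other words, the fix as I understand it (sketched; KubernetesCluster/k8s is the key/value pair my instances actually carry):

# EC2 tag on each instance (legacy naming):
#   Key   = KubernetesCluster
#   Value = k8s

[Global]
KubernetesClusterTag=k8s
KubernetesClusterID=k8s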

But now I can see the below logs in the kube-controller-manager pod -

1 aws.go:1041] Building AWS cloud-provider

1 aws.go:1007] Zone not specified in configuration file; querying AWS metadata service

1 controllermanager.go:208] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-02dbgfghjf3e7: "error listing AWS instances: \"RequestError: send request failed\ncaused by: Post https://ec2.us-east-1.amazonaws.com/: dial tcp 54.239.28.176:443: i/o timeout\""

Latest update

I set no_proxy for *.amazonaws.com and it seems to connect now (the exact change is sketched after the logs below). I say "seems to connect" in the sense that, checking the logs, I see only the below lines, without the timeout error that was occurring before I added the no_proxy. I also made sure the controller pod restarted after my edit and save. After this I see the below logs -

1 aws.go:1041] Building AWS cloud-provider

1 aws.go:1007] Zone not specified in configuration file; querying AWS metadata service

So I assume my controller is able to connect to AWS now, right? But unfortunately I am still getting <pending> when I create my service again :(
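
For anyone hitting the same timeout behind a corporate proxy, the change I made was along these lines (everything beyond .amazonaws.com in the list is illustrative; exactly where the variable must be set depends on how the controller-manager picks up its environment):

export no_proxy=.amazonaws.com,10.118.6.35,10.118.11.225,localhost,127.0.0.1
export NO_PROXY=$no_proxy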

Update as of 01/02

OK, to make it simple: I created an AWS application load balancer named myservices, and I got the following DNS name listed in the AWS console - internal-myservices-987070943.us-east-1.elb.amazonaws.com

I also have a Target Group created, showing the following under Description - Name as myservices-LB, Protocol as HTTPS, Port as 443, Target type as instance, and Load Balancer as myservices. Under the Targets tab I can see Registered targets showing my Instance ID as i-02dbf9b3a7d9163e7 with Port as 443 and other details. This instance is the EC2 instance which I have configured as the master node of my Kubernetes cluster.

Now when I try to access the LB DNS name directly with the URL internal-myservices-987070943.us-east-1.elb.amazonaws.com/api/v1/namespaces/default/services, I am getting "This site can't be reached".

Whereas if I proxy forward from my master node instance using kubectl proxy --address 0.0.0.0 --accept-hosts '.*' and then access my master node IP directly as below, I am able to browse - 10.118.6.35:8001/api/v1/namespaces/default/services

Isn't it possible for Kubernetes services deployed as either NodePort or LoadBalancer type to be accessible using the AWS load balancer DNS name directly? I even tested the connectivity using tracert internal-myservices-987070943.us-east-1.elb.amazonaws.com, and I can successfully reach the destination 10.118.12.196 in 18 hops.

But from my EC2 master node instance it is not tracing. Normally I have a proxy set with this command - "export {http,https,ftp}_proxy=http://proxy.ebiz.myorg.com:80" - and with it I can access even external URLs. Could this be the issue?

-- Gopi
amazon-elb
aws-application-load-balancer
aws-load-balancer
google-kubernetes-engine
kubernetes

1 Answer

1/29/2019

You are conflating two separate problems here.

It is showing EXTERNAL-IP as <pending>. But since I have created a LoadBalancer type, why isn't it creating an IP?

Presuming it has been showing <pending> long enough for you to compose an SO question about it, that means your controller-manager pods don't have the command-line flags --cloud-provider=aws and the accompanying --cloud-config=/the/path/here, and/or do not have an IAM instance role that enables those Pods to create load balancers on your behalf.

However, having said that: fixing that will create a new LoadBalancer, and it will not use your existing ALB. That's doubly true because type: LoadBalancer will create a classic ELB unless annotated to do otherwise.
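
(For completeness: to my knowledge the only "otherwise" the in-tree provider supports is a Network Load Balancer, selected via an annotation on the Service; it will not produce an ALB either. A sketch:)

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb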

I created an internal AWS application load balancer (ALB), and in the AWS console its state shows as active. Note that I created this ALB using a Jenkins job, and in the job I specified the AWS EC2 instance that is configured as my Kubernetes master.

Foremost, in general you should omit the masters from the rotation, since every NodePort is exposed on every worker in your cluster, and it is almost never the case that you want more load and more traffic flowing across the masters in your cluster. Plus, unless you have configured it otherwise, the actual Pods that are serving bytes for the Service are not going to live on the masters, and thus that network traffic would have to be rerouted anyway.

That aside, what you'd want is to change your Service to be type: NodePort, and point the ALB target group at that port:

spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: NodePort

You are free to actually include a nodePort: stanza in that ports: item if you wish, or you can just leave it blank and kubernetes will assign it one, very likely starting from the top of the NodePort port allocation range and working down. You can find out which one it chose via kubectl get -o yaml svc hello-kenzan.
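
For example, pinning it explicitly (30080 is just an assumed value inside the default 30000-32767 range) means the ALB target group can be configured once and left alone:

spec:
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # assumed example; must fall within the cluster's NodePort range
  selector:
    app: hello-kenzan
    tier: hello-kenzan
  type: NodePort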

-- mdaniel
Source: StackOverflow