I'm encountering the following error message when I try to deploy to an EKS cluster, even though I've already added the CodeBuild IAM role to aws-auth.yaml like this:
- rolearn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/codebuild-eks
  username: codebuild-eks
  groups:
    - system:masters
error: unable to recognize "deployment.yml": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
And here is my CodeBuild buildspec:
version: 0.2
phases:
  install:
    commands:
      - curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.11.10/2019-06-21/bin/linux/amd64/kubectl
      - chmod +x ./kubectl
      - mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH
      - kubectl version --short --client
      - curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.13.7/2019-06-11/bin/linux/amd64/aws-iam-authenticator
      - chmod +x ./aws-iam-authenticator
      - mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$HOME/bin:$PATH
      - aws-iam-authenticator help
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - echo Logging in to Amazon EKS...
      - mkdir -p ~/.kube
      - aws s3 cp s3://ppshein-eks/config ~/.kube/config
      - export KUBECONFIG=$KUBECONFIG:~/.kube/config
      - aws eks --region $AWS_DEFAULT_REGION update-kubeconfig --name $AWS_CLUSTER_NAME
  build:
    commands:
      - echo Entered the build phase...
      - echo Change directory to secondary source
      - cd $CODEBUILD_SRC_DIR
      - echo List directory
      - ls -la
      - kubectl get pods --kubeconfig ~/.kube/config
      - kubectl apply -f deployment.yml
The problem is that when CodeBuild runs the kubectl apply -f deployment.yml statement, I get the error message above, but the earlier kubectl get pods --kubeconfig ~/.kube/config works fine.
Please let me know what I've missed adding or configuring. Thanks.
This error indicates that kubectl could not reach a Kubernetes API server and fell back to the default endpoint at 127.0.0.1:8080 (localhost). Since you have configured the kubeconfig with the update-kubeconfig command, it seems multiple configs are being merged due to this line:
- export KUBECONFIG=$KUBECONFIG:~/.kube/config
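A quick way to see why that merge goes wrong: in a fresh CodeBuild container KUBECONFIG is typically unset, so $KUBECONFIG expands to an empty string and the export leaves a leading empty entry in the colon-separated list. A minimal sketch (no kubectl required):

```shell
# Simulate the pre_build export with KUBECONFIG initially unset,
# as it typically is in a fresh CodeBuild container.
unset KUBECONFIG
export KUBECONFIG=$KUBECONFIG:~/.kube/config

# The empty $KUBECONFIG leaves a leading ':' -- an empty path entry
# in the colon-separated list that kubectl merges.
echo "$KUBECONFIG"
```

That leading colon is exactly the kind of artifact that kubectl config view (below) will surface.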
To see the resulting config that kubectl actually uses, run this command before the failing command:
- kubectl config view # Add this
- kubectl apply -f deployment.yml
To fix this, I recommend changing the export in the pre_build phase as follows:
- export KUBECONFIG=~/.kube/config
Or, use the --context flag with kubectl to select the correct context:
- export KUBECONFIG=file1:file2
- kubectl get pods --context=cluster-1
- kubectl get pods --context=cluster-2
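For illustration (the file paths below are placeholders, not from the original post): kubectl treats KUBECONFIG as a colon-separated list and merges the files left to right, with earlier files taking precedence for duplicate keys. You can split the list the same way to check the merge order:

```shell
# Hypothetical kubeconfig paths for two clusters (placeholders).
export KUBECONFIG=/tmp/cluster-1.yaml:/tmp/cluster-2.yaml

# kubectl merges these entries left to right; split the list the
# same way to inspect the order kubectl will use.
IFS=':' read -r -a entries <<< "$KUBECONFIG"
for f in "${entries[@]}"; do
  echo "merged: $f"
done
```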