I am deploying a Kubernetes cluster in AWS with kops, using the following bash script:
#! /bin/bash
export NODE_SIZE=${NODE_SIZE:-t2.micro}
export MASTER_SIZE=${MASTER_SIZE:-t2.small}
export ZONES=${ZONES:-"eu-west-1a,eu-west-1b,eu-west-1c"}
export MASTER_ZONE=${MASTER_ZONE:-"eu-west-1a"}
export KOPS_STATE_STORE="s3://cluster-state"
export KOPS_DNS_NAME="demo.kubernetes.com"
export NAME="demo.kubernetes.com"
export SSL_CERTIFICATE_ARN="arn:aws:acm:eu-east-1:204911192323:certificate/5e1337bf-f92b-4ccb-9d9e-8197f8782de2"
kops create cluster \
--name=$NAME \
--node-count=3 \
--master-zones $MASTER_ZONE \
--master-count 1 \
--node-size $NODE_SIZE \
--master-size $MASTER_SIZE \
--zones $ZONES \
--topology private \
--dns-zone $KOPS_DNS_NAME \
--networking calico \
--bastion="true" \
--ssh-public-key ./ssh-keys/id_rsa.pub \
--ssh-access 215.138.19.90/32,215.159.102.92/32 \
--admin-access 215.138.19.90/32,215.159.102.92/32 \
--network-cidr 10.10.0.0/16 \
--target=terraform \
--api-ssl-certificate $SSL_CERTIFICATE_ARN \
--image "099720109477/ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20190406" \
--out=.
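Since kops runs with --target=terraform and --out=., it only writes Terraform configuration into the current directory; I then apply it with the standard Terraform workflow (nothing kops-specific, shown here just for completeness):
# Apply the Terraform configuration that kops generated into the current directory
terraform init
terraform plan -out tfplan
terraform apply tfplan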
After the cluster is created, I can communicate with it without any problems. The real issue arises when I try to create a user and then use that user to access the cluster. The user is created with the following bash script:
CLUSTERNAME=demo.kubernetes.com
NAMESPACE=development
USERNAME=asim
GROUPNAME=admin
AWS_PROFILE=demo
# Download the CA key and certificate from the kops state store
aws s3 sync s3://${CLUSTERNAME}-state/${CLUSTERNAME}/pki/private/ca/ ca-key --profile ${AWS_PROFILE}
aws s3 sync s3://${CLUSTERNAME}-state/${CLUSTERNAME}/pki/issued/ca/ ca-crt --profile ${AWS_PROFILE}
# Move the key and crt to the current directory
mv ca-key/*.key ca.key
mv ca-crt/*.crt ca.crt
# Generate private key for user
openssl genrsa -out ${USERNAME}.key 2048
CSR_FILE=$USERNAME.csr
KEY_FILE=$USERNAME.key
openssl req -new -key $KEY_FILE -out $CSR_FILE -subj "/CN=$USERNAME/O=$GROUPNAME"
openssl x509 -req -in $CSR_FILE -CA ca.crt -CAkey ca.key -CAcreateserial -out ${USERNAME}.crt -days 10000
CRT_FILE=$USERNAME.crt
cat <<EOF | kubectl create -f -
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin-user
  namespace: $NAMESPACE
subjects:
- kind: User
  name: $USERNAME
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl config set-credentials $USERNAME \
--client-certificate=$(pwd)/$CRT_FILE \
--client-key=$(pwd)/$KEY_FILE
kubectl config set-context $USERNAME-$CLUSTERNAME-context --cluster=$CLUSTERNAME --namespace=$NAMESPACE --user=$USERNAME
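As a sanity check (plain openssl, not part of the workflow itself), I also verify that the issued client certificate has the expected subject and chains back to the cluster CA pulled from the state store:
# Should print something like: subject=CN = asim, O = admin
openssl x509 -in ${USERNAME}.crt -noout -subject
# Should print: asim.crt: OK
openssl verify -CAfile ca.crt ${USERNAME}.crt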
When I switch to the newly created context for this user, I get the following error:
error: You must be logged in to the server (Unauthorized)
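Concretely, these are the commands that trigger it (context and namespace names as produced by the script above):
kubectl config use-context asim-demo.kubernetes.com-context
kubectl get pods --namespace development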
I did some troubleshooting and found that if I don't use a custom CA certificate during cluster creation, i.e. I remove the kops option --api-ssl-certificate, I can create users and they work fine. However, if I then update the cluster to use a properly signed certificate instead of the self-signed one, I start getting the error again. It is important for me to use our own certificates at the ELB level, so that accessing the API server from a browser does not produce a security warning, since this will be a production cluster. However, I cannot find anything in the documentation that explains how to create users while keeping this custom certificate at the ELB level. Is there something I am missing? Any help will be highly appreciated.
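To see which certificate is actually being presented at the ELB, I check the API endpoint with openssl (assuming the usual kops naming of api.<cluster-name> for the API DNS record):
# Show the issuer and subject of the certificate presented by the API ELB
openssl s_client -connect api.demo.kubernetes.com:443 -servername api.demo.kubernetes.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject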
I think your problem stems from the name of the group, which in the context of X509 client certificates is mapped to the certificate's organization field (/O).
Please try changing the 'admin' group name to the built-in one: 'system:masters'.
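In your user-creation script that would roughly mean re-issuing the CSR and certificate with the new group before recreating the kubectl credentials (a sketch reusing your variable names):
GROUPNAME=system:masters
# Re-issue the CSR and client certificate with O=system:masters
openssl req -new -key ${USERNAME}.key -out ${USERNAME}.csr -subj "/CN=${USERNAME}/O=${GROUPNAME}"
openssl x509 -req -in ${USERNAME}.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out ${USERNAME}.crt -days 10000
# Point kubectl at the re-issued certificate
kubectl config set-credentials ${USERNAME} \
  --client-certificate=$(pwd)/${USERNAME}.crt \
  --client-key=$(pwd)/${USERNAME}.key
Note that with system:masters the RoleBinding to cluster-admin becomes redundant, because that group is already bound to cluster-admin by the default bootstrap bindings.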