I am trying to use Keycloak with oauth2-proxy to secure kubernetes-dashboard. I found several posts about this error, but none was specific to my problem. Maybe I am just missing a simple step.
So far I have followed this guide: https://jamesveitch.com/homelab/02.idam/02.keycloak/
Keycloak is installed in the Kubernetes cluster in the namespace keycloak; kubernetes-dashboard is installed in the namespace kubernetes-dashboard.
Keycloak is reachable at auth.mydomain.com and the dashboard should be reachable at dashboard.mydomain.com.
I have created a user "test" and assigned it to a group kubernetes-admin, which I want to use to grant access to the dashboard.
I configured Keycloak like this:
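I actually set this up in the Keycloak admin console; roughly, the equivalent with kcadm.sh would look like the sketch below. The realm (dev), client id, client secret, group and user come from my setup; the admin login and the redirect URI for the oauth2-proxy callback are my assumptions.

# Log in with the admin CLI (server path and admin user are assumptions).
kcadm.sh config credentials --server https://auth.mydomain.com/auth --realm master --user admin

# Confidential client "dashboard" in the "dev" realm; the redirect URI is where
# oauth2-proxy expects its callback.
kcadm.sh create clients -r dev \
  -s clientId=dashboard \
  -s enabled=true \
  -s publicClient=false \
  -s secret=a27b97fb-eafc-420c-88ba-8017beb54180 \
  -s 'redirectUris=["https://dashboard.mydomain.com/oauth2/callback"]'

# Group and test user (the group membership and the "groups" protocol mapper
# on the client I added in the console).
kcadm.sh create groups -r dev -s name=kubernetes-admin
kcadm.sh create users -r dev -s username=test -s enabled=true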
For the ClusterRoleBinding of the kubernetes-admin group I use this YAML:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: keycloak-admin-group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  # NOTE: This is a super administrator and can do everything.
  # Consider a dedicated role in your actual operation.
  name: cluster-admin
subjects:
- kind: Group
  name: kubernetes-admin
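To sanity-check the binding I impersonate the group with kubectl (run with an admin kubeconfig; the user name here is only for illustration):

# Should answer "yes" once the ClusterRoleBinding for the group is in place.
kubectl auth can-i get pods --all-namespaces --as=test --as-group=kubernetes-admin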
For the configuration of oauth2-proxy and the Ingress I use:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/use-port-in-redirects: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
  name: dashboard
  namespace: kubernetes-dashboard
spec:
  tls:
  - hosts:
    - dashboard.mydomain.com
    secretName: dashboard.mydomain.com-tls
  rules:
  - host: dashboard.mydomain.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.mydomain.com
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 4180
        path: /oauth2
  tls:
  - hosts:
    - dashboard.mydomain.com
    secretName: dashboard.mydomain.com-tls
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=keycloak
        - --client-id=dashboard
        - --client-secret=a27b97fb-eafc-420c-88ba-8017beb54180
        - --login-url=https://auth.mydomain.com/auth/realms/dev/protocol/openid-connect/auth
        - --redeem-url=https://auth.mydomain.com/auth/realms/dev/protocol/openid-connect/token
        - --validate-url=https://auth.mydomain.com/auth/realms/dev/protocol/openid-connect/userinfo
        - --keycloak-group=kubernetes-admin
        - --email-domain=*
        - --http-address=0.0.0.0:4180
        - --reverse-proxy=true
        - --pass-access-token=true
        - --set-xauthrequest=true
        - --ssl-insecure-skip-verify=true
        - --ssl-upstream-insecure-skip-verify=true
        - --cookie-domain=.mydomain.com
        - --whitelist-domain=.mydomain.com
        - --upstream=https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
        # Register a new application
        # https://github.com/settings/applications/new
        env:
        # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));'
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: ekNsWlN6MkphVVVmTnNZUTBEZnZVQT09
        image: quay.io/oauth2-proxy/oauth2-proxy
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kubernetes-dashboard
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
This YAML file is used for a development environment with letsencrypt-staging and should ignore SSL errors.
What works: I can open https://dashboard.mydomain.com/oauth2/sign_in and press "Sign in with Keycloak". But then I am redirected to a 403 Permission Denied page. The oauth2-proxy log says: Error on /oauth2/callback?error=invalid_request&error_description=Invalid+scopes
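For reference, this is how I tail those proxy logs (deployment name and namespace as in the manifest above):

kubectl -n kubernetes-dashboard logs deploy/oauth2-proxy --tail=50 -f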
Please keep in mind that I would like to use the kubernetes-dashboard service as upstream; that's why I set --upstream=https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local. I hope this is right?
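The cluster-internal name follows <service>.<namespace>.svc.cluster.local, so the service name and HTTPS port should line up with this output (assuming the standard dashboard deployment):

# Service name and port used by the --upstream URL above.
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard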
How can I fix this? Any ideas? Is Keycloak configured correctly?
Some sites say to use https://* and http://* as valid redirect URIs; I tried that without luck.
I hope someone can help. I have already spent a lot of time to get this far, but no luck so far.
I just found out how to handle this error:
Under the dashboard client - Mappers - groups, deactivate "Full group path". This removes the leading slash from the group names in your token (you can check this under Client Scopes - Evaluate). It is needed for the oauth2-proxy parameter --keycloak-group=kubernetes-admin.
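To double-check the claim outside the console, I request a token directly and look at the groups. This is only a sketch: it assumes Direct Access Grants is enabled for the client, the password is a placeholder, and -k is there because of the letsencrypt-staging certificates.

# Fetch an access token for the test user (placeholder password).
TOKEN=$(curl -sk https://auth.mydomain.com/auth/realms/dev/protocol/openid-connect/token \
  -d grant_type=password -d client_id=dashboard \
  -d client_secret=a27b97fb-eafc-420c-88ba-8017beb54180 \
  -d username=test -d password=CHANGEME | jq -r .access_token)

# Decode the JWT payload (base64url -> base64, pad to a multiple of 4) and
# print the groups claim; with "Full group path" off it should show
# ["kubernetes-admin"] instead of ["/kubernetes-admin"].
PAYLOAD=$(echo "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "$PAYLOAD" | base64 -d | jq .groups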
Create a client scope "Users" under the dev realm (under Client Scopes, with default values).
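I did this in the console as well; via the admin CLI it would be roughly:

# Create the "Users" client scope in the dev realm with default settings.
kcadm.sh create client-scopes -r dev -s name=Users -s protocol=openid-connect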
The upstream in the sample above is assigned correctly to access the dashboard directly within the cluster, so no change is needed there.
After this you should be able to open the dashboard without errors. But I still have the problem that the bearer token is sent but currently not used to log in automatically; you still have to paste your token. So I am still investigating how to get this to work.