Say we have a Kubernetes cluster with Google as the OIDC provider for authentication. Every developer using that cluster has a ~/.kube/config
with the following user entry configured:
user:
  auth-provider:
    config:
      client-id: <client-id>
      client-secret: <client-secret>
      id-token: <id-token>
      idp-issuer-url: https://accounts.google.com
      refresh-token: <refresh-token>
When a developer leaves the organisation he is removed from the Google login, so he can no longer use this ~/.kube/config
to access Kubernetes resources: he would have to log in to Google again, and he can no longer do that.
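To make that failure concrete, here is a rough sketch of the token refresh that kubectl performs under the hood once the cached id-token expires (assuming Google's standard OAuth 2.0 token endpoint and the Python requests library; the placeholder values stand in for the kubeconfig entries above):

# Sketch: the refresh_token grant that keeps the kubeconfig working.
# Once the developer's Google account is removed, Google rejects this
# request (typically HTTP 400 with error "invalid_grant").
import requests

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"  # Google's OAuth 2.0 token endpoint

resp = requests.post(TOKEN_ENDPOINT, data={
    "grant_type": "refresh_token",
    "client_id": "<client-id>",          # from the leaked kubeconfig
    "client_secret": "<client-secret>",  # from the leaked kubeconfig
    "refresh_token": "<refresh-token>",  # from the leaked kubeconfig
})
print(resp.status_code, resp.json())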
But the client-id and client-secret are still leaked. Is the leakage of the client-secret of any security concern here? Could the client-id and client-secret be used to build a different app and exploited to make existing organisation users sign in, giving the attacker an id-token on those users' behalf? Please suggest.
PS: the credential type of this client-id and client-secret is "Other" and not a "Web application" with a redirect URL.
First and foremost, after leaving a job an employee is forbidden from using confidential credentials or accessing company accounts; that is why developers no longer have access to such data once they leave.
The OpenID Connect flow in Kubernetes:
1. Log in to your identity provider
2. Your identity provider will provide you with an access_token, id_token and a refresh_token
3. When using kubectl, use your id_token with the --token flag or add it directly to your kubeconfig
4. kubectl sends your id_token in a header called Authorization to the API server (see the sketch after this list)
5. The API server will make sure the JWT signature is valid by checking against the certificate named in the configuration
6. Check to make sure the id_token hasn’t expired
7. Make sure the user is authorized
8. Once authorized, the API server returns a response to kubectl
9. kubectl provides feedback to the user
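A minimal sketch of step 4 (assuming the Python requests library; the API server address and CA bundle path are placeholders, not values from the question):

# Sketch: the id_token travels to the API server as a Bearer token in the
# Authorization header - the same thing kubectl does internally.
import requests

API_SERVER = "https://<api-server>:6443"  # placeholder cluster endpoint
ID_TOKEN = "<id-token>"                   # the OIDC id_token from the kubeconfig

resp = requests.get(
    f"{API_SERVER}/api/v1/namespaces",
    headers={"Authorization": f"Bearer {ID_TOKEN}"},
    verify="/path/to/cluster-ca.crt",     # placeholder: the cluster's CA bundle
)
print(resp.status_code)  # 401 Unauthorized once the id_token is expired or invalid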
The most important points for you are 5, 6 and 7: the departed developer's id_token quickly expires, and without a working Google account it cannot be refreshed, so users who have left (or members of another organisation who obtain such credentials) cannot access your cluster.
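Here is a minimal sketch of the checks in steps 5 and 6 (assuming Google's published JWKS endpoint and the PyJWT library; this only mirrors what the API server does, it is not the API server's own code):

# Sketch: validate an OIDC id_token - signature against the issuer's
# published keys, expiry, audience (the client-id) and issuer.
import jwt  # PyJWT

JWKS_URL = "https://www.googleapis.com/oauth2/v3/certs"  # Google's JWKS endpoint
ID_TOKEN = "<id-token>"
CLIENT_ID = "<client-id>"

signing_key = jwt.PyJWKClient(JWKS_URL).get_signing_key_from_jwt(ID_TOKEN)
claims = jwt.decode(
    ID_TOKEN,
    signing_key.key,
    algorithms=["RS256"],
    audience=CLIENT_ID,
    issuer="https://accounts.google.com",
)
# jwt.decode raises ExpiredSignatureError for an expired token, so a leaked
# id_token that can no longer be refreshed stops being usable on its own.
print(claims["sub"], claims["exp"])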
The id_token can’t be revoked, it’s like a certificate so it should be short-lived. There’s no easy way to authenticate to the Kubernetes dashboard without using the kubectl proxy command or a reverse proxy that injects the id_token.
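To see how short-lived it is, you can decode the token payload without verifying it (a quick inspection sketch, again assuming the PyJWT library; Google-issued id_tokens typically expire about an hour after issuance):

# Sketch: read the exp claim of an id_token without signature verification.
from datetime import datetime, timezone
import jwt  # PyJWT

claims = jwt.decode("<id-token>", options={"verify_signature": False})
print("expires at:", datetime.fromtimestamp(claims["exp"], tz=timezone.utc))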
More information can be found here: kubernetes-cluster-access. So you do not have to be concerned about leaking the client_id and client_secret.
You can also delete the cluster/context/user entries, e.g.:
$ kubectl config unset users.gke_project_zone_name
The client_secret is now optional in the Kubernetes OIDC config, which means it can support public clients (with or without a client_secret) as well as confidential clients (with a client_secret, per kubectl user).
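For instance, a public-client user entry can simply omit the secret (a sketch mirroring the kubeconfig from the question, with client-secret left out; the remaining keys are unchanged):

user:
  auth-provider:
    config:
      client-id: <client-id>
      id-token: <id-token>
      idp-issuer-url: https://accounts.google.com
      refresh-token: <refresh-token>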
So the answer to each of your questions is no: there is no need to be concerned about the security aspect here.
I hope it helps.