I set up a Secret and two ServiceAccounts (secret-access-sa and no-access-sa) in the test namespace of a Kubernetes cluster, then used RoleBindings to bind them to the corresponding Roles (secret-access-cr and no-access-cr), where one grants access to secrets in the test namespace and the other does not. I then created two pods (secret-access-pod and no-access-pod), one using secret-access-sa and the other using no-access-sa, each running a shell script passed to its command that prints the environment variables. The question is: why do the pod logs show the secret even for no-access-pod, when the RBAC policy is configured to deny it access to secrets? The full manifests I applied are below.
apiVersion: v1
kind: Secret
metadata:
  namespace: test
  name: api-access-secret
type: Opaque
data:
  username: YWRtaW4= # base64 for "admin"
  password: cGFzc3dvcmQ= # base64 for "password"
---
# Service account for preventing API access
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: test
  name: no-access-sa
---
# Service account for accessing secrets API
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: test
  name: secret-access-sa
---
# A role with no access
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: no-access-cr
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: [""]
    verbs: [""]
---
# A role for reading/listing secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test
  name: secret-access-cr
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["secrets", "pods"]
    verbs: ["get", "watch", "list"]
---
# The role binding to combine the no-access service account and role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: no-access-rb
subjects:
  - kind: ServiceAccount
    name: no-access-sa
    namespace: test # required for ServiceAccount subjects
roleRef:
  kind: Role
  name: no-access-cr
  apiGroup: rbac.authorization.k8s.io
---
# The role binding to combine the secret-access service account and role
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: test
  name: secret-access-rb
subjects:
  - kind: ServiceAccount
    name: secret-access-sa
    namespace: test # required for ServiceAccount subjects
roleRef:
  kind: Role
  name: secret-access-cr
  apiGroup: rbac.authorization.k8s.io
---
# Create a pod with the no-access service account
kind: Pod
apiVersion: v1
metadata:
  namespace: test
  name: no-access-pod
spec:
  serviceAccountName: no-access-sa
  containers:
    - name: no-access-container
      image: k8s.gcr.io/busybox
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
            env;
          done
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            secretKeyRef:
              name: api-access-secret
              key: username
---
# Create a pod with the secret-access service account
kind: Pod
apiVersion: v1
metadata:
  namespace: test
  name: secret-access-pod
spec:
  serviceAccountName: secret-access-sa
  containers:
    - name: access-container
      image: k8s.gcr.io/busybox
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
            env;
          done
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            secretKeyRef:
              name: api-access-secret
              key: username
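Everything above is applied in one step; a minimal sketch, assuming the manifests are saved in a single file (the name rbac-test.yaml is just an example):

kubectl apply -f rbac-test.yaml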
In both cases I am able to see the value of SPECIAL_LEVEL_KEY in the pod logs: SPECIAL_LEVEL_KEY=admin
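For reference, I am reading the output with plain kubectl logs calls:

kubectl logs -n test no-access-pod
kubectl logs -n test secret-access-pod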
Please be aware that the authorization process (RBAC) applies first to you, the cluster operator using a client tool (kubectl). It simply verifies whether you are authorized to create the resource objects specified in the manifest files you shared. In your case, authorization includes checking whether you can perform the 'get' action on the 'secrets' resource, not whether any of the ServiceAccounts you declared can. Once the pod is scheduled, it is the kubelet, acting with its own credentials, that reads the secret and injects it as an environment variable; the pod's ServiceAccount token is never used for this, so the Role bound to no-access-sa is never consulted.
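You can also check the difference from the operator side with impersonation; a quick sketch, assuming your own account is allowed to impersonate ServiceAccounts:

kubectl auth can-i get secrets -n test --as=system:serviceaccount:test:secret-access-sa   # expected: yes
kubectl auth can-i get secrets -n test --as=system:serviceaccount:test:no-access-sa       # expected: no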
If you want to verify that your RBAC policies are working as intended from inside the pod, just follow the instructions on accessing the Kubernetes API from within the cluster and query the following API URI: '/api/v1/namespaces/test/secrets'
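A minimal sketch of such a check, assuming an image that ships curl (busybox does not, so you may need a different image or its wget instead):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sS --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/test/secrets

From secret-access-pod this should return the secret list; from no-access-pod the API server should answer with 403 Forbidden, because there the request is authenticated with the no-access-sa token.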