I am trying to access the Kubernetes API from a C# Docker container running inside the Kubernetes cluster.
I have a YAML file for a Python Docker container, and I want to create a pod from that YAML programmatically, from a .NET Core container running in the same cluster as the Python one. I found the Kubernetes API client for .NET Core and wrote the code below, which lists pods.
using System;
using k8s;

namespace simple
{
    internal class PodList
    {
        private static void Main(string[] args)
        {
            // Build the client from the pod's mounted service account credentials.
            var config = KubernetesClientConfiguration.InClusterConfig();
            IKubernetes client = new Kubernetes(config);
            Console.WriteLine("Starting Request!");

            var list = client.ListNamespacedPod("default");
            foreach (var item in list.Items)
            {
                Console.WriteLine(item.Metadata.Name);
            }
            if (list.Items.Count == 0)
            {
                Console.WriteLine("Empty!");
            }
        }
    }
}
This code fails with Forbidden ("Operation returned an invalid status code 'Forbidden'"). When I use BuildConfigFromConfigFile instead of InClusterConfig, the code works in my local environment. Is there anything I missed?
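For completeness, this is roughly how I toggle between the two configurations (a minimal sketch; it assumes KubernetesClientConfiguration.IsInCluster(), the client library's check for the in-cluster environment variables):

var config = KubernetesClientConfiguration.IsInCluster()
    ? KubernetesClientConfiguration.InClusterConfig()             // running inside a pod
    : KubernetesClientConfiguration.BuildConfigFromConfigFile();  // local kubeconfig
IKubernetes client = new Kubernetes(config);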
Edit: here are the RBAC resources and the Deployment I am using:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-serviceaccount
  namespace: api
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: api
  name: test-role
rules:
- apiGroups: ["", "apps", "batch"]
  # "" indicates the core API group
  resources: ["deployments", "namespaces", "cronjobs"]
  verbs: ["get", "list", "update", "patch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-binding
  namespace: api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-role
subjects:
- kind: ServiceAccount
  name: test-serviceaccount
  namespace: api
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2019-07-04T16:05:43Z"
  generation: 4
  labels:
    app: test-console
    tier: middle-end
  name: test-console
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: test-console
      tier: middle-end
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: "2019-07-04T16:05:43Z"
      labels:
        app: test-console
        tier: middle-end
    spec:
      serviceAccountName: test-serviceaccount
      containers:
      - image: test.azurecr.io/tester:1.0.0
        imagePullPolicy: Always
        name: test-console
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: pull
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
C# code:
client.CreateNamespacedCronJob(jobmodel, "testnamesapce");
Cron job:
'apiVersion': 'batch/v1beta1',
'kind': 'CronJob',
'metadata': {
    'creationTimestamp': '2020-08-04T06:29:19Z',
    'name': 'forcaster-cron',
    'namespace': 'testnamesapce'
},
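For reference, a rough sketch of building that body as a typed object rather than a dictionary. This assumes an older KubernetesClient release whose flat client methods and V1beta1CronJob model match the batch/v1beta1 fragment above; the schedule, container name, and restart policy are made-up placeholders.

using k8s.Models;

var job = new V1beta1CronJob
{
    ApiVersion = "batch/v1beta1",
    Kind = "CronJob",
    Metadata = new V1ObjectMeta
    {
        Name = "forcaster-cron",
        NamespaceProperty = "testnamesapce"
    },
    Spec = new V1beta1CronJobSpec
    {
        Schedule = "*/5 * * * *", // placeholder: every five minutes
        JobTemplate = new V1beta1JobTemplateSpec
        {
            Spec = new V1JobSpec
            {
                Template = new V1PodTemplateSpec
                {
                    Spec = new V1PodSpec
                    {
                        RestartPolicy = "OnFailure", // placeholder policy
                        Containers = new[]
                        {
                            new V1Container
                            {
                                Name = "forcaster",                    // placeholder name
                                Image = "test.azurecr.io/tester:1.0.0" // image from the Deployment above
                            }
                        }
                    }
                }
            }
        }
    }
};
client.CreateNamespacedCronJob(job, "testnamesapce");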
InClusterConfig uses the default service account of the namespace where you are deploying the pod. By default that service account does not have any RBAC permissions attached, which leads to the Forbidden error.
The reason it works in your local environment is that there the client uses the credentials from your kubeconfig file, which most of the time are admin credentials with root-level RBAC permissions on the cluster.
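Concretely, inside the pod InClusterConfig builds the client from the credentials that Kubernetes mounts into every container. A comment-annotated sketch of what it picks up (for illustration; not the library's actual source):

// InClusterConfig reads the mounted service account files:
//   /var/run/secrets/kubernetes.io/serviceaccount/token   (bearer token)
//   /var/run/secrets/kubernetes.io/serviceaccount/ca.crt  (cluster CA)
// plus the KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT env vars,
// so every request is authenticated as that pod's service account.
var config = KubernetesClientConfiguration.InClusterConfig();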
You need to define a Role and attach it to the service account using a RoleBinding. So if you are deploying the pod in the default namespace, the RBAC below should work.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: myrole
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: role-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: myrole
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
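Save the manifest and apply it (the filename here is arbitrary):

kubectl apply -f role.yaml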
Once you apply the above RBAC, you can check the permissions of the service account using the command below.
kubectl auth can-i list pods --as=system:serviceaccount:default:default -n default
yes
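The same check works for the service account from your edit; for example, to verify the create verb on cronjobs (assuming the Role and RoleBinding from your edit were applied in the api namespace):

kubectl auth can-i create cronjobs --as=system:serviceaccount:api:test-serviceaccount -n api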