Exec Summary
Jenkins is running in a Kubernetes cluster that has just been upgraded to 1.19.7, and the Jenkins build scripts now fail when they run
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
with the error
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
Which permissions or roles should I change?
More detail
Jenkins runs within the Kubernetes cluster as a master; it picks up Git jobs and then creates agent pods which are also supposed to run in the same cluster. We have a namespace in the cluster called "jenkins". Jenkins builds the microservice applications, each in its own container, and then prompts to have them deployed through the pipeline of test, demo, and production environments.
The cluster has been updated to Kubernetes 1.19.7 using kops. Everything still deploys, runs, and is accessible as normal. From a user's point of view nothing appears wrong with the applications running on the cluster: all are accessible via the browser, and the pods show no significant issues.
Jenkins is still accessible (running version 2.278, with Kubernetes plugin 1.29.1, Kubernetes Credentials plugin 0.8.0, and Kubernetes Client API plugin 4.13.2-1).
I can log into Jenkins and see everything I would normally expect to see.
I can use Lens to connect to the cluster and see all the nodes, pods, etc. as normal.
However, and this is where our problem now lies after the upgrade to 1.19.7, when a Jenkins job starts it now always fails at the point where it tries to set the kubectl context.
We get this error in every build pipeline at the same place:
[Pipeline] load
[Pipeline] { (JenkinsUtil.groovy)
[Pipeline] }
[Pipeline] // load
[Pipeline] stage
[Pipeline] { (Set-Up and checks)
[Pipeline] withCredentials
Masking supported pattern matches of $KUBECONFIG or $user or $password
[Pipeline] {
[Pipeline] container
[Pipeline] {
[Pipeline] sh
Warning: A secret was passed to "sh" using Groovy String interpolation, which is insecure.
Affected argument(s) used the following variable(s): [KUBECONFIG, user]
See https://****.io/redirect/groovy-string-interpolation for details.
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'
at okhttp3.internal.ws.RealWebSocket.checkResponse(RealWebSocket.java:229)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:196)
at okhttp3.RealCall$AsyncCall.execute(RealCall.java:203)
at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] echo
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
Now I presume this is about security, but I'm unsure what to change.
I can see that the request is being made as system:anonymous, and this may have been restricted in later Kubernetes versions, but I'm unsure how to either supply another user or allow this to work from the Jenkins master in this namespace.
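One way to narrow this down (a sketch; it assumes you have cluster-admin access from the same place Lens works, and that your admin user is allowed to impersonate) is to ask the API server directly who may do the thing the error complains about:

```shell
# Is anonymous allowed to hit the nodes/proxy subresource? (expect "no")
kubectl auth can-i get nodes --subresource=proxy --as=system:anonymous

# Is the jenkins service account allowed to?
kubectl auth can-i get nodes --subresource=proxy \
  --as=system:serviceaccount:jenkins:jenkins
```

If the second check also says no, the roles below lack a rule for the nodes/proxy subresource; if it says yes, the real problem is that the request is not being authenticated as that service account at all.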
Since we both run Jenkins and have Jenkins do deployments, I can see the following service accounts:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins
  uid: a81a479a-b525-4b01-be39-4445796c6eb1
  resourceVersion: '94146677'
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  annotations:
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
secrets:
  - name: jenkins-token-lqgk5
and also
kind: ServiceAccount
apiVersion: v1
metadata:
  name: jenkins-deployer
  namespace: jenkins
  selfLink: /api/v1/namespaces/jenkins/serviceaccounts/jenkins-deployer
  uid: 4442ec9b-9cbd-11e9-a350-06cfb66a82f6
  resourceVersion: '2157387'
  creationTimestamp: '2019-07-02T11:33:51Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"jenkins-deployer","namespace":"jenkins"}}
secrets:
  - name: jenkins-deployer-token-mdfq9
And the following roles
jenkins-role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{"meta.helm.sh/release-name":"jenkins-acme-v2","meta.helm.sh/release-namespace":"jenkins"},"creationTimestamp":"2020-08-20T13:32:35Z","labels":{"app":"jenkins-master","app.kubernetes.io/managed-by":"Helm","chart":"jenkins-acme-2.278.102","heritage":"Helm","release":"jenkins-acme-v2"},"name":"jenkins-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role","uid":"de5431f6-d576-4804-b132-6562d0ba7a94"},"rules":[{"apiGroups":["","extensions"],"resources":["*"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
    meta.helm.sh/release-name: jenkins-acme-v2
    meta.helm.sh/release-namespace: jenkins
  creationTimestamp: '2020-08-20T13:32:35Z'
  labels:
    app: jenkins-master
    app.kubernetes.io/managed-by: Helm
    chart: jenkins-acme-2.278.102
    heritage: Helm
    release: jenkins-acme-v2
  name: jenkins-role
  namespace: jenkins
  resourceVersion: '94734324'
  selfLink: /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-role
  uid: de5431f6-d576-4804-b132-6562d0ba7a94
rules:
  - apiGroups:
      - ''
      - extensions
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - update
jenkins-deployer-role
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-deployer-role
  namespace: jenkins
  selfLink: >-
    /apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role
  uid: 87b6486e-6576-11e8-92a9-06bdf97be268
  resourceVersion: '94731699'
  creationTimestamp: '2018-06-01T08:33:59Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"creationTimestamp":"2018-06-01T08:33:59Z","name":"jenkins-deployer-role","namespace":"jenkins","selfLink":"/apis/rbac.authorization.k8s.io/v1/namespaces/jenkins/roles/jenkins-deployer-role","uid":"87b6486e-6576-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["pods"],"verbs":["*"]},{"apiGroups":[""],"resources":["deployments","services"],"verbs":["*"]}]}
rules:
  - verbs:
      - '*'
    apiGroups:
      - ''
    resources:
      - pods
  - verbs:
      - '*'
    apiGroups:
      - ''
    resources:
      - deployments
      - services
and jenkins-namespace-manager
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jenkins-namespace-manager
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager
  uid: 93e80d54-6346-11e8-92a9-06bdf97be268
  resourceVersion: '94733699'
  creationTimestamp: '2018-05-29T13:45:41Z'
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:45:41Z","name":"jenkins-namespace-manager","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-namespace-manager","uid":"93e80d54-6346-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":[""],"resources":["namespaces"],"verbs":["get","watch","list","create"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
rules:
  - verbs:
      - get
      - watch
      - list
      - create
    apiGroups:
      - ''
    resources:
      - namespaces
  - verbs:
      - get
      - list
      - watch
      - update
    apiGroups:
      - ''
    resources:
      - nodes
and finally a ClusterRole also named jenkins-deployer-role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"creationTimestamp":"2018-05-29T13:29:43Z","name":"jenkins-deployer-role","selfLink":"/apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role","uid":"58e1912e-6344-11e8-92a9-06bdf97be268"},"rules":[{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["*"],"verbs":["*"]},{"apiGroups":["policy"],"resources":["poddisruptionbudgets","podsecuritypolicies"],"verbs":["create","delete","deletecollection","patch","update","use","get"]},{"apiGroups":["","extensions","apps","rbac.authorization.k8s.io"],"resources":["nodes"],"verbs":["get","list","watch","update"]}]}
  creationTimestamp: '2018-05-29T13:29:43Z'
  name: jenkins-deployer-role
  resourceVersion: '94736572'
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/jenkins-deployer-role
  uid: 58e1912e-6344-11e8-92a9-06bdf97be268
rules:
  - apiGroups:
      - ''
      - extensions
      - apps
      - rbac.authorization.k8s.io
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
      - podsecuritypolicies
    verbs:
      - create
      - delete
      - deletecollection
      - patch
      - update
      - use
      - get
  - apiGroups:
      - ''
      - extensions
      - apps
      - rbac.authorization.k8s.io
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - update
And the following bindings..
I'm really stuck with this one; I don't want to give system:anonymous access to everything, although I guess that could be an option.
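One detail worth checking in the roles above: in Kubernetes RBAC, a rule for the resource nodes does not cover the nodes/proxy subresource that the error message names; subresources have to be granted explicitly, as resources: ["nodes/proxy"]. If the requests are supposed to run as the jenkins service account (rather than as system:anonymous), the missing grant would look something like the sketch below; the ClusterRole and ClusterRoleBinding names here are illustrative, not existing objects:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-node-proxy   # illustrative name
rules:
  - apiGroups: ['']
    resources: ['nodes/proxy']   # the subresource must be named explicitly
    verbs: ['get', 'list', 'watch']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-node-proxy   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-node-proxy
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins
```

Whether this is sufficient depends on why the request is anonymous in the first place; if the apiserver itself has no credentials for talking to the kubelets, no amount of RBAC on the service account will help.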
The Jenkins files which drive this build are:
Jenkinsfile
import org.jenkinsci.plugins.workflow.steps.FlowInterruptedException

def label = "worker-${UUID.randomUUID().toString()}"
def dockerRegistry = "id.dkr.ecr.eu-west-1.amazonaws.com"
def localHelmRepository = "acme-helm"
def artifactoryHelmRepository = "https://acme.jfrog.io/acme/$localHelmRepository"
def jenkinsContext = "jenkins-staging"
def MAJOR = 2   // Change HERE
def MINOR = 278 // Change HERE
def PATCH = BUILD_NUMBER
def chartVersion = "X.X.X"
def name = "jenkins-acme"
def projectName = "$name"
def helmPackageName = "$projectName"
def helmReleaseName = "$name-v$MAJOR"
def fullVersion = "$MAJOR.$MINOR.$PATCH"
def jenkinsVersion = "${MAJOR}.${MINOR}" // Gets passed to Dockerfile for getting image from Docker hub

podTemplate(label: label, containers: [
        containerTemplate(name: 'docker', image: 'docker:18.05-dind', ttyEnabled: true, privileged: true),
        containerTemplate(name: 'perl', image: 'perl', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.18.8', command: 'cat', ttyEnabled: true),
        containerTemplate(name: 'helm', image: 'id.dkr.ecr.eu-west-1.amazonaws.com/k8s-helm:3.2.0', command: 'cat', ttyEnabled: true),
        containerTemplate(name: 'clair-local-scan', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-local-scan:latest', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
        containerTemplate(name: 'clair-scanner', image: '738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-scanner:latest', command: 'cat', ttyEnabled: true, envVars: [envVar(key: 'DOCKER_HOST', value: 'tcp://localhost:2375')]),
        containerTemplate(name: 'clair-db', image: "738398925563.dkr.ecr.eu-west-1.amazonaws.com/clair-db:latest", ttyEnabled: true),
        containerTemplate(name: 'aws-cli', image: 'mesosphere/aws-cli', command: 'cat', ttyEnabled: true)
], volumes: [
        emptyDirVolume(mountPath: '/var/lib/docker')
]) {
    try {
        node(label) {
            def myRepo = checkout scm
            jenkinsUtils = load 'JenkinsUtil.groovy'

            stage('Set-Up and checks') {
                jenkinsContext = 'jenkins-staging'
                withCredentials([
                        file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
                        usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
                    jenkinsUtils.initKubectl(jenkinsUtils.appendToParams("kubectl", [
                            namespaces: ["jenkins"],
                            context   : jenkinsContext,
                            config    : KUBECONFIG])
                    )
                    jenkinsUtils.initHelm(jenkinsUtils.appendToParams("helm", [
                            namespace: "jenkins",
                            helmRepo : artifactoryHelmRepository,
                            username : user,
                            password : password,
                    ])
                    )
                }
            }

            stage('docker build and push') {
                container('perl') {
                    def JENKINS_HOST = "jenkins_api:1Ft38erDFjjfM6q3a6y7@jenkins.acme.com"
                    sh "curl -sSL \"https://${JENKINS_HOST}/pluginManager/api/xml?depth=1&xpath=/*/*/shortName|/*/*/version&wrapper=plugins\" | perl -pe 's/.*?<shortName>([\\w-]+).*?<version>([^<]+)()(<\\/\\w+>)+/\\1 \\2\\n/g'|sed 's/ /:/' > plugins.txt"
                    sh "cat plugins.txt"
                }
                container('docker') {
                    sh "ls -la"
                    sh "docker version"
                    // This is because of this annoying "feature" where the command ran from docker contains a \r character which must be removed
                    sh 'eval $(docker run --rm -t $(tty &>/dev/null && echo "-n") -v "$(pwd):/project" mesosphere/aws-cli ecr get-login --no-include-email --region eu-west-1 | tr \'\\r\' \' \')'
                    sh "sed \"s/JENKINS_VERSION/${jenkinsVersion}/g\" Dockerfile > Dockerfile.modified"
                    sh "cat Dockerfile.modified"
                    sh "docker build -t $name:$fullVersion -f Dockerfile.modified ."
                    sh "docker tag $name:$fullVersion $dockerRegistry/$name:$fullVersion"
                    sh "docker tag $name:$fullVersion $dockerRegistry/$name:latest"
                    sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}"
                    sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.$MINOR"
                    sh "docker tag $name:$fullVersion $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
                    sh "docker push $dockerRegistry/$name:$fullVersion"
                    sh "docker push $dockerRegistry/$name:latest"
                    sh "docker push $dockerRegistry/$name:${MAJOR}"
                    sh "docker push $dockerRegistry/$name:${MAJOR}.$MINOR"
                    sh "docker push $dockerRegistry/$name:${MAJOR}.${MINOR}.$PATCH"
                }
            }

            stage('helm build') {
                namespace = 'jenkins'
                jenkinsContext = 'jenkins-staging'
                withCredentials([
                        file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG'),
                        usernamePassword(credentialsId: 'jenkins_artifactory', usernameVariable: 'user', passwordVariable: 'password')]) {
                    jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
                            context: jenkinsContext,
                            config : KUBECONFIG])
                    )
                    jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
                            namespace  : namespace,
                            credentials: true,
                            release    : helmReleaseName,
                            args       : [replicaCount  : 1,
                                          imageTag      : fullVersion,
                                          namespace     : namespace,
                                          "MajorVersion": MAJOR]])
                    )
                    jenkinsUtils.helmPush(jenkinsUtils.appendToParams("helm", [
                            helmRepo   : artifactoryHelmRepository,
                            username   : user,
                            password   : password,
                            BuildInfo  : BRANCH_NAME,
                            Commit     : "${myRepo.GIT_COMMIT}"[0..6],
                            fullVersion: fullVersion
                    ]))
                }
            }

            stage('Deployment') {
                namespace = 'jenkins'
                jenkinsContext = 'jenkins-staging'
                withCredentials([
                        file(credentialsId: 'kubeclt-staging-config', variable: 'KUBECONFIG')]) {
                    jenkinsUtils.setContext(jenkinsUtils.appendToParams("kubectl", [
                            context: jenkinsContext,
                            config : KUBECONFIG])
                    )
                    jenkinsUtils.helmDeploy(jenkinsUtils.appendToParams("helm", [
                            dryRun     : false,
                            namespace  : namespace,
                            package    : "${localHelmRepository}/${helmPackageName}",
                            credentials: true,
                            release    : helmReleaseName,
                            args       : [replicaCount  : 1,
                                          imageTag      : fullVersion,
                                          namespace     : namespace,
                                          "MajorVersion": MAJOR
                            ]
                    ])
                    )
                }
            }
        }
    } catch (FlowInterruptedException e) {
        def reasons = e.getCauses().collect { it.getShortDescription() }.join(",")
        println "Interupted. Reason: $reasons"
        currentBuild.result = 'SUCCESS'
        return
    } catch (error) {
        println error
        throw error
    }
}
And the Groovy file, JenkinsUtil.groovy:
templateMap = [
    "helm"   : [
        containerName: "helm",
        dryRun       : true,
        namespace    : "test",
        tag          : "xx",
        package      : "jenkins-acme",
        credentials  : false,
        ca_cert      : null,
        helm_cert    : null,
        helm_key     : null,
        args         : [
            majorVersion: 0,
            replicaCount: 1
        ]
    ],
    "kubectl": [
        containerName: "kubectl",
        context      : null,
        config       : null,
    ]
]

def appendToParams(String templateName, Map newArgs) {
    def copyTemplate = templateMap[templateName].clone()
    newArgs.each { paramName, paramValue ->
        if (paramName.equalsIgnoreCase("args"))
            newArgs[paramName].each { name, value ->
                copyTemplate[paramName][name] = value
            }
        else
            copyTemplate[paramName] = paramValue
    }
    return copyTemplate
}

def setContext(Map args) {
    container(args.containerName) {
        sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
    }
}

def initKubectl(Map args) {
    container(args.containerName) {
        sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
        for (namespace in args.namespaces)
            sh "kubectl -n $namespace get pods"
    }
}

def initHelm(Map args) {
    container(args.containerName) {
        // sh "helm init --client-only"
        def command = "helm version --short"
        // if (args.credentials)
        //     command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
        //
        // sh "$command --tiller-connection-timeout 5 --tiller-namespace tiller-${args.namespace}"
        sh "helm repo add acme-helm ${args.helmRepo} --username ${args.username} --password ${args.password}"
        sh "helm repo update"
    }
}

def helmDeploy(Map args) {
    container(args.containerName) {
        sh "helm repo update"
        def command = "helm upgrade"
        // if (args.credentials)
        //     command = "$command --tls --tls-ca-cert ${args.ca_cert} --tls-cert ${args.helm_cert} --tls-key ${args.helm_key}"
        if (args.dryRun) {
            sh "helm lint ${args.package}"
            command = "$command --dry-run --debug"
        }
        // command = "$command --install --tiller-namespace tiller-${args.namespace} --namespace ${args.namespace}"
        command = "$command --install --namespace ${args.namespace}"
        def setVar = "--set "
        args.args.each { key, value -> setVar = "$setVar$key=\"${value.toString().replace(",", "\\,")}\"," }
        setVar = setVar[0..-2] // drop the trailing comma; [0..-1] returned the whole string unchanged
        sh "$command $setVar --devel ${args.release} ${args.package}"
    }
}

def helmPush(Map args) {
    container(args.containerName) {
        sh "helm package ${args.package} --version ${args.fullVersion} --app-version ${args.fullVersion}+${args.BuildInfo}-${args.Commit}"
        sh "curl -u${args.username}:${args.password} -T ${args.package}-${args.fullVersion}.tgz \"${args.helmRepo}/${args.package}-${args.fullVersion}.tgz\""
    }
}

return this
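As an aside, the "A secret was passed to 'sh' using Groovy String interpolation" warning in the log can be addressed by letting the shell, rather than Groovy, expand the credential variables. A minimal sketch of how setContext could be rewritten (assuming KUBECONFIG is the environment variable bound by withCredentials, as in the Jenkinsfile above; KUBE_CONTEXT is an illustrative name):

```groovy
def setContext(Map args) {
    container(args.containerName) {
        // Single-quoted sh string: Groovy does not interpolate it, so the
        // shell expands $KUBECONFIG and the secret path never appears in
        // the logged command line.
        withEnv(["KUBE_CONTEXT=${args.context}"]) {
            sh 'kubectl --kubeconfig "$KUBECONFIG" config use-context "$KUBE_CONTEXT"'
        }
    }
}
```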
And from the log it appears to be when it runs
sh "kubectl --kubeconfig ${args.config} config use-context ${args.context}"
that it throws the error
io.fabric8.kubernetes.client.KubernetesClientException: Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
but what permissions or roles should I change?
Many thanks, Nick
Take a look at this section of the official Kubernetes documentation and this answer provided by Prafull Ladha:

The above error means your apiserver doesn't have the credentials (kubelet cert and key) to authenticate the kubelet's log/exec commands, and hence the Forbidden error message.

You need to provide --kubelet-client-certificate=<path_to_cert> and --kubelet-client-key=<path_to_key> to your apiserver; this way the apiserver authenticates to the kubelet with that certificate and key pair.

A very similar issue was also reported on GitHub in this thread, where you can find the following explanation:

That means the api server has not been given a credential to use to authenticate to kubelets when proxying log/exec requests.

See the apiserver configuration described in https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authentication
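Since this cluster is managed with kops, those two apiserver flags are normally set through the cluster spec rather than by editing the static pod manifest by hand. A hedged sketch (the field names come from the kops cluster spec; the certificate paths are illustrative and depend on how kops laid out your cluster's PKI):

```yaml
# kops edit cluster <your-cluster-name>
spec:
  kubeAPIServer:
    kubeletClientCertificate: /srv/kubernetes/kubelet-api.crt
    kubeletClientKey: /srv/kubernetes/kubelet-api.key
```

followed by `kops update cluster --yes` and a rolling update of the masters so the apiserver is restarted with the new flags.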