How to get kubernetes applications to change deploy configs

3/17/2020

I have two applications running in K8. APP A has write access to a data store and APP B has read access.

APP A needs to be able to change APP B's running deployment.

How we currently do this is manual: we kick off a process in APP A which adds a new DB (say, db bob) to the data store. Then we do:

kubectl edit deploy B

And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.

Is there any way to get APP A to change the deployment config of APP B in k8?

-- Filipe Teixeira
kubernetes

3 Answers

3/19/2020

Thank you all for the answers (upvoted, as they were both correct). I am just adding my own answer to document exactly what solved it for me.

In my case I just needed to make use of the patch URL available in the Kubernetes API. That plus this example worked.

All I needed to do was create a service account to restrict who can patch what. I restricted that account to APP A and used the Java client in APP A to patch the deployment of APP B. After that the pods roll and it's done.
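For reference, a minimal sketch of the kind of patch body this involves (the container name env-replace, env var DATASTORE_NAME, deployment env-deploy and namespace default are placeholders taken from the example in the answer below; adjust them to your own manifests). APP A would send a strategic-merge patch to the Deployment's patch URL, authenticated with its restricted ServiceAccount token:

```shell
# Strategic-merge patch that sets one env var on one container;
# containers are merged by name, so only the named container changes.
PATCH='{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'

# The in-cluster ServiceAccount token is mounted at this standard path.
TOKEN_FILE=/var/run/secrets/kubernetes.io/serviceaccount/token

# Commented out: requires in-cluster access to actually run.
# curl -k -X PATCH \
#   -H "Authorization: Bearer $(cat $TOKEN_FILE)" \
#   -H "Content-Type: application/strategic-merge-patch+json" \
#   -d "$PATCH" \
#   https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy

echo "$PATCH"
```

The same request can be made from any Kubernetes client library (Java in my case) instead of raw curl; the patch body is identical.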

-- Filipe Teixeira
Source: StackOverflow

3/17/2020

Firstly answering your main question:

Is there any way to get a service to change the deployment config of another service in k8?

From my understanding you are calling them Service A and Service B after their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:

In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).

So if in your question you meant:

"Is there any way to get APP A to change the deployment config of APP B in k8?"

Then yes, you can give a pod admin privileges to manage other components of the cluster, using the kubectl set env command to change or add env vars.

In order to achieve this, you will need:

  • A ServiceAccount with the needed permissions in the namespace.
    • NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm binding the cluster-admin ClusterRole to a specific ServiceAccount. If you use only one namespace for these apps, consider a Role and RoleBinding instead.
  • A ClusterRoleBinding binding the ServiceAccount to that ClusterRole.
  • The kubectl client inside APP A's pod (added manually or baked into the Docker image).

Steps to Reproduce:

  • Create a deployment that will hold the cluster-admin privileges; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role    
      containers:
        - name: manager
          image: gcr.io/google-samples/node-hello:1.0
  • Create a deployment with an environment var, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role    
      containers:
        - name: env-replace
          image: gcr.io/google-samples/node-hello:1.0
          env:
          - name: DATASTORE_NAME
            value: "john"
  • Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges; I'm naming it service-account-for-pod.yaml (notice the ServiceAccount is referenced by serviceAccountName in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
  • Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables of the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml 
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml 
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml 
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created

$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
  • Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash

root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl

root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob     
  • Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob

Let me know in the comments if you have any doubt on how to apply this solution to your environment.

-- willrof
Source: StackOverflow

3/17/2020

You could give APP A access to your cluster (install kubectl and allow traffic from APP A's NAT to your cluster master) and execute the commands with a cron job, Jenkins, ssh, or something similar. You can also use kubectl patch, or fetch the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed, and re-apply it. Note that the --export method is being deprecated, so alternatively APP A could pull the manifests from a Git repo and apply the new config that way.

-- CptDolphin
Source: StackOverflow