What is the controller pattern in k8s operators?
The Job controller is an example of a Kubernetes built-in controller. Built-in controllers manage state by interacting with the cluster API server.
Ref: https://kubernetes.io/docs/concepts/architecture/controller/#control-via-api-server
In my understanding, when you create an xxx-operator, it means you add a new Kind to your Kubernetes cluster (via a CustomResourceDefinition), and kubectl explain <kind-name>
can then show the schema of the Kind you've defined.
So, with the xxx-operator installed, you can configure your app with a much simpler YAML. You can also use tools like Kustomize
or Helm
on top of this.
For example, run this first:
kubectl explain ZookeeperCluster
and you will get an error, because the cluster does not know this Kind yet.
Now install the pravega/zookeeper-operator Helm chart:
helm install --create-namespace -n op-pravega-zk -- zookeeper-operator pravega/zookeeper-operator
After the install you will get output like this, which by itself is not very helpful:
NAME: zookeeper-operator
LAST DEPLOYED: Thu Nov 25 11:20:31 2021
NAMESPACE: op-pravega-zk
STATUS: deployed
REVISION: 1
TEST SUITE: None
But now, if you run the same command again:
kubectl explain ZookeeperCluster
you will get this:
KIND:     ZookeeperCluster
VERSION:  zookeeper.pravega.io/v1beta1

DESCRIPTION:
     ZookeeperCluster is the Schema for the zookeeperclusters API

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     ZookeeperClusterSpec defines the desired state of ZookeeperCluster

   status       <Object>
     ZookeeperClusterStatus defines the observed state of ZookeeperCluster
Now you can use it with a YAML like this ( ... means parts are omitted):
apiVersion: "zookeeper.pravega.io/v1beta1"
kind: "ZookeeperCluster"
metadata:
name: zookeeper
namespace: pravega-zk
...
spec:
replicas: 3
image:
repository: pravega/zookeeper
...
pod:
serviceAccountName: zookeeper
storageType: persistence
persistence:
reclaimPolicy: Delete
spec:
storageClassName: local-hostpath
...
Or like this (the pravega/zookeeper Helm chart creates a ZookeeperCluster Kind for you):
helm install --create-namespace -n pravega-zk --set persistence.storageClassName=local-hostpath -- zookeeper pravega/zookeeper
Another example: create a Zookeeper cluster from an inline manifest.
kubectl create --namespace zookeeper -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  replicas: 1
EOF
Ref: https://banzaicloud.com/docs/supertubes/kafka-operator/install-kafka-operator/
By the way, after you install an operator, you can run kubectl describe clusterrole zookeeper-operator
to see which resources the operator is allowed to manage, then run kubectl api-resources
to list the resource kinds now available in the cluster; that is where you can find the name of this Kind.
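For example, with the operator installed as above (the clusterrole name comes from the Helm release; the grep is only there to narrow down the list):
kubectl describe clusterrole zookeeper-operator
kubectl api-resources | grep -i zookeeper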
The controller pattern has been summarized in these 3 sentences:
A controller tracks at least one Kubernetes resource type. These objects have a spec field that represents the desired state. The controller(s) for that resource are responsible for making the current state come closer to that desired state.
Ref: https://kubernetes.io/docs/concepts/architecture/controller/#controller-pattern
Basically, a Kubernetes controller watches for the respective resource in a control loop. Once it finds such a resource, it reads the desired state from its spec and does some work to make the current cluster state match that desired state.
For example, say you have created a Deployment whose spec
says that you want 1 pod running your application. The Deployment controller sees this and creates 1 pod in the cluster to match your desired state. If you later update the Deployment spec to say that you now want 2 pods, the Deployment controller will notice that change, since it is always watching Deployments, and will create another pod in the cluster to match the desired state.
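You can watch this loop in action with a few kubectl commands; this is just a sketch, assuming a test cluster, and the my-app name and nginx image are only placeholders:
kubectl create deployment my-app --image=nginx --replicas=1
kubectl get pods   # the Deployment controller has created 1 pod
kubectl scale deployment my-app --replicas=2
kubectl get pods   # a second pod shows up as the controller reconciles the new desired state
In both cases you only changed the desired state in the spec; the controller did the actual work of creating the pods.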
You can find more details about these in the following resources: