Is there a limit on the size of the spec and status of a Kubernetes object?
I have a use case in which the operator's spec is a regular expression, and that regex expands into many actual items whose status I have to store.
Example:
```go
type RedshiftSinkSpec struct {
	TopicRegexes string `json:"topicRegexes"`
}

type Topic string

// MaskStatus is defined elsewhere in the operator; shown here as a stub.
type MaskStatus struct{ /* ... */ }

type RedshiftSinkStatus struct {
	// +optional
	CurrentMaskStatus map[Topic]MaskStatus `json:"currentMaskStatus,omitempty"`
	// +optional
	DesiredMaskStatus map[Topic]MaskStatus `json:"desiredMaskStatus,omitempty"`
}
```
Since the number of topics is computed from the regular expression, I have no idea how big this data structure can grow for another user, so I want to cap it at some level. Hence I need to know the maximum limit Kubernetes allows.
Also, it is necessary to keep it like this to save on the number of Redshift connections; I cannot really break the problem into a separate CRD per topic.
Please suggest.
Quoting the answer from the comments:
Kubernetes stores cluster data in etcd. By default, etcd limits the maximum data entry size to 1.5 megabytes. – Chin Huang
From the etcd documentation on the request size limit:
etcd is designed to handle small key value pairs typical for metadata. Larger requests will work, but may increase the latency of other requests. By default, the maximum size of any request is 1.5 MiB. This limit is configurable through --max-request-bytes flag for etcd server.
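For example, if you run etcd yourself, the limit can be raised when starting the server. The 4 MiB value below is purely illustrative; on managed clusters this flag is usually not configurable:

```shell
# Raise etcd's per-request size limit from the 1.5 MiB default to 4 MiB.
# Assumption: you control the etcd server invocation directly.
etcd --max-request-bytes=4194304
```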
Kubernetes does not limit the status and spec sizes individually; the limit is on the overall object size, since it comes from etcd storage. It can be raised with etcd's --max-request-bytes flag.
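To estimate how many topics fit under that default, one can marshal a representative status map and measure its size. A rough sketch, with MaskStatus reduced to a single hypothetical field and 1.5 MiB taken as etcd's default budget:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// etcd's default per-request limit (1.5 MiB), used here only as a budget.
const etcdDefaultMaxRequestBytes = 1536 * 1024

// MaskStatus is a hypothetical one-field stand-in for the real type.
type MaskStatus struct {
	Masked bool `json:"masked"`
}

// statusFits reports whether a status map of n topics, serialized to JSON,
// stays under the etcd default request budget.
func statusFits(n int) bool {
	m := make(map[string]MaskStatus, n)
	for i := 0; i < n; i++ {
		m[fmt.Sprintf("topic-%06d", i)] = MaskStatus{Masked: true}
	}
	b, err := json.Marshal(m)
	if err != nil {
		return false
	}
	return len(b) < etcdDefaultMaxRequestBytes
}

func main() {
	fmt.Println(statusFits(1000))   // a few thousand small entries fit easily
	fmt.Println(statusFits(100000)) // ~100k entries blow past 1.5 MiB
}
```

Note that the real object also carries metadata, the spec, and managed fields, so the practical budget for the status is smaller than the full 1.5 MiB.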
When the data is huge, it is better to store the detail in ConfigMaps and keep only pointers or an overall summary in the status. Note that each ConfigMap is itself an etcd object and subject to the same per-object limit, so very large data may need to be split across several ConfigMaps.