I am using a pod readiness gate on Kubernetes 1.12.6, as described in https://v1-12.docs.kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate, but it does not behave the way the documentation describes.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: tomcat
  name: tomcat
  namespace: default
spec:
  selector:
    matchLabels:
      run: tomcat
  template:
    metadata:
      labels:
        run: tomcat
    spec:
      containers:
      - image: tomcat
        name: tomcat
      readinessGates:
      - conditionType: www.example.com/feature-1
      restartPolicy: Always
I want the pod to have a status like this:
Kind: Pod
...
spec:
  readinessGates:
  - conditionType: "www.example.com/feature-1"
status:
  conditions:
  - type: Ready                          # this is a builtin PodCondition
    status: "True"
    lastProbeTime: null
    lastTransitionTime: 2018-01-01T00:00:00Z
  - type: "www.example.com/feature-1"    # an extra PodCondition
    status: "False"
    lastProbeTime: null
    lastTransitionTime: 2018-01-01T00:00:00Z
  containerStatuses:
  - containerID: docker://abcd...
    ready: true
...
but my pod's actual status is this:
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-04-27T14:59:00Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-04-27T14:59:00Z"
    message: corresponding condition of pod readiness gate "www.example.com/feature-1"
      does not exist.
    reason: ReadinessGatesNotReady
    status: "False"
    type: Ready
Why is the custom condition "www.example.com/feature-1" missing from my pod's status?
As per the readinessGates description, some logic external to the pod must update this status field; it is up to the user to implement that logic. Nothing in Kubernetes itself sets the custom condition, which is why the Ready condition reports ReadinessGatesNotReady.
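For example, here is a minimal sketch of setting the custom condition from outside the pod, assuming the official Python kubernetes client, a placeholder pod name, and RBAC permission to patch the pods/status subresource:

from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running in-cluster
v1 = client.CoreV1Api()

# Condition entry for the readiness gate; Kubernetes expects an RFC3339 timestamp.
condition = {
    "type": "www.example.com/feature-1",
    "status": "True",        # set to "False" while the external feature is not ready
    "lastTransitionTime": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
}

# Strategic merge patch against the status subresource; pod conditions are
# merged by "type", so only this one condition is added or updated.
v1.patch_namespaced_pod_status(
    name="tomcat-5d9f7b4c6-abcde",   # placeholder: your actual pod name
    namespace="default",
    body={"status": {"conditions": [condition]}},
)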
After pod creation, each feature is responsible for keeping its custom pod condition in sync for as long as its readiness gate exists in the PodSpec. This can be achieved by running a Kubernetes controller that syncs the conditions on the relevant pods.
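A minimal controller-style sketch, again using the Python client, could watch the tomcat pods and keep the condition in sync; feature_is_ready() is a hypothetical placeholder for whatever check your external feature actually performs:

from datetime import datetime, timezone
from kubernetes import client, config, watch

GATE = "www.example.com/feature-1"

def feature_is_ready(pod):
    # placeholder: replace with the real readiness check for your feature
    return True

def sync(v1, pod):
    # Skip pods that do not declare this readiness gate.
    gates = pod.spec.readiness_gates or []
    if not any(g.condition_type == GATE for g in gates):
        return
    desired = "True" if feature_is_ready(pod) else "False"
    current = next((c for c in (pod.status.conditions or []) if c.type == GATE), None)
    if current is not None and current.status == desired:
        return   # condition already in sync
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    v1.patch_namespaced_pod_status(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body={"status": {"conditions": [
            {"type": GATE, "status": desired, "lastTransitionTime": now}
        ]}},
    )

def main():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    # Watch pods matching the Deployment's label and reconcile each event.
    for event in watch.Watch().stream(v1.list_namespaced_pod,
                                      namespace="default",
                                      label_selector="run=tomcat"):
        sync(v1, event["object"])

if __name__ == "__main__":
    main()

Once the gate condition is "True" and the container readiness probes pass, the kubelet flips the built-in Ready condition to "True" and the ReadinessGatesNotReady reason disappears.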