My deployment consists of a main service container, an init container that sets up its database, and a sidecar that configures a separate, related service.
The sidecar's job is done once it has configured that separate service. However, it cannot simply terminate, because Kubernetes will just restart it. It also should not be an init container, because it must not block the main service from starting.
Since Deployments do not allow an OnFailure restartPolicy, my current workaround is to let the sidecar sleep forever after it finishes the configuration task.
Is there a way to allow a container to terminate without the deployment restarting it? Alternatively, is there a way for an init container to run alongside the regular containers?
Some details to address XY problems:
Example:
#my-service.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      initContainers:
        - name: my-service-init
          image: my-service
          imagePullPolicy: IfNotPresent
          env:
            - name: DATABASE_URL
              value: postgres://postgres:postgres_password@$db_host:5432/database
          args: ['init_script.py']
      containers:
        - name: my-service
          env:
            - name: DATABASE_URL
              value: postgres://db_role:db_password@$db_host:5432/database
          image: my-service
          imagePullPolicy: IfNotPresent
          args: ['main.py']
        - name: related-service-configure
          env:
            - name: RELATED_API_SERVICE_ADMIN_ENDPOINT
              value: http://related_api_service/api/
          image: my-service
          imagePullPolicy: IfNotPresent
          args: ['manage_related_service.py']
#related-api-service.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: related-api-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: related-api-service
  template:
    metadata:
      labels:
        app: related-api-service
    spec:
      containers:
        - name: related-api-service
          env:
            - name: DATABASE_URL
              value: postgres://db_role:db_password@$db_host:5432/database
          image: related-api-image
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
#manage_related_service.py
import time
import requests
import json


def upgrade_metadata(query_url, metadata_file):
    with open(metadata_file) as fp:
        metadata = json.load(fp)
    print(f"Querying {query_url}...")
    rsp = requests.post(query_url, json=metadata)
    response = rsp.json()
    print(f"Response to request was:\n{response}")
    if response.get('success') != 'True':
        raise ValueError("Metadata upgrade was not successful")


if __name__ == '__main__':
    from environs import Env

    env = Env()
    env.read_env()
    RELATED_API_SERVICE_ADMIN_ENDPOINT = env("RELATED_API_SERVICE_ADMIN_ENDPOINT")
    METADATA_FILE = env("METADATA_FILE", "metadata.json")
    upgrade_metadata(RELATED_API_SERVICE_ADMIN_ENDPOINT, METADATA_FILE)
    # Once metadata has been uploaded, sleep forever
    while True:
        time.sleep(3600)
I think you should create Kubernetes Jobs instead of Deployments. With a Job, the sidecar can be told to terminate once the main container finishes, and the Pod then completes without being restarted.
You can simulate this sidecar behaviour yourself. Here is an example that shows how:
containers:
  - name: example
    image: gcr.io/some/image:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        trap "touch /tmp/pod/main-terminated" EXIT
        /my-batch-job/bin/main --config=/config/my-job-config.yaml
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
  - name: envoy-container
    image: gcr.io/our-envoy-plus-bash-image:latest
    command: ["/bin/bash", "-c"]
    args:
      - |
        /usr/local/bin/envoy --config-path=/my-batch-job/etc/envoy.json &
        CHILD_PID=$!
        (while true; do if [[ -f "/tmp/pod/main-terminated" ]]; then kill $CHILD_PID; fi; sleep 1; done) &
        wait $CHILD_PID
        if [[ -f "/tmp/pod/main-terminated" ]]; then exit 0; fi
    volumeMounts:
      - mountPath: /tmp/pod
        name: tmp-pod
        readOnly: true
volumes:
  - name: tmp-pod
    emptyDir: {}
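To see why this works, here is a minimal local simulation of the same marker-file pattern, outside Kubernetes (the file names, sleep durations, and echoed message are illustrative, not part of your setup): the "main" process touches a marker file when it exits, and a watcher loop kills the long-running "sidecar" process as soon as the marker appears.

```shell
# Local simulation of the marker-file pattern (names are illustrative).
MARKER_DIR=$(mktemp -d)

# Stand-in for the main container: finishes after 1 second, then touches the marker.
( sleep 1; touch "$MARKER_DIR/main-terminated" ) &

# Stand-in for the sidecar: would otherwise run for a long time.
sleep 60 &
CHILD_PID=$!

# Watcher: poll for the marker and kill the sidecar when it appears.
( while true; do
    if [ -f "$MARKER_DIR/main-terminated" ]; then kill "$CHILD_PID" 2>/dev/null; break; fi
    sleep 1
  done ) &

# Returns as soon as the watcher kills the sidecar.
wait "$CHILD_PID" 2>/dev/null

if [ -f "$MARKER_DIR/main-terminated" ]; then
  SIDECAR_STOPPED=yes
  echo "sidecar stopped after main finished"
fi
rm -rf "$MARKER_DIR"
```

In the Pod above, the shared `emptyDir` volume plays the role of the temporary directory, and the `trap ... EXIT` in the main container guarantees the marker is written no matter how the main process exits.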
You can find more information here: sidecar-terminating, sidecars-behaviour.
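For the first suggestion, here is a minimal sketch of the configuration task moved out of the Deployment into a one-shot Job (the container name, image, and env values are copied from the question; the Job wrapper itself is an assumption about your setup). Jobs allow restartPolicy: OnFailure, so the container can simply exit when it is done and the `while True: sleep` loop in manage_related_service.py is no longer needed:

```yaml
# One-shot Job for the configuration task; the container may exit normally.
apiVersion: batch/v1
kind: Job
metadata:
  name: related-service-configure
spec:
  template:
    spec:
      restartPolicy: OnFailure   # allowed for Jobs, unlike Deployments
      containers:
        - name: related-service-configure
          image: my-service
          imagePullPolicy: IfNotPresent
          env:
            - name: RELATED_API_SERVICE_ADMIN_ENDPOINT
              value: http://related_api_service/api/
          args: ['manage_related_service.py']
```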