Can Kubernetes services deployed by Helm be configured to restart when manually deleted via kubectl?

4/1/2019

I am trying to understand the nature of Helm deployments in general. I have a deployment managed by Helm which brings up a JDBC service using a service.yaml file.

Upon deployment, I can clearly see that the service is alive, in accordance with the service.yaml file.

If I manually delete the service, the service stays dead.

My question is: if I manually delete the service using kubectl delete, is the service supposed to be restarted, given that the deployment is Helm managed? Is there any option to configure the service to restart even on a manual delete? Is this the default and expected behaviour?

I have tried numerous options and scoured the docs, but I am unable to find the spec/option/config that causes a service to be restarted on delete, unlike pods, which have a restartPolicy: Always option.
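
To be concrete, the delete I am testing looks roughly like this (the service name here is hypothetical):

  kubectl delete service example-jdbc-service -n my-namespace
  kubectl get service example-jdbc-service -n my-namespace   # not found, and it stays that way

Below is the service.yaml template from my chart: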

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.exampleJDBCService.name }}
  namespace: {{ .Release.Namespace }}
spec:
  type: {{ .Values.exampleJDBCService.type }}
  sessionAffinity: "{{ .Values.sessionAffinity.type }}"
  {{- if (eq .Values.sessionAffinity.type "ClientIP") }}
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: {{ .Values.sessionAffinity.timeoutSeconds }}
  {{- end }}
  selector:
    {{ template "spark-example.fullname" . }}: "true"
  ports:
    - protocol: TCP
      port: {{ .Values.exampleJDBCService.clusterNodePort }}
      targetPort: {{ .Values.exampleJDBCService.targetPort }}
      {{- if (and (eq .Values.exampleJDBCService.type "NodePort") (not (empty .Values.exampleJDBCService.clusterNodePort))) }}
      nodePort: {{ .Values.exampleJDBCService.clusterNodePort }}
      {{- end }}
-- ashokashwin93
jdbc
kubectl
kubernetes
kubernetes-helm
service

2 Answers

4/1/2019

Deleted or corrupted Kubernetes resource objects (in your case, a Service) cannot be "restarted" automatically by Tiller, but luckily they can be restored to the desired configuration state with the following helm command:

helm upgrade <your-release-name> <repo-name>/<chart-name> --reuse-values --force

e.g.

helm upgrade my-ingress stable/nginx-ingress --reuse-values --force

You can also use:

  helm history <release_name>

  helm rollback --force [RELEASE] [REVISION]

The --force argument in both cases forces the resource update through a delete/recreate if needed.
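
For example (the release name and revision number below are illustrative):

  helm history my-ingress
  helm rollback --force my-ingress 2

helm history lists the numbered revisions of a release; helm rollback --force re-applies the chosen revision, deleting and recreating any resource that no longer matches it, such as a manually deleted Service.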

-- Nepomucen
Source: StackOverflow

4/1/2019

You are mixing things up a bit.

The restartPolicy: Always that you define on a pod configures its containers to always be restarted upon completion or failure.
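
As a minimal sketch (the pod and image names are made up), restartPolicy belongs to the pod spec and applies to the pod's containers; it does not resurrect a pod object that has been deleted:

  apiVersion: v1
  kind: Pod
  metadata:
    name: jdbc-worker
  spec:
    restartPolicy: Always   # restart containers that exit; does not recreate a deleted pod
    containers:
      - name: worker
        image: example/jdbc-worker:1.0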

The reason you see a pod recreated upon deletion is that it was created by a Deployment object, which always strives to keep the desired number of pods running.
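
A minimal sketch of that relationship (all names are illustrative): the Deployment controller continuously reconciles towards spec.replicas pods, so deleting one of its pods simply triggers a replacement.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: jdbc-worker
  spec:
    replicas: 2                 # desired state: two pods at all times
    selector:
      matchLabels:
        app: jdbc-worker
    template:
      metadata:
        labels:
          app: jdbc-worker
      spec:
        containers:
          - name: worker
            image: example/jdbc-worker:1.0

No analogous controller watches a plain Service object, which is why a deleted Service stays deleted.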

Helm does not react to the deletion of objects in the cluster: once it has created its objects, it does not interact with them again until the next helm command.

Hope that helps you understand the terms a bit better.

-- Shai Katz
Source: StackOverflow