Invalid spec when I run pod.yaml

10/8/2018

When I run my Pod I get the following error:

The Pod "cas-de" is invalid: spec: Forbidden: pod updates may not change fields other than spec.containers[*].image, spec.initContainers[*].image, spec.activeDeadlineSeconds or spec.tolerations (only additions to existing tolerations)

However, I searched the Kubernetes documentation and didn't find anything wrong with my manifest (I really don't understand where my mistake is).

Is it better to set volumeMounts in a Pod or in a Deployment?

apiVersion: v1
kind: Pod 
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  containers:
  - name: ds-mg-cas
    image: "docker-all.xxx.net/library/ds-mg-cas:latest"
    imagePullPolicy: Always
    ports:
    - containerPort: 8443
    - containerPort: 6402
    env:
    - name: JAVA_APP_CONFIGS
      value: "/apps/ds-cas/configs"
    - name: JAVA_EXTRA_PARAMS
      value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
    volumeMounts:
    - name: ds-cas-config
      mountPath: "/apps/ds-cas/context"
  volumes:
    - name: ds-cas-config
      hostPath:
        path: "/apps/ds-cas/context"
-- morla
google-kubernetes-engine
kubernetes

2 Answers

10/8/2018

There are several fields on objects that you simply aren't allowed to change after the object has initially been created. As a specific example, the reference documentation for Containers notes that volumeMounts "cannot be updated". If you hit one of these cases, you need to delete and recreate the object (possibly creating the new one first with a different name).
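
A minimal sketch of that workflow, assuming the manifest from the question is saved as pod.yaml:

# Delete the existing Pod, then recreate it from the same manifest;
# the delete refers to the Pod by name, so it needs the namespace flag
kubectl delete pod cas-de -n ds-svc
kubectl apply -f pod.yaml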

Is it better to set volumeMounts in a Pod or in a Deployment?

Never use bare Pods; always prefer one of the controllers that manage Pods, most often a Deployment.

Changing to a Deployment will actually solve this problem because updating a Deployment's pod spec will go through the sequence of creating a new Pod, waiting for it to become available, and then deleting the old one for you. It never tries to update a Pod in place.
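
For reference, here is roughly what the same spec looks like wrapped in a Deployment. The app: cas-de labels are an assumption (rename them as you like), but spec.selector.matchLabels must match the pod template's labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cas-de
  namespace: ds-svc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cas-de   # assumed label; must match the template labels below
  template:
    metadata:
      labels:
        app: cas-de
    spec:
      containers:
      - name: ds-mg-cas
        image: "docker-all.xxx.net/library/ds-mg-cas:latest"
        imagePullPolicy: Always
        ports:
        - containerPort: 8443
        - containerPort: 6402
        env:
        - name: JAVA_APP_CONFIGS
          value: "/apps/ds-cas/configs"
        - name: JAVA_EXTRA_PARAMS
          value: "-Djava.security.auth.login.config=./config/jaas.config -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6402"
        volumeMounts:
        - name: ds-cas-config
          mountPath: "/apps/ds-cas/context"
      volumes:
      - name: ds-cas-config
        hostPath:
          path: "/apps/ds-cas/context"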

-- David Maze
Source: StackOverflow

10/8/2018

The YAML template itself is valid. Most likely some of the forbidden fields were changed and then kubectl apply ... was executed against the existing Pod.

This looks more like a development scenario. The solution is to delete the existing Pod with kubectl delete pod cas-de -n ds-svc (the manifest puts the Pod in the ds-svc namespace, so deleting by name needs the -n flag) and then execute kubectl apply -f file.yaml or kubectl create -f file.yaml.
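
As a shorthand for the delete-then-create sequence, kubectl replace --force deletes and recreates the object in one step; a sketch, assuming the same file name:

kubectl replace --force -f file.yaml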

-- Praveen Sripati
Source: StackOverflow