Kustomize - "failed to find unique target for patch ..."

4/19/2020

I just started using kustomize. I have the following YAML files for kustomize:

ls -l ./kustomize/base/
816 Apr 18 21:25 deployment.yaml
110 Apr 18 21:31 kustomization.yaml
310 Apr 18 21:25 service.yaml

where deployment.yaml and service.yaml are generated with jib, and they run fine. The content of kustomization.yaml is the following:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:  
- service.yaml
- deployment.yaml  

And in another directory:

ls -l ./kustomize/qa
133 Apr 18 21:33 kustomization.yaml
95 Apr 18 21:37 update-replicas.yaml

where kustomization.yaml is:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

patchesStrategicMerge:
- update-replicas.yaml

and update-replicas.yaml is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2

After running "kustomize build ./kustomize/base", I run

~/kustomize build ./kustomize/qa
Error: no matches for OriginalId ~G_~V_Deployment|~X|my-app; no matches for CurrentId ~G_~V_Deployment|~X|my-app; failed to find unique target for patch ~G_~V_Deployment|my-app

I have looked at the related files and don't see any typo in the application name.

And here is the deployment.yaml file.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: docker.io/[my Docker ID]/my-app
        name: my-app
        resources: {}
        readinessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/readiness
        livenessProbe:
          httpGet:
            port: 8080
            path: /actuator/health/liveness
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "sleep 10"]
status: {}

Again, the above file is generated with jib, with some modifications, and it runs on Kubernetes directly.

How can I resolve this problem?

-- vic
kubernetes

1 Answer

4/24/2020

I was able to reproduce your scenario and didn't get any error.

I will post a step-by-step example so you can double-check yours.

  • I'll use a simple nginx server as an example; here is the file structure:
$ tree Kustomize/
Kustomize/
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── qa
    ├── kustomization.yaml
    └── update-replicas.yaml
2 directories, 5 files
  • Base YAMLs:
$ cat Kustomize/base/kustomization.yaml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml
$ cat Kustomize/base/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx
        ports:
        - containerPort: 80
$ cat Kustomize/base/service.yaml 
kind: Service
apiVersion: v1
metadata:
  name: nginx-svc
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  • QA YAMLs:
$ cat Kustomize/qa/kustomization.yaml 
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

patchesStrategicMerge:
- update-replicas.yaml
$ cat Kustomize/qa/update-replicas.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  • Now I'll build the base and apply it:
$ kustomize build ./Kustomize/base | kubectl apply -f -
service/nginx-svc created
deployment.apps/my-app created

$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-64778f875b-7gsg4   1/1     Running   0          52s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc    NodePort    10.96.114.118   <none>        80:31880/TCP   52s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   1/1     1            1           52s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-64778f875b   1         1         1       52s

Everything got deployed as intended: pod, deployment, service, and replicaset, with 1 replica. Now let's deploy the qa update:

$ kustomize build ./Kustomize/qa/ | kubectl apply -f -
service/nginx-svc unchanged
deployment.apps/my-app configured

$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-64778f875b-7gsg4   1/1     Running   0          3m26s
pod/my-app-64778f875b-zlvfm   1/1     Running   0          27s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/nginx-svc    NodePort    10.96.114.118   <none>        80:31880/TCP   3m26s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   2/2     2            2           3m26s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-64778f875b   2         2         2       3m26s
  • This is the expected behavior: the number of replicas was scaled to 2. (You can also verify this without a cluster, as shown below.)
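
You can verify the patch result without applying anything at all: kustomize build just renders the manifests, so the overlay output should already show the new replica count. A quick check (trimmed to the relevant line):

$ kustomize build ./Kustomize/qa/ | grep "replicas:"
  replicas: 2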

Suggestions:

  • I noticed you added the deployment to the question after it was deployed (through kubectl get deploy <name> -o yaml), but maybe the issue is in the original file and it gets changed somewhat when applied. You can compare the two, as sketched after this list.
  • Try to reproduce it with the example files I provided to see if you get the same output.
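
For the first suggestion, here is a sketch of how to compare the live object against your original file (live-deployment.yaml is just a scratch file name; the diff will be noisy because the API server adds fields such as status, uid, and annotations, but the name and labels should line up):

$ kubectl get deploy my-app -o yaml > live-deployment.yaml
$ diff live-deployment.yaml ./kustomize/base/deployment.yaml

Also note that kustomize matches a strategic-merge patch to a resource by its group/version/kind/name (and namespace), so the error means none of the base resources presented that identity. If the strategic-merge patch still can't find its target, you can make the match explicit with a JSON 6902 patch instead, since kustomize then looks the target up by those fields directly. A minimal sketch reusing the qa overlay from above (update-replicas-6902.yaml is a file name I'm making up here):

$ cat Kustomize/qa/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../base

patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: my-app
  path: update-replicas-6902.yaml

$ cat Kustomize/qa/update-replicas-6902.yaml
- op: replace
  path: /spec/replicas
  value: 2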

Let me know your results!

-- willrof
Source: StackOverflow