kubectl apply -f k8s: is unable to recognize service and deployment and has no matches for kind "Service" in version "v1"

5/27/2019

I have Kubernetes running on OVH without a problem. But I recently reinstalled the build server because of other issues and set everything up again, and now trying to apply the files gives this horrible error. Did I miss something? And what does this error really mean?

+ kubectl apply -f k8s
unable to recognize "k8s/driver-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-mysql-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-persistent-volume-claim.yaml": no matches for kind "PersistentVolumeClaim" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"

I tried all previous answers on SO but none of them worked for me. I don't think I really need it (correct me if I am wrong on that). I really would like some help with this.

I have installed kubectl and I have a config file that I use. And when I execute the kubectl get pods command I get the pods that were deployed before.
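For what it's worth, the error means kubectl could not find the requested group/version (`v1`, `apps/v1`) in the API discovery information returned by the server it is talking to, which usually points at a kubeconfig aimed at the wrong endpoint rather than at the manifests themselves. A diagnostic sketch of the check (the `available` list here is simulated sample output standing in for a real `kubectl api-versions` call):

```shell
# The manifests in k8s/ need these group/versions to be served:
needed='v1 apps/v1'

# In a real session this would be: available=$(kubectl api-versions)
# Simulated sample output from a healthy cluster:
available=$(printf 'admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nv1\n')

# Report which required versions the server actually serves.
for v in $needed; do
  if printf '%s\n' "$available" | grep -qx "$v"; then
    echo "$v: served"
  else
    echo "$v: MISSING"
  fi
done
```

If any required version prints as MISSING against the real `kubectl api-versions` output, the problem is the connection/config, not the YAML.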

These are some of the yml files

k8s/driver-cluster-ip-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: driver-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: driver-service
  ports:
    - port: 3000
      targetPort: 8080

k8s/driver-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: driver-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: driver-service
  template:
    metadata:
      labels:
        component: driver-service
    spec:
      containers:
        - name: driver
          image: repo.taxi.com/driver-service
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
      imagePullSecrets:
        - name: taxiregistry

Dockerfile

FROM maven:3.6.0-jdk-8-slim AS build
COPY . /home/app/
RUN rm /home/app/controllers/src/main/resources/application.properties
RUN mv /home/app/controllers/src/main/resources/application-kubernetes.properties /home/app/controllers/src/main/resources/application.properties
RUN mvn -f /home/app/pom.xml clean package

FROM openjdk:8-jre-slim
COPY --from=build /home/app/controllers/target/controllers-1.0.jar /usr/local/lib/driver-1.0.0.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/driver-1.0.0.jar"]
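To make explicit what the two RUN lines in the build stage do, namely swap the Kubernetes-specific properties file in as the active `application.properties`, here is the same sequence replayed in a scratch directory (the file contents are made-up placeholders, not from the real project):

```shell
# Scratch directory standing in for /home/app inside the build stage.
app=$(mktemp -d)
res="$app/controllers/src/main/resources"
mkdir -p "$res"

# Placeholder contents; the real files come from COPY . /home/app/
echo 'profile=local'      > "$res/application.properties"
echo 'profile=kubernetes' > "$res/application-kubernetes.properties"

# The two RUN steps: drop the default config, promote the k8s one.
rm "$res/application.properties"
mv "$res/application-kubernetes.properties" "$res/application.properties"

cat "$res/application.properties"   # prints: profile=kubernetes
```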

Output of kubectl get pods (screenshot)

Output of kubectl api-versions (screenshot)

-- o elhajoui
docker
kubectl
kubernetes
linux
microservices

1 Answer

5/28/2019

Solution found

I had to place the binary file in a .kube folder, which should be placed in the root directory. In my case I had to manually create the .kube folder in the root directory first.

After that I pointed my env variable at that folder and placed my config file with my settings in there as well.

Then I shared the folder with the jenkins user and granted the rights to the jenkins group.

Jenkins was not up to date, so I had to restart the Jenkins server, and after that it worked like a charm!

Keep in mind to restart the Jenkins server so that the changes you make take effect in Jenkins.
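A sketch of the steps described above (the paths and the placeholder kubeconfig contents are assumptions; a temporary directory stands in for the real home directory so the commands are safe to run anywhere):

```shell
# Stand-in for the real root/home directory.
home=$(mktemp -d)

# Create the .kube folder manually and drop the config file into it.
mkdir -p "$home/.kube"
printf 'apiVersion: v1\nkind: Config\n' > "$home/.kube/config"  # placeholder kubeconfig

# Point the env variable at the config so kubectl picks it up.
export KUBECONFIG="$home/.kube/config"

# Grant group read access (e.g. so the jenkins group can use it;
# on a real box you would also chgrp the folder to jenkins).
chmod -R g+r "$home/.kube"

echo "KUBECONFIG=$KUBECONFIG"
```

On the real server the jenkins user then inherits a working kubeconfig once Jenkins is restarted.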

-- o elhajoui
Source: StackOverflow