I have a Node.js app which I am deploying to Kubernetes.
I have made changes to the app and am redeploying it to K8s.
However, I notice that the deployment is not going through.
I checked my Docker Hub and yes, the latest image is there. This is my service.yaml file below:
apiVersion: v1
kind: Service
metadata:
  name: fourthapp
spec:
  type: LoadBalancer # Exposes the service externally via the cloud provider's load balancer
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: webapp
and this is my deploy.yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fourthapp
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: index.docker.io/leexha/nodejsapp:latest
        ports:
        - containerPort: 3000
        resources:
          requests:
            memory: 500Mi
            cpu: 0.5
          limits:
            memory: 500Mi
            cpu: 0.5
        imagePullPolicy: Always
When I apply the service.yaml it reads:
C:\Users\adrlee\Desktop\oracle\Web_projects>kubectl apply -f service.yml
service "fourthapp" unchanged
Am I doing anything wrong?
Kubernetes won't update running pods unless the pod spec is changed. If you want to force a new deployment, you can run this after the apply command:
kubectl patch deployment fourthapp -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}"
It will add/update a special date annotation on the pod template, and Kubernetes will replace the running pods.
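On kubectl 1.15 and newer, the same trick (bumping an annotation on the pod template to force a rolling restart) is built in, which avoids the shell-quoting headaches of the patch command:

```shell
# Adds a restartedAt annotation to the pod template,
# which changes the pod spec and triggers a rolling restart
kubectl rollout restart deployment/fourthapp
```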
If I understood the question correctly, you should update the Deployment instead. The Service is just a kind of load balancer which dispatches traffic between your pods.
First, you should add imagePullPolicy: Always to the deployment to force k8s to pull the newest image.
If you want to update the deployment you can run kubectl apply -f deploy.yml, or perform a Rolling Update.
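A rolling update can also be triggered straight from the CLI by pointing the container at a new image; a minimal sketch, assuming each build gets its own tag (the v2 tag here is hypothetical):

```shell
# Point the webapp container at a new, uniquely tagged image;
# Kubernetes then replaces the pods one by one
kubectl set image deployment/fourthapp webapp=leexha/nodejsapp:v2

# Watch the rollout progress (use `kubectl rollout undo` if it goes wrong)
kubectl rollout status deployment/fourthapp
```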
If you don't give every build of your image a distinct name, it's hard to force Kubernetes to restart a Deployment when the underlying image changes: it has no way of knowing that the "latest" tag on Docker Hub now means something else. (imagePullPolicy: Always will at least force it to pull a new image if it happens to be restarting anyway.) When you run kubectl apply, it looks at the Deployment you're uploading, sees that it matches what's already running, and does nothing.
Best practice is to not use the "latest" tag and instead give each build some sort of unique identifier (a timestamp, a source control commit ID, ...). Then you can update the image ID in the pod spec, kubectl apply will see that something is different, and Kubernetes will perform a rolling update of the running pod(s) for you. This also has the advantage that, if a build is bad, you can easily go backwards by changing the image tag back to a previous build.
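A sketch of that workflow, assuming docker and kubectl are configured against your registry and cluster (the image, deployment, and container names come from the question; the git commit SHA is just one choice of unique identifier):

```shell
# Tag each build uniquely, e.g. with the short git commit SHA
TAG="$(git rev-parse --short HEAD)"
IMAGE="leexha/nodejsapp:${TAG}"

# Build and push the uniquely tagged image
docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# Update the Deployment's pod spec; the changed tag triggers a rolling update
kubectl set image deployment/fourthapp webapp="${IMAGE}"
```

Rolling back a bad build is then just re-running the last command with the previous tag.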