I am working locally with minikube, and every time I make a change to the code I delete the service (and the deployment) and create new ones.
This generates a new IP for each container, so I also have to update my frontend, and I have to re-insert data into my db container, since I lose all its data every time I delete the service.
It wastes far too much time to work efficiently.
Is there a way to update a container without generating new IPs and without deleting the pod? (I don't want to delete my db container every time I update the backend code.)
Use kops and create a production-like cluster in AWS on the free tier. To fix this, make sure you use a LoadBalancer service for your frontends. Create a service for your db container that exposes its port so your frontends can reach it, and reference that service name in your frontend manifests so it stays static. Service discovery will take care of the IP address, and your containers will automatically connect to the ports. You can also set up persistent storage for your DBs. When you update your frontend code, use this to update your containers so nothing else changes:
kubectl set image deployment/helloworld-deployment basicnodeapp=buildmystartup/basicnodeapp:2
Here is how I would do a stateful app in production on AWS, using WordPress as an example.
###############################################################################
#
# Creating a stateful app with persistent storage and front end containers
#
###############################################################################
* Here is how you create a stateful app using volumes and persistent storage for production.
* To start off, we can automate the storage volume creation for our MySQL server with a StorageClass object and a PersistentVolumeClaim, like so:
$ cat storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1b
$ cat pv-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: db-storage
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
* Let's go ahead and create these so they are ready for our MySQL deployment
$ kubectl create -f storage.yml
storageclass "standard" created
$ kubectl create -f pv-claim.yml
persistentvolumeclaim "db-storage" created
* Let's also create our secrets file, which MySQL and WordPress will need
$ cat wordpress-secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: wordpress-secrets
type: Opaque
data:
  db-password: cGFzc3dvcmQ=
  # random sha1 strings - change all these lines
  authkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OA==
  loggedinkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ4OQ==
  secureauthkey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MQ==
  noncekey: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5MA==
  authsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mg==
  secureauthsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5Mw==
  loggedinsalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NA==
  noncesalt: MTQ3ZDVhMTIzYmU1ZTRiMWQ1NzUyOWFlNWE2YzRjY2FhMDkyZGQ5NQ==
$ kubectl create -f wordpress-secrets.yml
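* The values under data must be base64-encoded. A quick way to generate them (just a sketch; replace the password and the random key material with your own):
$ echo -n 'password' | base64
cGFzc3dvcmQ=
$ echo -n "$(openssl rand -hex 20)" | base64   # one value per key/salt line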
* Take note of the names we assigned; we will need them for the MySQL deployment.
* We created the storage in us-east-1b, so let's label our node in that AZ so the deployment is scheduled onto that node and can attach our volume.
$ kubectl label nodes ip-172-20-48-74.ec2.internal storage=mysql
node "ip-172-20-48-74.ec2.internal" labeled
* Here is our MySQL controller definition. Notice at the bottom we use a nodeSelector.
* It must match the label we just set, so the pod lands on the node in us-east-1b and can attach our volume.
$ cat wordpress-db.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-db
spec:
  replicas: 1
  selector:
    app: wordpress-db
  template:
    metadata:
      name: wordpress-db
      labels:
        app: wordpress-db
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        args:
          - "--ignore-db-dir=lost+found"
        ports:
        - name: mysql-port
          containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: db-password
        volumeMounts:
        - mountPath: "/var/lib/mysql"
          name: mysql-storage
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: db-storage
      nodeSelector:
        storage: mysql
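* Create the database controller so it can claim the volume:
$ kubectl create -f wordpress-db.yml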
* Before we go on to the deployment, let's expose a service on port 3306 so WordPress can connect.
$ cat wordpress-db-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db
spec:
  ports:
  - port: 3306
    protocol: TCP
  selector:
    app: wordpress-db
  type: NodePort
$ kubectl create -f wordpress-db-service.yml
service "wordpress-db" created
* Now let's work on the deployment. We are going to use EFS to store all our pictures and blog posts, so let's create that in us-east-1b as well.
* First, let's create our EFS NFS share:
$ aws efs create-file-system --creation-token 1
{
"NumberOfMountTargets": 0,
"SizeInBytes": {
"Value": 0
},
"CreationTime": 1501863105.0,
"OwnerId": "812532545097",
"FileSystemId": "fs-55ed701c",
"LifeCycleState": "creating",
"CreationToken": "1",
"PerformanceMode": "generalPurpose"
}
$ aws efs create-mount-target --file-system-id fs-55ed701c --subnet-id subnet-7405f010 --security-groups sg-ffafb98e
{
"OwnerId": "812532545097",
"MountTargetId": "fsmt-a2f492eb",
"IpAddress": "172.20.53.4",
"LifeCycleState": "creating",
"NetworkInterfaceId": "eni-cac952dd",
"FileSystemId": "fs-55ed701c",
"SubnetId": "subnet-7405f010"
}
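* One easy-to-miss step (a hedged note, not shown in the output above): the mount target's security group must allow inbound NFS (TCP 2049) from the nodes, for example:
$ aws ec2 authorize-security-group-ingress --group-id sg-ffafb98e --protocol tcp --port 2049 --source-group <node-security-group-id>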
* Before we launch the deployment, let's make sure our MySQL server is up and its claim is bound to the volume we created.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
db-storage Bound pvc-82c889c3-7929-11e7-8ae1-02fa50f1a61c 8Gi RWO standard 51m
* A STATUS of Bound means the claim is bound to a provisioned volume, so our MySQL data directory is backed by persistent storage.
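* You can also see the dynamically provisioned, EBS-backed volume the claim is bound to:
$ kubectl get pv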
* Now let's launch the WordPress frontend with two replicas.
$ cat wordpress-web.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: wordpress-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4-php7.0
        # uncomment to fix perm issue, see also https://github.com/kubernetes/kubernetes/issues/2630
        # command: ['bash', '-c', 'chown www-data:www-data /var/www/html/wp-content/uploads && apache2 -DFOREGROUND']
        ports:
        - name: http-port
          containerPort: 80
        env:
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: db-password
        - name: WORDPRESS_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authkey
        - name: WORDPRESS_LOGGED_IN_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinkey
        - name: WORDPRESS_SECURE_AUTH_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthkey
        - name: WORDPRESS_NONCE_KEY
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncekey
        - name: WORDPRESS_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: authsalt
        - name: WORDPRESS_SECURE_AUTH_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: secureauthsalt
        - name: WORDPRESS_LOGGED_IN_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: loggedinsalt
        - name: WORDPRESS_NONCE_SALT
          valueFrom:
            secretKeyRef:
              name: wordpress-secrets
              key: noncesalt
        - name: WORDPRESS_DB_HOST
          value: wordpress-db
        volumeMounts:
        - mountPath: /var/www/html/wp-content/uploads
          name: uploads
      volumes:
      - name: uploads
        nfs:
          server: us-east-1b.fs-55ed701c.efs.us-east-1.amazonaws.com
          path: /
* Notice we put together a DNS name for the NFS share:
* AZ.fs-id.efs.region.amazonaws.com
* Now let's create our deployment.
$ kubectl create -f wordpress-web.yml
$ cat wordpress-web-service.yml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - port: 80
    targetPort: http-port
    protocol: TCP
  selector:
    app: wordpress
  type: LoadBalancer
* And now the LoadBalancer service in front of our two web replicas
$ kubectl create -f wordpress-web-service.yml
* Now let's find our ELB and create a Route 53 DNS name for it.
$ kubectl get services
$ kubectl describe service wordpress
Name: wordpress
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=wordpress
Type: LoadBalancer
IP: 100.70.74.90
LoadBalancer Ingress: acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com
Port: <unset> 80/TCP
NodePort: <unset> 30601/TCP
Endpoints: 100.124.209.16:80,100.94.7.215:80
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
38m 38m 1 service-controller Normal CreatingLoadBalancer Creating load balancer
38m 38m 1 service-controller Normal CreatedLoadBalancer Created load balancer
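* A hedged sketch of the Route 53 step, pointing a record at the ELB hostname from the output above (the hosted zone ID and domain are placeholders):
$ aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "blog.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "acf99336a792b11e78ae102fa50f1a61-516654231.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'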
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
wordpress-deployment 2 2 2 2 2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sysdig-agent-4sxv2 1/1 Running 0 3d
sysdig-agent-nb2wk 1/1 Running 0 3d
sysdig-agent-z42zj 1/1 Running 0 3d
wordpress-db-79z87 1/1 Running 0 54m
wordpress-deployment-2971992143-c8gg4 0/1 ContainerCreating 0 1m
wordpress-deployment-2971992143-h36v1 1/1 Running 0 1m
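* Circling back to the original question: when your code changes, you delete nothing. Build and push a new image, then update the Deployment in place; the services, their DNS names and the database pod all stay put (the image tag here is just whatever you pushed):
$ kubectl set image deployment/wordpress-deployment wordpress=wordpress:4.8-php7.0
$ kubectl rollout status deployment/wordpress-deployment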
I think you actually need to solve two issues: the frontend reaching the backend by IP, and the database being tied to every backend redeploy. So the final solution should look like this:
First of all, in your front-end, use DNS names instead of IP addresses to reach your backend. This will save you from rebuilding your front-end app every time you deploy your backend.
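As a minimal sketch (the names backend, app: backend and BACKEND_URL are made up for illustration, not taken from your setup): give the backend a Service and have the frontend reference the service name, which cluster DNS resolves to whatever pods currently back it:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 8080
Then in the frontend's container spec, point at the name instead of an IP:
env:
- name: BACKEND_URL
  value: "http://backend:8080"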
That being said, there is no need to delete your service just to deploy a new version of your backend. In fact, you just need to update your deployment, making it refer to the new docker image you have built using the latest code for your backend.
Finally, as far as I understand, you have both your application and your database inside the same Pod. This is not good practice: separate them, so that deploying a new version of your code does not cause downtime for your database.
As a side note (not sure if this is the case), if you are using minikube as your day-to-day development environment you're probably doing it wrong; plain Docker with volume binding is usually enough, but that's out of scope of your question.
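For instance, a hedged sketch of that local workflow (the image, port and path are placeholders): bind-mount your source into the container so code changes show up immediately, with no rebuild and no redeploy:
$ docker run --rm -it -p 3000:3000 -v "$(pwd)":/usr/src/app -w /usr/src/app node:8 npm start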
It's easy to update an existing Deployment with a new image, without having to delete it.
Imagine we have a YAML file describing the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To create this Deployment, run the following command:
$ kubectl create -f nginx-deployment.yaml --record
(--record appends the current command to the annotations of the created or updated resource. This is useful for future reviews, such as investigating which commands were executed in each Deployment revision, and for making a rollback.)
To see the Deployment rollout status, run
$ kubectl rollout status deployment/nginx-deployment
To update the nginx image version, just run the command:
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Or you can edit existing Deployment with the command:
$ kubectl edit deployment/nginx-deployment
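Or, since you already keep the Deployment in a YAML file, you can simply change the image tag in the file and apply it, which fits an edit-the-manifest workflow without deleting anything:
$ kubectl apply -f nginx-deployment.yaml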
To see the status of the Deployment update process, run the command:
$ kubectl rollout status deployment/nginx-deployment
Waiting for rollout to finish: 2 out of 3 new replicas have been updated...
deployment "nginx-deployment" successfully rolled out
or
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 3 3 3 3 36s
Each time you update the Deployment, it updates the Pods by creating a new ReplicaSet, scaling it up to 3 replicas, and scaling the old ReplicaSet down to 0. If you update the Deployment again while the previous update is still in progress, it starts creating a new ReplicaSet immediately, without waiting for the previous update to complete.
$ kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-1180356465 3 3 3 4s
nginx-deployment-2538420311 0 0 0 56s
If you made a typo while editing the Deployment (for example, nginx:1.91), you can roll it back to the previous good version.
First, check the revisions of this deployment:
$ kubectl rollout history deployment/nginx-deployment
deployments "nginx-deployment"
REVISION CHANGE-CAUSE
1 kubectl create -f nginx-deployment.yaml --record
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
3 kubectl set image deployment/nginx-deployment nginx=nginx:1.91
Because we recorded the command while creating this Deployment using --record, we can easily see the changes we made in each revision.
To see the details of each revision, run:
$ kubectl rollout history deployment/nginx-deployment --revision=2
deployments "nginx-deployment" revision 2
Labels: app=nginx
pod-template-hash=1159050644
Annotations: kubernetes.io/change-cause=kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
Containers:
nginx:
Image: nginx:1.9.1
Port: 80/TCP
QoS Tier:
cpu: BestEffort
memory: BestEffort
Environment Variables: <none>
No volumes.
Now you can roll back to the previous version with:
$ kubectl rollout undo deployment/nginx-deployment
deployment "nginx-deployment" rolled back
Or you can roll back to a specific revision:
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
deployment "nginx-deployment" rolled back
For more information, please read the Deployment section of the Kubernetes documentation: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/