We are trying to create a replica of a k8s cluster. We already have a backup of the cluster in S3, created by Ark running inside the cluster (as a scheduler). I am trying to restore the data to the new cluster manually. I have the tar.gz file on the VM, but I don't know how to restore it; the docs and blog posts say to use `ark restore create --from-backup <backup-name>`.
I am not sure what I have to provide as the backup name. I extracted the backup file and have the `resources` directory as described in the Velero documentation. I tried every name I could think of, but got: `An error occurred: backups.ark.heptio.com "<strings that I am trying as backup_name>" not found`
I am new to this, so please ask if I need to provide more information.
We store the backups created by Ark in S3, which is configured in Ark's YAML file.
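For reference, a minimal sketch of that configuration: in older Ark versions (pre-0.10) the bucket was set in a `Config` custom resource. The namespace, bucket name, and region below are placeholders; substitute your own, and check the Ark version you are running, since newer releases moved this to a `BackupStorageLocation` resource.

```yaml
# Hypothetical example: point Ark at the S3 bucket that holds the
# old cluster's backups (values here are placeholders).
apiVersion: ark.heptio.com/v1
kind: Config
metadata:
  namespace: heptio-ark
  name: default
backupStorageProvider:
  name: aws
  bucket: <your-backup-bucket>
  config:
    region: <your-region>
```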
To view the backups that were taken in the old cluster, point Ark in the new cluster at the same S3 bucket; this is done in Ark's YAML configuration. Then check the custom resource definitions with `kubectl get crd`.
Among the custom resources, look for the Ark entries; one of them will be
`backups.ark.heptio.com`.
Then list the backups in this CRD with `kubectl get backups.ark.heptio.com -n <ark-namespace>`.
It should show all the available backups. Then use the restore command:
`ark restore create --from-backup <name-of-the-backup-from-the-previous-step> -n <namespace-where-ark-is-running>`. It should start restoring the backup. The restore object gets its own name (derived from the backup name); you can list restores with `ark restore get -n <namespace-where-ark-is-running>` and check the status of one with `ark restore describe <restore-name> -n <namespace-where-ark-is-running>`.
Also, you can check the logs of the Ark pod running inside the cluster; it logs the resources as they are restored.
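Putting the steps above together, a minimal command sketch. Assumptions (replace with your own values): Ark runs in the `heptio-ark` namespace as a deployment named `ark`, and the backup you want is called `daily-backup-20190101`.

```shell
# 1. Confirm the Ark CRDs exist in the new cluster
kubectl get crd | grep ark.heptio.com

# 2. List the backups synced from the shared S3 bucket
kubectl get backups.ark.heptio.com -n heptio-ark
# (equivalently: ark backup get -n heptio-ark)

# 3. Create a restore from the chosen backup
ark restore create --from-backup daily-backup-20190101 -n heptio-ark

# 4. Find the generated restore name, then check its status
ark restore get -n heptio-ark
ark restore describe <restore-name> -n heptio-ark

# 5. Follow the Ark server logs while the restore runs
kubectl logs deployment/ark -n heptio-ark -f
```

Note that the restore is driven by the backup objects Ark syncs from S3, not by the extracted tar.gz on your VM; once the new cluster sees the bucket, you don't need to touch the archive manually.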