Backup and Restore Virtual Machine running on OpenShift
A guide for performing the backup and restore of the Virtual Machine (VM) running on the OpenShift environment.
Deprecated Documentation
This document is deprecated and no longer supported. For accurate, up-to-date information, please refer to the documentation for the latest version of Trilio.
Red Hat OpenShift Container Platform (OCP) is the market-leading Kubernetes platform. OpenShift efficiently manages your Kubernetes-based applications deployed on it. With the help of the OpenShift Virtualization operator, users can also run their VMs into a pod and these pods are also managed by OpenShift similar to other pods running Kubernetes-based applications.
## Install the OpenShift Virtualization Operator

1. Log in to the OpenShift Management Console and go to OperatorHub.
2. Search for the “OpenShift Virtualization” Operator.
3. Click on “Install” and proceed with the installation.
4. Select the appropriate options and proceed with the installation.
5. Once the installation is complete, create a HyperConverged resource using the local Storage Class name. The HyperConverged resource creates and maintains the OpenShift Virtualization deployments.
6. Once the installation is complete, a new “Virtualization” option is visible under the Workload section.
Note: You can learn more about OpenShift Virtualization in this video.
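If you prefer the CLI for step 5, a minimal sketch of creating the HyperConverged resource is below. It assumes the operator was installed into its default `openshift-cnv` namespace; `kubevirt-hyperconverged` is the resource name the operator expects, and the empty `spec` accepts the defaults (the local storage class can be configured in `spec` as needed):

```shell
# Sketch: create a minimal HyperConverged resource so the operator can
# roll out the OpenShift Virtualization components (defaults assumed).
oc apply -f - <<'EOF'
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
EOF

# Wait until the deployment reports Available.
oc wait hyperconverged/kubevirt-hyperconverged -n openshift-cnv \
  --for=condition=Available --timeout=10m
```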
Follow the steps below to install and configure the underlying storage required to run virtual machines.
## Install and Configure the OpenShift Data Foundation Operator for Underlying VM Storage

Once the OpenShift Virtualization Operator is installed, it also needs underlying storage for the VM deployments. You will install the OpenShift Data Foundation (ODF) Operator (formerly known as OpenShift Container Storage (OCS)), which configures the underlying storage for the VMs to use.

Follow the steps below to install and configure the ODF Operator:

1. Log in to the OpenShift Management Console and go to OperatorHub.
2. Search for the “OpenShift Data Foundation” Operator.
3. Select the Operator, provide the appropriate input values, and click on Install.
4. After the installation is complete, create a Storage System, which in turn creates a Storage Cluster. You must provide an existing storage class to provision the new storage before the Storage Cluster creation can proceed.
5. Once the Storage System creation is complete, it installs different storage components in the background, such as BackingStore, BucketClass, and CephBlockPool.
6. Once all the components are in the Ready state, you can verify that the storage class has also been created in the backend.
7. Now, you are ready to deploy a virtual machine on OpenShift.
## Deploy a Virtual Machine on OpenShift

Since you have installed the OpenShift Virtualization Operator, you might already be aware of how to deploy a VM, or you may have already deployed one.

If not, it's very easy and similar to virtual machine deployment on any other virtualization technology. You can follow this video from the OpenShift team to deploy a VM.
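As a CLI alternative, a minimal VirtualMachine manifest can be sketched as below. For simplicity this example boots from a containerdisk image (the VM in this article instead uses DataVolume-backed disks on the ODF storage class); the names and image here are illustrative:

```shell
# Sketch: deploy a minimal KubeVirt VirtualMachine from the CLI.
oc apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm          # illustrative name
  namespace: demo-ns     # illustrative namespace
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest
EOF
```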
## Backup a Virtual Machine Using T4K

After setting up all the prerequisites to deploy Trilio for Kubernetes, you might have already launched the T4K Management Console. Follow the UI authentication section to log in to the console using OpenShift credentials.

Now, we can proceed to perform a VM backup. Follow the steps below to create a VM backup:

1. On the T4K Management Console, select the namespace in the primary cluster where the VM is created from the top dropdown.
2. Select the namespace, click on the “Action” or “Bulk Action” button on the right, and select the “Create Backup” option.
3. To select the resources to be backed up, we need to create a backupplan that can be reused in the future to create backups.
4. After selecting the backupplan, click on “Continue” to provide a backup name, and click on “Create Backup” to start the namespace backup operation.
5. Once the backup is triggered, you can verify the metadata objects captured as part of the backup from the `In Progress` backup.
6. Now, you can monitor the backup status in the status log window.
7. With all steps marked green, the backup has completed successfully, and we can proceed to restore the VM backup on the same OpenShift cluster.
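For reference, the UI workflow above creates T4K custom resources behind the scenes. The sketch below shows roughly what the equivalent BackupPlan and Backup objects look like from the CLI; the names, namespace, and target are illustrative, the Target object must already exist, and exact field names can vary between T4K versions, so check this against the T4K CRD reference for your release:

```shell
# Sketch: the CLI-equivalent T4K custom resources for a namespace backup
# (illustrative names; verify fields against your T4K version's CRDs).
oc apply -f - <<'EOF'
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: vm-backupplan        # illustrative
  namespace: demo-ns         # namespace that holds the VM
spec:
  backupConfig:
    target:
      name: demo-target      # a pre-created T4K Target
      namespace: demo-ns
---
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: vm-backup
  namespace: demo-ns
spec:
  type: Full
  backupPlan:
    name: vm-backupplan
    namespace: demo-ns
EOF
```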
## Restore a Virtual Machine Using T4K

### Remove the Existing VM from the OpenShift Cluster

Follow the steps below to delete the VM from the OpenShift Cluster:

1. Go to the OpenShift Console and click on Virtualization under the Workload tab. Click on the three dots on the right side of the VM entry and select the last option, “Delete Virtual Machine”.
2. Once the VM entry is removed from the OCP console, check that both PVCs created for the VM disks are also removed.
3. After you have deleted the VM and all its related resources, you are good to proceed with the namespace restore operation.
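If you prefer the CLI, the same cleanup can be sketched with `oc` (the VM and namespace names here are illustrative):

```shell
# Sketch: delete the VM from the CLI. Deleting the VirtualMachine also
# removes its running instance and, for DataVolume-owned disks, the PVCs.
oc delete vm demo-vm -n demo-ns

# Confirm the disk PVCs are gone before restoring into the namespace.
oc get pvc -n demo-ns
```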
### Perform a VM Restore from the Backup

In the steps below, you will learn how to restore the VM from the namespace backup performed earlier in this article. Follow the steps below to perform the namespace restore from the backup present on the target.

1. Log in to the T4K Management Console and select “Backup & Recovery -> Backupplans”. Click on the “Namespace” dropdown at the top and select the namespace that we backed up.
2. Click on the “Actions” button and select the “View Backup & Restore Summary” option.
3. On the Monitoring summary pop-up for the namespace's backups and restores, click on the accordion to expand the backup details, then click on the “Restore” option at the bottom.
4. On the restore pop-up window, provide the “Restore Name” and the “Restore Namespace” where we want to restore the VM.
5. Click on the “Next” button to move to “Transform Components” and “Exclude Resources”.
6. Click on Exclude Resources; if there is a “DataVolume” resource object present in the backup, add it to be excluded during the restore.
7. Now, click on “Transform Components” -> “Add Transform Components” -> select “Custom” from the dropdown to list the resources present in the backup. Since we are creating the first transformation, click on `Create New`.
8. Now, we will perform the transformation for the PersistentVolumeClaim of the rootdisk and datadisk-0 PVCs.
9. We will perform the below transformations on the rootdisk PVC first:
   - Remove the datadisk-0 PVC from the Objects list.
   - Provide a transformation name.
   - You need to understand the rootdisk PVC YAML to learn why the specific transformations are added. Follow the below transformations for the rootdisk PVC:
     - If the annotation `cdi.kubevirt.io/storage.populatedFor` already exists in the PVC, there is no need to add it; if it does not, add it as below: `/metadata/annotations/cdi.kubevirt.io~1storage.populatedFor : "rhel8-korean-leopard-rootdisk-i27do"`
     - If the annotation `cdi.kubevirt.io/ownedByDataVolume` exists in the PVC, remove it using the `remove` operation with the path `/metadata/annotations/cdi.kubevirt.io~1ownedByDataVolume`.
     - Remove the path `/spec/volumeName`.
     - Remove the ownerReferences entry `/metadata/ownerReferences/0`.
     - Replace the dataSource name at `/spec/dataSource/name` with the VM name `rhel8-korean-leopard`.
   - Click on “Preview Changes” to make sure the annotations are correct; if they are not, the dry run will fail and display an error message.
   - If everything looks correct, close the window and click on the “Apply” button.
   - Now, expand the “PersistentVolumeClaim” section to see the added transformation, and click on the “Set Transformation” button to move on to the datadisk-0 PVC transformation.
10. We will perform the below transformations on the datadisk-0 PVC:
    - Remove the rootdisk PVC from the Objects list.
    - Provide a transformation name.
    - You need to understand the datadisk-0 PVC YAML to learn why the specific transformations are added. Follow the below transformations for the datadisk-0 PVC:
      - If the annotation `cdi.kubevirt.io/storage.populatedFor` already exists in the PVC, there is no need to add it; if it does not, add it as below: `/metadata/annotations/cdi.kubevirt.io~1storage.populatedFor : "rhel8-korean-leopard-datadisk-0-f92h1"`
      - The annotation `cdi.kubevirt.io/ownedByDataVolume` does not exist on the datadisk-0 PVC, so no transformation is needed for it.
      - Remove the path `/spec/volumeName`.
      - Remove the ownerReferences entry `/metadata/ownerReferences/0`.
      - Since the `/spec/dataSource` path is not present in the datadisk-0 PVC spec, no transformation is needed for the dataSource name.
    - Click on “Preview Changes” to make sure the annotations are correct; if they are not, the dry run will fail and display an error message.
    - If everything looks correct, close the window and click on the “Apply” button.
    - Now, expand the “PersistentVolumeClaim” section to see the added transformation for the datadisk-0 PVC. Click on the `Add` button to save the datadisk-0 PVC transformation and view the details of the PVC transformation components.
11. Click on the `Save` button to start the restore operation. It will take a few minutes to complete, depending on the size of the PVCs being restored.
12. After some time, you can check that both the PVCs are created in the restore namespace.
13. The VM is running as expected, and both disks are attached to it.
14. At this point, the restore operation is successful. Validate the data on the disks that you created before taking the backup; once the data checks out, the VM has been successfully restored using the restore transformation feature.