Management Console
Learn about using Trilio for Kubernetes with the Management Console
To get started with Trilio via the management console in your environment, the following steps must be performed:
1. Install a compatible CSI Driver
2. Create a Backup Target - A location where backups will be stored.
3. Create a retention policy (Optional) - To specify how long to keep the backups.
4. Run an example:
   - Label Example
   - Helm Example
   - Operator Example
   - Virtual Machine Example
   - Namespace Example
Skip this step if your environment already has a CSI driver installed with snapshot capability.
Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature.
You should check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution. See the selected CSI driver's documentation for details on the installation of the driver in your cluster.
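Before proceeding, you can confirm that your cluster already has snapshot support in place. The commands below are a quick sanity check (the exact resource names depend on which CSI driver and snapshot controller version you run):

```shell
# Verify that the snapshot CRDs and at least one VolumeSnapshotClass exist
kubectl api-resources | grep volumesnapshot
kubectl get volumesnapshotclass

# Confirm your StorageClass is backed by a CSI provisioner
kubectl get storageclass
```

If `volumesnapshotclass` is missing, install the snapshot CRDs and a compatible CSI driver before continuing.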
Create a secret containing the credentials for the data store where backups will be stored. An example is provided below:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD
```
You can either create the secret using the above YAML definition or use the management console to create it as part of the workflow for creating the backup target.

Create secret while creating AWS target
Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any S3-compatible storage target.
Supported values for S3 vendors include:
"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"
An Amazon S3 target example is provided below:

Create demo-s3-target on AWS using above created secret

demo-s3-target created
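For reference, a sketch of what the S3 Target CR might look like is shown below. The exact `apiVersion` and field names can vary between T4K releases, and the region, bucket name, and capacity values here are placeholders — use the Target examples in the Custom Resource Definition section as the authoritative template:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1            # placeholder: your bucket's region
    bucketName: trilio-backups   # placeholder: your bucket name
    credentialSecret:
      name: sample-secret        # the Secret created above
      namespace: default
  thresholdCapacity: 100Gi       # placeholder capacity limit
```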
Note: With the above configuration, the target is created in the current user's namespace unless otherwise specified. Additional information on bucket permissions can be found here: AWS S3 Target Permissions
While the example backup custom resources created by following this Getting Started page can be deleted manually via `kubectl` commands, Trilio also provides backup retention capability to automatically delete backups based on defined time boundaries.
Create demo-retention-policy

Retention policy created successfully
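As a hedged sketch, a retention Policy CR typically looks like the following. The field names under `retentionConfig` and the `apiVersion` may differ by T4K version, and the counts shown are illustrative — consult the Retention Policy spec in the application CRD reference for the exact schema:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-retention-policy
spec:
  type: Retention
  retentionConfig:
    latest: 5      # illustrative: keep the 5 most recent backups
    weekly: 7      # illustrative: keep 7 weekly backups
    monthly: 30    # illustrative: keep 30 monthly backups
```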
More information on the Retention Policy spec can be found in the application CRD reference section. A retention policy is referenced in the BackupPlan CR.
Note: With the above configuration, the policy is created in the default namespace unless otherwise specified.
The following section will cover creating a sample application and backup/restore of it via labels, Helm, Operator, or a namespace-based backup.
If your environment does not have a CSI driver with snapshot capability installed, or a StorageClass and VolumeSnapshotClass configured, refer to the Install CSI Driver section. Then follow the steps in the Create a Sample Application section below.
More details about CRDs and their usage/explanation can be found in the Custom Resource Definition Section.
Note:
1. Backup and BackupPlan should be created in the same namespace.
2. For the restore operation, the resources are restored into the namespace where the Restore CR is created.
3. If more than one backup exists for the same application, users can select any existing backup to perform the restore.
The following sections will create a sample application (tagged with labels), back up the application via labels, and then restore it.
The following steps will be performed.
1. Create a sample MySQL application.
2. Create a BackupPlan CR using the management console that specifies the MySQL application via labels.
3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above.
4. Create a Restore CR using the management console with a reference to the Backup CR created above.
Use the following screenshot to assist in the deployment of the MySQL application using the label.

T4K has auto-discovered the application from backup namespace
Create a BackupPlan CR by selecting the application created in the previous step via UI labels in the same namespace where the application resides.

Select application deployed by label and create new backupplan

app:k8s-demo-app is part of backupplan
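The BackupPlan created through the console corresponds to a CR along the following lines. This is a hedged sketch: the `apiVersion` and component field names may vary by T4K version, and the plan name is illustrative; only the `app: k8s-demo-app` label and the target/policy names come from the examples above:

```yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-mysql-label-backupplan   # illustrative name
  namespace: backup
spec:
  backupConfig:
    target:
      name: demo-s3-target            # the Target created earlier
    retentionPolicy:
      name: demo-retention-policy     # the Policy created earlier
  backupPlanComponents:
    custom:
      - matchLabels:
          app: k8s-demo-app           # the label selecting the MySQL app
```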
Create a Backup CR using the UI to protect the BackupPlan. The type of the backup can be either full or incremental.
Note: The first backup to a target location will always be a full backup.
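The equivalent Backup CR is roughly the following (a sketch — `apiVersion` and field names may differ by T4K version, and the BackupPlan name is the illustrative one used in this walkthrough):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-mysql-label-backup
  namespace: backup
spec:
  type: Full   # or Incremental; the first backup to a target is always Full
  backupPlan:
    name: demo-mysql-label-backupplan   # illustrative BackupPlan name
```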

Select the backupplan and enter backup name

MySQL demo application backup is in-progress state

Application scoped backup of MySQL app deployed by label is successful

Details of the demo-mysql-label-backup
Finally, create the Restore CR using the UI to restore the Backup into the same or a different namespace. In the example provided below, demo-mysql-label-backup is being restored into the "restore" namespace.
Restore to the same cluster but a different namespace
Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.
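A Restore CR referencing the backup looks roughly like this (a hedged sketch — field names may vary by T4K version; note that resources are restored into the namespace where the Restore CR itself is created):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-mysql-label-restore
  namespace: restore   # resources are restored into this namespace
spec:
  source:
    type: Backup
    backup:
      name: demo-mysql-label-backup   # the Backup created above
      namespace: backup
```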

Select the restore-point and click on Restore button

Provide restore name and restore namespace

demo-mysql-label-restore is in progress state

MySQL application is restored to restore namespace
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), the same target should be created and Target Browsing should be enabled to browse the stored backups.

Enable the Target Browsing for the Target

Use Launch Browser option and search backup using backupplan name

Select the backup and click on Restore

Provide restore name and restore namespace

Restore to a different cluster is successful

MySQL application after the restore to a different cluster is successful
The following sections will create a sample application via Helm, back up the application via Helm selector fields, and then restore the application using the management UI.
The following steps will be performed.
1. Create a cockroachdb instance using Helm.
2. Create a BackupPlan CR using the management console that specifies the cockroachdb application to protect.
3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above.
4. Create a Restore CR using the management console with a reference to the Backup CR created above.
Use the following screenshot to assist in the deployment of the "Cockroachdb" application using the helm chart.

Auto-discovered cockroachdb helm application
Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.
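For Helm-based applications, the BackupPlan selects the release by name rather than by labels. A hedged sketch (the `apiVersion` and field names may vary by T4K version; the release name `cockroachdb-app` matches the one used later in this section's cleanup step):

```yaml
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-cockroachdb-helm-backupplan   # illustrative name
spec:
  backupConfig:
    target:
      name: demo-s3-target                 # the Target created earlier
  backupPlanComponents:
    helmReleases:
      - cockroachdb-app   # must match the release name from `helm ls`
```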

Enter backupplan name and select target repository

cockroachdb helm release is part of the backupplan
Use the following screenshot to create a Backup CR.

Select above created backupplan

Enter the backup name

demo-cockroachdb-helm-backup is in progress state

demo-cockroachdb-helm-backup is in Available state
After the backup has been completed successfully, create a Restore CR to restore the application in the same or different namespace where BackupPlan and Backup CRs are created.
Note: If restoring into the same namespace, ensure that the original application components have been removed. In particular, ensure that the application's PVCs have been deleted.
Restore to the same cluster but a different namespace
Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.
Before restoring the app, we need to remove the existing app from the same cluster. This is required because cluster-level resources of the app can cause conflicts during the restore operation.
helm delete cockroachdb-app

Select the backup created above from the Restore Points

Enter the restore name and select the restore namespace

demo-cockroachdb-helm-restore is in-progress state

Restore is in Completed state

After restore, all 4 pods of cockroachdb are restored in restore ns
Restore to a different cluster
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), the same target should be created and Target Browsing should be enabled to browse the stored backups.
After following the above note, follow the same instructions as in the Restore the backup/application by label section to choose the backup stored at the target repository and perform the Helm backup restore.
The following steps will be performed.
1. Install a sample etcd Operator.
2. Create an etcd cluster.
3. Create a BackupPlan CR using the management console that specifies the etcd application to protect.
4. Create a Backup CR using the management console with a reference to the BackupPlan CR created above.
5. Create a Restore CR using the management console with a reference to the Backup CR created above.
We demonstrate the standard 'etcd-operator' here. First, deploy the operator using its Helm chart; then deploy the etcd cluster.
Follow Install etcd-operator and Create an etcd cluster using the following yaml definition section to install and create an etcd cluster.
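Those two steps typically look something like the following. This is a sketch under stated assumptions: the chart repository URL is an assumption (the old stable repo hosting etcd-operator is deprecated, so substitute whichever mirror your environment uses), and the cluster size and version are the upstream example defaults:

```shell
# Install the etcd-operator chart (repo URL is an assumption)
helm repo add stable https://charts.helm.sh/stable
helm install etcd-operator stable/etcd-operator

# Create a three-member etcd cluster via the operator's CRD
cat <<EOF | kubectl apply -f -
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3
  version: "3.2.13"
EOF
```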

T4K auto-discovered the etcd operator in the backup ns
Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters. Use the management console to select the etcd-operator auto-discovered by T4K and shown under the Operator section.

Create a backupplan for auto-discovered etcd operator

etcd operator resources captured as a part of backupplan
Take a backup of the above 'BackupPlan'. Use the following screenshot to proceed and create a 'Backup' resource.

Select the above created backupplan and enter backup name

etcd operator backup is in-progress state

etcd operator backup is in Available state
After the backup completes successfully, you can perform a restore of it. To restore the etcd-operator and its clusters from the above backup, use the screenshots shown below.
Note: If restoring into the same namespace, ensure that the original application components have been removed.
Restore to the same cluster but a different namespace
Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, for example ports, are available, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.
Before restoring the app, we need to remove the existing app from the same cluster. This is required because cluster-level resources of the app can cause conflicts during the restore operation.

Select the backup created above from the Restore Points

Enter the restore name and select the restore namespace

demo-etcd-operator-restore is in-progress state

Restore is in Completed state
The following sections describe the steps to create a sample Virtual Machine, back up the Virtual Machine like any other Helm or Operator application, and then restore it using the management UI.
The following steps will be performed.
1. Create a Virtual Machine using the OpenShift Virtualization Operator.

Virtual Machine running in the OpenShift Container Platform
2. Create a BackupPlan CR using the management console that specifies the centos9 Virtual Machine to protect.
3. Create a Backup CR using the management console referencing the BackupPlan CR created in step 2.
4. Create a Restore CR using the management console referencing the Backup CR created in step 3.
Use the following screenshot to assist in the creation of a BackupPlan for the Virtual Machine, ensuring that the name of the Virtual Machine you specify matches the VM created in the previous step.

Virtual Machine auto-discovered by the Trilio for Kubernetes

Create New Backup for Virtual Machine

Provide Backupplan name, Target, and other details

Provide Scheduling Policy, Retention Policy for the BackupPlan

Virtual Machine Parameters are added under the Custom Component Details

Continuous Restore (Optional)

Wait for sync up to complete

Backup Plan is created with Virtual Machine and dependent objects
Create a backup CR as shown in the following screenshots.

Provide the Backup name

Different stages of backup - Data snapshot and data upload
After the backup has been completed successfully, create a Restore CR to restore the Virtual Machine in the same or different namespace where BackupPlan and Backup CRs are created.

View Backup and Restore Summary for Virtual Machine

View Backup and Restore Summary for Virtual Machine

Provide Restore CR name and Restore Namespace

No transformation required, click on Create

Restore is complete with both VM disks restored correctly
Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), the same backup target should be created and Target Browsing should be enabled to browse the stored backups.
After following the above note, follow the same instructions as in the Restore the backup/application by label section to choose the backup stored at the target repository and perform the Virtual Machine restore.
Once the restore is complete, you can log in to the OpenShift Container Platform, go to the restore namespace, and check that the Virtual Machine is restored correctly and is in the Running state.
VM with name is restored in vm-restore namespace
1. Create a namespace called 'wordpress'.
2. Use Helm to deploy a WordPress application into the namespace.
3. Perform a backup of the namespace using the management console.
4. Delete the namespace/application from the Kubernetes cluster.
5. Create a new namespace 'wordpress-restore'.
6. Perform a restore of the namespace using the management console.
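Steps 1, 2, and 5 can be sketched on the command line as follows. The Bitnami chart and release name are assumptions for illustration; any WordPress chart works:

```shell
# Step 1: create the namespace to be backed up
kubectl create namespace wordpress

# Step 2: deploy WordPress into it (chart source is an assumption)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install wordpress-app bitnami/wordpress --namespace wordpress

# Step 5: namespace that will later receive the restore
kubectl create namespace wordpress-restore
```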
Create a BackupPlan to back up the namespace using the management console

Auto-discovered namespaces

Create demo-ns-backupplan

demo-ns-backupplan is in Available state
Use the following screenshot to build the Backup CR using the management console

Select the backupplan name if present for a namespace

Select the auto-discovered backupplan name created above for backup ns

Provide backup name demo-ns-backup

demo-ns-backup operation is in progress

Namespace scoped backup is successful

Namespace scoped backup demo-ns-backup is in available state
Perform a restore of the namespace backup using the management console
Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.

Select demo-ns-backup from Restore Points and click on Restore button

Provide the restore name and restore namespace

demo-ns-restore is in-progress state

backup ns is restored into restore ns with applications
```
$ kubectl get pods -n restore
NAME                                    READY   STATUS    RESTARTS   AGE
k8s-demo-app-frontend-7c4bdbf9b-4hsl2   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-qm84p   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-xmrxk   1/1     Running   0          2m56s
k8s-demo-app-mysql-754f46dbd7-cj5z5     1/1     Running   0          2m56s
```
Restore to a different cluster
If you are trying to restore into a different cluster, follow the same guidelines as the Restoring to a different cluster section to choose the namespace backup stored at the target repository and perform the namespace restore.