Management Console (UI)

This section describes how to get started with Trilio for Kubernetes (T4K) in a customer environment using the T4K Management Console.

Deprecated Documentation

This document is deprecated and no longer supported. For accurate, up-to-date information, please refer to the documentation for the latest version of Trilio.


To get started with T4K via the management console in your environment, the following steps must be performed:

Prerequisites

  1. Authenticate access to the Management Console (UI). Refer to UI Authentication.

  2. Configure access to the Management Console (UI). Refer to Configuring the UI.

Steps Overview

  1. Install Test CSI Driver - If your environment does not already have a CSI driver with snapshot capability, install the test hostpath driver.

  2. Create a T4K Target - The location where backups will be stored.

  3. Create a Retention Policy (Optional) - Specifies how long backups are kept.

  4. Run Example:

    • Label Example

    • Helm Example

    • Operator Example

    • Namespace Example

Step 1: Install Test CSI Driver

Skip this step if your environment already has a CSI driver installed with snapshot capability.

Follow the instructions provided in Appendix HostPath CSI Driver for T4K to install the Hostpath CSI driver. Running an example against the hostpath driver verifies that the software is functioning correctly.

Step 2: Create a Target

Create a secret containing the credentials for the data store where backups will be stored. An example is provided below:

apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD

You can either create the secret using the above YAML definition or use the management console to create it as part of the workflow for creating the backup target.

Create secret while creating AWS target

Please use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any S3-compatible storage target.

Supported values for S3 vendors include:

"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"

An Amazon S3 target example is provided below:

Create demo-s3-target on AWS using above created secret
demo-s3-target created
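As a reference for what the console generates, a Target CR for this AWS S3 example might look like the following sketch. The field names and example values (type, vendor, objectStoreCredentials, thresholdCapacity, the region, and the bucket name) are assumptions based on the Target examples in the Custom Resource Definition section; verify them against that reference before use.

```yaml
# Sketch of an AWS S3 Target CR; field names assumed from the
# Target examples in the Custom Resource Definition section.
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1            # assumed example region
    bucketName: demo-s3-bucket   # assumed example bucket name
    credentialSecrets:
      - name: sample-secret      # secret created in the previous step
  thresholdCapacity: 100Gi       # assumed example capacity limit
```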

Note: With the above configuration, the target is created in the current user's namespace unless another namespace is specified. Additional information on bucket permissions can be found here: AWS S3 Target Permissions

Step 3: Create a Retention Policy (Optional)

While the example backup custom resources created by following this Getting Started page can be deleted manually via kubectl commands, Trilio also provides a backup retention capability to automatically delete backups based on defined time boundaries.

Create demo-retention-policy
Retention policy created successfully
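For reference, the retention policy the console creates might look like the sketch below. The field names and counts are assumptions based on the Policy examples in the application CRD reference section; check that reference for the exact spec.

```yaml
# Sketch of a Retention Policy CR; field names assumed from the
# application CRD reference section.
apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-retention-policy
spec:
  type: Retention
  retentionConfig:
    latest: 2          # keep the 2 most recent backups (assumed values)
    weekly: 1          # keep 1 weekly backup
    dayOfWeek: Sunday  # day on which the weekly backup is kept
    monthly: 1         # keep 1 monthly backup
    dateOfMonth: 1     # date on which the monthly backup is kept
```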

More information on the Retention Policy spec can be found in the application CRD reference section. A retention policy is referenced in the BackupPlan CR.

Note: With the above configuration, the policy is created in the default namespace unless another namespace is specified.

Step 4: Run Example

The following sections cover creating a sample application and backing it up and restoring it via labels, Helm, Operator, or a namespace-based backup.

If your environment does not have a CSI driver with snapshot capability installed, along with a storage class and a VolumeSnapshotClass configured, refer to the Install CSI Driver section. Then follow the Create a Sample Application section below.

More details around CRDs and their usage/explanation can be found in the Custom Resource Definition Section.

Note:

  1. Backup and BackupPlan CRs should be created in the same namespace.

  2. For a restore operation, the resources are restored into the namespace where the Restore CR is created.

  3. If more than one backup has been created for the same application, users can select any existing backup to perform the restore.

Step 4.1: Label Example

The following sections create a sample application (tagged with labels), back up the application via labels, and then restore it.

The following steps will be performed.

  1. Create a sample MySQL application

  2. Create a BackupPlan CR using the management console that selects the MySQL application via labels

  3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above

  4. Create a Restore CR using the management console with a reference to the Backup CR created above.

Create a Sample Application

Use the following screenshot to assist in deploying the MySQL application using labels.

T4K has auto-discovered the application from backup namespace

Create a BackupPlan

Create a BackupPlan CR via the UI by selecting, with labels, the application created in the previous step, in the same namespace where the application resides.

Select application deployed by label and create new backupplan
app:k8s-demo-app is part of backupplan
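Under the hood, the console builds a BackupPlan CR. A rough sketch is shown below; the field names (backupConfig, backupPlanComponents, custom) are assumptions based on the Custom Resource Definition section, and only the app:k8s-demo-app label comes from this example.

```yaml
# Sketch of a label-based BackupPlan CR; field names assumed from
# the Custom Resource Definition section.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-mysql-label-backupplan
  namespace: backup            # same namespace as the application
spec:
  backupConfig:
    target:
      name: demo-s3-target     # target created in Step 2
  backupPlanComponents:
    custom:
      - matchLabels:
          app: k8s-demo-app    # label selecting the demo application
```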

Create a Backup

Create a Backup CR using the UI to protect the BackupPlan. The backup type can be either full or incremental.

Note: The first backup into a target location will always be a Full backup.
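The Backup CR created by the console can be sketched as follows; the field names (type, backupPlan) are assumptions based on the Custom Resource Definition section.

```yaml
# Sketch of a Backup CR referencing the BackupPlan; field names
# assumed from the Custom Resource Definition section.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: demo-mysql-label-backup
  namespace: backup              # same namespace as the BackupPlan
spec:
  type: Full                     # or Incremental once a Full backup exists
  backupPlan:
    name: demo-mysql-label-backupplan
    namespace: backup
```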

Select the backupplan and enter backup name
MySQL demo application backup is in-progress state
Application scoped backup of MySQL app deployed by label is successful
Details of the demo-mysql-label-backup

Restore the Backup/Application

Finally, create a Restore CR using the UI to restore the backup into the same or a different namespace. In the example provided below, mysql-label-backup is restored into the "restore" namespace.

Restore to same cluster but different namespace

Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.
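The Restore CR behind this workflow might look like the sketch below; the field names (source, type, backup) are assumptions based on the Custom Resource Definition section. Note that the resources land in the namespace where the Restore CR is created.

```yaml
# Sketch of a Restore CR; field names assumed from the
# Custom Resource Definition section.
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: demo-mysql-label-restore
  namespace: restore    # resources are restored into this namespace
spec:
  source:
    type: Backup
    backup:
      name: demo-mysql-label-backup
      namespace: backup # namespace where the Backup CR lives
```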

Select the restore-point and click on Restore button
Provide restore name and restore namespace
demo-mysql-label-restore is in progress state
MySQL application is restored to restore namespace

Restoring to a different cluster

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), the same target must be created there, and Target Browsing must be enabled to browse the stored backups.

Enable the Target Browsing for the Target
Use Launch Browser option and search backup using backupplan name
Select the backup and click on Restore
Provide restore name and restore namespace
Restore to a different cluster is successful
MySQL application after the restore to a different cluster is successful

Step 4.2: Helm Example

The following sections create a sample application via Helm, back up the application via Helm selector fields, and then restore the application, all using the management UI.

The following steps will be performed.

  1. Create a cockroachdb instance using Helm

  2. Create a BackupPlan CR using management console that specifies the cockroachdb application to protect.

  3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above

  4. Create a Restore CR using management console with a reference to the Backup CR created above.

Create a sample application via Helm

Use the following screenshot to assist in deploying the "Cockroachdb" application using its Helm chart.

Auto-discovered cockroachdb helm application

Create a BackupPlan

Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.
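For a Helm-based plan, the BackupPlan selects the release rather than labels. A sketch of the relevant stanza is shown below; the helmReleases field name is an assumption based on the CRD reference, and cockroachdb-app is the release name used later in this example.

```yaml
# Sketch of the backupPlanComponents stanza for a Helm release;
# the helmReleases field name is assumed from the CRD reference.
spec:
  backupPlanComponents:
    helmReleases:
      - cockroachdb-app   # must match the release name from `helm ls`
```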

Enter backupplan name and select target repository
cockroachdb helm release is part of the backupplan

Create a Backup

Use the following screenshot to create a backup CR

Select above created backupplan
Enter the backup name
demo-cockroachdb-helm-backup is in progress state
demo-cockroachdb-helm-backup is in Available state

Restore Backup/Application

After the backup has completed successfully, create a Restore CR to restore the application into the same namespace where the BackupPlan and Backup CRs were created, or into a different one.

Note: If restoring into the same namespace, ensure that the original application components have been removed; in particular, ensure the application's PVCs are deleted.

Restore to the same cluster but different namespace

Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.

Before restoring the app, clean up the existing app from the same cluster. This is required because cluster-level resources of the app can create conflicts during the restore operation.

helm delete cockroachdb-app
Select the backup created above from the Restore Points
Enter the restore name and select the restore namespace
demo-cockroachdb-helm-restore is in-progress state
Restore is in Completed state
After restore, all 4 pods of cockroachdb are restored in restore ns

Restore to a different cluster

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is running in the remote namespace/cluster as well. To restore into a new cluster (where the Backup CR does not exist), the same target must be created there, and Target Browsing must be enabled to browse the stored backups.

After following the note above, follow the same instructions as in the Restore the Backup/Application (label) section to choose the backup stored at the target repository and perform the Helm backup restore.

Step 4.3: Operator Example

The following steps will be performed.

  1. Install a sample etcd Operator

  2. Create an etcd cluster

  3. Create a BackupPlan CR using management console that specifies the etcd application to protect.

  4. Create a Backup CR using management console with a reference to the BackupPlan CR created above

  5. Create a Restore CR using management console with a reference to the Backup CR created above.

This example demonstrates the standard 'etcd-operator'. First, deploy the operator using its Helm chart, then deploy an etcd cluster. Follow the Install etcd-operator and Create an etcd cluster using the following YAML definition sections to install the operator and create the cluster.

T4K auto-discovered the etcd operator in the backup ns

Create a BackupPlan

Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters. Use the management console to select the etcd-operator auto-discovered by T4K and shown under the Operator section.

Create a backupplan for auto-discovered etcd operator
etcd operator resources captured as a part of backupplan

Create a Backup

Take a backup of the above 'BackupPlan'. Use the following screenshots to create a 'Backup' resource.

Select the above created backupplan and enter backup name
etcd operator backup is in-progress state
etcd operator backup is in Available state

Restore the Backup/Application

  • After the backup completes successfully, you can perform a restore from it.

  • To restore the etcd-operator and its clusters from the above backup, use the screenshots shown below.

Note: If restoring into the same namespace, ensure that the original application components have been removed.

Restore to the same cluster but different namespace

Note: If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, for example ports, are available, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.

Before restoring the app, clean up the existing app from the same cluster. This is required because cluster-level resources of the app can create conflicts during the restore operation.

Select the backup created above from the Restore Points
Enter the restore name and select the restore namespace
demo-etcd-operator-restore is in-progress state
Restore is in Completed state

Step 4.4: Namespace Example

  1. Create a namespace called 'wordpress'

  2. Use Helm to deploy a WordPress application into the namespace.

  3. Perform a backup of the namespace using the management console

  4. Delete the namespace/application from the Kubernetes cluster

  5. Create a new namespace 'wordpress-restore'

  6. Perform a restore of the namespace using the management console

Create a namespace and application

Follow the Create a namespace and application section to create an application.

Create a BackupPlan

Create a BackupPlan to back up the namespace using the management console.
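A sketch of a namespace-scoped BackupPlan is shown below. The field names are assumptions based on the Custom Resource Definition section, and the idea that omitting backupPlanComponents makes the plan cover the whole namespace is an assumption based on this guide's description; verify both against the CRD reference.

```yaml
# Sketch of a namespace-scoped BackupPlan; omitting
# backupPlanComponents to scope the plan to the namespace is an
# assumption based on this guide's description.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-ns-backupplan
  namespace: wordpress       # namespace created in step 1
spec:
  backupConfig:
    target:
      name: demo-s3-target   # target created in Step 2
```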

Auto-discovered namespaces
Create demo-ns-backupplan
demo-ns-backupplan is in Available state

Backup the Namespace

Use the following screenshot to build the Backup CR using the management console.

Select the backupplan name if present for a namespace
Select the auto-discovered backupplan name created above for backup ns
Provide backup name demo-ns-backup
demo-ns-backup operation is in progress
Namespace scoped backup is successful
Namespace scoped backup demo-ns-backup is in available state

Restore the Backup/Namespace

Perform a restore of the namespace backup using the management console.

Restore to the same cluster but different namespace

Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources which cannot be shared, such as ports, are freed, or use transformation to avoid conflicts. More information about transformation can be found at Restore Transformation.

Select demo-ns-backup from Restore Points and click on Restore button
Provide the restore name and restore namespace
demo-ns-restore is in-progress state

Validate Restore

backup ns is restored into restore ns with applications

Validate Application Pods

$ kubectl get pods -n restore
NAME                                    READY   STATUS    RESTARTS   AGE
k8s-demo-app-frontend-7c4bdbf9b-4hsl2   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-qm84p   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-xmrxk   1/1     Running   0          2m56s
k8s-demo-app-mysql-754f46dbd7-cj5z5     1/1     Running   0          2m56s

Restore to a different cluster

If you are restoring into a different cluster, follow the same guidelines as the #restoring-to-a-different-cluster section to choose the namespace backup stored at the target repository and perform the namespace restore.
