
Management Console

Learn about using Trilio for Kubernetes with the Management Console

To get started with Trilio via the management console in your environment, the following steps must be performed:

Prerequisites

  1. Authenticate access to the Management Console (UI). Refer to UI Authentication.

  2. Configure access to the Management Console (UI). Refer to Configuring the UI.

Steps Overview

  1. Install a compatible CSI Driver

  2. Create a Backup Target - a location where backups will be stored.

  3. Create a Retention Policy (Optional) - specifies how long backups are kept.

  4. Run Example:

    • Label Example

    • Helm Example

    • Operator Example

    • Virtual Machine Example

    • Namespace Example

Step 1: Install a CSI Driver

Skip this step if your environment already has a CSI driver installed with snapshot capability.

Trilio for Kubernetes requires a compatible Container Storage Interface (CSI) driver that provides the Snapshot feature. Check the Kubernetes CSI Developer Documentation to select a driver appropriate for your backend storage solution, and see the selected CSI driver's documentation for details on installing the driver in your cluster.
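To quickly check whether your cluster already has snapshot-capable storage, you can list the installed CSI drivers and VolumeSnapshotClass resources (this assumes the VolumeSnapshot CRDs are installed; see Installing VolumeSnapshot CRDs in the Appendix):

kubectl get csidrivers
kubectl get volumesnapshotclass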

Step 2: Create a Target

Create a secret containing the credentials for the data store where backups will be stored. An example is provided below:

apiVersion: v1
kind: Secret
metadata:
  name: sample-secret
type: Opaque
stringData:
  accessKey: AKIAS5B35DGFSTY7T55D                         # example S3 access key - replace with your own
  secretKey: xWBupfGvkgkhaH8ansJU1wRhFoGoWFPmhXD6/vVD      # example S3 secret key - replace with your own

You can either create the secret using the above YAML definition or use the management console to create it as part of the workflow for creating the backup target.

Supported values for S3 vendors include:

"AWS", "RedhatCeph", "Ceph", "IBMCleversafe", "Cloudian", "Scality", "NetApp", "Cohesity", "SwiftStack", "Wassabi", "MinIO", "DellEMC", "Other"

Use one of the Target examples provided in the Custom Resource Definition section as a template for creating an NFS, Amazon S3, or any other S3-compatible storage target. An Amazon S3 target example is provided below:
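For reference, a minimal sketch of such a Target CR created via the CLI is shown below. The bucket name, region, and threshold capacity are placeholders, and the exact field names should be verified against the Target examples in the Custom Resource Definition section.

apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-s3-target
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    region: us-east-1                 # placeholder region
    bucketName: trilio-demo-backups   # placeholder bucket name
    credentialSecret:
      name: sample-secret             # secret created above
      namespace: default
  thresholdCapacity: 100Gi            # capacity Trilio may consume on this target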

Note: With the above configuration, the target will be created in the current user namespace unless a namespace is specified. Additional information on bucket permissions can be found here: AWS S3 Target Permissions

Step 3: Create a Retention Policy (Optional)

While the example backup custom resources created by following this Getting Started page can be deleted manually via kubectl commands, Trilio also provides a backup retention capability to automatically delete backups based on defined time boundaries. A retention policy is referenced in the backupPlan CR.
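As a reference, a retention Policy CR might look like the sketch below; the retention counts are placeholders and the field names should be confirmed against the YAML Examples section.

apiVersion: triliovault.trilio.io/v1
kind: Policy
metadata:
  name: demo-retention-policy
spec:
  type: Retention
  retentionConfig:
    latest: 2      # keep the two most recent backups
    weekly: 1      # keep one backup per week
    monthly: 1     # keep one backup per month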

Note: With the above configuration, the policy will be created in the default namespace unless a namespace is specified.

Step 4: Run Example

The following sections cover creating a sample application and backing it up and restoring it via labels, Helm, an Operator, a Virtual Machine, or a namespace-scoped backup. More details about the CRDs involved and their usage can be found in the Custom Resource Definition section.

Note:

  1. Backup and BackupPlan should be created in the same namespace.

  2. For the restore operation, the resources are restored into the namespace where the Restore CR is created.

  3. If there is more than one backup created for the same application, users can select any existing backup information to perform the restore.

Step 4.1: Label Example

The following sections will create a sample application (tagged with labels), back up the application via labels, and then restore the application.

The following steps will be performed.

  1. Create a sample MySQL application

  2. Create a BackupPlan CR using the management console that specifies the MySQL application via labels

  3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above

  4. Create a Restore CR using the management console with a reference to the Backup CR created above.

Create a Sample Application

Use the following screenshot to assist in the deployment of the MySQL application using the label.
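If you prefer to create the sample application from the CLI, a minimal MySQL Deployment labeled app: mysql could look like the sketch below; the image tag, password, and storage size are placeholders, and this is not necessarily the exact application shown in the screenshot.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
  labels:
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "demo-password"   # placeholder; use a Secret in real deployments
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-pvc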

Create a BackupPlan

Create a BackupPlan CR in the same namespace as the application, selecting the application created in the previous step by its labels in the UI.
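For comparison with the console workflow, a label-based BackupPlan CR might look like the sketch below; the names and namespaces are placeholders, and the schema should be checked against the YAML Examples section.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: mysql-label-backupplan
  namespace: backup              # same namespace as the MySQL application
spec:
  backupConfig:
    target:
      name: demo-s3-target
      namespace: default
  backupPlanComponents:
    custom:
      - matchLabels:
          app: mysql             # selects the application deployed above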

Create a Backup

Create a Backup CR using the UI to protect the BackupPlan. The backup type can be either Full or Incremental.

Note: The first backup into a target location will always be a Full backup.
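The equivalent Backup CR might look like the sketch below, assuming the BackupPlan name used above:

apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: mysql-label-backup
  namespace: backup              # must match the BackupPlan namespace
spec:
  type: Full                     # or Incremental for subsequent backups
  backupPlan:
    name: mysql-label-backupplan
    namespace: backup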

Restore the Backup/Application

Finally, create the Restore CR using the UI to restore the Backup into the same or a different namespace. In the example provided below, MySQL-label-backup is restored into the "restore" namespace.

Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources that cannot be shared, such as ports, are freed, or use a transformation to avoid conflicts. More information about transformation can be found in Restore Transformation.
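A corresponding Restore CR might look like the sketch below, assuming the Backup name used above and a target namespace named restore:

apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: mysql-label-restore
  namespace: restore             # resources are restored into this namespace
spec:
  source:
    type: Backup
    backup:
      name: mysql-label-backup
      namespace: backup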

Restore to the same cluster but a different namespace

Restoring to a different cluster

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is also running in the remote namespace/cluster. To restore into a new cluster (where the Backup CR does not exist), create the same target in that cluster and enable Target Browsing on it to browse the stored backups. Use the Launch Browser option, search for the backup using the BackupPlan name, select the backup, click Restore, and provide the restore name and restore namespace.

Step 4.2: Helm Example

The following sections will create a sample application via Helm, back up the application via Helm selector fields, and then restore the application using the management UI.

The following steps will be performed.

  1. Create a cockroachdb instance using Helm

  2. Create a BackupPlan CR using the management console that specifies the cockroachdb application to protect.

  3. Create a Backup CR using the management console with a reference to the BackupPlan CR created above.

  4. Create a Restore CR using the management console referencing the Backup CR created above.

Create a sample application via Helm

Use the following screenshot to assist in the deployment of the "Cockroachdb" application using the helm chart.
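One way to deploy it from the CLI, assuming the official CockroachDB chart repository and the release name used later in this example:

helm repo add cockroachdb https://charts.cockroachdb.com/
helm repo update
helm install cockroachdb-app cockroachdb/cockroachdb --namespace backup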

Create a BackupPlan

Use the following to create a BackupPlan. Ensure the name of the release you specify matches the output from the helm ls command in the previous step.
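A Helm-based BackupPlan CR might look like the sketch below; helmReleases is assumed to take the release name reported by helm ls, and the schema should be verified against the YAML Examples section.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: cockroachdb-helm-backupplan
  namespace: backup
spec:
  backupConfig:
    target:
      name: demo-s3-target
      namespace: default
  backupPlanComponents:
    helmReleases:
      - cockroachdb-app          # release name from 'helm ls'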

Create a Backup

Use the following screenshot to create a backup CR

Restore Backup/Application

After the backup has completed successfully, create a Restore CR to restore the application into the same namespace where the BackupPlan and Backup CRs were created, or into a different one.

Note: If restoring into the same namespace, ensure that the original application components have been removed, especially the application's PVCs. If restoring into another namespace in the same cluster, ensure that resources that cannot be shared, such as ports, are freed, or use a transformation to avoid conflicts. More information about transformation can be found in Restore Transformation.

Restore to the same cluster but a different namespace

Before restoring the app, we need to remove the existing app from the cluster. This is required because cluster-level resources of the app can cause conflicts during the restore operation.

helm delete cockroachdb-app

Restore to a different cluster

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is also running in the remote namespace/cluster. To restore into a new cluster (where the Backup CR does not exist), the same target should be created and Target Browsing should be enabled to browse the stored backups.

After following the above note, follow the same instructions as in the "Restore the Backup/Application" section of the label example to locate the backup stored in the target repository and perform the Helm backup restore.

Step 4.3: Operator Example

The following steps will be performed.

  1. Install a sample etcd Operator

  2. Create an etcd cluster

  3. Create a BackupPlan CR using the management console that specifies the etcd application to protect.

  4. Create a Backup CR using the management console with a reference to the BackupPlan CR created above

  5. Create a Restore CR using the management console referencing the Backup CR created above.

This example demonstrates the standard 'etcd-operator'. First deploy the operator using its Helm chart, then deploy the etcd cluster; follow the Install etcd-operator and Create an etcd cluster sections to install the operator and create the cluster.

Create a BackupPlan

Create a 'BackupPlan' resource to protect 'etcd-operator' and its clusters. Use the management console to select the etcd-operator auto-discovered by T4K and shown under the Operator section.
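An operator-based BackupPlan might look roughly like the sketch below; the operatorId, group/version/kind, and object names are illustrative assumptions and must be confirmed against the Trilio Operator API Specifications and YAML Examples sections.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: etcd-operator-backupplan
  namespace: backup
spec:
  backupConfig:
    target:
      name: demo-s3-target
      namespace: default
  backupPlanComponents:
    operators:
      - operatorId: etcd-operator        # identifier of the auto-discovered operator
        customResources:
          - groupVersionKind:
              group: etcd.database.coreos.com
              version: v1beta2
              kind: EtcdCluster
            objects:
              - example-etcd-cluster     # name of the etcd cluster to protect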

Create a Backup

Take a backup of the above 'BackupPlan'. Use the following screenshot to proceed and create a 'Backup' resource.

Restore the Backup/Application

  • After the backup completes successfully, you can restore it.

  • To restore the etcd-operator and its clusters from the above backup, use the screenshots shown below.

Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources that cannot be shared, for example ports, are available, or use a transformation to avoid conflicts. More information about transformation can be found in Restore Transformation.

Restore to the same cluster but a different namespace

Before restoring the app, we need to remove the existing app from the cluster. This is required because cluster-level resources of the app can cause conflicts during the restore operation.

Step 4.4: Virtual Machine Example

The following sections describe the steps to create a sample Virtual Machine, back it up like any other Helm or Operator application, and then restore it using the management UI.

The following steps will be performed.

  1. Create a Virtual Machine using the OpenShift Virtualization Operator. Users can follow the Red Hat demo to learn how to deploy a VM.

  2. Create a BackupPlan CR using the management console that specifies the centos9 Virtual Machine to protect.

  3. Create a Backup CR using the management console referencing the BackupPlan CR created above.

  4. Create a Restore CR using the management console referencing the Backup CR created above.

Create a Backupplan

Use the following screenshot to assist in the creation of a BackupPlan for the Virtual Machine, ensuring that the name of the Virtual Machine you specify matches the VM created in the previous step. Provide the BackupPlan name, Target, Scheduling Policy, and Retention Policy; the Virtual Machine parameters are added under the Custom Component Details.

Create a Backup

Create a Backup CR as shown in the following screenshots. Wait for the sync-up to complete, then provide the Backup name. The backup proceeds through the data snapshot and data upload stages.

Restore Backup/Virtual Machine

Restore to the same cluster

After the backup has completed successfully, create a Restore CR to restore the Virtual Machine into the same namespace where the BackupPlan and Backup CRs were created, or into a different one.

Restore to a different cluster

Note: If restoring to another cluster (migration scenario), ensure that Trilio for Kubernetes is also running in the remote namespace/cluster. To restore into a new cluster (where the Backup CR does not exist), the same backup target should be created and Target Browsing should be enabled to browse the stored backups.

After following the above note, follow the same instructions as in the "Restore the Backup/Application" section of the label example to locate the backup stored in the target repository and perform the Virtual Machine restore.

Validate the Restored Virtual Machine

Once the restore is complete, you can log in to the OpenShift Container Platform, go to the restore namespace and check that the Virtual Machine is restored correctly and is in the Running state.
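From the CLI, assuming the OpenShift Virtualization (KubeVirt) CRDs are installed and the restore namespace is named vm-restore, the state of the restored VM can be checked with:

oc get virtualmachines -n vm-restore
oc get virtualmachineinstances -n vm-restore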

Step 4.5: Namespace Example

  1. Create a namespace called 'wordpress'

  2. Use Helm to deploy a wordpress application into the namespace.

  3. Perform a backup of the namespace using the management console

  4. Delete the namespace/application from the Kubernetes cluster

  5. Create a new namespace 'restore'

  6. Perform a Restore of the namespace using the management console

Create a namespace and application

Create the 'wordpress' namespace and use Helm to deploy the WordPress application into it (steps 1 and 2 above).
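One way to do this from the CLI, assuming the Bitnami WordPress chart:

kubectl create namespace wordpress
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install wordpress bitnami/wordpress --namespace wordpress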

Create a BackupPlan

Create a BackupPlan to back up the namespace using the management console.
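If you prefer the CLI, the sketch below assumes that a BackupPlan created in the namespace without any backupPlanComponents is treated as namespace-scoped, so the entire namespace is captured; verify this behavior and the schema against the YAML Examples section.

apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-ns-backupplan
  namespace: wordpress           # the namespace to protect
spec:
  backupConfig:
    target:
      name: demo-s3-target
      namespace: default
  # no backupPlanComponents: the whole namespace is backed up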

Backup the Namespace

Use the following screenshot to build the Backup CR using the management console

Restore the Backup/Namespace

Perform a restore of the namespace backup using the management console.

Note: If restoring into the same namespace, ensure that the original application components have been removed. If restoring into another namespace in the same cluster, ensure that resources that cannot be shared, such as ports, are freed, or use a transformation to avoid conflicts. More information about transformation can be found in Restore Transformation.

Restore to the same cluster but a different namespace

Validate Restore

Validate Application Pods

$ kubectl get pods -n restore
NAME                                    READY   STATUS    RESTARTS   AGE
k8s-demo-app-frontend-7c4bdbf9b-4hsl2   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-qm84p   1/1     Running   0          2m56s
k8s-demo-app-frontend-7c4bdbf9b-xmrxk   1/1     Running   0          2m56s
k8s-demo-app-mysql-754f46dbd7-cj5z5     1/1     Running   0          2m56s

Restore to a different cluster

If you are restoring into a different cluster, follow the same guidelines as in the "Restoring to a different cluster" section to locate the namespace backup stored in the target repository and perform the namespace restore.

