
T4O Architecture

A quick overview on the Architecture of T4O

Backup-as-a-Service

Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery solutions for tenant workloads. Trilio is very similar to other OpenStack services including Nova, Cinder, Glance, etc., and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.

Main Components

Trilio has three main software components, each of which is further divided into multiple services.

WorkloadManager

This component is registered as a keystone service of type workloads which manages all the workloads being created for utilizing the snapshot and restore functionalities. It has 4 services responsible for managing these workloads, their snapshots, and restores.

  1. workloadmgr-api

  2. workloadmgr-scheduler

  3. workloadmgr-workloads

  4. workloadmgr-cron

DataMover

Similar to the WorkloadManager, this component registers a keystone service of type datamover, which manages the transfer of extensive data to/from the backup targets. It has 2 services that are responsible for the data transfer and communication with the WorkloadManager.

  1. datamover-api

  2. datamover

Horizon Dashboard Plugin

For ease of access and a better user experience, T4O provides a UI integrated with the OpenStack dashboard service Horizon.

  1. Trilio API is a Python module that is installed on all OpenStack controller nodes where the nova-api service is running.

  2. Trilio Datamover is a Python module that is installed on every OpenStack compute node.

  3. Trilio Horizon plugin is installed as an add-on to Horizon servers. This module is installed on every server that runs the Horizon service.

Service Endpoints

Trilio is both a provider and consumer in the OpenStack ecosystem. It uses other OpenStack services such as Nova, Cinder, Glance, Neutron, and Keystone and provides its own services to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal, and admin URLs for two of its services WorkloadManager API and Datamover API.

Network Topology

Unlike previous versions of Trilio for OpenStack, T4O now utilizes the existing networks of the deployed OpenStack environment. The networks for the Trilio services can be configured in the same way the user configures any other OpenStack service. Additionally, a dedicated network can be provided to the Trilio services on both the control and compute planes for storing and retrieving backup data from the backup target store.

Release Notes

6.1.1

Release Date: 21st May 2025

What's New?

  1. Support for T4O on OpenStack Helm.

Known Issues

  1. [OpenStack Helm] During T4O deployment, if an S3 immutable bucket is set as the default target, the UI hangs.

    • Workaround:

      • Deploy T4O on OpenStack Helm keeping the S3 immutable target as the default, then add an S3 mutable or NFS target as a dynamic target. Later, change the S3 mutable (dynamic) target to be the default.

  2. The workload-reassign-workloads command with the --source-btt-all CLI option doesn't work as expected

    • Workaround:

      • Use the --source-btt option to reassign the workloads from the provided backup target type. Sample command: workloadmgr workload-reassign-workloads --old_tenant_ids 71637e3b98434ceeb5158087074af4aa --new_tenant_id 8e325e1056c94625854b3dffb4d1e64e --user_id 3a446f94fb57448a8c99c08238df0f13 --migrate_cloud True --source-btt 979a459d-26f9-4aa9-8f0d-21ca7a1f6b4f

6.1.0

Release Date: 10th April 2025

What's New?

  1. Support for T4O on RHOSO18.0.

  2. Support for Dynamic Backup Target addition without the need of redeployment of T4O.

  3. Support for T4O on OpenStack Helm.

Known Issues

  1. trilio-openstack-operator-rhoso images are not available on Operator Hub

    • Workaround:

      • Install from the CLI using the install script, as mentioned in the deployment documentation.

  2. The workload-reassign-workloads command with the --source-btt-all CLI option doesn't work as expected

    • Workaround:

      • Use the --source-btt option to reassign the workloads from the provided backup target type. Sample command: workloadmgr workload-reassign-workloads --old_tenant_ids 71637e3b98434ceeb5158087074af4aa --new_tenant_id 8e325e1056c94625854b3dffb4d1e64e --user_id 3a446f94fb57448a8c99c08238df0f13 --migrate_cloud True --source-btt 979a459d-26f9-4aa9-8f0d-21ca7a1f6b4f

6.0.0

Release Date: 18th Jan 2025

What's New?

  1. Deployment of Trilio on RHOCP (with RHOSP17.1).

  2. Support for multiple target backends.

  3. Support for advanced scheduling of workloads' snapshots.

  4. Support for Immutable Backups on S3 target backend.

  5. Support for Canonical Yoga, 2023.1(Antelope) & 2023.2(Bobcat) on Jammy.

Known Issues

  1. T4O upgrade on Canonical OpenStack is not supported

  2. Restore operations fail for image-booted instance after changing the image from public to private

    • Workaround:

      • Make the image public or accessible to the project/user.

  3. Select All option does not work on Backup Target Types page

  4. Sorting the columns on the backup's admin page is not working

  5. Workload Policy field is missing in the workload creation wizard until the scheduler is enabled

  6. In the workload-reassign-workloads help text --new_tenant_ids is provided instead of --new_tenant_id

  7. [Canonical multiple Backup Target] If the NFS Backup Target gets deployed before the S3 Backup Targets, the object store services against the S3 Backup Target go into an error state

    • If the overlay bundle has multiple Backup Target Types defined such that at least one is NFS and one is S3, and the NFS deployment happens first, the NFS mount succeeds whereas the object store services against the S3 Backup Target Types start failing. The error "An error occurred (403) when calling the HeadBucket operation: Forbidden" is observed.

    • Workaround:

      • Manually unmount the NFS mountpoint. All required S3 object store services will then start successfully, along with the mounting of the S3 share path. The unmounted NFS share also gets remounted within 2-3 minutes (via mtab).

  8. The workload-modify CLI command throws an error when the workload_id is placed at the end

    • Workaround:

      • Place the workload_id before the optional arguments.

      • Sample command workloadmgr workload-modify 858da6e0-5b55-4c63-9a8e-d9f116130686 --jobschedule enabled=True --jobschedule start_time='07:00 AM' --jobschedule start_date='01/08/2025' --hourly interval='1' retention='2' snapshot_type='incremental'


Uninstall Trilio

The high-level process is the same for all Distributions.

  1. Uninstall the Horizon Plugin or the Trilio Horizon container

  2. Uninstall the datamover-api container

  3. Uninstall the datamover container

  4. Uninstall the workloadmgr container

Advanced Ceph configurations

Ceph is the most common OpenSource solution to provide block storage through OpenStack Cinder.

Ceph is a very flexible solution, and some of its configuration possibilities require additional steps in the Trilio solution.

Installation Strategy and Preparation

Before embarking on the installation process for Trilio in your OpenStack environment, it is highly advisable to carefully consider several key elements. These considerations will not only streamline the installation procedure but also ensure the optimal setup and functionality of Trilio's solutions within your OpenStack infrastructure.

Tenant Quotas

Trilio leverages Cinder snapshots to facilitate the computation of both full and incremental backups.

When executing full backups, Trilio orchestrates the generation of Cinder snapshots for all volumes included in the backup job. These Cinder snapshots remain intact for subsequent incremental backup image calculations.

During incremental backup operations, Trilio generates fresh Cinder snapshots and computes the altered blocks between these new snapshots and the earlier retained snapshots from full or previous backups. The old snapshots are subsequently deleted, while the newly generated snapshots are preserved.

Consequently, it becomes imperative for each tenant benefiting from Trilio's backup functionality to possess adequate Cinder snapshot quotas capable of accommodating these supplementary snapshots. As a guiding principle, it is recommended to append 2 snapshots for each volume incorporated into the backup quotas for the respective tenant. Additionally, a commensurate increase in volume quotas for the tenant is advisable, as Trilio briefly materializes a volume from the snapshot to access data for backup purposes.

During the restoration process, Trilio generates supplementary instances and Cinder volumes. To facilitate seamless restore operations, tenants should maintain adequate quota levels for Nova instances and Cinder volumes. Failure to meet these quota requirements may lead to disruptions in restoration procedures.
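
For illustration only, the quotas could be raised with the standard OpenStack CLI; the project name and the numbers below are placeholders and should be sized to the workloads being protected:

# Hypothetical example: add headroom for Trilio's temporary snapshots and volumes
openstack quota set --snapshots 40 --volumes 40 <project-name>

# Headroom for restores, which create new instances and volumes
openstack quota set --instances 20 <project-name>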

AWS S3 Eventual Consistency

The AWS S3 object consistency model includes:

  1. Read-after-write

  2. Read-after-update

  3. Read-after-delete

Each of these models explains how an object becomes consistent after being created, updated, or deleted. None of these methods ensures strong consistency, leading to a delay before an object becomes fully consistent.

Although Trilio has introduced measures to address AWS S3's eventual consistency limitations, the exact time an object achieves consistency cannot be predicted through deterministic means.

There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write has a shorter time to reach consistency compared to other IO patterns. Therefore, our solution is designed to maximize the read-after-write IO pattern.

AWS Region Considerations

The time in which an object reaches eventual consistency also depends on the AWS region.

For instance, the AWS-standard region doesn't offer the same level of strong consistency as regions like us-east or us-west. Opting for these regions when setting up S3 buckets for Trilio is advisable. While fully avoiding the read-after-update IO pattern is complex, we've introduced significant access delays for objects to achieve consistency over longer periods. On rare occasions when this does happen, it will cause a backup failure and require a retry.

Trilio Cluster

Trilio can be deployed as a single node or a three-node cluster. It is highly recommended to deploy Trilio as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP or FQDN for the cluster; this is required for both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and to register the Trilio service endpoint in the keystone service catalog.

Uninstalling from OpenStack Helm

1] Uninstall trilio-openstack helm chart

Run the following commands to uninstall the Trilio services from the MOSK cloud.

git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/
cd openstack-helm/trilio-openstack/utils
./uninstall.sh

2] Uninstall Dynamic Backup Targets (Optional)

If you have installed multiple backup targets other than the default one, you can uninstall them using:

3] Verify uninstallation

Verify that the trilio-openstack chart and other components got uninstalled. No objects should be listed in the following command output:

kubectl get pods -n trilio-openstack
kubectl get jobs -n trilio-openstack
kubectl get pv -n trilio-openstack | grep nfs (Only if backup target was NFS)
kubectl get pvc -n trilio-openstack | grep nfs (Only if backup target was NFS)

4] Clean TrilioVault’s OpenStack resources

openstack service list | grep -E 'TrilioVaultWLM|dmapi'
openstack endpoint list | grep -E 'TrilioVaultWLM|dmapi'
# Delete TrilioVault services from keystone catalog
openstack service delete TrilioVaultWLM
openstack service delete dmapi
# Verify TrilioVault services and endpoints got cleaned. Following command output should be empty.
openstack service list | grep -E 'TrilioVaultWLM|dmapi'
openstack endpoint list | grep -E 'TrilioVaultWLM|dmapi'
# Login to database and clean TrilioVault's databases and db users.
MYSQL_DBADMIN_PASSWORD=`kubectl -n openstack get secrets/mariadb-dbadmin-password --template={{.data.MYSQL_DBADMIN_PASSWORD}} | base64 -d`
kubectl -n openstack exec -it mariadb-server-0 -- bash
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'dmapi'@'%%';"
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "DROP USER 'dmapi'@'%%';"
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "DROP DATABASE dmapi;"
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "REVOKE ALL PRIVILEGES, GRANT OPTION FROM 'workloadmgr'@'%%';"
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "DROP USER 'workloadmgr'@'%%';"
mysql -u root -p${MYSQL_DBADMIN_PASSWORD} -e "DROP DATABASE workloadmgr;"

5] Unmount the targets from All Worker and Master Nodes

To unmount the backup target mount points from each node, run:

umount /var/lib/trilio/triliovault-mounts/<base64>

Repeat this on each worker and master node where the target was mounted.

Supported Trilio Upgrade Path

The following upgrade paths are supported:

From ➡️ To

  • 4.1 GA (4.1.94) or 4.1.HFx ➡️ 5.x.x

  • 4.2 GA (4.2.64) or 4.2.HFx or 4.2.x ➡️ 5.x.x

  • 5.0 GA or 5.x.x ➡️ 6.x.x

Serial Upload per Instance during Snapshot

The following steps describe how to configure Trilio for OpenStack to override the default snapshot upload behavior.

By default, during a snapshot, Trilio considers all the instances that are members of a workload and starts uploading the resources of each instance simultaneously. In some cases, it may be required that the snapshots are taken serially per VM, such that the backup of all the disks of one VM/instance is finished before the backup of the other VMs that are part of that workload starts.

Trilio provides a way to change the default behaviour of considering all the instances of a workload during the backup process by introducing a configurable parameter in one of its services.

Follow the below mentioned steps to set the serial upload as the default behavior.

Finding the required configuration file

Depending on the OpenStack distribution, look for the configuration file of the workloadmgr-api (wlm-api) and workloadmgr-workloads (wlm-workloads) services or containers on the controller plane.

The default location of this configuration file in the container or VM where it is running is /etc/triliovault/triliovault-wlm.conf

Once the file is located, take a backup of the file for future reference.

Updating the configuration file

Once the files are located, add the below-mentioned parameter with its value to the [DEFAULT] section of the configuration file: serial_vm_backup=true
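
For illustration, the resulting fragment of /etc/triliovault/triliovault-wlm.conf would look roughly as follows; only the serial_vm_backup line is added, all existing entries stay untouched:

[DEFAULT]
# existing options remain unchanged
serial_vm_backup = true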

After successfully updating the file, restart the services or containers. These steps need to be followed on all controller plane nodes.

General Troubleshooting Tips

Troubleshooting inside a complex environment like OpenStack can be very time-consuming.

The following tips will help to speed up the troubleshooting process to identify root causes.

What is happening where

OpenStack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.

Trilio Workloadmgr

The Trilio Workloadmgr is the Controller of Trilio. It receives all Workload related requests from the users.

Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.

During a backup process

During a backup process, the Trilio Workloadmgr is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls to the OpenStack endpoints of the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Workloadmgr writes it as JSON files on the Backup Target.

The Trilio Workloadmgr also sends the Cinder Snapshot command.

During a restore process

During the restore process, the Trilio Workloadmgr reads the VM metadata from its Database and uses the metadata to create the Shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.

Datamover API

The dmapi service is the connector between the Trilio cluster and the Datamover running on the compute nodes.

The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova API requesting the compute host of a provided VM.

Once the compute host has been identified, the dmapi forwards the command from the Trilio Workloadmgr to the datamover running on the identified compute host.

Datamover

The datamover is the Trilio service running on the compute nodes.

Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.

The datamover controls the freeze and thaw of VMs as well as the actual movement of the data.

Everything on the Backup Target is happening as user nova

Trilio is reading and writing on the Backup Target as nova:nova.

The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise, backup or restores may fail with permission or file not found issues.

Alternative ways to achieve the goal are possible, as long as all required nodes can fully write and read as nova:nova on the Backup Target.

It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
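
One quick way to compare the IDs, assuming the standard nova user exists and the backup target is mounted at the default RHOSP location (adjust the path for your distribution):

# Run on the Trilio nodes and on every compute node; uid and gid must match everywhere
id nova

# Ownership of the mounted backup target must show nova:nova
stat -c '%U:%G %n' /var/lib/nova/triliovault-mounts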

Input/Output Error with Cohesity NFS

On Cohesity NFS, if an Input/Output error is observed, increase the timeo and retrans parameter values in your NFS options.
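
For illustration only, the NFS mount options could be extended along these lines; the exact values are placeholders and should be tuned for your environment:

# Example NFS mount options with increased timeout and retry values
nolock,soft,timeo=600,retrans=5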

Timeout receiving packet error in multipath ISCSi environment

Log in to all datamover containers and add uxsock_timeout with a value of 60000 (equal to 60 seconds) inside /etc/multipath.conf. Then restart the datamover container.
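
A minimal sketch of the relevant /etc/multipath.conf fragment, assuming the option is placed in the defaults section:

defaults {
    # existing settings remain unchanged
    uxsock_timeout 60000
}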

Trilio Trustee Role

Trilio uses RBAC, based on the Trilio trustee role, to allow users access to Trilio features.

This trustee role is absolutely required and can not be overwritten using the admin role.

It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during the creation of Workloads, backups, or restores.

OpenStack Quotas

Trilio creates Cinder Snapshots and temporary Cinder Volumes. The OpenStack quotas need to allow for that.

Every disk that is getting backed up requires one temporary Cinder Volume.

Every Cinder Volume that is getting backed up requires two Cinder Snapshots. The second Cinder Snapshot is temporary and is used to calculate the incremental. For example, a workload containing 3 volumes temporarily needs quota headroom for 3 additional volumes and 6 additional snapshots during a backup.

Important log files

Trilio services logs on RHOSP

On the Controller Node

The following Trilio containers get deployed on the Controller node.

  • triliovault_datamover_api

  • triliovault_wlm_api

  • triliovault_wlm_scheduler

  • triliovault_wlm_workloads

  • triliovault-wlm-cron

The log files for above Trilio services can be found here:

/var/log/containers/triliovault-datamover-api/triliovault-datamover-api.log

/var/log/containers/triliovault-wlm-api/triliovault-wlm-api.log

/var/log/containers/triliovault-wlm-cron/triliovault-wlm-cron.log

/var/log/containers/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log

/var/log/containers/triliovault-wlm-workloads/triliovault-wlm-workloads.log

When S3 is used as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.

/var/log/containers/triliovault-wlm-workloads/triliovault-object-store.log

For file search operations, logs can be found on the Controller node at the below location:

/var/log/containers/triliovault-wlm-workloads/workloadmgr-filesearch.log

On the Compute Node

Trilio Datamover container gets deployed on the Compute node.

The log file for the Trilio Datamover service can be found at:

/var/log/containers/triliovault-datamover/triliovault-datamover.log

Frequently Asked Questions

Frequently Asked Questions about Trilio for OpenStack

1. Can Trilio for OpenStack restore instance UUIDs?

Answer: NO

Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an Inplace Restore, where we only recover the data without creating new instances.

When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.

By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.

2. Can Trilio for OpenStack restore MAC addresses?

Answer: YES

Trilio can restore the VMs MAC address, however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.

In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.

When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.

By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.

Example of Selective Restore with original MAC (and IP address):

  1. In this example, we have taken a Trilio backup of a VM called prod-1.

  2. The VM is deleted and we perform a Selective Restore of the VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu.

  3. Trilio then restores the VM with the original MAC address.

  4. If you leave the option as "Choose next available IP address", a new MAC address will be assigned to the VM instead, as Neutron maps MAC addresses to IP addresses on the subnet; a new IP therefore results in a new MAC address.

Compatibility Matrix

Trilio for OpenStack Compatibility Matrix

Red Hat OpenStack Platform / RHOSO:

Trilio Release | RHOSP/RHOSO version | Linux Distribution | Supported?
6.1.X | 18.0 (RHOSO) | RHEL-9 | ✔️
6.1.X | 17.1 | RHEL-9 | ✔️
6.0.0 | 17.1 | RHEL-9 | ✔️

Canonical OpenStack:

Trilio Release | Canonical OpenStack version | Linux Distribution | Supported?
6.X | 2023.2 (Bobcat) | Jammy | ✔️
6.X | 2023.1 (Antelope) | Jammy | ✔️
6.X | Yoga | Jammy | ✔️

OpenStack Helm:

Trilio Release | OpenStack Helm version | Supported?
6.1.1 | 2023.2 (Bobcat) | ✔️
6.1.1 | 2023.1 (Antelope) | ✔️

NFS & S3 Support:

All versions of T4O-6.x releases support NFSv3 and S3 as backup targets on all the compatible distributions.

Encryption Support:

All versions of T4O-6.x releases support encryption using Barbican service on all the compatible distributions.

QEMU Guest Agent

Installing the QEMU guest agent is an optional but recommended service to run inside the VM to ensure backup data consistency with Trilio. QEMU Guest Agent is the standard OpenSource QEMU/KVM hypervisor agent that runs inside a virtual machine (VM) and communicates with the host system (the hypervisor) to provide enhanced management and control of the VM. It is an essential component of any hypervisor platform that OpenStack can use to provide enhanced functionality when required.

Enabling QEMU Guest Agent

To enable this feature, you must set hw_qemu_guest_agent=yes as a metadata parameter on the image you wish to use to create the guest-agent-capable instances from. Reference: https://docs.openstack.org/nova/pike/admin/configuration/hypervisor-kvm.html
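
For example, the property can be set with the standard OpenStack CLI; the image name below is a placeholder:

openstack image set --property hw_qemu_guest_agent=yes <image-name>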

If the Virtual Machine was already created from a Glance image which doesn't have the hw_qemu_guest_agent=yes property set, the guest agent can be enabled using the following process:

The process is: stop the VM, manually add the lines below to the libvirt definition using "virsh edit <instance>", and start it again:

<channel type='unix'>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>

The VM then needs to be running the qemu-guest-agent software:

Ubuntu/Debian: sudo apt install -y qemu-guest-agent && sudo systemctl enable qemu-guest-agent && sudo systemctl restart qemu-guest-agent && sudo systemctl status qemu-guest-agent --no-pager

RHEL/CentOS VMs: sudo yum install -y qemu-guest-agent && sudo systemctl enable qemu-guest-agent && sudo systemctl restart qemu-guest-agent && sudo systemctl status qemu-guest-agent --no-pager

Windows VMs: Inside the guest OS, install the guest agent tools from: https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win-guest-tools.exe. Finally, make sure the service is running and set to start on boot automatically, in PowerShell, as Administrator: Set-Service QEMU-GA -StartupType Automatic; Restart-Service QEMU-GA; Get-Service QEMU-GA

After installation, you can confirm that the qemu-guest-agent is running properly from the compute host like this:

virsh qemu-agent-command $INSTANCE '{"execute":"guest-ping"}'

The output must be: {"return":{}}

Software Driven Migration: VMware to OpenStack

Software Driven Migration is supported in the T4O 5.X release. Please refer to the VM-Migration-T4O-5.X documentation.

Licensing

After all Trilio components are installed, the license can be applied.

The license can be applied either through the Admin tab in Horizon or the CLI

Apply license through Horizon

To apply the license through Horizon, follow these steps:

  1. Login to Horizon using admin user.

  2. Click on the Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to License

  6. Click "Update License"

  7. Read the license agreement

  8. Click on "I accept the terms in the License Agreement"

  9. Click on "Next"

  10. Click "Choose File"

  11. Choose license file on the client system

  12. Click "Apply"

Apply license through CLI

workloadmgr license-create <license_file>

  • <license_file> ➡️ path to the license file

Read and accept the End User License Agreement to complete the license application.

Users can preview the latest EULA at our main site: https://trilio.io/eula/

Requirements

Cloud-Native and Containerized

T4O represents a comprehensively containerized deployment model, eliminating the necessity for any KVM-based appliances for service deployment. This marks a departure from earlier T4O releases, where such appliances were required.

Summary of Requirements

Trilio requires its containers to be deployed on the same plane as OpenStack, utilizing existing cluster resources.

As described in the architecture overview, Trilio requires sufficient cluster resources to deploy its components on both the Controller Plane and Compute Planes.

  1. Valid Trilio License & Acceptance of the EULA

  2. When deploying the Control Plane to OpenShift, ensure sufficient resources are available on Worker Nodes.

  3. Sufficient storage capacity and connectivity on Cinder for snapshotting operations

  4. Sufficient network capabilities for efficient data transfer of workloads

  5. User and Role permissions for access to required cluster objects

  6. Optional features may have specific requirements such as encryption, file search, snapshot mount, FRM instance, etc

  7. Set hw_qemu_guest_agent=True property on the image and install qemu-guest-agent on the VM, in order to avoid any file system inconsistencies post restore.

  8. For the VMware to OpenStack migration feature, please refer to the prerequisite and limitations pages.

Disaster Recovery

Trilio Workloads are designed to allow a Disaster Recovery without the need to backup the Trilio database.

As long as the Trilio Workloads exist on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.

Disaster Recovery Process

  1. Install and Configure Trilio for the target cloud

  2. Verify required mount-paths and create if necessary

  3. Reassign Workloads

  4. Notify users of Workloads being available

This procedure is designed to be applicable to all OpenStack installations using Trilio. It is to be used as a starting point to develop the exact Disaster Recovery process of a specific environment.

In case the workloads shall be restored instead of only notifying the users, it is necessary to have a user in each project that has the required privileges to restore.

Mount-paths

Trilio incremental Snapshots involve a backing file to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.

Trilio is using qcow2 backing files for this feature:

qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 21M
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.

Trilio is using the base64 hashing algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.

# echo -n 10.10.2.20:/upstream | base64
MTAuMTAuMi4yMDovdXBzdHJlYW0=

When the path of the backing file is not available on the Trilio appliance and Compute nodes, restores of incremental backups will fail.

The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.

#mount --bind <mount-path1> <mount-path2>

Running the mount --bind command will make the necessary path available until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit the fstab.

#vi /etc/fstab
<mount-path1> <mount-path2>	none bind	0 0

E-Mail Notifications

Definition

Trilio can notify users via E-Mail upon the completion of backup and restore jobs.

The E-Mail will be sent to the owner of the Workload.

Requirements to activate E-Mail Notifications

To use the E-mail notifications, two requirements need to be met.

Both requirements need to be set or configured by the OpenStack Administrator. Please contact your OpenStack Administrator to verify the requirements.

User E-Mail assigned

As the E-Mail will be sent to the owner of the Workload, the OpenStack user who created the workload needs to have an E-Mail address associated with their account.
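
For illustration, an OpenStack administrator could associate an address using the standard OpenStack CLI; the user name and address below are placeholders:

openstack user set --email backup-owner@example.com <user-name>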

Trilio E-Mail Server configured

Trilio needs to know which E-Mail server to use, to send the E-mail notifications. Backup Administrators can do this in the "Backup Admin" area.

Activate/Deactivate the E-Mail Notifications

E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Settings

  4. Check/Uncheck the box for "Enable Email Alerts"

Example E-Mails

The following example notification E-Mails are sent by Trilio: for a successful Snapshot, a failed Snapshot, a successful Restore, and a failed Restore.

Additions for multiple CEPH configurations

It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types. Or to have the nova boot volumes and cinder block volumes controlled by different users.

If multiple Ceph storages are configured/integrated with the OpenStack, please ensure that respective conf and keyring files are present in /etc/ceph directory.

In the case of multiple Ceph users, it is required to delete the keyring extension from the triliovault-datamover.conf inside the Ceph block by following below mentioned steps:

  1. Deploy Trilio as per the documented steps.

  2. Post successful deployment, please modify the triliovault-datamover.conf file present at following locations on all compute nodes.

    For RHOSP : /var/lib/config-data/puppet-generated/triliovaultdm/etc/triliovault-datamover/

    For Kolla : /etc/kolla/triliovault-datamover/

  3. Modify the keyring_ext value with a valid keyring extension (e.g. .keyring). This extension is expected to be the same for all the keyring files. It is present under the [ceph] block in the triliovault-datamover.conf file.

Sample conf entry below. This will try all files with the extension keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task.

[ceph]
keyring_ext = .keyring

  4. Restart the triliovault_datamover container on all compute nodes.

Migrating encrypted Workloads

Same cloud - different owner

Migration within the same cloud to a different owner:

Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B

Steps used:

  1. Create a secret for Project A in Domain A via User A.

  2. Create encrypted workload in Project A in Domain A via User A. Take snapshot.

  3. Reassign workload to new owner

  4. Load rc file of User A & provide read only rights through acl to the new owner

    openstack acl user add --user <userB_id> <secret_href> --insecure

Different cloud

Migration between clouds:

Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B

Steps used:

  1. Create a secret for Project A in Domain A via User A.

  2. Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.

  3. Reassign workload to Cloud B - Domain B — Project B — User B

  4. Load RC file of User B.

  5. Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.

  6. Create token via “openstack token issue --insecure”

  7. Add the migrated workload's metadata to the new secret (provide the issued token as the Auth-Token and the workload id in the metadata, as below)

curl -i -X PUT \
   -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
   -H "Content-Type:application/json" \
   -d \
'{
  "metadata": {
      "workload_id": "c13243a3-74c8-4f23-b3ac-771460d76130",
      "workload_name": "workload-c13243a3-74c8-4f23-b3ac-771460d76130"
    }
}' \
 'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'

curl -i -X GET \
   -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
 'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'

Resources

Each T4O release includes a set of artifacts such as version-tagged containers, package repositories, and distribution packages.

To help users quickly identify the resources associated with each release, we have added dedicated sub-pages for each release version (6.1.1, 6.1.0, and 6.0.0).

Rebasing existing workloads

The Trilio solution is using the qcow2 backing file to provide full synthetic backups.

Especially when the NFS backup target is used, there are scenarios at which this backing file needs to be updated to a new mount path.

To make this process easier and more streamlined, Trilio provides a rebase tool.

Managing Trusts

Trilio is using the OpenStack Keystone Trust system, which enables the Trilio service user to act in the name of another OpenStack user.

This system is used during all backup and restore features.

OpenStack Administrators should never have the need to directly work with the trusts created.

The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

Trusts can only be worked with via CLI

List all trusts

workloadmgr trust-list

Show a trust

workloadmgr trust-show <trust_id>

  • <trust_id> ➡️ ID of the trust to show

Create a trust

workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>

  • <role_name> ➡️Name of the role that trust is created for

  • --is_cloud_trust {True,False} ➡️ Set to True if creating a cloud admin trust. While creating the cloud trust, use the same user and tenant that were used to configure Trilio and keep the role admin.

Delete a trust

workloadmgr trust-delete <trust_id>

  • <trust_id> ➡️ ID of the trust to be deleted



Features

A list of impactful features in Trilio for OpenStack.

Immutable Backups

Release version: 6.0

Trilio offers the capability to create immutable T4O backups/snapshots, ensuring they cannot be deleted until the specified retention period has ended. It leverages the target's locking and versioning features to achieve this immutability.


Purpose

In recent years, the Trilio S3 Fuse plugin introduced the ability to support backup immutability through the S3 object lock mechanism. This feature ensures that backups remain secure even if the OpenStack platform is compromised, acting as a crucial safeguard against cyber attacks. While traditional backup solutions prevent direct exposure of backup media to the platform, Trilio's scale-out architecture necessitates direct attachment of the backup media to the OpenStack nodes, making object-lock based immutability an important additional safeguard.


Key Highlights

  • Pre-requisites:

    • Create a new S3 bucket to enable the object-locking feature, as it cannot be enabled on existing buckets (see the bucket creation example after this list).

    • Configure Object Locking in Compliance mode with a retention period of at least one day for enhanced protection.

    • Disable object lifecycle policies to prevent automated deletions.

    • Perform manual deletion of objects only after the retention period has expired.

  • Retention

    With the introduction of the immutable backups feature, Trilio’s retention policy has been changed. It's important to note that Trilio can no longer support incremental/full synthetic backups forever, as this functionality requires modifying the backup images. Instead, the following retention policy is now in place:

    • If the user wants to retain n number of backups, then the user needs to take full backups every n backups.

    • To keep at least n backups available, Trilio may retain 2n backups, forming two backup chains. Once a chain is fully formed with one full backup and n-1 incremental backup images, the second chain will start with a new full backup. The first chain will be removed once the second chain is fully formed with a full backup and n-1 incremental backups. For example, with n=7, up to 14 backups may exist on the target at a time: a completed chain of one full and six incremental backups, plus the chain that is currently being formed.
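
As an illustration of the bucket pre-requisites above, a compliant bucket could be created with the AWS CLI roughly as follows; the bucket name, region, and retention period are placeholders, and equivalent steps apply to other S3-compatible stores:

# Create a new bucket with object locking enabled (it cannot be enabled on an existing bucket)
aws s3api create-bucket --bucket my-trilio-backups --region us-east-1 --object-lock-enabled-for-bucket

# Set a default retention rule in Compliance mode with a retention period of at least one day
aws s3api put-object-lock-configuration --bucket my-trilio-backups \
  --object-lock-configuration '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":1}}}'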


How To Use It

Follow the steps mentioned here to know how to use this feature in the product.


Configuration

To enable this feature, you need to adjust certain configuration settings in the product. Refer to the deployment guides of the desired distribution for detailed instructions.


Limitations

  • The feature may not work as expected if the object lifecycle policy is set for a backup target.

  • The same backup target cannot be used for DR or migration purposes, additional steps will be required to perform it.


Advanced Scheduler & Retention

Release version: 6.0

The Advanced Scheduler is a newly introduced feature in Trilio that provides enhanced flexibility and granularity in scheduling and retaining snapshots. With this feature, users can now schedule both full and incremental snapshots across various intervals—hourly, daily, weekly, monthly, and yearly. Each scheduling type offers customizable retention policies, allowing users to manage snapshots as per their specific requirements effectively.


Purpose

The purpose of the Advanced Scheduler is to empower users with more control over their snapshot management. By addressing limitations in the previous scheduler, this feature ensures improved efficiency, optimized resource usage, and greater alignment with organizational backup policies. This feature is especially beneficial for organizations with diverse workloads that demand varying backup frequencies and retention policies.


Key Highlights

  • Flexible Scheduling Intervals: Users can now schedule snapshots across the following intervals:

    • Hourly: For tasks requiring frequent backups.

    • Daily: Ideal for daily operations requiring end-of-day snapshots.

    • Weekly: Suitable for less critical workloads.

    • Monthly: For long-term periodic backups.

    • Yearly: Designed for archival purposes.

  • Snapshot Type Options:

    • Full Snapshots: Each scheduling type allows the user to choose the full snapshot type. But, full snapshot type is mandatory for Weekly, Monthly, and Yearly scheduling types to avoid longer backup chains which may lead to data corruption.

    • Incremental Snapshots: Hourly and Daily scheduling types offer a choice of either incremental or full snapshots (see the sample command after this list).

  • Advanced Retention Policies:

    • Users can define retention settings for each scheduling type.

    • Retention can be based on:

      • Number of Snapshots: Keep a specified number of snapshots for the chosen schedule.

      • Retention Duration: This is based on the scheduling type selected by the user.

  • Enhanced Management Efficiency:

    • Simplifies backup strategies by tailoring frequency, type, and retention for specific workloads.

    • Supports compliance and long-term data retention policies.
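
For illustration, an hourly incremental schedule with a retention of 24 snapshots could be configured via the CLI, following the same syntax as the sample command in the release notes; the workload id and values below are placeholders:

workloadmgr workload-modify <workload_id> --jobschedule enabled=True --hourly interval='1' retention='24' snapshot_type='incremental'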


How To Use It

Follow the steps mentioned here to know how to use this feature in the product.


Limitations

While the Advanced Scheduler introduces significant improvements, it has certain limitations:

  1. Immutable Backups:

    • If the user chooses immutable backup storage, the scheduling policy offers limited options: only the hourly scheduling and retention types are available, and hourly offers only the 24-hour interval option.

  2. Resource Utilization:

    • Frequent full snapshots (e.g., on an hourly basis) may lead to higher storage and performance overhead.

  3. Complexity in Configuration:

    • Advanced options may require more planning and understanding, increasing the learning curve for new users.

  4. Dependency on Storage Quotas:

    • Users must ensure sufficient storage capacity to handle the retention policies and the frequency of snapshots.

  5. Manual Monitoring:

    • In cases of excessive scheduling, manual intervention might be necessary to adjust retention policies or optimize schedules.


Multiple Backup Targets

Release version: 6.0

Previously, Trilio allowed a single backup target per cloud, either NFS or S3, for all backups. Now, Trilio supports multiple backup targets, enabling both NFS and S3 types to coexist as backup options. Admin users can configure multiple backup targets and define their accessibility for tenant users. Additionally, admin users have the ability to restrict backup targets to specific projects for enhanced control.


Purpose

The feature aims to enhance the flexibility, scalability, and manageability of backup configurations in Trilio. By supporting multiple backup targets:

  • Organizations can align their backup strategies with diverse operational requirements.

  • Admin users can provide tenant users with more options for workload backups.

  • Access control over backup targets is improved, allowing organizations to secure sensitive data.


Key Highlights

  • Support for Multiple Backup Targets: Trilio now supports both NFS and S3 simultaneously for backup storage.

  • Admin Configuration: Admin users can configure multiple backup targets during deployment.

  • Tenant User Flexibility: Tenant users can choose from available backup targets when creating workloads.

  • Enhanced Access Control: Admin users can mark backup targets as private and restrict access to specific projects by creating backup target types and marking them as private.


How To Use It

Follow the steps mentioned here to know how to use this feature in the product.


Configuration

To take advantage of this feature, you need to adjust certain configuration settings in the product. Refer to the deployment guides of the desired distribution for detailed instructions.


Limitations

  • Backup Targets are configured during deployment. Modifying them may require redeploying Trilio.

  • While this feature is highly expandable, it currently limits the isolation of the backup targets to the projects only.

Switching NFS Backing file

Trilio employs a base64 hash to establish the mount point for NFS Backup targets, ensuring compatibility across multiple NFS Shares within a single Trilio installation. This hash is an integral component of Trilio's incremental backups, functioning as an absolute path for backing files.

Consequently, during a disaster recovery or rapid migration situation, the utilization of a mount bind becomes necessary.

In scenarios that allow for a comprehensive migration period, an alternative approach comes into play. This involves modifying the backing file, thereby enabling the accessibility of Trilio backups from a different NFS Share. The backing file is updated to correspond with the mount point of the new NFS Share.
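
Conceptually, updating the backing file reference of a single qcow2 image is similar to an unsafe qemu-img rebase, shown below purely as an illustration with placeholder paths; for actual workloads the provided script should be used, as it handles all images of a workload consistently:

# Unsafe rebase: only rewrites the backing-file pointer, no data is copied
qemu-img rebase -u -b <new-mount-path>/<backing-file> <incremental-image>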

Backing file change script

Trilio provides a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.

Downloading the shell script

Please request the shell script from your Trilio Customer Success Manager or Customer Success Engineer by opening a case from our Customer Portal. It is not publicly available for download at this time.

Prerequisites

The following requirements need to be met before the change of the backing file can be attempted.

  • The Trilio Appliance has been reconfigured with the new NFS Share

  • The OpenStack environment has been reconfigured with the new NFS Share

    • Please check here for Red Hat OpenStack Platform

  • The workloads are available on the new NFS Share

  • The workloads are owned by nova:nova user

Usage

The shell script changes one workload at a time.

The shell script has to be run as the nova user; otherwise, the ownership will get changed and the backup can not be used by Trilio.

Run the following command:

./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>

with

  • /var/triliovault-mounts/<base64>/ being the new NFS mount path

  • workload_<workload_id> being the workload to rebase

Logging of the procedure

The shell script is generating the following log file at the following location:

/tmp/backing_file_update.log

The log file will not get overwritten when the script is run multiple times. Each run of the script will append to the existing log file.

Upgrading on RHOSO18.0

1] Configuration change

If any configuration parameter in tvo-operator-inputs.yaml changed, such as a DB user password or service endpoints, you can apply the changes using the following command.

cd ctlplane-scripts
./deploy_tvo_control_plane.sh

The above command will output 'configured' or 'unchanged' depending on whether changes happened in tvo-operator-inputs.yaml.

2] Upgrade to new build

Please follow the below steps to upgrade to a new build on an RHOSO18 setup.

Take a backup of existing triliovault-cfg-scripts and clone latest triliovault-cfg-scripts github repository.

mv triliovault-cfg-scripts triliovault-cfg-scripts-old
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18

Manually copy the input values from triliovault-cfg-scripts-old to the latest directory.

vi triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/ctlplane-scripts/tvo-operator-inputs.yaml --> triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts/tvo-operator-inputs.yaml
vi triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/cm-trilio-datamover.yaml --> triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/cm-trilio-datamover.yaml
vi triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-datamover-service.yaml --> triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-datamover-service.yaml
vi triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-data-plane-deployment.yaml --> triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-data-plane-deployment.yaml

2.1] Upgrade Trilio for OpenStack Operator

Run operator deployment with new image tag as mentioned in step 2 of this documentation

2.2] Upgrade Trilio OpenStack Control Plane Services

Update the image tags in tvo-operator-inputs.yaml as mentioned in step 3.2 of this documentation and apply the changes using below command

oc apply -n trilio-openstack -f tvo-operator-inputs.yaml

Verify the deployment status and successful deployment.

2.3] Upgrade Trilio Data Plane Services

Update the image tags in cm-trilio-datamover.yaml and apply the changes as mentioned in step 4.2 of this documentation

Update the ansible runner tag in trilio-datamover-service.yaml and apply the changes as mentioned in step 4.4 of this documentation

Update the deployment name and trigger deployment as mentioned in step 4.5 of this documentation

Verify the deployment as mentioned in step 4.6 of this documentation

2.4] Upgrade Trilio Horizon Plugin

Follow step 5 of this documentation and update the trilio horizon plugin image tag.

Network Considerations

Trilio seamlessly integrates with OpenStack, functioning exclusively through APIs utilizing the OpenStack Endpoints. Furthermore, Trilio establishes its own set of OpenStack endpoints. Additionally, both the Trilio appliance and compute nodes interact with the backup target, impacting the network strategy for a Trilio installation.

Existing endpoints in OpenStack

OpenStack comprises three endpoint groupings:

  • Public Endpoints

Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack.

  • Internal Endpoints

Internal endpoints are intended to be used by the OpenStack services to communicate with each other.

  • Admin Endpoints

Admin endpoints are meant to be used by OpenStack administrators.

Among these three endpoint categories, it's important to note that the admin endpoint occasionally hosts APIs not accessible through any other type of endpoint.

To learn more about OpenStack endpoints please visit the official OpenStack documentation.

OpenStack endpoints required by Trilio

Trilio communicates with all OpenStack services through a designated endpoint type, determined and configured during the deployment of Trilio's services.

It is recommended to configure connectivity through the admin endpoints if available.

The following network requirements can be identified this way:

  • Trilio services need access to the Keystone admin endpoint on the admin endpoint network if it is available.

  • Trilio services need access to all endpoints of the set endpoint type during deployment.

Recommendation: Provide access to all OpenStack Endpoint types

Trilio recommends granting comprehensive access to all OpenStack endpoints for all Trilio services, aligning with OpenStack's established standards and best practices.

Additionally, Trilio generates its own endpoints, which are integrated within the same network as other OpenStack API services.

To adhere to OpenStack's prescribed standards and best practices, it's advisable that Trilio containers operate on the same network as other OpenStack containers.

  • The public endpoint to be used by OpenStack users when using Trilio CLI or API

  • The internal endpoint to communicate with the OpenStack services

  • The admin endpoint to use the required admin-only APIs of Keystone

Backup target access required by Trilio

The Trilio solution uses backup target storage to place the backup data securely. Trilio divides its backup data into two parts:

  1. Metadata

  2. Volume Disk Data

The first type of data is generated by the Trilio Workloadmgr services through communication with the OpenStack Endpoints. All metadata that is stored together with a backup is written by the Trilio Workloadmgr services to the backup target in the JSON format.

The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the Volume Data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.

The network requirements are therefore:

  • Every Trilio Workloadmgr service container needs access to the backup target

  • Every Trilio Datamover service container needs access to the backup target

Upgrading on RHOSP

1] Prerequisites

Please ensure the following requirements are met before starting the upgrade process:

  • No Snapshot or Restore is running

  • Global job scheduler is disabled

  • Disable the wlm services on all controller nodes

sudo systemctl stop tripleo_triliovault_wlm_api.service
sudo systemctl disable tripleo_triliovault_wlm_api.service
sudo systemctl stop tripleo_triliovault_wlm_scheduler.service
sudo systemctl disable tripleo_triliovault_wlm_scheduler.service
sudo systemctl stop tripleo_triliovault_wlm_workloads.service
sudo systemctl disable tripleo_triliovault_wlm_workloads.service
  • Disable the datamover service on all compute nodes

sudo systemctl stop tripleo_triliovault_datamover.service
sudo systemctl disable tripleo_triliovault_datamover.service
sudo umount /var/lib/nova/triliovault-mounts

2] Install latest Trilio release

Take a backup of existing triliovault-cfg-scripts directory

cd /home/stack
mv triliovault-cfg-scripts/ triliovault-cfg-scripts-5.x

Clone triliovault-cfg-scripts github repository of latest release.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git

Copy the passwords and triliovault wlm ids files from old cfg-scripts directory.

cp /home/stack/triliovault-cfg-scripts-5.x/redhat-director-scripts/rhosp17/environments/passwords.yaml /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_passwords.yaml
cp /home/stack/triliovault-cfg-scripts-5.x/redhat-director-scripts/rhosp17/puppet/trilio/files/triliovault_wlm_ids.conf /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/

Please follow this documentation from section 1.4 to section 10. Please note that sections 5 and 6 need to be skipped during this process.

While providing the backup target details in section 4.1, please ensure to provide the existing backup target (of 5.x release) as default.

3] Restart wlm-cron resource

From any one of the controller nodes, run the below command.

sudo pcs resource restart triliovault-wlm-cron-podman-0

4] Fetch backup target details

From any one of the controller nodes, log in to the wlm api container and run the below commands to fetch the backup target details.

sudo podman exec -it triliovault_wlm_api bash
source /etc/triliovault-wlm/cloud_admin_rc

workloadmgr backup-target-type-list

This command will return the backup_target_id and backup_target_type_id.

5] Import workloads

From any one of the controller nodes, log in to the wlm api container and run the below commands.

sudo podman exec -it triliovault_wlm_api bash
source /etc/triliovault-wlm/cloud_admin_rc

workloadmgr abandon-resource --all-workloads --all-policies --cloud-wide
workloadmgr workload-get-importworkloads-list --source-bt {{ backup_target_id }}
workloadmgr workload-importworkloads --source-btt {{ backup_target_type_id }}

6] Update backing file path

This step would be needed only when your old backup target is S3.

Please follow this documentation to update the backing file path of existing snapshots.

Welcome to Trilio for OpenStack

Trilio Data, Inc

Trilio Data, Inc. is the leader in providing backup and recovery solutions for cloud-native applications. Established in 2013, its flagship products, Trilio for OpenStack (T4O) and Trilio for Kubernetes (T4K), are used by a large number of corporations around the world.

About Trilio for OpenStack

T4O, by Trilio Data, is a native OpenStack service-based solution that provides policy-based comprehensive backup and recovery for OpenStack workloads. It captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS, AWS S3, and other S3-compatible storages. With Trilio and its one-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection, and integrity.

With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload consisting of compute resources, network configurations, and storage data as one unit. It also takes incremental backups that capture only the changes that were made since the last backup. Incremental snapshots save a considerable amount of storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as follows:

Efficient Capture and Storage of Snapshots

Since our full backups only include data that is committed to the storage volume, and the incremental backups only include blocks of data changed since the last backup, our backup processes are efficient and keep the backup images compact on the backup media.

Faster and Reliable Recovery

When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just the click of a button.

Easy Migration of Workloads between Clouds

Trilio captures all the details of your application and hence our migration includes your entire application stack without leaving anything for guesswork.

Low Total Cost of Ownership

Our tenant-driven backup policy and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

About this Documentation

This documentation serves as the end-user technical resource to accompany Trilio for OpenStack. You will learn about the architecture, installation, and vast number of operations of this product. The intended audience is anyone who wants to understand the value, operations, and nuances of protecting their cloud-native applications with Trilio for OpenStack.

Multi-IP NFS Backup target mapping file configuration

Introduction

Filename and location:

This file exclusively comes into play when users aim to configure Trilio with an NFS backup target that employs multiple network endpoints. For all other scenarios, such as single-IP NFS or S3, this file remains inactive; in such instances, please consult the standard installation documentation.

When using an NFS backup target with multiple network endpoints, T4O will mount a single IP/endpoint on a designated compute node for a specific NFS share. This approach enables users to distribute NFS share IPs/endpoints across various compute nodes.

The 'triliovault_nfs_map_input.yml' file allows users to distribute/load balance NFS share endpoints across compute nodes in a given cloud.

Note: Mounting two IPs/endpoints of the same NFS share on a single compute node is an invalid scenario and is not required, because the backend stores the data in the same place.

Examples

Using hostnames

Here, the user has one NFS share exposed via three IP addresses: 192.168.1.34, 192.168.1.35, and 192.168.1.33. The share directory path is /var/share1.

So, this NFS share supports the following full paths that clients can mount:

There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern

The remaining 2 node hostnames do not follow any format/pattern.

Now the mapping file will look like this

Using IPs

Compute node IP ranges used here: 172.30.3.11-40, plus 172.30.4.40 and 172.30.4.50; a total of 32 compute nodes.

Other complex examples are available on GitHub at

Getting the correct compute hostnames/IPs

RHOSP

Use the following command to get compute hostnames. Check the ‘Name' column. Use these exact hostnames in 'triliovault_nfs_map_input.yml' file.

In the following command output, ‘overcloudtrain1-novacompute-0' and ‘overcloudtrain1-novacompute-1' are correct hostnames.

Run this command on undercloud by sourcing 'stackrc'.
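
For example (assuming the stackrc file is in the stack user's home directory on the undercloud node):

source ~/stackrc
openstack server list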

Schedulers

Definition

Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.

A schedule is defined by:

  • Status (Enabled/Disabled)

  • Start Day/Time

  • End Day

  • Hrs between 2 snapshots

Disable a schedule

Using Horizon

To disable the scheduler of a single Workload in Horizon do the following steps:

  1. Login to the Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload to be modified

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Edit Workload"

  7. Navigate to the tab "Schedule"

  8. Uncheck "Enabled"

  9. Click "Update"

Using CLI

  • --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>

Enable a schedule

Using Horizon

To enable the scheduler of a single Workload in Horizon do the following steps:

  1. Login to the Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload to be modified

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Edit Workload"

  7. Navigate to the tab "Schedule"

  8. Check "Enabled"

  9. Click "Update"

Using CLI

  • --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be enabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>

Modify a schedule

To modify a schedule the workload itself needs to be modified.

Please follow this procedure to modify the workload.

Verify the scheduler trust is working

Trilio is using the OpenStack Keystone Trust system, which enables the Trilio service user to act in the name of another OpenStack user.

This system is used during all backup and restore features.

Using Horizon

As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the Scheduler on the Workload list page.

Using CLI

  • <workload_id> ➡️ ID of the workload to validate

Installing WorkloadManager CLI client

About the workloadmgr CLI client

The workloadmgr CLI client is provided as rpm and deb packages.

The following operating systems have been verified with the installation:

  • Rocky9, RHEL9

  • CentOS8

  • Ubuntu 18.04, Ubuntu 20.04

Installing the workloadmgr client will automatically install all required OpenStack clients as well.

Further, the installation of the workloadmgr client will integrate the client into the global OpenStack Python client, if available.

Install workloadmgr client rpm package on CentOS8

The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.

Preparing the workloadmgr client installation

The following steps need to be done to prepare the installation of the workloadmgr client:

  1. Add required repositories

    1. epel-release yum -y install epel-release

    2. centos-release-openstack-train yum -y install centos-release-openstack-train

These repositories are required to fulfill the following dependencies:

python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient

Installing the workloadmgr client

Installing from the Trilio online repository

To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo. Enter the following details into the repository file:

Install the workloadmgr client issuing the following command:

yum install python3-workloadmgrclient-el8

Install workloadmgr client rpm package on Rocky9 and RHEL9

The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.

Preparing the workloadmgr client installation

The following steps need to be done to prepare the installation of the workloadmgr client:

  1. Add required repositories

    1. epel-release yum -y install epel-release

Installing the workloadmgr client

Installing from the Trilio online repository

To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo. Enter the following details into the repository file:

Install the workloadmgr client issuing the following command:

yum install python3-workloadmgrclient-el9

Install workloadmgr client deb packages on Ubuntu

The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.

Preparing the workloadmgr client installation

There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.

Installing the Workloadmgr client

Installing from the Trilio online repository

To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list. Enter the following details into the repository file:

Run apt update to make the new repository available.

The apt package manager is used to install the workloadmgr client package:

apt-get install python3-workloadmgrclient

Workload Quotas

Trilio enables OpenStack administrators to set Project Quotas against the usage of Trilio.

The following Quotas can be set:

  • Number of Workloads a Project is allowed to have

  • Number of Snapshots a Project is allowed to have

  • Number of VMs a Project is allowed to protect

  • Amount of Storage a Project is allowed to use on the Backup Target

Work with Workload Quotas via Horizon

The Trilio Quota feature is available for all supported OpenStack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.

Workload Quotas are managed like any other Project Quotas.

  1. Login into Horizon as user with admin role

  2. Navigate to Identity

  3. Navigate to Projects

  4. Identify the Project to modify or show the quotas on

  5. Use the small arrow next to "Manage Members" to open the submenu

  6. Choose "Modify Quotas"

  7. Navigate to "Workload Manager"

  8. Edit Quotas as desired

  9. Click "Save"

Work with Workload Quotas via CLI

List available Quota Types

Trilio provides several different Quota Types. The following command allows listing them.

Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quota a Tenant has to comply with.

Show Quota Type Details

The following command will show the details of a provided Quota Type.

  • <quota_type_id> ➡️ID of the Quota Type to show

Create a Quota

The following command will create a Quota for a given project and set the provided value.

  • <quota_type_id> ➡️ID of the Quota Type to be created

  • <allowed_value>➡️ Value to set for this Quota Type

  • <high_watermark>➡️ Value to set for High Watermark warnings

  • <project_id>➡️ Project to assign the quota to

The high watermark is automatically set to 80% of the allowed value when set via Horizon.

A created Quota will generate an allowed_quota_object with its own ID. This ID is needed when continuing to work with the created Quota.
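
For illustration, creating a quota that allows a project 10 Workloads with a high watermark of 8 (80% of the allowed value) could look like the following; the quota type ID and project ID are placeholders to be taken from the commands on this page and from your environment:

workloadmgr project-allowed-quota-create --quota-type-id <quota_type_id> \
                                         --allowed-value 10 \
                                         --high-watermark 8 \
                                         --project-id <project_id>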

List allowed Quotas

The following command lists all Trilio Quotas set for a given project.

  • <project_id>➡️ Project to list the Quotas from

Show allowed Quota

The following command shows the details about a provided allowed Quota.

  • <allowed_quota_id> ➡️ID of the allowed Quota to show.

Update allowed Quota

The following command shows how to update the value of an already existing allowed Quota.

  • <allowed_value>➡️ Value to set for this Quota Type

  • <high_watermark>➡️ Value to set for High Watermark warnings

  • <project_id>➡️ Project to assign the quota to

  • <allowed_quota_id> ➡️ID of the allowed Quota to update

Delete allowed Quota

The following command will delete an allowed Quota and set the value of the connected Quota Type back to unlimited for the affected project.

  • <allowed_quota_id> ➡️ID of the allowed Quota to delete

[trilio]
name=Trilio Repository
baseurl=https://yum.fury.io/trilio-6-0/
enabled=1
gpgcheck=0
[trilio]
name=Trilio Repository
baseurl=https://yum.fury.io/trilio-6-0/
enabled=1
gpgcheck=0
deb [trusted=yes] https://apt.fury.io/trilio-6-0/ /
triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml
192.168.1.33:/var/share1 
192.168.1.34:/var/share1 
192.168.1.35:/var/share1
prod-compute-1.trilio.demo 
prod-compute-2.trilio.demo 
prod-compute-3.trilio.demo 
. 
. 
. 
prod-compute-30.trilio.demo
compute_bare.trilio.demo 
compute_virtual
multi_ip_nfs_shares: 
 - "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo'] 
   "192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual'] 
   "192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo'] 

single_ip_nfs_shares: []
multi_ip_nfs_shares: 
 - "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40'] 
   "192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50'] 
   "192.168.1.33:/var/share1": ['172.30.3.[31:40]'] 

single_ip_nfs_shares: []
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
triliovault-cfg-scripts/common/examples-multi-ip-nfs-map at master · trilioData/triliovault-cfg-scripts
workloadmgr disable-scheduler --workloadids <workloadid>
workloadmgr enable-scheduler --workloadids <workloadid>
workloadmgr scheduler-trust-validate <workload_id>
modify the workload
OpenStack Keystone Trust system
Screenshot of a Workload with established scheduler trust
workloadmgr project-quota-type-list
workloadmgr project-quota-type-show <quota_type_id>
workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
                                         --allowed-value allowed_value 
                                         --high-watermark high_watermark 
                                         --project-id project_id
workloadmgr project-allowed-quota-list <project_id>
workloadmgr project-allowed-quota-show <allowed_quota_id>
workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
                                         [--high-watermark <high_watermark>]
                                         [--project-id <project_id>]
                                         <allowed_quota_id>
workloadmgr project-allowed-quota-delete <allowed_quota_id>
Screenshot of Horizon integration for Workload Manager Quotas

File Search

Definition

The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

Navigating to the file search tab in Horizon

The file search tab is part of every workload overview. To reach it follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload a file search shall be done in

  5. Click the workload name to enter the Workload overview

  6. Click File Search to enter the file search tab

Configuring and starting a file search Horizon

A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

To run a file search the following elements need to be decided and configured

Choose the VM the file search shall run against

Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.

VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

Set the File Path

The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

The File Path has to start with a '/'

Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'

The file search does not go into deeper directories and always searches on the directory provided in the File Path

Example File Path for all files inside /etc : /etc/*

Define the Snapshots to search in

"Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

There are 3 possibilities for a pre-filtering:

  1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

  2. Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

  3. Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.

After the pre-filtering is done, all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.

When no Snapshot is chosen the file search will not start.

Start the File Search and retrieve the results in Horizon

To start a File Search the following elements need to be set:

  • A VM to search in has to be chosen

  • A valid File Path provided

  • At least one Snapshot to search in selected

Once those have been set click "Search" to start the file search.

Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.

After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.

For each found file or folder the following information is provided:

  • POSIX permissions

  • Amount of links pointing to the file or folder

  • User ID who owns the file or folder

  • Group ID assigned to the file or folder

  • Actual size in Bytes of the file or folder

  • Time of creation

  • Time of last modification

  • Time of last access

  • Full path to the found file or folder

Once the Snapshot of interest has been identified, it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.

Doing a CLI File Search

workloadmgr filepath-search [--snapshotids <snapshotid>]
                            [--end_filter <end_filter>]
                            [--start_filter <start_filter>]
                            [--date_from <date_from>]
                            [--date_to <date_to>]
                            <vm_id> <file_path>
  • <vm_id> ➡️ ID of the VM to be searched

  • <file_path> ➡️ Path of the file to search for

  • --snapshotids <snapshotid> ➡️ Search only in the specified snapshot IDs (include the snapshot with this UUID)

  • --end_filter <end_filter> ➡️ Display the last N snapshots, for example the last 10 snapshots. The default 0 means all snapshots are displayed.

  • --start_filter <start_filter> ➡️ Display snapshots starting from a given position, for example starting from snapshot 5. The default 0 means the search starts from the first snapshot.

  • --date_from <date_from> ➡️ From date in the format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If the time is not specified, 00:00 is taken by default.

  • --date_to <date_to> ➡️ To date in the format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day (inclusive/exclusive results for date_from and date_to).
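
As a minimal illustration (the VM ID and search string are placeholders, reused from the API example later in this guide), a search for host-related files under /etc across the last 10 snapshots could look like:

workloadmgr filepath-search --end_filter 10 ed4f29e8-7544-4e1c-af8a-a76031211926 '/etc/h*'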

Install Dynamic Backup Target

Starting with T4O 6.x, a new feature has been introduced to add a new backup target without the need to redeploy the complete T4O. Please follow the steps below to add a backup target.

1. Update backup target specific yaml file

1.1] Navigate to the backup targets chart

cd triliovault-cfg-scripts/openstack-helm/trilio-backup-targets/values_overrides

1.2] Update the <backup-target-type>.yaml file (nfs.yaml, other_s3.yaml or amazon_s3.yaml).

NFS Backup Target Example

trilio_backup_target:
  backup_target_name: 'NFS_BackupTarget'
  backup_target_type: 'nfs'
  is_default: true
  nfs_server: '10.10.0.1'
  nfs_shares: /home/openstack-helm
  nfs_options: "nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10"
  storage_size: 20Gi
  storage_class_name: nfs

images:
  trilio_backup_targets: docker.io/trilio/trilio-wlm-helm:<image-tag>

S3 Backup Target Example

trilio_backup_target:
  backup_target_name: 'S3_BackupTarget'
  backup_target_type: 's3'
  is_default: true

  # S3 Configuration
  s3_type: 'amazon_s3'           # if not Amazon S3, use 'other_s3'
  s3_access_key: 'ACCESSKEY1'
  s3_secret_key: 'SECRETKEY1'
  s3_region_name: 'REGION1'
  s3_bucket: 'BUCKET1'
  s3_endpoint_url: ''             # required only for 'other_s3'
  s3_signature_version: 'default'
  s3_auth_version: 'DEFAULT'
  s3_ssl_enabled: true
  s3_ssl_verify: true
  s3_ssl_ca_cert: ''              # add CA cert for 'other_s3'

images:
  trilio_backup_targets: docker.io/trilio/trilio-wlm-helm:<image-tag>

a. If using Amazon S3, s3_endpoint_url and s3_ssl_ca_cert can be left empty.

b. If using Other S3 (like MinIO, CEPH S3, etc.), s3_endpoint_url and s3_ssl_ca_cert (if required) must be set.

2. Install/Upgrade Backup Target

2.1] Helm Install/Upgrade Command

Please ensure that values_overrides/<backup-target-type>.yaml file is updated with correct NFS/S3 backup target configuration.

helm upgrade --install <release-name> ./trilio-backup-targets \
  -n trilio-openstack \
  -f ./trilio-backup-targets/values_overrides/nfs.yaml \
  --wait \
  --timeout 5m

a. Replace release-name with a name for the respective backup target (e.g., nfs-bt, s3-bt).

b. Replace values_overrides/nfs.yaml with the actual path to the respective values overrides file (an illustrative S3 example follows below).
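
For instance, a hypothetical installation of an S3 backup target using the other_s3.yaml overrides (the release name s3-bt is only an example) could look like:

helm upgrade --install s3-bt ./trilio-backup-targets \
  -n trilio-openstack \
  -f ./trilio-backup-targets/values_overrides/other_s3.yaml \
  --wait \
  --timeout 5m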

3. Verification

3.1] Verify Backup Target Pods

Check if the backup target pods are running

kubectl get pods -n trilio-openstack -l component=nfs-mount

3.2] Check Mounts (for NFS targets)

Exec into the nfs-mount pod and check if the mount is successful.

kubectl exec -n trilio-openstack -it <nfs-mount-pod-name> -- mount | grep trilio

A mount path like the one below should be visible:

/var/lib/trilio/triliovault-mounts/<base64-nfs-path>
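
For reference, in the examples in this document the <base64-nfs-path> directory name is the base64 encoding of the NFS share path. A quick way to check which directory belongs to which share (the share path below is illustrative and matches a workload path example later in this document):

echo -n "/home/kolla/" | base64
# L2hvbWUva29sbGEv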

3.3] Verify Persistent Volumes (PV/PVCs)

Check if volumes are properly created:

kubectl get pv | grep trilio-openstack
kubectl get pvc -n trilio-openstack | grep trilio

Required installation/updates are done.

If everything looks good, the NFS/S3 backup target is successfully installed and ready to use.

Add new backup target on RHOSO18.0

Please follow the below steps to add new backup target on RHOSO18.

1] Mount Backup target on Trilio Control Plane

Navigate to trilio ctlplane-scripts directory

cd /PATH/TO/triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts

Plan which type of backup target you want to add. T4O supports two types of backup targets.

  1. nfs

  2. s3

If you want to add a backup target of type 'nfs', edit the following yaml file and create the TVOBackupTarget resource.

vi tvo-backup-target-cr-nfs.yaml
oc apply -f tvo-backup-target-cr-nfs.yaml

If you want to add a backup target of type 's3', edit the following yaml file. In the following secret, the user needs to mention the base64-encoded S3 bucket access key and secret key.

To get the base64-encoded string, use the following Linux command:

echo -n "s3_key_string" | base64
cd ctlplane-scripts/
vi trilio-s3-backup-target-secret.yaml
oc -n trilio-openstack apply -f trilio-s3-backup-target-secret.yaml

If your S3 bucket is an Amazon S3 bucket:

vi tvo-backup-target-cr-amazon-s3.yaml
oc apply -f tvo-backup-target-cr-amazon-s3.yaml

If your S3 is of any other type, edit the following file and create the TVOBackupTarget resource:

vi tvo-backup-target-cr-other-s3.yaml
oc apply -f tvo-backup-target-cr-other-s3.yaml

Check deployment progress

oc get pods -n trilio-openstack
oc get daemonsets -n trilio-openstack

2] Mount Backup target on Trilio Data Plane

Navigate to data plane scripts directory

cd /PATH/TO/triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/

Create the templates needed for adding backup targets. Please note: use a unique backup target name for the parameter <BACKUP_TARGET_NAME>; you should not have used this backup target name earlier for any other Trilio backup target. For the parameter <BACKUP_TARGET_TYPE>, valid choices are 's3' and 'nfs'.

./create-templates.sh <BACKUP_TARGET_NAME> <BACKUP_TARGET_TYPE>
./create-templates.sh s3-bt8 s3

A new directory, named after the backup target and containing the necessary templates, gets created. You can list these templates.

cd <BACKUP_TARGET_NAME>/
ls -ll

The backup target name gets set in all the yaml files created in this directory. BACKUP_TARGET_NAME is converted to all lowercase, and any underscore '_' character in it is replaced by the hyphen character '-'. Add the backup target details to the template.

Change to the <BACKUP_TARGET_NAME> directory if not done already

cd <BACKUP_TARGET_NAME>/

Edit config map file

vi cm-trilio-backup-target.yaml

Create config map

oc -n openstack apply -f cm-trilio-backup-target.yaml

Only if you are adding an 's3' type backup target do you need to create the following secret. All details are already filled in the yaml file; we just need to apply it.

oc -n openstack apply -f ../../ctlplane-scripts/trilio-s3-backup-target-secret.yaml

Set the Trilio Ansible Runner container image URL and tag in the parameter 'openStackAnsibleEERunnerImage:' of trilio-add-backup-target-service.yaml. You don’t need to change any other parameter.

vi trilio-add-backup-target-service.yaml

Create custom data plane service for this backup target

oc -n openstack apply -f trilio-add-backup-target-service.yaml
sleep 5s

Edit the trilio-add-backup-target-deployment.yaml file and set the parameter ‘nodeSets' to the correct value from your environment. You don’t need to change any other parameter.

Get OpenStackDataPlaneNodeSet name

oc -n openstack get OpenStackDataPlaneNodeSet 

Set 'nodeSets' parameter

vi trilio-add-backup-target-deployment.yaml

Trigger deployment of this backup target

oc -n openstack apply -f trilio-add-backup-target-deployment.yaml

Check the logs. Replace <DEPLOYMENT_NAME> with the deployment name from the deployment yaml above.

# Get deployment pod name
oc -n openstack get pods -l openstackdataplanedeployment=<DEPLOYMENT_NAME>
# Example:
oc -n openstack get pods -l openstackdataplanedeployment=edpm-trilio-add-backup-target-s3-bt8   
## Check logs
oc logs <POD_NAME>
# Example:
oc logs trilio-add-backup-target-s3-bt8-edpm-trilio-add-backup-tarkx5xx

If this pod does not get created, it means you have not used a unique backup target name or some other issue happened. Please verify that:

oc get openstackdataplanedeployment | grep trilio

3] Add Backup Target Records

Log in to the triliovault-wlm-api pod and run the below CLI command to create the backup target in the Trilio DB.

oc get pods -n trilio-openstack | grep wlm-api
oc exec -n trilio-openstack -it <trilio wlm api pod name> bash
source <admin rc file>

For NFS Backup Target:

workloadmgr backup-target-create --type nfs --filesystem-export <filesystem_export> --btt-name <btt name>

Sample command:

workloadmgr backup-target-create --type nfs --filesystem-export 192.168.0.53:/home/rhosp2 --btt-name bt3-nfs

For Object lock enabled S3 Backup Target:

workloadmgr backup-target-create --type s3 --s3-endpoint-url <s3_endpoint_url> --s3-bucket <s3_bucket> --btt-name <btt name> --immutable --metadata object_lock=1 bucket=s3-object-lock

Sample command:

workloadmgr backup-target-create --type s3 --s3-endpoint-url https://s3.wasabisys.com --s3-bucket object-locked-s3-2 --btt-name bt5-s3 --immutable --metadata object_lock=1 bucket=s3-object-lock 

For non-object lock S3 Backup Target:

workloadmgr backup-target-create --type s3 --s3-endpoint-url <s3_endpoint_url> --s3-bucket <s3_bucket> --btt-name <btt name>

Sample command:

workloadmgr backup-target-create --type s3 --s3-endpoint-url https://s3.wasabisys.com --s3-bucket qa-sachin --btt-name s3-bt5

6.0.0

Learn about artifacts related to Trilio for OpenStack 6.0.0

Trilio Branch

Branch of the repository to be used for DevOps scripts.

trilio_branch : 6.0.0

Deployment Scripts

git clone -b 6.0.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/

Container/Revision Information

RHOSP Version    CONTAINER-TAG-VERSION
RHOSP 17.1       6.0.0-rhosp17.1

RHOSP Version
Container URLs

RHOSP 17.1

Parameters / Variables
Values

triliovault-pkg-source

deb [trusted=yes] https://apt.fury.io/trilio-6-0 /

channel

6.0/stable

Charm names

Supported releases

Revisions

Jammy (Ubuntu 22.04)

37

Jammy (Ubuntu 22.04)

17

Jammy (Ubuntu 22.04)

34

Jammy (Ubuntu 22.04)

10

Package Repositories & Versions

All packages are Python 3 (py3.6 - py3.10) compatible only.

Repo URL:

https://pypi.fury.io/trilio-6-0/
Name
Version

contegoclient

6.0.24

s3fuse

6.0.24

tvault-horizon-plugin

6.0.24

workloadmgr

6.0.24

workloadmgrclient

6.0.24

Repo URL:

deb [trusted=yes] https://apt.fury.io/trilio-6-0/ /
Name
Version

python3-contegoclient

6.0.24

python3-dmapi

6.0.24

python3-namedatomiclock

1.1.3

python3-s3-fuse-plugin

6.0.24

python3-tvault-contego

6.0.24

python3-tvault-horizon-plugin

6.0.24

python3-workloadmgrclient

6.0.24

workloadmgr

6.0.24

workloadmgr

5.2.8.15

Repo URL:

https://yum.fury.io/trilio-6-0/

To enable, add the following file /etc/yum.repos.d/fury.repo:

[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-6-0/
enabled=1
gpgcheck=0
Name
Linux Distribution
Version

python3-contegoclient-el9

RHEL9

6.0.24-6.0

python3-dmapi-el9

RHEL9

6.0.24-6.0

python3-s3fuse-plugin-el9

RHEL9

6.0.24-6.0

python3-trilio-fusepy-el9

RHEL9

3.0.1-1

python3-tvault-contego-el9

RHEL9

6.0.24-6.0

python3-tvault-horizon-plugin-el9

RHEL9

6.0.24-6.0

python3-workloadmgrclient-el9

RHEL9

6.0.24-6.0

python3-workloadmgr-el9

RHEL9

6.0.24-6.0

6.1.0

Learn about artifacts related to Trilio for OpenStack 6.1.0

Trilio Branch

Branch of the repository to be used for DevOps scripts.

Deployment Scripts

Container/Revision Information

RHOSP Version
CONTAINER-TAG-VERSION
RHOSP Version
Container URLs
Parameters / Variables
Values

Package Repositories & Versions

All packages are Python 3 (py3.6 - py3.10) compatible only.

Repo URL:

Name
Version

Repo URL:

Name
Version

Repo URL:

To enable, add the following file /etc/yum.repos.d/fury.repo:

Name
Linux Distribution
Version

6.1.1

Learn about artifacts related to Trilio for OpenStack 6.1.1

Trilio Branch

Branch of the repository to be used for DevOps scripts.

Deployment Scripts

Container/Revision Information

RHOSP Version
Container URLs
Parameters / Variables
Values
OpenStack Helm Version
Container URLs

Package Repositories & Versions

All packages are Python 3 (py3.6 - py3.10) compatible only.

Repo URL:

Name
Version

Repo URL:

Name
Version

Repo URL:

To enable, add the following file /etc/yum.repos.d/fury.repo:

Name
Linux Distribution
Version

Getting started with Trilio on Canonical OpenStack

Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.

Those JuJu Charms are publicly available as Open Source Charms.

Please refer to the release-specific Resources section for required Charm details.

Prerequisite

A Canonical OpenStack base setup deployed for a required release. Refer to the Compatibility Matrix.

Steps to install the Trilio charms

1. Export the OpenStack base bundle

2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.

NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).

If the trilio-wlm service is assigned to any nova-compute node, the wlm mysql router service fails to start. Hence, please ensure the trilio-wlm service is assigned to some other node.

Sample Trilio overlay bundles (per T4O release) are available in the triliovault-cfg-scripts repository at the path: juju-charms/sample_overlay_bundles

The value of trilio_branch can be taken from the release-specific Resources page.

The following table provides the details of the various values to be updated in the overlay bundle.

3. T4O Deployment

3.1] Do a dry run to check if the Trilio bundle is working

3.2] Trigger deployment

3.3] Wait until all the Trilio units are deployed successfully. Check the status via juju status command.

4. Post Deployment Steps

4.1] Once the deployment is complete, perform the below operations:

a. Create cloud admin trust & add licence

Note: Reach out to the Trilio support team for the license file.

5. Verify the T4O Deployment

5.1] After the T4O deployment steps are over, it can take some time for all units to get deployed successfully. Deployment is considered successful when all the units show Unit is Ready in the message column.

To verify the same, the following command (& sample output) can be used to fetch the Trilio units/applications.

6. Troubleshooting T4O Deployment

6.1] To debug any specific unit : juju debug-log --include <UNIT_NAME_IN_ERROR>

E.g., if the trilio-wlm/6 unit is in 'error' state, its logs can be fetched using the following command. Specify the correct unit number from the respective deployment using the juju status command.

juju debug-log --include trilio-wlm/6

For multipath enabled environments, perform the following actions

  1. log into each nova compute node

  2. add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, as shown in the sketch after this list

  3. restart tvault-contego service
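
A minimal sketch of the relevant /etc/multipath.conf fragment (placing the option in the defaults section is an assumption; keep your existing settings and only add the timeout):

defaults {
    # ... existing settings ...
    uxsock_timeout 60000
}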

File Search

Start File Search

POST https://$(tvm_address):8780/v1/$(tenant_id)/search

Starts a File Search with the given parameters

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Body format

Get File Search Results

POST https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>

Returns the results of the File Search with the given search ID

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Workload Import & Migration

Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:

  • OpenStack User - The OpenStack User-ID is assigned to a Workload

  • OpenStack Project - The OpenStack Project-ID is assigned to a Workload

  • OpenStack Cloud - The Trilio Serviceuser-ID is assigned to a Workload

OpenStack Users can update the User ownership of a Workload by modifying the Workload.

This ownership ensures that only the owners of a Workload are able to work with it.

OpenStack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.

Import workloads

Workload import allows importing Workloads existing on the Backup Target into the Trilio database.

The Workload import is designed to import Workloads, which are owned by the Cloud.

It will not import or list any Workloads that are owned by a different cloud.

To get a list of importable Workloads use the following CLI command:

  • --project_id <project_id> ➡️ List workloads belonging to the given project only.

To import Workloads into the Trilio database use the following CLI command:

  • --workloadids <workloadid> ➡️ Specify workload ids to import only specified workloads. Repeat option for multiple workloads.

Orphaned Workloads

The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage but not known to the Trilio installation is considered orphaned.

A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that were migrated from a different cloud.

The following CLI command provides the list of orphaned workloads:

  • --migrate_cloud {True,False} ➡️ Set to True if you want to list workloads from other clouds as well. Default is False.

  • --generate_yaml {True,False} ➡️ Set to True if you want to generate an output file in YAML format, which can be further used as input for the workload reassign API.

Running this command against a Backup Target with many Workloads can take some time. Trilio reads the complete Storage and verifies every found Workload against the Workloads known in the database.

Reassigning Workloads

OpenStack administrators are able to reassign a Workload to a new owner. This involves the possibility to migrate a Workload from one cloud to another or between projects.

Reassigning a workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.

Use the following CLI command to reassign a Workload:

  • --old_tenant_ids <old_tenant_id> ➡️ Specify old tenant IDs from which workloads need to be reassigned to the new tenant. Specify multiple times to choose Workloads from multiple tenants.

  • --new_tenant_id <new_tenant_id> ➡️ Specify new tenant id to which workloads need to reassign from old tenant. Only one target tenant can be specified.

  • --workload_ids <workload_id> ➡️ Specify workload_ids which need to be reassigned to the new tenant. If not provided, all the workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.

  • --user_id <user_id> ➡️ Specify the user ID to which workloads need to be reassigned from the old tenant. Only one target user can be specified.

  • --migrate_cloud {True,False} ➡️ Set to True if you want to reassign workloads from other clouds as well. Default is False.

  • --source-btt <source-btt> [<source-btt> ...] ➡️ It searches workloads in the given Backup Target Types Id. If not provided then considers default BTT. Only single --source-btt is allowed if --workload-ids are provided. --source-btt <source-btt-1> <source-btt-2> ... <source-btt-N>

  • --source-btt-all ➡️ This will search in all Backup Target Types and reassign the workloads. Only allowed when --workload-ids is NOT provided. User must provide --old_tenant_ids and --new_tenant_id to use it.

  • --map_file ➡️ Provide the file path (relative or absolute), including the file name, of the reassign map file. Provide a list of old workloads mapped to new tenants. The format for this file is YAML.

A sample mapping file with explanations is shown below:

Note: We cannot reassign an immutable backup target workload from one cloud to another. To do this, we need to follow the steps outlined below.

Steps:-

  1. Download the immutable workload directory from its mount path and upload/copy it to the non-immutable backup target mount path.

  2. We need to update the backup_target_types and backup_media_target metadata properties of the workload and all its snapshots on the backup target to reflect the current backup target where it now exists. Please perform the following steps to accomplish the task.

  3. Check the original backup_target_types and backup_media_target metadata properties of the workload using either the less or the jq command. If you already know this information, you may skip this step.

  4. Find & replace the backup_target_types property of the workload and all its respective snapshots

  5. Find & replace the backup_media_target property of the workload and all its respective snapshots

  6. Verify the changes

  7. Now try reassigning the workload.

workloadmgr workload-get-importworkloads-list [--project_id <project_id>]
workloadmgr workload-importworkloads [--workloadids <workloadid>]
workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
                                                 [--generate_yaml {True,False}]
workloadmgr workload-reassign-workloads
                                        [--old_tenant_ids <old_tenant_id>]
                                        [--new_tenant_id <new_tenant_id>]
                                        [--workload_ids <workload_id>]
                                        [--user_id <user_id>]
                                        [--migrate_cloud {True,False}]
                                        [--map_file <map_file>]
                                        [--source-btt <source-btt> [<source-btt> ...]]
                                        [--source-btt-all]
reassign_mappings:
   - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
     new_tenant_id: new_tenant_id
     user_id: user_id
     source_btt: source_btt # list of source_btt ID's where provided workload IDs will be searched
     source_btt_all: True # searches all workloads in all available BTTs
     workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
     migrate_cloud: True/False #Set to True if want to reassign workloads from
                  # other clouds as well. Default is False
    
   - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
     new_tenant_id: new_tenant_id
     user_id: user_id
     source_btt: source_btt # list of source_btt ID's where provided workload IDs will be searched
     source_btt_all: True # searches all workloads in all available BTTs
     workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
     migrate_cloud: True/False #Set to True if want to reassign workloads from
                  # other clouds as well. Default is False
    a. Using less command
        i. less <file_system_mountpath>/workload_<workload_uuid>/workload_db
        ii. Example
            1. less /var/trilio/triliovault-mounts/L2hvbWUva29sbGEv/workload_385d0e94-d602-4963-96c2-28bebea352f1/workload_db
        iii. search backup_target_types and backup_media_target in the file and note down their respective values

    b. Using jq command
        i. jq '.metadata[] | select(.key == "backup_media_target") | .value' <file_system_mountpath>/workload_<workload_uuid>/workload_db
        ii. Example
            jq '.metadata[] | select(.key == "backup_target_types") | .value' /var/trilio/triliovault-mounts/L2hvbWUva29sbGEv/workload_385d0e94-d602-4963-96c2-28bebea352f1_backup/workload_db
        iii. search backup_target_types and backup_media_target and note down their respective values
    a. grep -rl 'old_BTT' <file_system_mountpath>/workload_<workload_uuid> | xargs sed -i 's/<old_BTT>/<new_BTT>/g'
    b. Here,
        i. old_BTT  ---> Old Backup Target Type name
        ii. new_BTT ---> New Backup Target Type name  
    c. Example
        i. grep -rl 'nfs_1' /var/trilio/triliovault-mounts/L2hvbWUva29sbGEv/workload_385d0e94-d602-4963-96c2-28bebea352f1 | xargs sed -i 's/nfs_1/nfs_2/g'
    a. grep -rl '<old_filesystem_export>' <file_system_mountpath>/workload_<workload_uuid> | xargs sed -i 's/<old_filesystem_export>/<new_filesystem_export>/g'
    b. Here,
        i. old_filesystem_export  ---> File system export path of old Backup Target
        ii. new_filesystem_export ---> File system export path of new Backup Target
    c. Note: Please make sure to add a backslash (\) before every forward slash (/) character in old_filesystem_export & new_filesystem_export.
    d. Example
        i. grep -rl '192.168.0.51:/home/kolla/' /var/trilio/triliovault-mounts/L2hvbWUva29sbGEv/workload_385d0e94-d602-4963-96c2-28bebea352f1 | xargs sed -i 's/192.168.0.51:\/home\/kolla\//192.168.0.52:\/home\/kolla_new\//g'
        ii. Here the original filesystem export is 192.168.0.51:/home/kolla/, which we have written in the above command as 192.168.0.51:\/home\/kolla\/ by adding a backslash (\) before every forward slash (/) character. The same changes are expected for the new filesystem export path as well.
    a. The following commands must show the files having the updated changes.
        i. grep -rl 'new_BTT' <file_system_mountpath>/workload_<workload_uuid>
        ii. grep -rl '<new_filesystem_export>' <file_system_mountpath>/workload_<workload_uuid>
registry.connect.redhat.com/trilio/trilio-datamover:6.0.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:6.0.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:6.0.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:6.0.0-rhosp17.1
trilio-charmers-trilio-wlm
trilio-charmers-trilio-dm-api
trilio-charmers-trilio-data-mover
trilio-charmers-trilio-horizon-plugin
trilio_branch : 6.1.0

RHOSO 18.0

6.1.0-rhoso18.0

RHOSP 17.1

6.1.0-rhosp17.1

RHOSO 18.0

registry.connect.redhat.com/trilio/trilio-datamover-rhoso:6.1.0-rhoso18.0
registry.connect.redhat.com/trilio/trilio-datamover-api-rhoso:6.1.0-rhoso18.0
registry.connect.redhat.com/trilio/trilio-horizon-plugin-rhoso:6.1.0-rhoso18.0
registry.connect.redhat.com/trilio/trilio-wlm:6.1.0-rhoso18.0
registry.connect.redhat.com/trilio/trilio-openstack-operator-rhoso:6.1.0-rhoso18.0
registry.connect.redhat.com/trilio/trilio-ansible-runner-rhoso:6.1.0-rhoso18.0

RHOSP 17.1

registry.connect.redhat.com/trilio/trilio-datamover:6.1.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:6.1.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:6.1.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:6.1.0-rhosp17.1

triliovault-pkg-source

deb [trusted=yes] https://apt.fury.io/trilio-6-1 /

channel

6.0/stable

Charm names

Supported releases

Revisions

trilio-charmers-trilio-wlm

Jammy (Ubuntu 22.04)

40

trilio-charmers-trilio-dm-api

Jammy (Ubuntu 22.04)

17

trilio-charmers-trilio-data-mover

Jammy (Ubuntu 22.04)

34

trilio-charmers-trilio-horizon-plugin

Jammy (Ubuntu 22.04)

10

contegoclient

6.1.1

s3fuse

6.1.1

tvault-horizon-plugin

6.1.1

workloadmgr

6.1.1

workloadmgrclient

6.1.1

python3-contegoclient

6.1.1

python3-dmapi

6.1.1

python3-namedatomiclock

1.1.3

python3-s3-fuse-plugin

6.1.1

python3-tvault-contego

6.1.1

python3-tvault-horizon-plugin

6.1.1

python3-workloadmgrclient

6.1.1

workloadmgr

6.1.1

python3-contegoclient-el9

RHEL9

6.1.1-6.1

python3-dmapi-el9

RHEL9

6.1.1-6.1

python3-s3fuse-plugin-el9

RHEL9

6.1.1-6.1

python3-trilio-fusepy-el9

RHEL9

3.0.1-1

python3-tvault-contego-el9

RHEL9

6.1.1-6.1

python3-tvault-horizon-plugin-el9

RHEL9

6.1.1-6.1

python3-workloadmgrclient-el9

RHEL9

6.1.1-6.1

python3-workloadmgr-el9

RHEL9

6.1.1-6.1

git clone -b 6.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/
git clone -b 6.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
https://pypi.fury.io/trilio-6-1/
deb [trusted=yes] https://apt.fury.io/trilio-6-1/ /
https://yum.fury.io/trilio-6-1/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-6-1/
enabled=1
gpgcheck=0
trilio_branch : 6.1.1

RHOSP Version    CONTAINER-TAG-VERSION    tvo-operator Version (Operator Hub)
RHOSO 18.0       6.1.1-rhoso18.0          6.1.4
RHOSP 17.1       6.1.1-rhosp17.1          NA

RHOSO 18.0

registry.connect.redhat.com/trilio/trilio-datamover-rhoso:6.1.1-rhoso18.0
registry.connect.redhat.com/trilio/trilio-datamover-api-rhoso:6.1.1-rhoso18.0
registry.connect.redhat.com/trilio/trilio-horizon-plugin-rhoso:6.1.1-rhoso18.0
registry.connect.redhat.com/trilio/trilio-wlm:6.1.1-rhoso18.0
registry.connect.redhat.com/trilio/trilio-openstack-operator-rhoso:6.1.1-rhoso18.0
registry.connect.redhat.com/trilio/trilio-ansible-runner-rhoso:6.1.1-rhoso18.0

RHOSP 17.1

registry.connect.redhat.com/trilio/trilio-datamover:6.1.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:6.1.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:6.1.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:6.1.1-rhosp17.1

triliovault-pkg-source

deb [trusted=yes] https://apt.fury.io/trilio-6-1 /

channel

6.0/stable

Charm names

Supported releases

Revisions

trilio-charmers-trilio-wlm

Jammy (Ubuntu 22.04)

40

trilio-charmers-trilio-dm-api

Jammy (Ubuntu 22.04)

17

trilio-charmers-trilio-data-mover

Jammy (Ubuntu 22.04)

34

trilio-charmers-trilio-horizon-plugin

Jammy (Ubuntu 22.04)

10

Bobcat (2023.2)

docker.io/trilio/trilio-datamover-helm:6.1.1-2023.2
docker.io/trilio/trilio-datamover-api-helm:6.1.1-2023.2
docker.io/trilio/trilio-wlm-helm:6.1.1-2023.2
docker.io/trilio/trilio-horizon-plugin-helm:6.1.1-2023.2

Antelope (2023.1)

docker.io/trilio/trilio-datamover-helm:6.1.1-2023.1
docker.io/trilio/trilio-datamover-api-helm:6.1.1-2023.1
docker.io/trilio/trilio-wlm-helm:6.1.1-2023.1
docker.io/trilio/trilio-horizon-plugin-helm:6.1.1-2023.1

contegoclient

6.1.1

s3fuse

6.1.1

tvault-horizon-plugin

6.1.1.1

workloadmgr

6.1.1.1

workloadmgrclient

6.1.1

python3-contegoclient

6.1.1

python3-dmapi

6.1.1.1

python3-namedatomiclock

1.1.3

python3-s3-fuse-plugin

6.1.1

python3-tvault-contego

6.1.1.1

python3-tvault-horizon-plugin

6.1.1.1

python3-workloadmgrclient

6.1.1

workloadmgr

6.1.1.1

python3-contegoclient-el9

RHEL9

6.1.1-6.1

python3-dmapi-el9

RHEL9

6.1.1.1-6.1

python3-s3fuse-plugin-el9

RHEL9

6.1.1-6.1

python3-trilio-fusepy-el9

RHEL9

3.0.1-1

python3-tvault-contego-el9

RHEL9

6.1.1.1-6.1

python3-tvault-horizon-plugin-el9

RHEL9

6.1.1.1-6.1

python3-workloadmgrclient-el9

RHEL9

6.1.1-6.1

python3-workloadmgr-el9

RHEL9

6.1.1.1-6.1

git clone -b 6.1.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/
git clone -b 6.1.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 6.1.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/
https://pypi.fury.io/trilio-6-1/
deb [trusted=yes] https://apt.fury.io/trilio-6-1/ /
https://yum.fury.io/trilio-6-1/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-6-1/
enabled=1
gpgcheck=0
juju export-bundle --filename openstack_base_file.yaml
git clone https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts
git checkout {{ trilio_branch }}
cd juju-charms/sample_overlay_bundles

Parameters

Summary

triliovault-pkg-source

Trilio debian package repo url; Refer release specific Resources page

machines

List of Machines available on canonical openstack setup

channel

Channel name as provided in release specific Resources page

revision

Latest values as provided by Trilio. Refer release specific Resources page

trilio-backup-targets

List of all the back up targets and respective details which user wants to have deployed against T4O

backup-target-name

Name of the back up target; can be any relevant string as defined by the user

backup-target-type

s3 or nfs

is-default

Can be true or false; ideally only one backup target can have is-default set to true

s3-type

Can be set as amazon_s3 or other_s3

s3-access-key

S3 Access Key

s3-secret-key

S3 Secret Key

s3-region-name

S3 Region

s3-bucket

S3 bucket

s3-endpoint-url

To be kept blank as set in sample overlay bundle

s3-signature-version

To be set as default

s3-auth-version

To be set as DEFAULT

s3-ssl-enabled

Set true for SSL enabled S3 endpoint URL

s3-ssl-verify

Set true for SSL enabled S3 endpoint URL

s3-self-signed-cert

Set as true if S3 SSL/TLS certificates are self signed, else false

s3-ssl-ca_cert

Required if S3 SSL/TLS certificates are self signed, else the parameter to be left blank.

s3-bucket-object-lock-enabled

Set as true of S3 bucket is having object lock enabled, else set as false

nfs-shares

NFS server IP and share path

nfs-options

Options as per respective NFS server

juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>
juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>
juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run --wait trilio-wlm/leader create-license
juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run-action --wait trilio-wlm/leader create-license
juju status | grep -i trilio

trilio-data-mover               5.2.8.14  active      3  trilio-charmers-trilio-data-mover      latest/candidate   22  no       Unit is ready
trilio-data-mover-mysql-router  8.0.39    active      3  mysql-router                           8.0/stable        200  no       Unit is ready
trilio-dm-api                   5.2.8     active      1  trilio-charmers-trilio-dm-api          latest/candidate   17  no       Unit is ready
trilio-dm-api-mysql-router      8.0.39    active      1  mysql-router                           8.0/stable        200  no       Unit is ready
trilio-horizon-plugin           5.2.8.8   active      1  trilio-charmers-trilio-horizon-plugin  latest/candidate   10  no       Unit is ready
trilio-wlm                      5.2.8.15  active      1  trilio-charmers-trilio-wlm             latest/candidate   18  no       Unit is ready
trilio-wlm-mysql-router         8.0.39    active      1  mysql-router                           8.0/stable        200  no       Unit is ready
  trilio-data-mover-mysql-router/2   active    idle            172.20.1.5                          Unit is ready
  trilio-data-mover/1                active    idle            172.20.1.5                          Unit is ready
  trilio-data-mover-mysql-router/0*  active    idle            172.20.1.7                          Unit is ready
  trilio-data-mover/2                active    idle            172.20.1.7                          Unit is ready
  trilio-data-mover-mysql-router/1   active    idle            172.20.1.8                          Unit is ready
  trilio-data-mover/0*               active    idle            172.20.1.8                          Unit is ready
  trilio-horizon-plugin/0*           active    idle            172.20.1.27                         Unit is ready
trilio-dm-api/0*                     active    idle   1/lxd/2  172.20.1.29     8784/tcp            Unit is ready
  trilio-dm-api-mysql-router/0*      active    idle            172.20.1.29                         Unit is ready
trilio-wlm/0*                        active    idle   1        172.20.1.4      8780/tcp            Unit is ready
  trilio-wlm-mysql-router/0*         active    idle            172.20.1.4                          Unit is ready
Resources
Compatibility Matrix
Resources

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to run the search in

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

{
   "file_search":{
      "start":<Integer>,
      "end":<Integer>,
      "filepath":"<Reg-Ex String>",
      "date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
      "date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
      "snapshot_ids":[
         "<Snapshot-ID>"
      ],
      "vm_id":"<VM-ID>"
   }
}
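A minimal curl sketch for submitting the request body shown above is given below. The file search endpoint path itself is not shown in this excerpt; the path used here (/v1/$(tenant_id)/search) is an assumption and should be verified against the API reference for your release.

# Submit a file search request (endpoint path assumed; verify before use)
curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/search" \
     -H "X-Auth-Project-Id: $(project_name)" \
     -H "X-Auth-Token: $(auth_token)" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient" \
     -d '{"file_search": {"filepath": "/etc/h*", "vm_id": "<VM-ID>", "snapshot_ids": ["<Snapshot-ID>"]}}'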

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant/Project to run the search in

search_id

string

ID of the File Search to get

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:24:28 GMT
Content-Type: application/json
Content-Length: 819
Connection: keep-alive
X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063

{
   "file_search":{
      "created_at":"2020-11-09T13:23:25.000000",
      "updated_at":"2020-11-09T13:23:48.000000",
      "id":14,
      "deleted_at":null,
      "status":"completed",
      "error_msg":null,
      "filepath":"/etc/h*",
      "json_resp":"[
                      {
                         "ed4f29e8-7544-4e1c-af8a-a76031211926":[
                            {
                               "/dev/vda1":[
                                  "/etc/hostname",
                                  "/etc/hosts"
                               ],
                               "/etc/hostname":{
                                  "dev":"2049",
                                  "ino":"32",
                                  "mode":"33204",
                                  "nlink":"1",
                                  "uid":"0",
                                  "gid":"0",
                                  "rdev":"0",
                                  "size":"1",
                                  "blksize":"1024",
                                  "blocks":"2",
                                  "atime":"1603455255",
                                  "mtime":"1603455255",
                                  "ctime":"1603455255"
                               },
                               "/etc/hosts":{
                                  "dev":"2049",
                                  "ino":"127",
                                  "mode":"33204",
                                  "nlink":"1",
                                  "uid":"0",
                                  "gid":"0",
                                  "rdev":"0",
                                  "size":"37",
                                  "blksize":"1024",
                                  "blocks":"2",
                                  "atime":"1603455257",
                                  "mtime":"1431011050",
                                  "ctime":"1431017172"
                               }
                            }
                         ]
                      }
                  ]",
      "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
   }
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:23:25 GMT
Content-Type: application/json
Content-Length: 244
Connection: keep-alive
X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa

{
   "file_search":{
      "created_at":"2020-11-09T13:23:25.698534",
      "updated_at":null,
      "id":14,
      "deleted_at":null,
      "status":"executing",
      "error_msg":null,
      "filepath":"/etc/h*",
      "json_resp":null,
      "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
   }
}
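A matching curl sketch for polling the result of a file search is given below; as above, the GET path (/v1/$(tenant_id)/search/$(search_id)) is an assumption based on the search_id parameter documented earlier and should be verified against the API reference.

# Poll the file search result until "status" becomes "completed" (endpoint path assumed)
curl -X GET "https://$(tvm_address):8780/v1/$(tenant_id)/search/$(search_id)" \
     -H "X-Auth-Project-Id: $(project_name)" \
     -H "X-Auth-Token: $(auth_token)" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"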

Uninstalling from RHOSP

Clean Trilio Datamover API and Workloadmanager services

The following steps need to be run on all nodes that have the Trilio Datamover API & Workloadmanager services running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the below entries

OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron

Once the role that runs the Trilio Datamover API & Workloadmanager services has been identified, run the following commands to clean these services from the nodes.

Run all commands as root or user with sudo permissions.

Remove triliovault_datamover_api container.

podman rm -f triliovault_datamover_api
podman rm -f triliovault_datamover_api_db_sync
podman rm -f triliovault_datamover_api_init_log

Clean Trilio Datamover API service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultdmapi
rm /var/lib/config-data/puppet-generated/triliovaultdmapi.md5sum
rm -rf /var/lib/config-data/triliovaultdmapi*
rm -f /var/lib/config-data/triliovault_datamover_api*

Clean Trilio Datamover API service log directory.

rm -rf /var/log/containers/triliovault-datamover-api/

Remove triliovault_wlm_api container.

podman rm -f triliovault_wlm_api
podman rm -f triliovault_wlm_api_cloud_trust_init
podman rm -f triliovault_wlm_api_db_sync
podman rm -f triliovault_wlm_api_config_dynamic
podman rm -f triliovault_wlm_api_init_log

Clean Trilio Workloadmanager API service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmapi
rm /var/lib/config-data/puppet-generated/triliovaultwlmapi.md5sum
rm -rf /var/lib/config-data/triliovaultwlmapi*
rm -f /var/lib/config-data/triliovault_wlm_api*

Clean Trilio Workloadmanager API service log directory.

rm -rf /var/log/containers/triliovault-wlm-api/

Remove triliovault_wlm_workloads container.

podman rm -f triliovault_wlm_workloads
podman rm -f triliovault_wlm_workloads_config_dynamic
podman rm -f triliovault_wlm_workloads_init_log

Clean Trilio Workloadmanager Workloads service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmworkloads
rm /var/lib/config-data/puppet-generated/triliovaultwlmworkloads.md5sum
rm -rf /var/lib/config-data/triliovaultwlmworkloads*

Clean Trilio Workloadmanager Workloads service log directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmworkloads && rm -rf /var/log/containers/triliovault-wlm-workloads/

Remove triliovault_wlm_scheduler container.

podman rm -f triliovault_wlm_scheduler
podman rm -f triliovault_wlm_scheduler_config_dynamic
podman rm -f triliovault_wlm_scheduler_init_log

Clean Trilio Workloadmanager Scheduler service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmscheduler
rm /var/lib/config-data/puppet-generated/triliovaultwlmscheduler.md5sum
rm -rf /var/lib/config-data/triliovaultwlmscheduler*

Clean Trilio Workloadmanager Scheduler service log directory.

rm -rf /var/log/containers/triliovault-wlm-scheduler/

Remove triliovault-wlm-cron-podman-0 container from controller.

podman rm -f triliovault-wlm-cron-podman-0
podman rm -f triliovault_wlm_cron_config_dynamic
podman rm -f triliovault_wlm_cron_init_log

Clean Trilio Workloadmanager Cron service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmcron
rm /var/lib/config-data/puppet-generated/triliovaultwlmcron.md5sum
rm -rf /var/lib/config-data/triliovaultwlmcron*

Clean Trilio Workloadmanager Cron service log directory.

rm -rf /var/log/containers/triliovault-wlm-cron/

Clean Trilio Datamover Service

The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.

Once the role that runs the Trilio Datamover service has been identified, run the following commands to clean the service from those nodes.

Run all commands as root or user with sudo permissions.

Remove triliovault_datamover container.

podman rm -f triliovault_datamover

Unmount the Trilio Backup Target on the compute host.

## Following steps are applicable for all supported RHOSP releases.

# Check triliovault backup target mount point
mount | grep trilio

# Unmount it
-- If it's NFS	(COPY UUID_DIR from your compute host using above command)
umount /var/lib/nova/triliovault-mounts/<UUID_DIR>

-- If it's S3
umount /var/lib/nova/triliovault-mounts

# Verify that it's unmounted		
mount | grep trilio
	
df -h  | grep trilio

# Remove mount point directory after verifying that backup target unmounted successfully.
# Otherwise actual data from backup target may get cleaned.	

rm -rf /var/lib/nova/triliovault-mounts

Clean Trilio Datamover service conf directory.

rm -rf /var/lib/config-data/puppet-generated/triliovaultdm/
rm /var/lib/config-data/puppet-generated/triliovaultdm.md5sum
rm -rf /var/lib/config-data/triliovaultdm*

Clean log directory of Trilio Datamover service.

rm -rf /var/log/containers/triliovault-datamover/

Clean wlm-cron resource from pcs cluster

Remove wlm cron resource from pcs cluster on the controller node.

pcs resource delete triliovault-wlm-cron

Clean Trilio HAproxy resources

The following steps need to be run on all nodes that have the HAproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.

Once the role that runs the HAproxy service has been identified, run the following commands on those nodes to clean all the Trilio resources.

Run all commands as root or user with sudo permissions.

Edit the following file inside the HAproxy container and remove all Trilio entries.

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

An example of these entries is given below.

listen triliovault_datamover_api
  bind 172.30.5.23:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.30.5.23:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
  balance roundrobin
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  maxconn 50000
  option httpchk
  option httplog
  retries 5
  timeout check 10m
  timeout client 10m
  timeout connect 10m
  timeout http-request 10m
  timeout queue 10m
  timeout server 10m
  server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local

listen triliovault_wlm_api
  bind 172.30.5.23:13781 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.30.5.23:8781 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
  balance roundrobin
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  maxconn 50000
  option httpchk
  option httplog
  retries 5
  timeout check 10m
  timeout client 10m
  timeout connect 10m
  timeout http-request 10m
  timeout queue 10m
  timeout server 10m
  server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8780 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local

Restart the HAproxy container once all edits have been done.

podman restart haproxy-bundle-podman-0

Clean Trilio Keystone resources

Trilio registers services and users in Keystone. Those need to be unregistered and deleted.

openstack service delete dmapi
openstack user delete dmapi
openstack service delete TrilioVaultWLM
openstack user delete triliovault
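To confirm the cleanup, list the remaining Keystone services and users and verify that no Trilio entries are left; the following checks should return no output once the deletion succeeded.

# Both commands should produce no output after the Trilio entries are removed
openstack service list | grep -iE 'dmapi|TrilioVaultWLM'
openstack user list | grep -iE 'dmapi|triliovault'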

Clean Trilio database resources

Trilio creates databases for dmapi and workloadmgr services. These databases need to be cleaned.

Login into the database cluster

podman exec -it galera-bundle-podman-0 mysql -u root

Run the following SQL statements to clean the database.

## Clean database
DROP DATABASE dmapi;

## Clean dmapi user
=> List 'dmapi' user accounts
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
+-------+-----------------------------------------+
| user  | host                                    |
+-------+-----------------------------------------+
| dmapi | %                                       |
| dmapi | 172.30.5.28                             |
| dmapi | overcloudtrain1internalapi.trilio.local |
+-------+-----------------------------------------+
3 rows in set (0.000 sec)

=> Delete those user accounts
MariaDB [(none)]> DROP USER dmapi@'%';
Query OK, 0 rows affected (0.005 sec)

MariaDB [(none)]> DROP USER dmapi@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)

MariaDB [(none)]> DROP USER dmapi@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)

=> Verify that dmapi user got cleaned
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
Empty set (0.00 sec)

## Clean database
DROP DATABASE workloadmgr;

## Clean workloadmgr user
=> List 'workloadmgr' user accounts
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
+-------------+-----------------------------------------+
| user        | host                                    |
+-------------+-----------------------------------------+
| workloadmgr | %                                       |
| workloadmgr | 172.30.5.28                             |
| workloadmgr | overcloudtrain1internalapi.trilio.local |
+-------------+-----------------------------------------+
3 rows in set (0.000 sec)

=> Delete those user accounts
MariaDB [(none)]> DROP USER workloadmgr@'%';
Query OK, 0 rows affected (0.012 sec)

MariaDB [(none)]> DROP USER workloadmgr@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)

MariaDB [(none)]> DROP USER workloadmgr@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)

=> Verify that workloadmgr user got cleaned
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
Empty set (0.000 sec)

Revert overcloud deploy command

Remove the following entries from roles_data.yaml used in the overcloud deploy command.

  • OS::TripleO::Services::TrilioDatamoverApi

  • OS::TripleO::Services::TrilioWlmApi

  • OS::TripleO::Services::TrilioWlmWorkloads

  • OS::TripleO::Services::TrilioWlmScheduler

  • OS::TripleO::Services::TrilioWlmCron

  • OS::TripleO::Services::TrilioDatamover

If the overcloud deploy command used prior to the deployment of Trilio is still available, it can be used directly.

Follow these steps to clean the overcloud deploy command from all Trilio entries.

  1. Remove trilio_env.yaml entry

  2. Remove the Trilio endpoint map file and replace it with the original map file, if one exists

Revert back to the original RHOSP Horizon container

Run the cleaned overcloud deploy command.

Post Installation Health-Check

After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be performed to verify that the Trilio installation is healthy.

On the Controller node

Make sure the containers below are in a running state; triliovault-wlm-cron runs on only one of the controllers in case of a multi-controller setup.

  • triliovault_datamover_api

  • triliovault_wlm_api

  • triliovault_wlm_scheduler

  • triliovault_wlm_workloads

  • triliovault-wlm-cron

If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly. Please recheck that you followed the complete documentation.

[root@overcloudtrain5-controller-0 /]# podman ps  | grep trilio-
76511a257278  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1          kolla_start           12 days ago   Up 12 days ago           horizon
5c5acec33392  cluster.common.tag/trilio-wlm:pcmklatest                                                          /bin/bash /usr/lo...  7 days ago    Up 7 days ago            triliovault-wlm-cron-podman-0
8dc61a674a7f  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1           kolla_start           7 days ago    Up 7 days ago            triliovault_datamover_api
a945fbf80554  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1                     kolla_start           7 days ago    Up 7 days ago            triliovault_wlm_scheduler
402c9fdb3647  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1                     kolla_start           7 days ago    Up 6 days ago            triliovault_wlm_workloads
f9452e4b3d14  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1                     kolla_start           7 days ago    Up 6 days ago            triliovault_wlm_api

After a successful deployment, the triliovault-wlm-cron service is added to the pcs cluster as a cluster resource; you can verify this with the pcs status command.

[root@overcloudtrain5-controller-0 /]# pcs status
Cluster name: tripleo_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: overcloudtrain5-controller-0 (version 2.0.5-9.el8_4.3-ba59be7122) - partition with quorum
  * Last updated: Mon Jul 24 11:19:05 2023
  * Last change:  Mon Jul 17 10:38:45 2023 by root via cibadmin on overcloudtrain5-controller-0
  * 4 nodes configured
  * 14 resource instances configured

Node List:
  * Online: [ overcloudtrain5-controller-0 ]
  * GuestOnline: [ galera-bundle-0@overcloudtrain5-controller-0 rabbitmq-bundle-0@overcloudtrain5-controller-0 redis-bundle-0@overcloudtrain5-controller-0 ]

Full List of Resources:
  * ip-172.30.6.27      (ocf::heartbeat:IPaddr2):        Started overcloudtrain5-controller-0
  * ip-172.30.6.16      (ocf::heartbeat:IPaddr2):        Started overcloudtrain5-controller-0
  * Container bundle: haproxy-bundle [cluster.common.tag/openstack-haproxy:pcmklatest]:
    * haproxy-bundle-podman-0   (ocf::heartbeat:podman):         Started overcloudtrain5-controller-0
  * Container bundle: galera-bundle [cluster.common.tag/openstack-mariadb:pcmklatest]:
    * galera-bundle-0   (ocf::heartbeat:galera):         Master overcloudtrain5-controller-0
  * Container bundle: rabbitmq-bundle [cluster.common.tag/openstack-rabbitmq:pcmklatest]:
    * rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster):       Started overcloudtrain5-controller-0
  * Container bundle: redis-bundle [cluster.common.tag/openstack-redis:pcmklatest]:
    * redis-bundle-0    (ocf::heartbeat:redis):  Master overcloudtrain5-controller-0
  * Container bundle: openstack-cinder-volume [cluster.common.tag/openstack-cinder-volume:pcmklatest]:
    * openstack-cinder-volume-podman-0  (ocf::heartbeat:podman):         Started overcloudtrain5-controller-0
  * Container bundle: triliovault-wlm-cron [cluster.common.tag/trilio-wlm:pcmklatest]:
    * triliovault-wlm-cron-podman-0     (ocf::heartbeat:podman):         Started overcloudtrain5-controller-0

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Verify the HAproxy configuration under:

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
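For example, a quick way to confirm that the Trilio listener sections are present in the rendered configuration:

# Should list the triliovault_datamover_api and triliovault_wlm_api sections
grep -n "triliovault" /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg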

On Compute node

Make sure the Trilio Datamover container is in a running state and that no other Trilio container is deployed on the compute nodes.

[root@overcloudtrain5-novacompute-0 heat-admin]# podman  ps | grep -i datamover
c750a8d0471f  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1              kolla_start  7 days ago   Up 7 days ago           triliovault_datamover

Check that the provided backup target is properly mounted on the Compute host.

[root@overcloudtrain5-novacompute-0 heat-admin]# df -h  | grep triliovault-mounts
172.30.1.9:/mnt/rhosptargetnfs  7.0T  5.1T  2.0T  72% /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw==

On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the Horizon container is replaced with Trilio's Horizon container. This container will have the latest OpenStack horizon + Trilio horizon plugin.

[root@overcloudtrain5-controller-0 heat-admin]# podman ps  | grep horizon
76511a257278  undercloudqa.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1          kolla_start           12 days ago   Up 12 days ago           horizon

Workload Policies

Trilio’s tenant-driven backup service gives tenants control over backup policies. However, this can sometimes be too much control, and cloud admins may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and take only full backups at a 1-hour interval. If every tenant were to pursue this backup policy, it would put a severe strain on the cloud infrastructure. Instead, if the cloud admin defines predefined backup policies and each tenant is limited to those policies, cloud administrators can exert better control over the backup service.

A workload policy is similar to a Nova flavor: a tenant cannot create arbitrary instances, but is only allowed to use the Nova flavors published by the admin.

Listing and showing available Workload policies

Using Horizon

To see all available Workload policies in Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to Policy

The following information is shown in the Policy tab for each available policy:

  • Creation time

  • Projects assigned

  • Name

  • Scheduling & Retention policy

  • Description

Using CLI

  • <policy_id>➡️ Id of the policy to show

Create a policy

Using Horizon

To create a policy in Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to Policy

  6. Click new policy

  7. Provide a policy name on the Details tab

  8. Provide a description on the Details tab

  9. Provide the retention and scheduler details in the Policy tab

  10. Click create

Using CLI

  • --policy-fields <key=key-name> ➡️ Specify key-value pairs for the policy fields, e.g. 'start_time' : '10:30 PM' --policy-fields start_time='10:30 PM'

  • --hourly ➡️ Specify key-value pairs for the hourly jobschedule: interval=<n> where n is the number of hours out of (1,2,3,4,6,8,12,24), retention=<count>, snapshot_type=<full|incremental>. For example --hourly interval='4',retention='1',snapshot_type='incremental'. If you don't specify this option, the defaults are 'interval' : '1', 'retention' : '30', 'snapshot_type' : 'incremental'

  • --daily ➡️ Specify key-value pairs for the daily jobschedule: backup_time='<times>' (e.g. '1:30 12:30 00:30'), retention=<count>, snapshot_type=<full|incremental>. For example --daily backup_time='01:00 02:00 11:00',retention='1',snapshot_type='incremental'

  • --weekly ➡️ Specify key-value pairs for the weekly jobschedule: backup_day=[mon,tue,wed,thu,fri,sat,sun], retention=<count>, snapshot_type=<full|incremental>. For example --weekly backup_day='sun mon tue',retention='1',snapshot_type='incremental'

  • --monthly ➡️ Specify key-value pairs for the monthly jobschedule: month_backup_day=<1-31|last> ('last' meaning the last day of the month), retention=<count>, snapshot_type=<full|incremental>. For example --monthly month_backup_day='1 2 3',retention=1,snapshot_type='incremental'

  • --yearly ➡️ Specify key-value pairs for the yearly jobschedule: backup_month=[jan,feb,mar,apr,may,jun,jul,aug,sep,oct,nov,dec], retention=<count>, snapshot_type=<full|incremental>. For example --yearly backup_month='jan feb',retention='1',snapshot_type='full'

  • --manual ➡️ Specify key-value pairs for the manual jobschedule: retention=<snapshots count>, retention_days_to_keep=<num of days>. For example --manual retention=30,retention_days_to_keep=30

  • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

  • --metadata <key=keyname> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.

  • <display_name> ➡️ The name the policy will get
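A worked example combining several of these options is sketched below; the policy name and all values are illustrative only.

workloadmgr policy-create --policy-fields start_time='10:30 PM' \
                          --daily backup_time='01:00',retention='7',snapshot_type='incremental' \
                          --weekly backup_day='sun',retention='4',snapshot_type='full' \
                          --display-description "Daily incremental with a weekly full backup" \
                          gold-policy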

Edit a policy

Using Horizon

To edit a policy in Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to Policy

  6. identify the policy to edit

  7. click on "Edit policy" at the end of the line of the chosen policy

  8. edit the policy as desired - all values can be changed

  9. Click "Update"

Using CLI

  • --display-name <display-name> ➡️ Name of the policy

  • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

  • --policy-fields <key=key-name> ➡️ Specify key-value pairs for the policy fields, e.g. 'start_time' : '10:30 PM' --policy-fields start_time='10:30 PM'

  • --hourly ➡️ Specify key-value pairs for the hourly jobschedule: interval=<n> where n is the number of hours out of (1,2,3,4,6,8,12,24), retention=<count>, snapshot_type=<full|incremental>. For example --hourly interval='4',retention='1',snapshot_type='incremental'. If you don't specify this option, the defaults are 'interval' : '1', 'retention' : '30', 'snapshot_type' : 'incremental'

  • --daily ➡️ Specify key-value pairs for the daily jobschedule: backup_time='<times>' (e.g. '1:30 12:30 00:30'), retention=<count>, snapshot_type=<full|incremental>. For example --daily backup_time='01:00 02:00 11:00',retention='1',snapshot_type='incremental'

  • --weekly ➡️ Specify key-value pairs for the weekly jobschedule: backup_day=[mon,tue,wed,thu,fri,sat,sun], retention=<count>, snapshot_type=<full|incremental>. For example --weekly backup_day='sun mon tue',retention='1',snapshot_type='incremental'

  • --monthly ➡️ Specify key-value pairs for the monthly jobschedule: month_backup_day=<1-31|last> ('last' meaning the last day of the month), retention=<count>, snapshot_type=<full|incremental>. For example --monthly month_backup_day='1 2 3',retention=1,snapshot_type='incremental'

  • --yearly ➡️ Specify key-value pairs for the yearly jobschedule: backup_month=[jan,feb,mar,apr,may,jun,jul,aug,sep,oct,nov,dec], retention=<count>, snapshot_type=<full|incremental>. For example --yearly backup_month='jan feb',retention='1',snapshot_type='full'

  • --manual ➡️ Specify key-value pairs for the manual jobschedule: retention=<snapshots count>, retention_days_to_keep=<num of days>. For example --manual retention=30,retention_days_to_keep=30

  • --metadata <key=keyname> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.

  • <policy_id> ➡️ ID of the policy

Assign/Remove a policy

Using Horizon

To assign or remove a policy in Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to Policy

  6. identify the policy to assign/remove

  7. click on the small arrow at the end of the line of the chosen policy to open the submenu

  8. click "Add/Remove Projects"

  9. Choose projects to add or remove by using the plus/minus buttons

  10. Click "Apply"

Using CLI

  • --add_project <project_id> ➡️ ID of the project to assign policy to. Use multiple times to assign multiple projects.

  • --remove_project <project_id> ➡️ ID of the project to remove policy from. Use multiple times to remove multiple projects.

  • <policy_id>➡️policy to be assigned or removed

Delete a policy

Using Horizon

To delete a policy in Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to Policy

  6. identify the policy to assign/remove

  7. click on the small arrow at the end of the line of the chosen policy to open the submenu

  8. click "Delete Policy"

  9. Confirm by clicking "Delete"

Using CLI

  • <policy_id> ➡️ID of the policy to be deleted

Multi-Region Deployments

This document discusses OpenStack multi-region deployments and how Trilio (or T4O, which stands for Trilio For OpenStack) can be deployed in multi-region OpenStack clouds.

OpenStack is designed to be scalable. To manage scale, OpenStack supports various resource segregation constructs: Regions, cells, and availability zones to manage OpenStack resources. Resource segregation is essential to define fault domains and localize network traffic.

Regions

From an end user's perspective, OpenStack regions are equivalent to regions in Amazon Web Services. Regions live in separate data centers, often named after their location. If your organization has a data center in Chicago and one in Boston, you'll have at least a CHG and a BOS region. Users who want to disperse their workloads geographically will place some in CHG and some in BOS. Regions have separate API endpoints for all services except for Keystone. Users, Tenants, and Domains are shared across regions through a single Keystone deployment.

Availability Zones

Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. Availability zones can partition a cloud on arbitrary factors, such as location (country, data center, rack), network layout, and power source. Because of the flexibility, the names and purposes of availability zones can vary massively between clouds.

In addition, other services, such as the networking service and the block storage service, also provide an availability zone feature. However, the implementation of this feature differs vastly between these services. Please look at the documentation for these services for more information on their implementation of this feature.

Cells

Cells functionality enables OpenStack to scale the compute in a more distributed fashion without using complicated technologies like database and message queue clustering. It supports vast deployments.

Cloud architects can partition OpenStack Compute Cloud into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service but no nova-compute services. Each child cell should run all the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a regular Compute deployment in that each cell has its database server and message queue broker.


Multi-Region OpenStack Deployments

OpenStack offers lots of flexibility with multi-region deployments, and each organization architects the OpenStack that meets their business needs. OpenStack only suggests that the Keystone service is shared between regions, and the rest of the service endpoints differ for different regions. RabbitMQ and MySQL databases can be shared or deployed independently.

The following code section is a snippet of OpenStack services endpoints in two regions.

Trilio in a Multi-Region OpenStack

The Trilio backup and recovery service is architecturally similar to other OpenStack services. It has an API endpoint, a scheduler service, and workload services. Cloud architects must deploy Trilio similarly to the Nova or Cinder services, with an instance of Trilio in each region, as shown below. Trilio deployments support any OpenStack multi-region deployment that is compatible with OpenInfra recommendations.

Trilio services endpoints in a multi-region OpenStack deployment are shown below.

A Reference Deployment on Kolla-Ansible OpenStack

Reference Document: OpenStack Docs: Multiple Regions Deployment with Kolla

Deployment of Trilio on the Kolla multi-region cloud is straightforward. We need to deploy Trilio in every region of the Kolla OpenStack cloud using the Kolla-ansible deploy command.

Please take a look at the Trilio install document for Kolla.

For example, the Kolla OpenStack cloud has three regions.

  1. RegionOne

  2. RegionTwo

  3. RegionThree

To deploy Multi-Region Trilio on this cloud, we need to install Trilio in each region.

Please follow the below steps:

  1. Identify the kolla ansible inventory file for each region.

  2. Identify the kolla-ansible deploy command that was used for the OpenStack deployment of each region (most probably, this is the same for all regions)

  3. The customer might have used a separate “/etc/kolla/globals.yml“ file for each region deployment. Please check those details.

  4. Deploy Trilio for the first region (in our example, 'RegionOne') using its globals.yml. Follow the Trilio install guide for Kolla-Ansible. No other configuration changes are needed for this deployment.

  5. Now, for the next region deployment (RegionTwo), identify the ansible inventory file and the '/etc/kolla/globals.yml' file for that region, as well as the kolla-ansible deploy command used for that region.

  6. Append the appropriate Trilio globals yml file to the /etc/kolla/globals.yml file; refer to section 3.1 of the Trilio install document.

  7. Populate all Trilio config parameters in the ‘/etc/kolla/globals.yml’ file of RegionTwo. Or you can copy them from RegionOne’s '/etc/kolla/globals.yml' file.

  8. Append Trilio inventory to the RegionTwo inventory file. Please take a look at section 3.4 of the Trilio install document.

  9. Repeat step 7 (Pull Trilio container images) from the Trilio install document for RegionTwo.

  10. If you use separate Kolla Ansible servers for each region, you must perform all the steps mentioned in the Trilio install document for Kolla again for RegionTwo. If you use the same Kolla Ansible server for all region deployments, you can skip this for RegionTwo and all subsequent regions.

  11. Review the Trilio install document; if any other standard config file (like /etc/kolla/passwords.yml) is defined separately for each region, verify that Trilio uses that region's config file and perform any related steps from the Trilio install document.

  12. Run the Kolla-ansible deploy command, which will deploy Trilio on RegionTwo.
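A condensed sketch of steps 6, 8, and 12 for RegionTwo is shown below. The exact file names of the Trilio globals and inventory snippets inside triliovault-cfg-scripts are assumptions here; verify them against the Trilio install document for Kolla-Ansible.

# Illustrative only -- file names and inventory path are assumptions
cat triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
cat triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_inventory.txt >> <regiontwo_inventory_file>
kolla-ansible -i <regiontwo_inventory_file> deploy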

References

workloadmgr policy-list
workloadmgr policy-show <policy_id>
workloadmgr policy-create --policy-fields <key=key-name>
                        [--hourly interval=<n>,retention=<count>,snapshot_type=<incremental|full>]
                        [--daily backup_time=<time>,retention=<count>,snapshot_type=<incremental|full>]
                        [--weekly backup_day=<days>,retention=<count>,snapshot_type=<incremental|full>]
                        [--monthly month_backup_day=<date>,retention=<count>,snapshot_type=<incremental|full>]
                        [--yearly backup_month=<month>,retention=<count>,snapshot_type=<incremental|full>]
                        [--manual retention=<snapshots count>,retention_days_to_keep=<num of days>]
                        [--display-description <display_description>]
                        [--metadata <key=key-name>]
                        <display_name>
workloadmgr policy-update [--display-name <display-name>]
                          [--display-description <display-description>]
                          [--policy-fields <key=key-name>]
                          [--hourly interval=<n>,retention=<count>,snapshot_type=<incremental|full>]
                          [--daily backup_time=<time>,retention=<count>,snapshot_type=<incremental|full>]
                          [--weekly backup_day=<days>,retention=<count>,snapshot_type=<incremental|full>]
                          [--monthly month_backup_day=<days>,retention=<count>,snapshot_type=<incremental|full>]
                          [--yearly backup_month=<months>,retention=<count>,snapshot_type=<incremental|full>]
                          [--manual retention=<snapshots count>,retention_days_to_keep=<num of days>] 
                          [--metadata <key=key-name>]
                          <policy_id>
workloadmgr policy-assign [--add_project <project_id>]
                          [--remove_project <project_id>]
                          <policy_id>
workloadmgr policy-delete <policy_id>
region=RegionOne
Network Subnet: 172.21.6/23

| neutron     | network      | RegionOne                                                                |
|             |              |   public: https://172.21.6.20:9696                                       |
|             |              | RegionOne                                                                |
|             |              |   internal: https://172.21.6.20:9696                                     |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.20:9696                                        |
|             |              |                                                                          |
| nova        | compute      | RegionOne                                                                |
|             |              |   public: https://172.21.6.21:8774/v2.1                                  |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.21:8774/v2.1                                   |
|             |              | RegionOne                                                                |
|             |              |   internal: https://172.21.6.21:8774/v2.1                                |
|             |              |                                                                          |
+-------------+--------------+--------------------------------------------------------------------------+

region=RegionTwo
Network Subnet: 172.21.31/23

| neutron     | network      | RegionTwo                                                                |
|             |              |   public: https://172.31.6.20:9696                                       |
|             |              | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.20:9696                                     |
|             |              | RegionTwo                                                                |
|             |              |   admin: https://172.31.6.20:9696                                        |
|             |              |                                                                          |
| nova        | compute      | RegionTwo                                                                |
|             |              |   public: https://172.31.6.21:8774/v2.1                                  |
|             |              | RegionTwo                                                                |
|             |              |   admin: https://172.31.6.21:8774/v2.1                                   |
|             |              | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.21:8774/v2.1                                |
|             |              |                                                                          |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionOne
Network Subnet: 172.21.6/23

| neutron     | network      | RegionOne                                                                |
|             |              |   public: https://172.21.6.20:9696                                       |
|             |              | RegionOne                                                                |
|             |              |   internal: https://172.21.6.20:9696                                     |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.20:9696                                        |
|             |              |                                                                          |
| workloadmgr | workloads    | RegionOne                                                                |
|             |              |   internal: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
|             |              | RegionOne                                                                |
|             |              |   public: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b   |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b    |
|             |              |                                                                          |
| dmapi       | datamover    | RegionOne                                                                |
|             |              |   internal: https://172.21.6.22:8784/v2                                  |
|             |              | RegionOne                                                                |
|             |              |   public: https://172.21.6.22:8784/v2                                    |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.22:8784/v2                                     |
|             |              |                                                                          |
| nova        | compute      | RegionOne                                                                |
|             |              |   public: https://172.21.6.21:8774/v2.1                                  |
|             |              | RegionOne                                                                |
|             |              |   admin: https://172.21.6.21:8774/v2.1                                   |
|             |              | RegionOne                                                                |
|             |              |   internal: https://172.21.6.21:8774/v2.1                                |
|             |              |                                                                          |
+-------------+--------------+--------------------------------------------------------------------------+

region=RegionTwo
Network Subnet: 172.21.31/23

| neutron     | network      | RegionTwo                                                                |
|             |              |   public: https://172.31.6.20:9696                                       |
|             |              | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.20:9696                                     |
|             |              | RegionTwo                                                                |
|             |              |   admin: https://172.31.6.20:9696                                        |
|             |              |                                                                          |
| workloadmgr | workloads    | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
|             |              | RegionTwo                                                                |
|             |              |   public: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b   |
|             |              | RegionTwo                                                               |
|             |              |   admin: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b    |
|             |              |                                                                          |
| dmapi       | datamover    | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.22:8784/v2                                  |
|             |              | RegionTwo                                                                |
|             |              |   public: https://172.31.6.22:8784/v2                                    |
|             |              | RegionTwo                                                                |
|             |              |   admin: https://172.31.6.22:8784/v2                                     |
|             |              |                                                                          |
| nova        | compute      | RegionTwo                                                                |
|             |              |   public: https://172.31.6.21:8774/v2.1                                  |
|             |              | RegionTwo                                                                |
|             |              |   admin: https://172.31.6.21:8774/v2.1                                   |
|             |              | RegionTwo                                                                |
|             |              |   internal: https://172.31.6.21:8774/v2.1                                |
|             |              |                                                                          |
+-------------+--------------+--------------------------------------------------------------------------+
  • OpenStack Docs: Multiple Regions Deployment with Kolla

  • Getting started with Trilio on Kolla-Ansible OpenStack

  • Evaluation of OpenStack Multi-Region Keystone Deployments

Figures: Multi-Region OpenStack Deployment; Multi-Region OpenStack with Trilio Services

Snapshot Mount

Mount Snapshot

POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount

Mounts a Snapshot to the provided File Recovery Manager

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project the Snapshot is located in

snapshot_id

string

ID of the Snapshot to mount

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:29:03 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e

Body Format

{
   "mount":{
      "mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
      "options":{
         
      }
   }
}
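For reference, a minimal curl call for the mount request, using the endpoint and body format documented above (the File Recovery Manager instance ID is illustrative):

curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount" \
     -H "X-Auth-Project-Id: $(project_name)" \
     -H "X-Auth-Token: $(auth_token)" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient" \
     -d '{"mount": {"mount_vm_id": "<file_recovery_manager_instance_id>", "options": {}}}'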

List of mounted Snapshots in Tenant

GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list

Provides the list of all Snapshots mounted in a Tenant

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant to search for mounted Snapshots

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgr

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a

{
   "mounted_snapshots":[
      {
         "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
         "snapshot_name":"snapshot",
         "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
         "mounturl":"[\"http://192.168.100.87\"]",
         "status":"mounted"
      }
   ]
}

List of mounted Snapshots in Workload

GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list

Provides the list of all Snapshots mounted in a specified Workload

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant to search for mounted Snapshots

workload_id

string

ID of the Workload to search for mounted Snapshots

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgr

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a

{
   "mounted_snapshots":[
      {
         "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
         "snapshot_name":"snapshot",
         "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
         "mounturl":"[\"http://192.168.100.87\"]",
         "status":"mounted"
      }
   ]
}

Dismount Snapshot

POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount

Unmounts a Snapshot from the provided File Recovery Manager

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project the Snapshot is located in

snapshot_id

string

ID of the Snapshot to dismount

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 16:03:49 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4

Body Format

{
   "mount": 
      {
          "options": null
      }
}
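Likewise, a minimal curl call for the dismount request, using the endpoint and body format documented above:

curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount" \
     -H "X-Auth-Project-Id: $(project_name)" \
     -H "X-Auth-Token: $(auth_token)" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient" \
     -d '{"mount": {"options": null}}'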

Snapshots

Definition

A Snapshot is a single Trilio backup of a workload, including all data and metadata. It contains the information of all VMs that are protected by the workload.

List of Snapshots

Using Horizon

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Workloads

  4. Identify the workload to show the details on

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

The List of Snapshots for the chosen Workload contains the following additional information:

  • Creation Time

  • Name of the Snapshot

  • Description of the Snapshot

  • Total number of Restores from this Snapshot

    • Total number of succeeded Restores

    • Total number of failed Restores

  • Snapshot Type

  • Snapshot Size

  • Snapshot Status

Using CLI

 workloadmgr snapshot-list [--workload_id <workload_id>]
                           [--tvault_node <host>]
                           [--date_from <date_from>]
                           [--date_to <date_to>]
                           [--all {True,False}]
  • --workload_id <workload_id> ➡️ Filter results by workload_id

  • --tvault_node <host> ➡️ List all the snapshot operations scheduled on a tvault node (Default=None)

  • --date_from <date_from> ➡️ From date in the format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If you don't specify a time, 00:00 is taken by default

  • --date_to <date_to> ➡️ To date in the format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day; inclusive/exclusive results apply to date_from and date_to

  • --all {True,False} ➡️ List all snapshots of all the projects (valid for admin user only)
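For example, to list all snapshots of a specific workload taken within a given time window (the values are illustrative):

workloadmgr snapshot-list --workload_id <workload_id> \
                          --date_from 2020-11-01T00:00:00 \
                          --date_to 2020-11-09T23:59:59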

Creating a Snapshot

Snapshots are automatically created by the Trilio scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.

Using Horizon

There are 2 possibilities to create a snapshot on demand.

Possibility 1: From the Workloads overview

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that shall create a Snapshot

  5. Click "Create Snapshot"

  6. Provide a name and description for the Snapshot

  7. Decide between Full and Incremental Snapshot

  8. Click "Create"

Possibility 2: From the Workload Snapshot list

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that shall create a Snapshot

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Click "Create Snapshot"

  8. Provide a name and description for the Snapshot

  9. Decide between Full and Incremental Snapshot

  10. Click "Create"

Using CLI

workloadmgr workload-snapshot [--full] [--display-name <display-name>]
                              [--display-description <display-description>]
                              <workload_id>
  • <workload_id>➡️ID of the workload to snapshot.

  • --full➡️ Specify if a full snapshot is required.

  • --display-name <display-name>➡️Optional snapshot name. (Default=None)

  • --display-description <display-description>➡️Optional snapshot description. (Default=None)
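For example, to trigger an on-demand full snapshot with a name and description (the values are illustrative):

workloadmgr workload-snapshot --full \
                              --display-name "pre-maintenance-full" \
                              --display-description "Manual full snapshot before maintenance" \
                              <workload_id>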

Snapshot overview

Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

Using Horizon

To reach the Snapshot Overview follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to show

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the Snapshot Name

Details Tab

The Snapshot Details Tab shows the most important information about the Snapshot.

  • Snapshot Name / Description

  • Snapshot Type

  • Time Taken

  • Size

  • Which VMs are part of the Snapshot

  • for each VM in the Snapshot

    • Instance Info - Name & Status

    • Security Group(s) - Name & Type

    • Flavor - vCPUs, Disk & RAM

    • Networks - IP, Networkname & Mac Address

    • Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size

    • Misc - Original ID of the VM

Restores Tab

The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

Please refer to the Restores User Guide to learn more about Restores.

Misc. Tab

The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

  • Creation Time

  • Last Update time

  • Snapshot ID

  • Workload ID of the Workload containing the Snapshot

Using CLI

workloadmgr snapshot-show [--output <output>] <snapshot_id>
  • <snapshot_id>➡️ID of the snapshot to be shown

  • --output <output> ➡️ Option to get additional snapshot details. Specify --output metadata for snapshot metadata, --output networks for snapshot VM networks, or --output disks for snapshot VM disks
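For example, to view the disk details captured in a snapshot:

workloadmgr snapshot-show --output disks <snapshot_id>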

Delete Snapshots

Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

The retention policy will automatically delete the oldest Snapshots according to the configured policy.

You have to delete all Snapshots to be able to delete a Workload.

Deleting a Trilio Snapshot will not delete any OpenStack Cinder Snapshots. Those need to be deleted separately if desired.

Using Horizon

There are 2 possibilities to delete a Snapshot.

Possibility 1: Single Snapshot deletion through the submenu

To delete a single Snapshot through the submenu follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to delete

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  9. Click "Delete Snapshot"

  10. Confirm by clicking "Delete"

Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

To delete one or more Snapshots through the Snapshot overview do the following:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to show

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshots in the Snapshot list

  8. Check the checkbox for each Snapshot that shall be deleted

  9. Click "Delete Snapshots"

  10. Confirm by clicking "Delete"

Using CLI

workloadmgr snapshot-delete <snapshot_id>
  • <snapshot_id>➡️ID of the snapshot to be deleted

Snapshot Cancel

Ongoing Snapshots can be canceled.

Canceled Snapshots will be treated like errored Snapshots

Using Horizon

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to cancel

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click "Cancel" on the same line as the identified Snapshot

  9. Confirm by clicking "Cancel"

Using CLI

workloadmgr snapshot-cancel <snapshot_id>
  • <snapshot_id>➡️ID of the snapshot to be canceled

Snapshot Mount

Definition

Trilio allows you to view or download files from a snapshot. Any changes to files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.

Supported File Recovery Manager Image

Create a File Recovery Manager Instance

It is recommended to apply these steps once to the chosen cloud image and then upload the modified cloud image to Glance.

  • Create an OpenStack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.

  • Spin up an instance from that image. It is recommended to have at least 8 GB RAM for the mount operation. Bigger Snapshots can require more RAM.

Steps to apply on CentOS and RHEL cloud-images

  • install and activate qemu-guest-agent

  • Edit /etc/sysconfig/qemu-ga and remove the following from BLACKLIST_RPC section

  • Disable SELINUX in /etc/sysconfig/selinux

  • Install python3 and lvm2

  • Reboot the Instance

Steps to apply on Ubuntu cloud-images

  • install and activate qemu-guest-agent

  • Verify the loaded path of qemu-guest-agent

Loaded path init.d (Ubuntu 18.04)

Follow this path when systemctl returns the following loaded path

Edit /etc/init.d/qemu-guest-agent and add Freeze-Hook file path in daemon args

Loaded path systemd (Ubuntu 20.04)

Follow this path when systemctl returns the following loaded path

Edit qemu-guest-agent systemd file

Add the following lines

Finalize the FRM on Ubuntu

  • Restart qemu-guest-agent service

  • Install Python3

  • Reboot the VM

Mounting a Snapshot

Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.

It is possible to run the mounting process against any OpenStack instance. During this process the instance will be rebooted.

Always mount Snapshots to File Recovery Manager instances only.

To be able to successfully mount Windows (NTFS) Snapshots, NTFS filesystem support is required on the File Recovery Manager instance.

Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.

Using Horizon

There are 2 possibilities to mount a Snapshot in Horizon.

Through the Snapshot list

To mount a Snapshot through the Snapshot list follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to mount

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  9. Click "Mount Snapshot"

  10. Choose the File Recovery Manager instance to mount to

  11. Confirm by clicking "Mount"

Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

tvault_recovery_manager=yes

Through the File Search results

To mount a Snapshot through the File Search results follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to mount

  5. Click the workload name to enter the Workload overview

  6. Navigate to the File Search tab

  7. Identify the Snapshot to be mounted

  8. Click "Mount Snapshot" for the chosen Snapshot

  9. Choose the File Recovery Manager instance to mount to

  10. Confirm by clicking "Mount"

Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

tvault_recovery_manager=yes

Using CLI

  • <snapshot_id> ➡️ ID of the Snapshot to be mounted

  • <mount_vm_id> ➡️ ID of the File Recovery Manager instance to mount the Snapshot to.
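
For reference, the corresponding CLI call has the following form; both IDs are placeholders taken from the Snapshot list and the instance list of the project:

workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>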

Accessing the File Recovery Manager

The File Recovery Manager is a normal Linux based OpenStack instance.

It can be accessed via SSH or SSH-based tools like FileZilla or WinSCP.

SSH login is often disabled by default in cloud-images. Enable SSH login if necessary.

The mounted Snapshot can be found at the following path:

/home/ubuntu/tvault-mounts/mounts/

Each VM in the Snapshot has its own directory using the VM_ID as the identifier.

Identifying mounted Snapshots

Sometimes a Snapshot stays mounted for a longer time and it becomes necessary to identify which Snapshots are currently mounted.

Using Horizon

There are 2 possibilities to identify mounted Snapshots inside Horizon.

From the File Recovery Manager instance Metadata

  1. Login to Horizon

  2. Navigate to Compute

  3. Navigate to Instances

  4. Identify the File Recovery Manager Instance

  5. Click on the Name of the File Recovery Manager Instance to bring up its details

  6. On the Overview tab look for Metadata

  7. Identify the value for mounted_snapshot_url

The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.

This value only gets updated, when a new Snapshot is mounted.

From the Snapshot list

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the mounted Snapshot

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Search for the Snapshot that has the option "Unmount Snapshot"

Using CLI

  • --workloadid <workloadid> ➡️ Restrict the list to snapshots in the provided workload

Unmounting a Snapshot

Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.

Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows Trilio retention policy to purge the former mounted Snapshot.

Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.

Using Horizon

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to unmount

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Search for the Snapshot that has the option "Unmount Snapshot"

  8. Click "Unmount Snapshot"

Using the CLI

  • <snapshot_id> ➡️ ID of the snapshot to unmount.

Cloud Image Name    Version            Supported
Ubuntu              Bionic (18.04)     ✔️
Ubuntu              Focal (20.04)      ✔️
CentOS              CentOS 8           ✔️
CentOS              CentOS 8 Stream    ✔️
RHEL                RHEL 7             ✔️
RHEL                RHEL 8             ✔️
RHEL                RHEL 9             ✔️

openstack image create \
--file <File Manager Image Path> \
--container-format bare \
--disk-format qcow2 \
--public \
--property hw_qemu_guest_agent=yes \
--property tvault_recovery_manager=yes \
--property hw_disk_bus=virtio \
tvault-file-manager
guest-file-read
guest-file-write
guest-file-open
guest-file-close
SELINUX=disabled
yum install python3 lvm2
apt-get update
apt-get install qemu-guest-agent
systemctl enable qemu-guest-agent
Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)
DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"
Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)
systemctl edit qemu-guest-agent
[Service]
ExecStart=
ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook
systemctl restart qemu-guest-agent
apt-get install python3
workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
workloadmgr snapshot-mounted-list [--workloadid <workloadid>]
workloadmgr snapshot-dismount <snapshot_id>

Workloads

Definition

A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.

Using encrypted Workloads will lead to longer backup times. The following timings have been seen in Trilio labs:

Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB

  1. For unencrypted WL : 62 min

  2. For encrypted WL : 82 min

Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB

  1. For unencrypted WL : 10 min

  2. For encrypted WL : 18 min

List of Workloads

Using Horizon

To view all available workloads of a project inside Horizon do:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

The overview in Horizon lists all workloads with the following additional information:

  • Creation time

  • Workload Name

  • Workload description

  • Total amount of Snapshots inside this workload

    • Total amount of succeeded Snapshots

    • Total amount of failed Snapshots

  • Status of the Workload

  • Scheduler Trust

    • This must be shown as Established if the scheduler for the workload is enabled.

    • User will have to re-create the trust if it shows as Broken.

  • Encryption

    • Whether the workload is encrypted or not

Using CLI

workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
  • --all {True,False}➡️List all workloads of all projects (valid for admin user only)

  • --nfsshare <nfsshare>➡️List all workloads of nfsshare (valid for admin user only)
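
For example, an admin user could list the workloads of all projects like this:

workloadmgr workload-list --all True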

Workload Create

The encryption options of the workload creation process are only available when the Barbican service is installed and available.

Using Horizon

To create a workload inside Horizon do the following steps:

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Workloads

  4. Click "Create Workload"

  5. Provide Workload Name and Workload Description on the first tab "Details"

  6. Choose if the Workload is encrypted on the first tab "Details"

  7. Provide the secret UUID if Workload is encrypted on the first tab "Details"

  8. Choose the required Backup Target Type on the first tab "Details"

  9. Choose the VMs to protect on the second Tab "Workload Members"

  10. Provide the manual snapshot retention count on the tab "Schedule"

  11. Choose the Policy if available to use on tab "Schedule"

  12. Choose if the scheduler is required on the tab "Schedule"

  13. Decide and provide the schedule and retention counts of the workload on the tab "Schedule"

  14. If required check "Pause VM" on the Tab "Options"

  15. Click create

The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

Using CLI

workloadmgr workload-create [--display-name <display-name>]
                            [--display-description <display-description>]
                            [--source-platform <source-platform>]
                            [--jobschedule <key=key-name>]
                            [--hourly [<key=key-name> ...]]
                            [--daily [<key=key-name> ...]]
                            [--weekly [<key=key-name> ...]]
                            [--monthly [<key=key-name> ...]]
                            [--yearly [<key=key-name> ...]] 
                            [--manual <key=key-name> [<key=key-name> ...]]
                            [--metadata <key=key-name>]
                            [--policy-id <policy_id>]
                            [--encryption <True/False>]
                            [--secret-uuid <secret_uuid>]
                            [--backup-target-type <backup_target_type>]
                            <instance-id=instance-uuid> [<instance-id=instance-uuid> ...]
  • --display-name➡️Optional workload name. (Default=None)

  • --display-description➡️Optional workload description. (Default=None)

  • --source-platform➡️Workload source platform is required. Supported platform is 'openstack'

  • --jobschedule➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM'

  • --hourly➡️Specify following key value pairs for hourly jobschedule interval= where n is no of hours within list (1,2,3,4,6,8,12,24) retention= snapshot_type=<full|incremental>

  • --daily➡️Specify following key value pairs for daily jobschedule backup_time='1:30 22:30 00:30' retention= snapshot_type=<full|incremental>

  • --weekly➡️Specify following key value pairs for weekly jobschedule backup_day=[mon,tue,wed,thu,fri,sat,sun] retention= snapshot_type=<full|incremental>

  • --monthly➡️Specify following key value pairs for monthly jobschedule month_backup_day=<1-31|last>, 'last': last day of the month retention= snapshot_type=<full|incremental>

  • --yearly➡️Specify following key value pairs for yearly jobschedule backup_month=[jan,feb,mar,apr,may,jun,jul,aug,sep,oct,nov,dec] retention= snapshot_type=<full|incremental>

  • --manual➡️Specify following key value pairs for manual jobschedule retention= retention_days_to_keep=. retention_days_to_keep only available for immutable Backup Targets

  • --metadata➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • --policy-id <policy_id>➡️ID of the policy to assign to the workload

  • --encryption <True/False> ➡️Enable/Disable encryption for this workload

  • --secret-uuid <secret_uuid> ➡️UUID of the Barbican secret to be used for the workload

  • --backup-target-type <backup_target_type>➡️Backup Target Type ID for this workload

  • <instance-id=instance-uuid>➡️Required to set at least one instance. Specify an instance to include in the workload. Specify the option multiple times to include multiple instances
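
As a sketch, a workload with a daily incremental schedule could be created like this; the instance UUIDs and schedule values are illustrative placeholders:

workloadmgr workload-create --display-name "db-workload" \
    --display-description "Protects the database VMs" \
    --source-platform openstack \
    --jobschedule start_date='06/05/2024' --jobschedule start_time='2:30 AM' \
    --daily backup_time='02:30' retention=7 snapshot_type=incremental \
    instance-id=<instance_uuid_1> instance-id=<instance_uuid_2>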

Workload Overview

A workload contains a lot of information, which can be seen in the workload overview.

Using Horizon

To enter the workload overview inside Horizon do the following steps:

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Workloads

  4. Identify the workload to show the details on

  5. Click the workload name to enter the Workload overview

Details Tab

The Workload Details tab provides you with the most important general information about the workload:

  • Name

  • Description

  • Availability Zone

  • List of protected VMs including the information of qemu guest agent availability

The status of the qemu-guest-agent just shows whether the necessary OpenStack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.

It is possible to navigate to the protected VM directly from the list of protected VMs.

Snapshots Tab

The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

Please refer to the Snapshot and Restore User Guide to learn more about those.

Policy Tab

The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:

  • Scheduler Enabled / Disabled

  • Start Date / Time

  • End Date / Time

  • RPO

  • Time till next Snapshot run

  • Retention Policy and Value

Filesearch Tab

The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need of a restore.

Please refer to the File Search User Guide to learn more about this feature.

Misc. Tab

The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

  • Creation time

  • last update time

  • Workload ID

Using CLI

workloadmgr workload-show <workload_id> [--verbose <verbose>]
  • <workload_id> ➡️ ID/name of the workload to show

  • --verbose➡️option to show additional information about the workload

Edit a Workload

Workloads can be modified in all components to match changing needs.

Editing a Workload will set the User who edits the Workload as the new owner.

Using Horizon

To edit a workload in Horizon do the following steps:

  1. Login to the Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload to be modified

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Edit Workload"

  7. Modify the workload as desired - All parameters except backup target type can be changed

  8. Click "Update"

Using CLI

usage: workloadmgr workload-modify [--display-name <display-name>]
                                   [--display-description <display-description>]
                                   [--instance <instance-id=instance-uuid>]
                                   [--jobschedule <key=key-name>]
                                   [--hourly [<key=key-name> ...]]
                                   [--daily [<key=key-name> ...]]
                                   [--weekly [<key=key-name> ...]]
                                   [--monthly [<key=key-name> ...]]
                                   [--yearly [<key=key-name> ...]]
                                   [--manual [<key=key-name> ...]] 
                                   [--metadata <key=key-name>]
                                   [--policy-id <policy_id>]
                                   <workload_id>
  • --display-name ➡️ Optional workload name. (Default=None)

  • --display-description➡️Optional workload description. (Default=None)

  • --instance <instance-id=instance-uuid>➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID

  • --jobschedule <key=key-name>➡️Specify following key value pairs for jobschedule. Specify the option multiple times to include multiple keys. If no timezone is specified, the local machine timezone is used by default. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM'

  • --hourly➡️Specify following key value pairs for hourly jobschedule interval= where n is no of hours within list (1,2,3,4,6,8,12,24) retention= snapshot_type=<full|incremental>

  • --daily➡️Specify following key value pairs for daily jobschedule backup_time='1:30 22:30 00:30' retention= snapshot_type=<full|incremental>

  • --weekly➡️Specify following key value pairs for weekly jobschedule backup_day=[mon,tue,wed,thu,fri,sat,sun] retention= snapshot_type=<full|incremental>

  • --monthly➡️Specify following key value pairs for monthly jobschedule month_backup_day=<1-31|last>, 'last': last day of the month retention= snapshot_type=<full|incremental>

  • --yearly➡️Specify following key value pairs for yearly jobschedule backup_month=[jan,feb,mar,apr,may,jun,jul,aug,sep,oct,nov,dec] retention= snapshot_type=<full|incremental>

  • --manual➡️Specify following key value pairs for manual jobschedule retention= retention_days_to_keep=. retention_days_to_keep only available for immutable Backup Targets

  • --metadata <key=key-name>➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • --policy-id <policy_id>➡️ID of the policy to assign

  • <workload_id> ➡️ID of the workload to edit
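
For example, renaming a workload and adding one more instance could look like this (IDs are placeholders):

workloadmgr workload-modify --display-name "db-workload-extended" \
    --instance instance-id=<instance_uuid> \
    <workload_id>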

Delete a Workload

Once a workload is no longer needed it can be safely deleted.

All Snapshots need to be deleted before the workload gets deleted. Please refer to the Snapshots User Guide to learn how to delete Snapshots.

Using Horizon

To delete a workload do the following steps:

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Workloads

  4. Identify the workload to be deleted

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Delete Workload"

  7. Confirm by clicking "Delete Workload" yet again

Using CLI

workloadmgr workload-delete [--database_only <True/False>] <workload_id>
  • <workload_id> ➡️ ID/name of the workload to delete

  • --database_only <True/False>➡️Set to True to delete the workload from the database only. (Default=False)

Unlock a Workload

Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.

It is highly recommended to use this feature only as a last resort, in case backups/restores are stuck without failing or a restore is required while a backup is running.

Using Horizon

  1. Login to the Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload to unlock

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Unlock Workload"

  7. Confirm by clicking "Unlock Workload" yet again

Using CLI

workloadmgr workload-unlock <workload_id>
  • <workload_id> ➡️ ID of the workload to unlock

Reset a Workload

In rare cases it might be necessary to start a backup chain all over again, to ensure the quality of the created backups. To avoid recreating a Workload in such cases, it is possible to reset the Workload.

The Workload reset will:

  • Cancel all ongoing tasks

  • Delete all existing OpenStack Trilio Snapshots from the protected VMs

  • recalculate the next Snapshot time

  • take a full backup at the next Snapshot

Using Horizon

To reset a Workload do the following steps:

  1. Login to the Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload to reset

  5. Click the small arrow next to "Create Snapshot" to open the sub-menu

  6. Click "Reset Workload"

  7. Confirm by clicking "Reset Workload" yet again

Using CLI

workloadmgr workload-reset <workload_id>
  • <workload_id> ➡️ ID/name of the workload to reset

Schedulers

Disable Workload Scheduler

POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause

Disables the scheduler of a given Workload

Path Parameters

Name
Type
Description

Headers

Name
Type
Description
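
As an illustration, the scheduler of a Workload could be disabled with a plain HTTP client such as curl; the token, project and IDs are placeholders:

curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: <auth_token>" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"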

Enable Workload Scheduler

POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume

Enables the scheduler of a given Workload

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Scheduler Trust Status

GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

Validates the Scheduler trust for a given Workload

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

All following API commands require an Authentication token against a user with admin-role in the authentication project.

Global Job Scheduler status

GET https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler

Requests the status of the Global Job Scheduler

Path Parameters

Name
Type
Description

Headers

Name
Type
Description
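
As an illustration, the status could be requested with curl; the admin token and project are placeholders:

curl -X GET "https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler" \
     -H "X-Auth-Project-Id: <admin_project_name>" \
     -H "X-Auth-Token: <admin_auth_token>" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"

A successful call returns a body like {"global_job_scheduler": true}.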

Disable Global Job Scheduler

POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable

Requests disabling the Global Job Scheduler

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Enable Global Job Scheduler

POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable

Requests enabling the Global Job Scheduler

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Backups-Admin Area

Trilio provides Backup-as-a-Service, which allows OpenStack Users to manage and control their backups themselves. This doesn't eradicate the need for a Backup Administrator, who has an overview of the complete Backup Solution.

To provide Backup Administrators with the tools they need, Trilio for OpenStack provides a Backup-Admin area in Horizon in addition to the API and CLI.

Access the Backups-Admin area

To access the Backups-Admin area follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin Tab.

  4. Navigate to Trilio page.

The Backups-Admin area provides the following features.

It is possible to reduce the shown information down to a single tenant, to see the exact impact the chosen Tenant has.

Status overview

The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:

  • Storage Usage (nfs only)

  • Number of protected VMs compared to number of existing VMs

  • Number of currently running Snapshots

  • Status of TVault Nodes

  • Status of Contego Nodes

The status of the nodes is filled when the services are running and in good health.

Workloads tab

This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.

The following information is shown:

  • User-ID that owns the Workload

  • Project that contains the Workload

  • Workload name

  • Availability Zone

  • Amount of protected VMs

  • Performance information about the last 30 backups

    • How much data was backed up (green bars)

    • How long did the Backup take (red line)

  • Pie chart showing the amount of Full (Blue) Backups compared to Incremental (Red) Backups

  • Number of successful Backups

  • Number of failed Backups

  • Storage used by that Workload

  • Which Backup target is used

  • When is the next Snapshot run

  • What is the general interval of the Workload

  • Scheduler Status including a Switch to deactivate/activate the Workload

Usage tab

Administrators often need to figure out where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps in these tasks by providing the following information:

  • Storage used by a Tenant

  • VMs protected by a Tenant

It is possible to drill down to see the same information per workload and finally per protected VM.

The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.

Nodes tab

This tab displays information about Trilio cluster nodes. The following information is shown:

  • Node name

  • Node ID

  • Trilio Version of the node

  • IP Address

  • Node Status including a Switch to deactivate/activate the Node

Node status can be controlled through CLI as well.

To deactivate the Trilio Node use:

  • --reason➡️Optional reason for disabling workload service

  • <node_name>➡️name of the Trilio node

To activate the Trilio Node use:

  • <node_name>➡️name of the Trilio node

Data Movers tab (Trilio Data Mover Service)

This tab displays information about the Trilio contego service. The following information is shown:

  • Service-Name

  • Compute Node the service is running on

  • Service Status from OpenStack perspective (enabled/disabled)

  • Version of the Service

  • General Status

Storage tab

This tab displays information about the backup target storage. It contains the following information:

  • Storage Name

Clicking on the Storage name provides an overview of all workloads stored on that storage.

  • Capacity of the storage

  • Total utilization of the storage

  • Status of the storage

  • Statistic information

    • Percentage of all storage used

    • Percentage of storage used for full backups

    • Amount of Full backups versus Incremental backups

Audit tab

Audit logs provide the sequence of workload-related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:

  • Time of the entry

  • What task has been done

  • Project the task has performed in

  • User that performed the task

The Audit log can be searched for strings to find, for example, only entries done by a specific user.

Additionally, the shown timeframe can be changed as necessary.

License tab

The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.

A license validation is automatically done when opening the tab.

The following information about an active license is shown:

  • Organization (License name)

  • License ID

  • Purchase date - when was the license created

  • License Expiry Date

  • Maintenance Expiry Date

  • License value

  • License Edition

  • License Version

  • License Type

  • Description of the License

  • Evaluation (True/False)

  • EULA - when was the license agreed

Trilio will stop all activities once a license is no longer valid or expired.

Policy tab

The policy tab gives Administrators the possibility to work with workload policies.

Please refer to the Workload Policies section in the Admin guide to learn more about how to create and use Workload Policies.

Settings tab

This tab manages all global settings for the whole cloud. Trilio has two types of settings:

  1. Email settings

  2. Job scheduler settings.

Email Settings

These settings will be used by Trilio to send email reports of snapshots and restores to users.

Configuring the Email settings is required to provide Email notifications to OpenStack users.

The following information is required to configure the email settings:

  • SMTP Server

  • SMTP username

  • SMTP password

  • SMTP port

  • SMTP timeout

  • Sender email address

A test email can be sent directly from the configuration page.

To work with email settings through CLI use the following commands:

To set an email setting for the first time or after deletion use:

  • --description➡️Optional description (Default=None) ➡️ Not required for email settings

  • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

  • --type➡️settings type ➡️ set to email_settings

  • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

  • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

  • --metadata➡️optional metadata key=value pairs ➡️ Not required for email settings

  • <name>➡️name of the setting ➡️ Take from the list below

  • <value>➡️value of the setting ➡️ Take value type from the list below
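
For example, the SMTP port could be set like this; the values are illustrative and follow the settings table below:

workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_port 587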

To update an already set email setting through CLI use:

  • --description➡️Optional description (Default=None) ➡️ Not required for email settings

  • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

  • --type➡️settings type ➡️ set to email_settings

  • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

  • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

  • --metadata➡️optional metadata key=value pairs ➡️ Not required for email settings

  • <name>➡️name of the setting ➡️ Take from the list below

  • <value>➡️value of the setting ➡️ Take value type from the list below

To show an already set email setting use:

  • --get_hidden➡️show hidden settings (True) or not (False) ➡️ Not required for email settings, use False if set

  • <setting_name>➡️name of the setting to show➡️ Take from the list below

To delete a set email setting use:

  • <setting_name>➡️name of the setting to delete ➡️ Take from the list below

Setting name
Value type
example

Disable/Enable Job Scheduler

The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.

To activate/deactivate the Global Job Scheduler through the Backups-Admin area:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin Tab.

  4. Navigate to Trilio page.

  5. Navigate to the Settings tab

  6. Click "Disable/Enable Job Scheduler"

  7. Check or Uncheck the box for "Job Scheduler Enabled"

  8. Confirm by clicking on "Change"

The Global Job Scheduler can be controlled through CLI as well.

To get the status of the Global Job Scheduler use:

To deactivate the Global Job Scheduler use:

To activate the Global Job Scheduler use:

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:06:01 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:31:49 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491

{
   "scheduler_enabled":true,
   "trust":{
      "created_at":"2020-10-23T14:35:11.000000",
      "updated_at":null,
      "deleted_at":null,
      "deleted":false,
      "version":"4.0.115",
      "name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "value":"871ca24f38454b14b867338cb0e9b46c",
      "description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
      "category":"identity",
      "type":"trust_id",
      "public":false,
      "hidden":true,
      "status":"available",
      "metadata":[
         {
            "created_at":"2020-10-23T14:35:11.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
            "settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
            "settings_project_id":"c76b3355a164498aa95ddbc960adc238",
            "key":"role_name",
            "value":"member"
         }
      ]
   },
   "is_valid":true,
   "scheduler_obj":{
      "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "user_domain_id":"default",
      "user":"ccddc7e7a015487fa02920f4d4979779",
      "tenant":"c76b3355a164498aa95ddbc960adc238"
   }
}

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:45:27 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b

{"global_job_scheduler": true}

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:49:29 GMT
Content-Type: application/json
Content-Length: 31
Connection: keep-alive
X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0

{"global_job_scheduler": false}

Name               Type     Description
tvm_address        string   IP or FQDN of Trilio service
tenant_id          string   ID of Tenant/Project the Workload is located in
workload_id        string   ID of the Workload to disable the Scheduler in
X-Auth-Project-Id  string   Project to authenticate against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:50:11 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc

{"global_job_scheduler": true}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 11:52:56 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7
workloadmgr workload-service-disable [--reason <reason>] <node_name>
workloadmgr workload-service-enable <node_name>
workloadmgr setting-create [--description <description>]
                           [--category <category>]
                           [--type <type>]
                           [--is-public {True,False}]
                           [--is-hidden {True,False}]
                           [--metadata <key=value>]
                           <name> <value>
workloadmgr setting-update [--description <description>]
                           [--category <category>]
                           [--type <type>]
                           [--is-public {True,False}]
                           [--is-hidden {True,False}]
                           [--metadata <key=value>]
                           <name> <value>
workloadmgr setting-show [--get_hidden {True,False}] <setting_name>
workloadmgr setting-delete <setting_name>

Setting name             Value type   Example
smtp_default_recipient   String       [email protected]
smtp_default_sender      String       [email protected]
smtp_port                Integer      587
smtp_server_name         String       Mailserver_A
smtp_server_username     String       admin
smtp_server_password     String       password
smtp_timeout             Integer      10
smtp_email_enable        Boolean      True

workloadmgr get-global-job-scheduler
workloadmgr disable-global-job-scheduler
workloadmgr enable-global-job-scheduler

E-Mail Notification Settings

E-Mail Notification Settings are done through the settings API. Use the values from the following table to set Email Notifications up through API.

Setting name             Settings Type    Value type   Example
smtp_default_recipient   email_settings   String       [email protected]
smtp_default_sender      email_settings   String       [email protected]
smtp_port                email_settings   Integer      587
smtp_server_name         email_settings   String       Mailserver_A
smtp_server_username     email_settings   String       admin
smtp_server_password     email_settings   String       password
smtp_timeout             email_settings   Integer      10
smtp_email_enable        email_settings   Boolean      True

Create Setting

POST https://$(tvm_address):8780/v1/$(tenant_id)/settings

Creates a Trilio setting.

Path Parameters

Name          Type     Description
tvm_address   string   IP or FQDN of Trilio Service
tenant_id     string   ID of the Tenant/Project to work with

Headers

Name               Type     Description
X-Auth-Project-Id  string   Project to run the authentication against
X-Auth-Token       string   Authentication token to use
Content-Type       string   application/json
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:55:43 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99

{
   "settings":[
      {
         "created_at":"2021-02-04T11:55:43.890855",
         "updated_at":null,
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "name":"smtp_port",
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":null,
         "value":"8080",
         "description":null,
         "category":null,
         "type":"email_settings",
         "public":false,
         "hidden":0,
         "status":"available",
         "is_public":false,
         "is_hidden":false
      }
   ]
}

Body format

Setting create requires a Body in json format, to provide the requested information.

{
   "settings":[
      {
         "category":null,
         "name":<String Setting_name>,
         "is_public":false,
         "is_hidden":false,
         "metadata":{
            
         },
         "type":<String Setting type>,
         "value":<String Setting Value>,
         "description":null
      }
   ]
}
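
As a sketch, the same setting could be created with curl; the token, project and values are placeholders:

curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/settings" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: <auth_token>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"settings":[{"category":null,"name":"smtp_port","is_public":false,"is_hidden":false,"metadata":{},"type":"email_settings","value":"587","description":null}]}'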

Show Setting

GET https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

Shows all details of a specified setting

Path Parameters

Name           Type     Description
tvm_address    string   IP or FQDN of Trilio Service
tenant_id      string   ID of the Project/Tenant where to find the Workload
setting_name   string   Name of the setting to show

Headers

Name               Type     Description
X-Auth-Project-Id  string   Project to run the authentication against
X-Auth-Token       string   Authentication token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:01:27 GMT
Content-Type: application/json
Content-Length: 380
Connection: keep-alive
X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c

{
   "setting":{
      "created_at":"2021-02-04T11:55:43.000000",
      "updated_at":null,
      "deleted_at":null,
      "deleted":false,
      "version":"4.0.115",
      "name":"smtp_port",
      "project_id":"4dfe98a43bfa404785a812020066b4d6",
      "user_id":null,
      "value":"8080",
      "description":null,
      "category":null,
      "type":"email_settings",
      "public":false,
      "hidden":false,
      "status":"available",
      "metadata":[
         
      ]
   }
}

Modify Setting

PUT https://$(tvm_address):8780/v1/$(tenant_id)/settings

Modifies the provided setting with the given details.

Path Parameters

Name          Type     Description
tvm_address   string   IP or FQDN of Trilio Service
tenant_id     string   ID of the Tenant/Project to work with

Headers

Name               Type     Description
X-Auth-Project-Id  string   Project to run the authentication against
X-Auth-Token       string   Authentication token to use
Content-Type       string   application/json
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:05:59 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f

{
   "settings":[
      {
         "created_at":"2021-02-04T11:55:43.000000",
         "updated_at":null,
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "name":"smtp_port",
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":null,
         "value":"8080",
         "description":null,
         "category":null,
         "type":"email_settings",
         "public":false,
         "hidden":0,
         "status":"available",
         "is_public":false,
         "is_hidden":false
      }
   ]
}

Body format

Setting modify requires a Body in json format, to provide the information about the values to modify.

{
   "settings":[
      {
         "category":null,
         "name":<String Setting_name>,
         "is_public":false,
         "is_hidden":false,
         "metadata":{
            
         },
         "type":<String Setting type>,
         "value":<String Setting Value>,
         "description":null
      }
   ]
}

Delete Setting

DELETE https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

Deletes the specified setting.

Path Parameters

Name           Type     Description
tvm_address    string   IP or FQDN of Trilio Service
tenant_id      string   ID of the Tenant where to find the Workload in
setting_name   string   Name of the setting to delete

Headers

Name               Type     Description
X-Auth-Project-Id  string   Project to run the authentication against
X-Auth-Token       string   Authentication Token to use
Accept             string   application/json
User-Agent         string   python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:49:17 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473

Trilio Installation on RHOSO18.0

The Red Hat OpenStack Services on OpenShift 18.0 is the supported and recommended method to deploy and maintain any RHOSO installation.

Trilio integrates natively into RHOSO. Manual deployment methods are not supported for RHOSO.

1. Prepare for deployment

Refer to the Resources link to get release-specific values for the placeholders used in this document (Container URLs, trilio_branch, RHOSO Version and CONTAINER-TAG-VERSION) as per the OpenStack environment:

1.1] Clone triliovault-cfg-scripts repository

The following steps are to be done on the bastion node on an already installed RHOSO environment.

The following command clones the triliovault-cfg-scripts github repository.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18

1.2] Create image pull secret

cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts/
chmod +x create-image-pull-secret.sh
./create-image-pull-secret.sh <TRILIO_IMAGE_REGISTRY_URL> <TRILIO_IMAGE_REGISTRY_USER> <TRILIO_IMAGE_REGISTRY_PASSWORD>

Note: Use the following URLs for Trilio image registries, as needed.
Docker registry url : docker.io
Redhat registry url : registry.connect.redhat.com

2] Install Trilio for OpenStack Operator

2.1] Install operator - tvo-operator

Please get the value of the parameter TVO_OPERATOR_CONTAINER_IMAGE_URL from the release artifact documentation. This is the Trilio for OpenStack Operator container image tag.

cd redhat-director-scripts/rhosp18/ctlplane-scripts
chmod +x install_operator.sh
./install_operator.sh <TVO_OPERATOR_CONTAINER_IMAGE_URL>

Examples:
Red Hat registry tvo-operator container image URL: ./install_operator.sh registry.connect.redhat.com/trilio/trilio-openstack-operator-rhoso:6.1.0-rhoso18.0

Docker registry tvo-operator container image URL: ./install_operator.sh docker.io/trilio/tvo-operator:6.1.0-maint-1-rhoso18.0

2.2] Verify the tvo operator pod got created

oc get pods -A | grep tvo-operator

2.3] Verify that operator CRD is installed

oc get crds | grep tvo

3] Install Trilio OpenStack Control Plane Services

3.1] Create namespace for Trilio control plane services

oc create namespace trilio-openstack

3.2] Provide necessary Trilio inputs like backup target details, openstack details etc in yaml format

vi tvo-operator-inputs.yaml

Operator parameters from the file tvo-operator-inputs.yaml that the user needs to edit:

Parameter
Description

images

Please refer to

common

- trustee_role should be creator,member if barbican is enabled. Otherwise trustee_role should be member. Any openstack user that wants to create backup jobs and take backups needs this role in respective openstack project. - memcached_servers value should be fetched using command oc -n openstack get memcached -o jsonpath='{.items[*].status.serverList[*]}'| tr ' ' ','

triliovault_backup_targets

- User need to choose which backup targets(Where backups taken by TVO will get stored) to use for this TVO deployment. - User can use multiple backup targets of type ‘NFS' or 'S3’ type like NFS share, Amazon S3 bucket, Ceph S3 bucket etc. - For Amazon S3 backup target s3_type: ‘amazon_s3’ - For all other S3 backup targets s3_type: 'other_s3' - For Amazon S3, s3_endpoint_url value will be empty string. Internally we pick it correctly. - For Amazon s3 s3_self_signed_cert is always 'false'

keystone.common

- 'keystone_interface' set it to any of the value [’internal', 'public']. This interface will be used for communication between TVO and OpenStack services. - 'service_project_name': This is project name where all services are registered. - ‘service_project_domain_name': service project’s domain name - 'admin_role_name': Admin role name - 'cloud_admin_user_name': OpenStack cloud admin user name - 'cloud_admin_user_password': OpenStack cloud admin user password - 'cloud_admin_project_name': Cloud admin project name - 'auth_url': Keystone auth url of respective interface provided in keystone_interface parameter. - ‘auth_uri': Just append '/v3’ to auth_url - 'keystone_auth_protocol': https or http Auth protocol of keystone endpoint url of provided keystone_interface - 'keystone_auth_host': Full host name from keystone auth_url -'is_self_signed_ssl_cert': True/False, Whether the TLS certs used by keystone endpoint url mentioned in auth_url parameter uses self signed certs or not

keystone.datamover_api and keystone.wlm_api

For both components datamover_api and wlm_api we have same set of parameters. - ‘user': This is openstack user that is used by service datamover_api. Please don’t change this one. - ‘password': User can set this to any value. This is password for openstack user mentioned in parameter 'user’ - ‘service_name': Don’t need to change - 'service_type': Don’t need to change - 'service_desc': Don’t need to change - ‘internal_endpoint': Trilio service internal endpoint. Please refer other openstack service endpoints and set this one accordingly. - ‘public_endpoint': User just need to set replace parameter 'PUBLIC_ENDPOINT_DOMAIN’ here. Please refer other openstack services public endpoint url. - ‘public_auth_host': FQDN mentioned in parameter 'public_endpoint’

database.common

- 'root_user_name': OpenStack database system root user name. Keep this as it is. Don’t need to change unless you know that root username is changed. - 'root_password': Database root user password using command oc -n openstack get secret osp-secret -o jsonpath='{.data.DbRootPassword}' | base64 --decode - 'host': Database host/FQDN name oc -n openstack get secret nova-api-config-data -o jsonpath='{.data.01-nova\.conf}' | base64 --decode | awk '/connection =/ {match($0, /@([^?/]+)/, arr); print arr[1]; exit}' - 'port': Database port

database.datamover_api and database.wlm_api

- 'user': Do not change - 'password': Set any password for trilio database users - 'database': Do not change

rabbitmq.common

- ‘admin_user': Provide rabbitmq admin user name using command. oc -n openstack exec -it rabbitmq-server-0 bash rabbitmqctl list_users - ‘admin_password': Provide rabbitmq admin user’s password. Generally in rhoso18 cloud, default_user… is the admin user. But if the default user is not administrator then you need to use some other secrets or commands to find out the password of that user. Refer command: oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d - 'host': Provide rabbitmq cluster host name using command oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.host}' | base64 -d - 'port': Provide rabbitmq management API port on which it can be connected using rabbitmqadmin command. Generally this is 15671 for RHOSO. So you can keep it as it is. 5671 is not a management API port. Refer command:oc -n openstack get cm rabbitmq-server-conf -o jsonpath='{.data.userDefinedConfiguration\.conf}' | grep management.ssl.port - 'driver': Rabbitmq driver - ‘ssl': If SSL/TLS is enabled on rabbitmq, set this to true other wise set it to false. This is boolean parameter.

rabbitmq.datamover_api and rabbitmq.wlm_api

- 'user': Do not change this. - ‘password': User need to set this as per their choice - 'vhost': Do not change this ‘transport_url': User needs to set this. Edit ‘${PASSWORD}' and '${RABBITMQ_HOST}’ from given default url. You can edit SSL and port settings if necessary. oc describe secret rabbitmq-default-user -n openstack oc get secret rabbitmq-default-user -n openstack -o jsonpath='{.data.username}' | base64 --decode

pod.replicas

These parameters sets number of replicas for Trilio components. Default values are standard. Unless needed you don’t need to change it. Please note that number of replicas for triliovault_wlm_cron pod should always be set to 1.

3.3] Set correct labels to Kubernetes nodes.

Trilio control plane services will be deployed on OpenShift nodes having the label trilio-control-plane=enabled. It is recommended to use three Kubernetes nodes for Trilio control plane services. Please use the following commands to assign the correct labels to the nodes.

Get list of OpenShift nodes

oc get nodes

Assign the 'trilio-control-plane=enabled' label to any three nodes of your choice where you want to deploy TVO control plane services.

oc label nodes <Openshift_node_name1> trilio-control-plane=enabled
oc label nodes <Openshift_node_name2> trilio-control-plane=enabled
oc label nodes <Openshift_node_name3> trilio-control-plane=enabled

Verify the list of nodes having the 'trilio-control-plane=enabled' label

oc get nodes -l trilio-control-plane=enabled

3.4] Create TLS certificate secrets

The following script creates TLS certificates for Trilio services and defines secrets containing these certs.

Edit the '$PUBLIC_ENDPOINT_DOMAIN' parameter in the utils/certificate.yaml file and set it to the correct value. Refer to the OpenStack Keystone service public endpoint.

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
vi certificate.yaml

Create certificates and secrets

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
./create_cert_secrets.sh

You can verify that these cert secrets are created in the 'trilio-openstack' namespace.

oc -n trilio-openstack describe secret cert-triliovault-datamover-public-svc 
oc -n trilio-openstack describe secret cert-triliovault-datamover-internal-svc
oc -n trilio-openstack describe secret cert-triliovault-wlm-public-svc
oc -n trilio-openstack describe secret cert-triliovault-wlm-internal-svc

3.5] Run deploy command

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
./deploy_tvo_control_plane.sh

3.6] Check logs

oc logs -f tvo-operator-controller-manager-846f46787-5qnm2 -n tvo-operator-system

3.7] Check deployment status

oc get tvocontrolplane -n trilio-openstack
oc describe tvocontrolplane <TVO_CONTROL_PLANE_OBEJCT_NAME> -n trilio-openstack

3.8] Verify successful deployment of T4O control plane services.

[root@localhost ctlplane-scripts]# oc get pods -n trilio-openstack
NAME                                                READY   STATUS      RESTARTS   AGE
job-triliovault-datamover-api-db-init-ttq46         0/1     Completed   0          6m27s
job-triliovault-datamover-api-keystone-init-2ddh6   0/1     Completed   0          9m6s
job-triliovault-datamover-api-rabbitmq-init-27sx9   0/1     Completed   0          8m51s
job-triliovault-wlm-cloud-trust-lcncp               0/1     Completed   0          4m59s
job-triliovault-wlm-db-init-c48z4                   0/1     Completed   0          6m22s
job-triliovault-wlm-keystone-init-gxlmc             0/1     Completed   0          8m7s
job-triliovault-wlm-rabbitmq-init-6g94w             0/1     Completed   0          6m31s
triliovault-datamover-api-6f5fc957c9-j426z          1/1     Running     0          5m
triliovault-datamover-api-6f5fc957c9-sn9z8          1/1     Running     0          5m
triliovault-datamover-api-6f5fc957c9-xmvs5          1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-5pqqh      1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-g6g7m      1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-tc7zb      1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-9kc9s    1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-cj94f    1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-wfkgf    1/1     Running     0          5m
triliovault-object-store-bt3-s3-6bf58b9f77-6sqp7    1/1     Running     0          15m
triliovault-object-store-bt3-s3-6bf58b9f77-9tlh6    1/1     Running     0          15m
triliovault-object-store-bt3-s3-6bf58b9f77-g67cl    1/1     Running     0          15m
triliovault-wlm-api-66b46c7b6-7kjw4                 1/1     Running     0          5m
triliovault-wlm-api-66b46c7b6-rsknb                 1/1     Running     0          5m
triliovault-wlm-api-66b46c7b6-s8r92                 1/1     Running     0          5m
triliovault-wlm-cron-59f8ccfd-fhn8p                 1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-bphrd           1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-jjhjh           1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-klbn4           1/1     Running     0          5m
triliovault-wlm-workloads-6869cff4b8-8spmz          1/1     Running     0          4m59s
triliovault-wlm-workloads-6869cff4b8-sf652          1/1     Running     0          4m59s
triliovault-wlm-workloads-6869cff4b8-x8h7z          1/1     Running     0          4m59s

Verify that the wlm cloud trust was created successfully

oc logs <job-triliovault-wlm-cloud-trust> -n trilio-openstack

4] Install Trilio Data Plane Services

Set the context to the 'openstack' namespace. All the Trilio data plane resources will be created in the 'openstack' namespace.

oc config set-context --current --namespace=openstack

4.1] Fill in all input parameters needed for the Trilio data plane services in the following config map YAML file.

Create a config map containing all the input parameters required for the Trilio Data Plane services deployment. The user needs to edit the required parameter details in the file below.

cd redhat-director-scripts/rhosp18/dataplane-scripts/
vi cm-trilio-datamover.yaml
Parameter
Description

dmapi_transport_url

To get dmapi_transport_url, use the following command oc -n trilio-openstack get secret triliovault-datamover-api-etc -o jsonpath='{.data.triliovault-datamover-api\.conf}' | base64 -d | grep transport_url

dmapi_database_connection

To get dmapi_database_connection, use the following command oc -n trilio-openstack get secret triliovault-datamover-api-etc -o jsonpath='{.data.triliovault-datamover-api\.conf}' | base64 -d | grep connection

cinder_backend_ceph, libvirt_images_rbd_ceph_conf, ceph_cinder_user and oslomsg_rpc_use_ssl

User does not need to change these parameters

images

Please refer to

trilio_container_registry_username, trilio_container_registry_password and trilio_container_registry_url

The user needs to update the following fields using either the Docker or Red Hat registry details, as required.

triliovault_backup_targets

- The user needs to choose which backup targets (where backups taken by TVO will be stored) to use for this TVO deployment.
- The user can use multiple backup targets of type 'NFS' or 'S3', such as an NFS share, an Amazon S3 bucket, a Ceph S3 bucket, etc.
- For an Amazon S3 backup target, set s3_type: 'amazon_s3'. For all other S3 backup targets, set s3_type: 'other_s3'.
- For Amazon S3, the s3_endpoint_url value will be an empty string; internally we pick it correctly.
- For Amazon S3, s3_self_signed_cert is always 'false'.
- Note: Please provide the same BTT details that are used in the tvo-operator-inputs.yaml file.
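Since the backup-target details must match what was configured for the control plane, a quick way to pull them up for copying is sketched below; the location of tvo-operator-inputs.yaml is an assumption, so adjust the path to wherever you edited that file.

## Assumption: adjust the path to your edited tvo-operator-inputs.yaml
grep -n -A 20 'triliovault_backup_targets' <path-to>/tvo-operator-inputs.yaml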

4.2] Create cm-trilio-datamover config map

## Create config map
oc -n openstack apply -f cm-trilio-datamover.yaml

4.3] Edit the file 'trilio-datamover-service.yaml' and set the correct tag for the container image 'openStackAnsibleEERunnerImage'

vi trilio-datamover-service.yaml

4.4] The following command creates the "OpenStackDataPlaneService" CRD resource for Trilio

oc -n openstack apply -f trilio-datamover-service.yaml

4.5] Trigger Deployment of Trilio data plane services

In this step we trigger the Ansible script execution that deploys the Trilio data plane components. Get the Data Plane NodeSet names using the following command:

oc -n openstack get OpenStackDataPlaneNodeSet 

Edit two things in the following file (an illustrative sketch of the file is shown after the command below):

  • Set a unique 'name' for every Ansible execution of 'OpenStackDataPlaneDeployment'.

  • Set the correct value for the 'nodeSets' parameter, using the NodeSet name from the previous step.

vi trilio-data-plane-deployment.yaml
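A minimal illustrative sketch of trilio-data-plane-deployment.yaml follows, assuming the standard OpenStackDataPlaneDeployment schema; the deployment name and NodeSet value are placeholders only and must be replaced with values from your environment.

## Illustrative only -- prints a sketch of the expected structure; do not apply as-is
cat <<'EOF'
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: trilio-dataplane-deploy-1      # unique name for this execution
  namespace: openstack
spec:
  nodeSets:
    - openstack-edpm                   # NodeSet name from the previous step
EOF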

To check the list of deployment names already used, use the following command:

## To check the list of deployment names already used, use the following command
oc -n openstack get OpenStackDataPlaneDeployment

Trigger Trilio Data plane deployment execution

## Trigger deployment
oc -n openstack apply -f trilio-data-plane-deployment.yaml 

4.6] Check deployment logs.

Replace <OpenStackDataPlaneDeployment_NAME> with the deployment name used in the steps above.

oc -n openstack get pod -l openstackdataplanedeployment=<OpenStackDataPlaneDeployment_NAME>
oc -n openstack logs -f <trilio-datamover-pod-name>

If the deployment fails, or completes and you want to run it again, change the name of the 'OpenStackDataPlaneDeployment' CR resource to a new, unique value in the 'trilio-data-plane-deployment.yaml' template and create it again using the oc create command.

4.7] Verify the deployment completed successfully

Log in to one of the compute nodes and check the Trilio compute service containers.

podman ps | grep trilio

5] Install Trilio Horizon Plugin

Prerequisite: You must have already created the image pull secret for the Trilio container images.

  1. Get the openstackversion CR

[kni@localhost ~]$ oc get openstackversion -n openstack
NAME                     TARGET VERSION      AVAILABLE VERSION   DEPLOYED VERSION
openstack-controlplane   18.0.2-20240923.2   18.0.2-20240923.2   18.0.2-20240923.2
  2. Edit the openstackversion CR resource/object and change horizonImage under customContainerImages. Set 'horizonImage:' to the Trilio Horizon Plugin container image URL, as shown below.

oc edit openstackversion <OPENSTACKVERSION_RESOURCE_NAME> -n openstack

For example: if resource name is 'openstack-controlplane'

oc edit openstackversion openstack-controlplane -n openstack
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack-controlplane
spec:
  customContainerImages:
    horizonImage: docker.io/trilio/trilio-horizon-plugin:<IMAGE_TAG>

[...]
  3. Save changes and exit (press Esc, then type :wq, as in the Linux vi editor).

  4. Verify the changes were applied correctly using the command below.

oc describe openstackversion <OPENSTACKVERSION_RESOURCE_NAME> -n openstack
  5. Access the OpenStack Horizon dashboard using the same URL and log in with the same credentials. Horizon will now include the Trilio UI components; verify them in the dashboard.

Managing Trusts

OpenStack administrators should never need to work directly with the trusts that are created.

The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

List Trusts

GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts

Provides the list of trusts for the given Tenant.
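A minimal request sketch using the headers documented below; the token and project values are placeholders.

curl -k -X GET "https://$(tvm_address):8780/v1/$(tenant_id)/trusts" \
     -H "X-Auth-Token: <token>" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"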

Path Parameters

Name
Type
Description

Query Parameters

Name
Type
Description

Headers

Name
Type
Description

Create Trust

POST https://$(tvm_address):8780/v1/$(tenant_id)/trusts

Creates a trust in the provided Tenant/Project with the given details.

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Body Format

Show Trust

GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

Shows all details of a specified trust

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Delete Trust

DELETE https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

Deletes the specified trust.

Path Parameters

Name
Type
Description

Headers

Name
Type
Description

Validate Scheduler Trust

GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

Validates the Trust of a given Workload.

Path Parameters

Name
Type
Description

Headers

Name
Type
Description
Resources

tvm_name

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant / Project to fetch the trusts from

is_cloud_admin

boolean

true/false

X-Auth-Project-Id

string

project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant/Project to create the Trust for

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:43:36 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0

{
   "trust":[
      {
         "created_at":"2021-01-21T11:43:36.140407",
         "updated_at":null,
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":"adfa32d7746a4341b27377d6f7c61adb",
         "value":"1c981a15e7a54242ae54eee6f8d32e6a",
         "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
         "category":"identity",
         "type":"trust_id",
         "public":false,
         "hidden":1,
         "status":"available",
         "is_public":false,
         "is_hidden":true,
         "metadata":[
            
         ]
      }
   ]
}
{
   "trusts":{
      "role_name":"member",
      "is_cloud_trust":false
   }
}

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Project/Tenant where to find the Workload

workload_id

string

ID of the Workload to show

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:39:12 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4

{
   "trust":{
      "created_at":"2020-11-26T13:15:29.000000",
      "updated_at":null,
      "deleted_at":null,
      "deleted":false,
      "version":"4.0.115",
      "name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
      "project_id":"4dfe98a43bfa404785a812020066b4d6",
      "user_id":"adfa32d7746a4341b27377d6f7c61adb",
      "value":"703dfabb4c5942f7a1960736dd84f4d4",
      "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
      "category":"identity",
      "type":"trust_id",
      "public":false,
      "hidden":true,
      "status":"available",
      "metadata":[
         {
            "created_at":"2020-11-26T13:15:29.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"86aceea1-9121-43f9-b55c-f862052374ab",
            "settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
            "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
            "key":"role_name",
            "value":"member"
         }
      ]
   }
}

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant where to find the Trust in

trust_id

string

ID of the Trust to delete

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication Token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:41:51 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Project/Tenant where to find the Workload

workload_id

string

ID of the Workload to validate the Trust of

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

{
   "scheduler_enabled":true,
   "trust":{
      "created_at":"2021-01-21T11:43:36.000000",
      "updated_at":null,
      "deleted_at":null,
      "deleted":false,
      "version":"4.0.115",
      "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
      "project_id":"4dfe98a43bfa404785a812020066b4d6",
      "user_id":"adfa32d7746a4341b27377d6f7c61adb",
      "value":"1c981a15e7a54242ae54eee6f8d32e6a",
      "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
      "category":"identity",
      "type":"trust_id",
      "public":false,
      "hidden":true,
      "status":"available",
      "metadata":[
         {
            "created_at":"2021-01-21T11:43:36.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"d98d283a-b096-4a68-826a-36f99781787d",
            "settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
            "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
            "key":"role_name",
            "value":"member"
         }
      ]
   },
   "is_valid":true,
   "scheduler_obj":{
      "workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
      "user_id":"adfa32d7746a4341b27377d6f7c61adb",
      "project_id":"4dfe98a43bfa404785a812020066b4d6",
      "user_domain_id":"default",
      "user":"adfa32d7746a4341b27377d6f7c61adb",
      "tenant":"4dfe98a43bfa404785a812020066b4d6"
   }
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:21:57 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3

{
   "trust":[
      {
         "created_at":"2020-11-26T13:10:53.000000",
         "updated_at":null,
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":"cloud_admin",
         "value":"dbe2e160d4c44d7894836a6029644ea0",
         "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
         "category":"identity",
         "type":"trust_id",
         "public":false,
         "hidden":true,
         "status":"available",
         "metadata":[
            {
               "created_at":"2020-11-26T13:10:54.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"e9ec386e-79cf-4f6b-8201-093315648afe",
               "settings_name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
               "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
               "key":"role_name",
               "value":"admin"
            }
         ]
      }
   ]
}

Getting started with Trilio on OpenStack-Helm

1. Prepare for deployment

1.1] Install Helm CLI Client

Ensure the Helm CLI client is installed on the node from which you are installing Trilio for OpenStack.

curl -O https://get.helm.sh/helm-v3.17.2-linux-amd64.tar.gz
tar -zxvf helm*.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
rm -rf linux-amd64 helm*.tar.gz

1.2] Remove Existing Trilio for OpenStack Docker Images

Check whether there are existing Trilio for OpenStack Docker images and remove them before proceeding.

## List existing TrilioVault Docker images
docker images | grep trilio
## Remove existing Trilio for OpenStack Docker images
docker image rm <trilio-docker-img-name>

1.3] Install NFS Client Package (Optional)

If you plan to use NFS as a backup target, install nfs-common on each Kubernetes node where TrilioVault is running. Skip this step for S3 backup targets.

## SSH into each Kubernetes node with Trilio for OpenStack enabled (control plane and compute nodes)
apt-get install nfs-common -y

1.4] Install Necessary Dependencies

Run the following command on the installation node:

sudo apt update -y && sudo apt install make jq -y

2] Clone Helm Chart Repository

Refer to the Resources link to get the release-specific value of the placeholder trilio_branch.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/
helm dep up
cd ../../../

3] Configure Container Image Tags

Select the appropriate image values file based on your OpenStack-Helm setup. Update the Trilio-OpenStack image tags accordingly.

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/2023.2.yaml

4] Create the trilio-openstack Namespace

To isolate trilio-openstack services, create a dedicated Kubernetes namespace:

kubectl create namespace trilio-openstack
kubectl config set-context --current --namespace=trilio-openstack

5] Label Kubernetes Nodes for TrilioVault Control Plane

Trilio for OpenStack control plane services should run on Kubernetes nodes labeled as triliovault-control-plane=enabled. It is recommended to use three Kubernetes nodes for high availability.

Steps:

1] Retrieve the OpenStack control plane node names:

kubectl get nodes --show-labels | grep openstack-control-plane

2] Assign the triliovault-control-plane label to the selected nodes:

kubectl label nodes <NODE_NAME_1> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_2> triliovault-control-plane=enabled
kubectl label nodes <NODE_NAME_3> triliovault-control-plane=enabled

3] Verify the nodes with the assigned label:

kubectl get nodes --show-labels | grep triliovault-control-plane

6] Configure the Backup Target for Trilio-OpenStack

Backup target storage is used to store the backup images taken by Trilio. Provide its details as needed for configuration:

The following backup target types are supported by Trilio

a) NFS

b) S3

Steps:

If using NFS as the backup target, define its details in:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/nfs.yaml

If using S3, configure its details in:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/s3.yaml

If using S3 with TLS enabled and self-signed certificates, store the CA certificate in:

triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem

The deployment scripts will automatically place this certificate in the required location.
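As an optional sanity check, you can confirm the CA certificate file is valid PEM before deploying; this uses standard openssl and assumes the path shown above.

openssl x509 -in triliovault-cfg-scripts/openstack-helm/trilio-openstack/files/s3-cert.pem -noout -subject -issuer -dates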

This YAML file is used by the trilio-openstack 'install' command in a later step of this document.

7] Provide Cloud Admin Credentials in keystone.yaml

The cloud admin user in Keystone must have the admin role on the cloud domain. Update the required credentials:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/keystone.yaml

8] Retrieve and Configure Keystone, Database and RabbitMQ Credentials

1] Fetch Internal and Public Domain Names of the Kubernetes Cluster.

2] Fetch Keystone, RabbitMQ, and Database Admin Credentials.

a) These credentials are required for Trilio deployment.

b) Navigate to the utils directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

c) Generate the admin credentials file using the previously retrieved domain names

./get_admin_creds.sh <internal_domain_name> <public_domain_name>
Example:
./get_admin_creds.sh cluster.local triliodata.demo

3] Verify that the credentials file is created:

cat ../values_overrides/admin_creds.yaml

Please ensure that the correct ca.crt is present inside the secret “trilio-ca-cert” in the openstack namespace. If the secret is created with a different name, make sure to update the reference in the ./get_admin_creds.sh script before executing it.

9] Configure Ceph Storage (if used as Nova/Cinder Backend)

If your setup uses Ceph as the storage backend for Nova/Cinder, configure Ceph settings for Trilio.

Manual Approach

1] Edit the Ceph configuration file:

vi triliovault-cfg-scripts/openstack-helm/trilio-openstack/values_overrides/ceph.yaml

2] Set rbd_user and keyring (the user should have read/write access to the Nova/Cinder pools). By default, the cinder/nova user usually has these permissions, but it’s recommended to verify. Copy the contents of /etc/ceph/ceph.conf into the appropriate Trilio template file:

vi ../templates/bin/_triliovault-ceph.conf.tpl
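If you need to look up the keyring value for the user you set as rbd_user, a command like the following can be run on a Ceph node; the client user name 'cinder' is an assumption, adjust it to your environment.

## Assumption: the Ceph client user configured for Cinder is 'client.cinder'
ceph auth get-key client.cinder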

Automated Approach

1] Run the Ceph configuration script:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils
./get_ceph.sh

2] Verify that the output is correctly written to:

cat ../values_overrides/ceph.yaml

10] Create Docker Registry Credentials Secret

Trilio images are hosted in a private registry. You must create an ImagePullSecret in the trilio-openstack namespace.

1] Navigate to the utilities directory:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils

2] Run the script with Trilio’s Docker registry credentials to create the Kubernetes secret:

./create_image_pull_secret.sh <TRILIO_REGISTRY_USERNAME> <TRILIO_REGISTRY_PASSWORD>

3] Verify that the secret has been created successfully:

kubectl describe secret triliovault-image-registry -n trilio-openstack

11. Install Trilio for OpenStack Helm Chart

11.1] Review the Installation Script

11.1.1] Open the install.sh Script

The install.sh script installs the Trilio Helm chart in the trilio-openstack namespace.

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
vi install.sh

11.1.2] Configure Backup Target

Modify the script to select the appropriate backup target:

a) NFS Backup Target: Default configuration includes nfs.yaml.

b) S3 Backup Target: Replace nfs.yaml with s3.yaml.

Example Configuration for S3:

helm upgrade --install trilio-openstack ./trilio-openstack --namespace=trilio-openstack \
     --values=./trilio-openstack/values_overrides/image_pull_secrets.yaml \
     --values=./trilio-openstack/values_overrides/keystone.yaml \
     --values=./trilio-openstack/values_overrides/s3.yaml \
     --values=./trilio-openstack/values_overrides/2023.2.yaml \
     --values=./trilio-openstack/values_overrides/admin_creds.yaml \
     --values=./trilio-openstack/values_overrides/tls_public_endpoint.yaml \
     --values=./trilio-openstack/values_overrides/ceph.yaml \
     --values=./trilio-openstack/values_overrides/db_drop.yaml \
     --values=./trilio-openstack/values_overrides/ingress.yaml \
     --values=./trilio-openstack/values_overrides/triliovault_passwords.yaml
     echo -e "Waiting for TrilioVault pods to reach running state"
     ./trilio-openstack/utils/wait_for_pods.sh trilio-openstack
     kubectl get pods

11.1.3] Select the Appropriate OpenStack Helm Version

Use the correct YAML file based on your OpenStack Helm Version:

  • Antelope → 2023.1.yaml

  • Bobcat (Default) → 2023.2.yaml

11.1.4] Validate values_overrides Configuration

Ensure the correct configurations are used:

  • Disable Ceph in ceph.yaml if not applicable.

  • Remove tls_public_endpoint.yaml if TLS is unnecessary.

11.2] Uninstall Existing Trilio for OpenStack Chart

For a fresh install, uninstall any previous deployment by following the OpenStack-Helm Uninstall Guide.

11.3] Run the Installation Script

Execute the installation:

cd triliovault-cfg-scripts/openstack-helm/trilio-openstack/utils/
./install.sh

11.4] Configure DNS for Public Endpoints

11.4.1] Retrieve Ingress External IP

kubectl get service -n openstack | grep LoadBalancer
Example Output:

public-openstack      LoadBalancer   10.105.43.185    192.168.2.50   80:30162/TCP,443:30829/TCP

11.4.2] Fetch TrilioVault FQDNs

kubectl -n trilio-openstack get ingress
Example Output:


root@master:~# kubectl -n trilio-openstack get ingress
NAME                                   CLASS           HOSTS                                                                                                                   ADDRESS       PORTS     AGE
triliovault-datamover                  nginx           triliovault-datamover,triliovault-datamover.trilio-openstack,triliovault-datamover.trilio-openstack.svc.cluster.local   192.168.2.5   80        14h
triliovault-datamover-cluster-fqdn     nginx-cluster   triliovault-datamover.triliodata.demo                                                                                                 80, 443   14h
triliovault-datamover-namespace-fqdn   nginx           triliovault-datamover.triliodata.demo                                                                                   192.168.2.5   80, 443   14h
triliovault-wlm                        nginx           triliovault-wlm,triliovault-wlm.trilio-openstack,triliovault-wlm.trilio-openstack.svc.cluster.local                     192.168.2.5   80        14h
triliovault-wlm-cluster-fqdn           nginx-cluster   triliovault-wlm.triliodata.demo                                                                                                       80, 443   14h
triliovault-wlm-namespace-fqdn         nginx           triliovault-wlm.triliodata.demo                                                                                         192.168.2.5   80, 443   14h

If the ingress service doesn’t have an IP assigned, follow these steps:

1] Check the Ingress Controller Deployment

Look for the ingress-nginx-controller deployment, typically in the ingress-nginx or kube-system namespace:

kubectl -n openstack get deployment ingress-nginx-controller -o yaml | grep watch-namespace

2] Verify the --watch-namespace Arg

If the controller has a --watch-namespace argument, it means it’s watching only specific namespaces for ingress resources.

3] Update watch-namespace to include trilio-openstack

Edit the deployment to include trilio-openstack in the comma-separated list of namespaces:

kubectl edit deployment ingress-nginx-controller -n ingress-nginx

Example --watch-namespace arg:

--watch-namespace=trilio-openstack,openstack

4] Restart the Controller

This will happen automatically when you edit the deployment, but you can manually trigger it if needed:

kubectl rollout restart deployment ingress-nginx-controller -n openstack

11.5] Verify Installation

11.5.1] Check Helm Release Status

helm status trilio-openstack

11.5.2] Validate Deployed Containers

Ensure correct image versions are used by checking container tags or SHA digests.
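For example, the images in use can be listed directly from the pod specs with standard kubectl (no Trilio-specific assumptions):

## List the unique container images currently deployed in the trilio-openstack namespace
kubectl get pods -n trilio-openstack -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u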

11.5.3] Verify Pod Status

kubectl get pods -n trilio-openstack
Example Output:


NAME                                                 READY   STATUS      RESTARTS   AGE
triliovault-datamover-api-5c7fbb949c-2m8dc           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-kxspg           1/1     Running     0          21h
triliovault-datamover-api-5c7fbb949c-z4wkn           1/1     Running     0          21h
triliovault-datamover-db-init-7k7jg                  0/1     Completed   0          21h
triliovault-datamover-db-sync-6jkgs                  0/1     Completed   0          21h
triliovault-datamover-ks-endpoints-gcrht             0/3     Completed   0          21h
triliovault-datamover-ks-service-nnnvh               0/1     Completed   0          21h
triliovault-datamover-ks-user-td44v                  0/1     Completed   0          20h
triliovault-datamover-openstack-compute-node-4gkv8   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-6lbc4   1/1     Running     0          21h
triliovault-datamover-openstack-compute-node-pqslx   1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-52449                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-h47mw                 1/1     Running     0          21h
triliovault-wlm-api-7647c4b45c-rjbvl                 1/1     Running     0          21h
triliovault-wlm-cloud-trust-h8xgq                    0/1     Completed   0          20h
triliovault-wlm-cron-574ff78486-54rqg                1/1     Running     0          21h
triliovault-wlm-db-init-hvk65                        0/1     Completed   0          21h
triliovault-wlm-db-sync-hpl4c                        0/1     Completed   0          21h
triliovault-wlm-ks-endpoints-4bsxl                   0/3     Completed   0          21h
triliovault-wlm-ks-service-btcb4                     0/1     Completed   0          21h
triliovault-wlm-ks-user-gnfdh                        0/1     Completed   0          20h
triliovault-wlm-rabbit-init-ws262                    0/1     Completed   0          21h
triliovault-wlm-scheduler-669f4758b4-ks7qr           1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-mj8p2            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-th6f4            1/1     Running     0          21h
triliovault-wlm-workloads-5ff86448c-zhr4m            1/1     Running     0          21h

11.5.4] Check Job Completion

kubectl get jobs -n trilio-openstack

Example Output:

NAME                                 COMPLETIONS   DURATION   AGE
triliovault-datamover-db-init        1/1           5s         21h
triliovault-datamover-db-sync        1/1           8s         21h
triliovault-datamover-ks-endpoints   1/1           17s        21h
triliovault-datamover-ks-service     1/1           18s        21h
triliovault-datamover-ks-user        1/1           19s        21h
triliovault-wlm-cloud-trust          1/1           2m10s      20h
triliovault-wlm-db-init              1/1           5s         21h
triliovault-wlm-db-sync              1/1           20s        21h
triliovault-wlm-ks-endpoints         1/1           17s        21h
triliovault-wlm-ks-service           1/1           17s        21h
triliovault-wlm-ks-user              1/1           19s        21h
triliovault-wlm-rabbit-init          1/1           4s         21h

11.5.5] Verify NFS Backup Target (if applicable)

kubectl get pvc -n trilio-openstack

Example Output:

triliovault-nfs-pvc-172-25-0-10-mnt-tvault-42424   Bound   triliovault-nfs-pv-172-25-0-10-mnt-tvault-42424   20Gi   RWX   nfs   6d

11.5.6] Validate S3 Backup Target (if applicable)

Ensure S3 is correctly mounted on all WLM pods.
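A quick way to check is to inspect the filesystem mounts inside one of the WLM pods; the exact path of the S3 fuse mount depends on your configuration, so look for the backup-target mount in the output.

## Replace the pod name with any running triliovault-wlm pod from 'kubectl get pods -n trilio-openstack'
kubectl -n trilio-openstack exec -it <triliovault-wlm-api-pod-name> -- df -h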

Trilio-OpenStack Helm Chart Installation is Done!

Logs:

1] triliovault-datamover-api service logs.

Logs are available on the Kubernetes nodes

kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover-api/triliovault-datamover-api.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover-api pods 
kubectl get pods | grep triliovault-datamover-api
-- See logs 
kubectl logs <triliovault-datamover-api-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-api
triliovault-datamover-api-c87899fb7-dq2sd            1/1     Running     0          3d18h
triliovault-datamover-api-c87899fb7-j4fdz            1/1     Running     0          3d18h
triliovault-datamover-api-c87899fb7-nm8pt            1/1     Running     0          3d18h
root@helm1:~# kubectl logs triliovault-datamover-api-c87899fb7-dq2sd

2] triliovault-datamover service logs

Logs are available on the Kubernetes nodes

kubectl get nodes --show-labels | grep openstack-compute-node
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- See logs
vi /var/log/triliovault-datamover/triliovault-datamover.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-datamover-api pods 
kubectl get pods | grep triliovault-datamover-openstack
-- See logs 
kubectl logs <triliovault-datamover-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-datamover-openstack
triliovault-datamover-openstack-compute-node-2krmj   1/1     Running     0          3d19h
triliovault-datamover-openstack-compute-node-9f5w7   1/1     Running     0          3d19h
root@helm1:~# kubectl logs triliovault-datamover-openstack-compute-node-2krmj

3] triliovault-wlm-api, triliovault-wlm-cron, triliovault-wlm-scheduler, triliovault-wlm-workloads services logs

Logs are available on the Kubernetes nodes

kubectl get nodes --show-labels | grep triliovault-control-plane
-- SSH to these Kubernetes nodes
ssh <KUBERNETES_NODE_NAME>
-- Log files are available in following directory.
ls /var/log/triliovault-wlm/
## Sample command output
root@helm4:~# ls -ll /var/log/triliovault-wlm/
total 26576
-rw-r--r-- 1 42424 42424  2079322 Mar 20 07:55 triliovault-wlm-api.log
-rw-r--r-- 1 42424 42424 25000088 Mar 20 00:41 triliovault-wlm-api.log.1
-rw-r--r-- 1 42424 42424    12261 Mar 16 12:40 triliovault-wlm-cron.log
-rw-r--r-- 1 42424 42424    10263 Mar 16 12:36 triliovault-wlm-scheduler.log
-rw-r--r-- 1 42424 42424    87918 Mar 16 12:36 triliovault-wlm-workloads.log
## Other approach: kubectl stdout and stderr logs
-- List triliovault-wlm services pods
kubectl get pods | grep triliovault-wlm
-- See logs 
kubectl logs <triliovault-wlm-service-pod-name>
# Example:
root@helm1:~# kubectl get pods | grep triliovault-wlm
triliovault-wlm-api-7b956f7b8-84gtw                  1/1     Running     0          3d19h
triliovault-wlm-api-7b956f7b8-85mdk                  1/1     Running     0          3d19h
triliovault-wlm-api-7b956f7b8-hpcpt                  1/1     Running     0          3d19h
triliovault-wlm-cloud-trust-rdh8n                    0/1     Completed   0          3d19h
triliovault-wlm-cron-78bdb4b959-wzrfs                1/1     Running     0          3d19h
triliovault-wlm-db-drop-dhfgj                        0/1     Completed   0          3d19h
triliovault-wlm-db-init-snrsr                        0/1     Completed   0          3d19h
triliovault-wlm-db-sync-wffk5                        0/1     Completed   0          3d19h
triliovault-wlm-ks-endpoints-zvqtf                   0/3     Completed   0          3d19h
triliovault-wlm-ks-service-6425q                     0/1     Completed   0          3d19h
triliovault-wlm-ks-user-fmgsx                        0/1     Completed   0          3d19h
triliovault-wlm-rabbit-init-vsdn6                    0/1     Completed   0          3d19h
triliovault-wlm-scheduler-649b95ffd6-bkqxt           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-2kjdq           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-mxvhp           1/1     Running     0          3d19h
triliovault-wlm-workloads-6b98679d45-v4dn8           1/1     Running     0          3d19h
# kubectl logs triliovault-wlm-api-7b956f7b8-84gtw
# kubectl logs triliovault-wlm-cron-78bdb4b959-wzrfs
# kubectl logs triliovault-wlm-scheduler-649b95ffd6-bkqxt
# kubectl logs triliovault-wlm-workloads-6b98679d45-mxvhp

12] Install Trilio for OpenStack Horizon Plugin

Below are the steps to patch the Horizon deployment in an OpenStack Helm setup to install the Trilio Horizon Plugin.

12.1] Pre-requisites

  • Horizon is deployed via OpenStack Helm and is running in the openstack namespace.

  • Docker registry secret triliovault-image-registry must already exist in the openstack namespace from the steps performed during Trilio Installation.

kubectl describe secret triliovault-image-registry -n openstack
  • If not already created, create it with the following command:

kubectl create secret docker-registry triliovault-image-registry \
  --docker-server="docker.io" \
  --docker-username=<TRILIO_REGISTRY_USERNAME> \
  --docker-password=<TRILIO_REGISTRY_PASSWORD> \
  -n openstack

12.2] Patch Horizon Deployment

Use the command below to patch the Horizon deployment with the Trilio Horizon Plugin image. Update the image tag as needed for your release.

kubectl -n openstack patch deployment horizon \
  --type='strategic' \
  -p '{
    "spec": {
      "template": {
        "spec": {
          "containers": [
            {
              "name": "horizon",
              "image": "docker.io/trilio/trilio-horizon-plugin-helm:6.1.0-dev-maint1-1-2023.2"
            }
          ],
          "imagePullSecrets": [
            {
              "name": "triliovault-image-registry"
            }
          ]
        }
      }
    }
  }'

12.3] Verification

After patching:

  1. Ensure the Horizon pods are restarted and running with the new image:

kubectl get pods -n openstack -l application=horizon,component=server -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n'
  2. Access the Horizon dashboard and verify the TrilioVault section appears in the UI.

Workload Import and Migration

Import Workload List

GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads

Provides the list of all importable workloads

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant/Project to work in

Query Parameters

Name
Type
Description

project_id

string

restricts the output to the given project

Headers

Name
Type
Description

X-Auth-Project-Id

string

project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:34:10 GMT
Content-Type: application/json
Content-Length: 7888
Connection: keep-alive
X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339

{
   "workloads":[
      {
         "created_at":"2020-11-02T13:40:06.000000",
         "updated_at":"2020-11-09T09:53:30.000000",
         "id":"18b809de-d7c8-41e2-867d-4a306407fb11",
         "user_id":"ccddc7e7a015487fa02920f4d4979779",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "availability_zone":"nova",
         "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
         "name":"Workload_1",
         "description":"no-description",
         "interval":null,
         "storage_usage":null,
         "instances":null,
         "metadata":[
            {
               "created_at":"2020-11-09T09:57:23.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
               "value":"c8468146-8117-48a4-bfd7-49381938f636"
            },
            {
               "created_at":"2020-11-05T10:27:06.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
               "value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
            },
            {
               "created_at":"2020-11-09T09:37:20.000000",
               "updated_at":"2020-11-09T09:57:23.000000",
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"61615532-6165-45a2-91e2-fbad9eb0b284",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"b083bb70-e384-4107-b951-8e9e7bbac380",
               "value":"c8468146-8117-48a4-bfd7-49381938f636"
            },
            {
               "created_at":"2020-11-02T13:40:24.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"backup_media_target",
               "value":"10.10.2.20:/upstream"
            },
            {
               "created_at":"2020-11-05T10:27:14.000000",
               "updated_at":"2020-11-09T09:57:23.000000",
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"e33c1eea-c533-4945-864d-0da1fc002070",
               "value":"c8468146-8117-48a4-bfd7-49381938f636"
            },
            {
               "created_at":"2020-11-02T13:40:06.000000",
               "updated_at":"2020-11-02T14:10:30.000000",
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"hostnames",
               "value":"[\"cirros-1\", \"cirros-2\"]"
            },
            {
               "created_at":"2020-11-02T13:40:06.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"093a1221-edb6-4957-8923-cf271f7e43ce",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"pause_at_snapshot",
               "value":"0"
            },
            {
               "created_at":"2020-11-02T13:40:06.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"79baaba8-857e-410f-9d2a-8b14670c4722",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"policy_id",
               "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
            },
            {
               "created_at":"2020-11-02T13:40:06.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"preferredgroup",
               "value":"[]"
            },
            {
               "created_at":"2020-11-02T14:10:30.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"topology",
               "value":"\"\\\"\\\"\""
            },
            {
               "created_at":"2020-11-02T13:40:23.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "key":"workload_approx_backup_size",
               "value":"6"
            }
         ],
         "jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
         "status":"locked",
         "error_msg":null,
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
            }
         ],
         "scheduler_trust":null
      }
   ]
}

Orphaned Workload List

GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads

Provides the list of all orphaned workloads

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio Service

tenant_id

string

ID of the Tenant/Project to work in

Query Parameters

Name
Type
Description

migrate_cloud

boolean

True also shows Workloads from different clouds

Headers

Name
Type
Description

X-Auth-Project-Id

string

project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:42:01 GMT
Content-Type: application/json
Content-Length: 120143
Connection: keep-alive
X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c

{
   "workloads":[
      {
         "created_at":"2019-04-24T14:09:20.000000",
         "updated_at":"2019-05-16T09:10:17.000000",
         "id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
         "user_id":"6ef8135faedc4259baac5871e09f0044",
         "project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
         "availability_zone":"nova",
         "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
         "name":"comdirect_test",
         "description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
         "interval":null,
         "storage_usage":null,
         "instances":null,
         "metadata":[
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":false,
               "created_at":"2019-05-16T09:13:54.000000",
               "updated_at":null,
               "value":"ca544215-1182-4a8f-bf81-910f5470887a",
               "version":"3.2.46",
               "key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
               "deleted_at":null,
               "id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":true,
               "created_at":"2019-04-24T14:09:30.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"10.10.2.20:/upstream",
               "version":"3.2.46",
               "key":"backup_media_target",
               "deleted_at":"2019-05-16T09:01:23.000000",
               "id":"02dd0630-7118-485c-9e42-b01d23aa882c"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":false,
               "created_at":"2019-05-16T09:13:51.000000",
               "updated_at":null,
               "value":"51693eca-8714-49be-b409-f1f1709db595",
               "version":"3.2.46",
               "key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
               "deleted_at":null,
               "id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":true,
               "created_at":"2019-04-24T14:09:20.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
               "version":"3.2.46",
               "key":"hostnames",
               "deleted_at":"2019-05-16T09:01:23.000000",
               "id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":false,
               "created_at":"2019-04-24T14:09:20.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"0",
               "version":"3.2.46",
               "key":"pause_at_snapshot",
               "deleted_at":null,
               "id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":true,
               "created_at":"2019-04-24T14:09:20.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"[]",
               "version":"3.2.46",
               "key":"preferredgroup",
               "deleted_at":"2019-05-16T09:01:23.000000",
               "id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":true,
               "created_at":"2019-04-24T14:11:49.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"\"\\\"\\\"\"",
               "version":"3.2.46",
               "key":"topology",
               "deleted_at":"2019-05-16T09:01:23.000000",
               "id":"77e436c0-0921-4919-97f4-feb58fb19e06"
            },
            {
               "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
               "deleted":true,
               "created_at":"2019-04-24T14:09:30.000000",
               "updated_at":"2019-05-16T09:01:23.000000",
               "value":"121",
               "version":"3.2.46",
               "key":"workload_approx_backup_size",
               "deleted_at":"2019-05-16T09:01:23.000000",
               "id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
            }
         ],
         "jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
         "status":"locked",
         "error_msg":null,
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
            }
         ],
         "scheduler_trust":null
      }
   ]
}

Import Workload

POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads

Imports all or the provided workloads

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of the Trilio Service

tenant_id

string

ID of the Tenant/Project to take the Snapshot in

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run authentication against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 11:03:55 GMT
Content-Type: application/json
Content-Length: 100
Connection: keep-alive
X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839

{
   "workloads":{
      "imported_workloads":[
         "faa03-f69a-45d5-a6fc-ae0119c77974"        
      ],
      "failed_workloads":[
 
      ]
   }
}

Body format

{
   "workload_ids":[
      "<workload_id>"
   ],
   "upgrade":true
}
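For reference, a minimal request sketch combining the endpoint, headers, and body format documented above; the token, project, and workload ID values are placeholders.

curl -k -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads" \
     -H "X-Auth-Token: <token>" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"workload_ids": ["<workload_id>"], "upgrade": true}'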

Workload Quotas

List Quota Types

GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types

Lists all available Quota Types

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of Tenant/Project

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:40:56 GMT
Content-Type: application/json
Content-Length: 1625
Connection: keep-alive
X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe

{
   "quota_types":[
      {
         "created_at":"2020-10-19T10:05:52.000000",
         "updated_at":"2020-10-19T10:07:32.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
         "display_name":"Workloads",
         "display_description":"Total number of workload creation allowed per project",
         "status":"available"
      },
      {
         "created_at":"2020-10-19T10:05:52.000000",
         "updated_at":"2020-10-19T10:07:32.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"b7273a06-2e08-11ea-889c-7440bb00b67d",
         "display_name":"Snapshots",
         "display_description":"Total number of snapshot creation allowed per project",
         "status":"available"
      },
      {
         "created_at":"2020-10-19T10:05:52.000000",
         "updated_at":"2020-10-19T10:07:32.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"be323f58-2e08-11ea-889c-7440bb00b67d",
         "display_name":"VMs",
         "display_description":"Total number of VMs allowed per project",
         "status":"available"
      },
      {
         "created_at":"2020-10-19T10:05:52.000000",
         "updated_at":"2020-10-19T10:07:32.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"c61324d0-2e08-11ea-889c-7440bb00b67d",
         "display_name":"Volumes",
         "display_description":"Total number of volume attachments allowed per project",
         "status":"available"
      },
      {
         "created_at":"2020-10-19T10:05:52.000000",
         "updated_at":"2020-10-19T10:07:32.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
         "display_name":"Storage",
         "display_description":"Total storage (in Bytes) allowed per project",
         "status":"available"
      }
   ]
}

Show Quota Type

GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>

Requests the details of a Quota Type

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of Tenant/Project to work in

quota_type_id

string

ID of the Quota Type to show

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:44:43 GMT
Content-Type: application/json
Content-Length: 342
Connection: keep-alive
X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b

{
   "quota_type":{
      "created_at":"2020-10-19T10:05:52.000000",
      "updated_at":"2020-10-19T10:07:32.000000",
      "deleted_at":null,
      "deleted":false,
      "version":"4.0.115",
      "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
      "display_name":"Workloads",
      "display_description":"Total number of workload creation allowed per project",
      "status":"available"
   }
}

Create allowed Quota

POST https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

Creates an allowed Quota with the given parameters

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to work in

project_id

string

ID of the Tenant/Project to create the allowed Quota in

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:51:51 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927

{
   "allowed_quotas":[
      {
         
      }
   ]
}

Body Format

{
   "allowed_quotas":[
      {
         "project_id":"<project_id>",
         "quota_type_id":"<quota_type_id>",
         "allowed_value":"<integer>",
         "high_watermark":"<Integer>"
      }
   ]
}
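
A minimal curl sketch of this request, assuming the shell variables $tvm_address, $tenant_id and $TOKEN are set and the placeholders are replaced with real IDs and values; the body follows the format shown above.

curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/<project_id>" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas": [{"project_id": "<project_id>", "quota_type_id": "<quota_type_id>", "allowed_value": "10", "high_watermark": "8"}]}'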

List allowed Quota

GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

Lists all allowed Quotas for a given project.

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to work in

project_id

string

ID of the Tenant/Project to list allowed Quotas from

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:01:39 GMT
Content-Type: application/json
Content-Length: 766
Connection: keep-alive
X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0

{
   "allowed_quotas":[
      {
         "id":"262b117d-e406-4209-8964-004b19a8d422",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
         "allowed_value":5,
         "high_watermark":4,
         "version":"4.0.115",
         "quota_type_name":"Workloads"
      },
      {
         "id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
         "allowed_value":-1,
         "high_watermark":-1,
         "version":"4.0.115",
         "quota_type_name":"Storage"
      },
      {
         "id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
         "allowed_value":50,
         "high_watermark":25,
         "version":"4.0.115",
         "quota_type_name":"VMs"
      }
   ]
}

Show allowed Quota

GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>

Shows details for a given allowed Quota

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to work in

<allowed_quota_id>

string

ID of the allowed Quota to show

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:15:07 GMT
Content-Type: application/json
Content-Length: 268
Connection: keep-alive
X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7

{
   "allowed_quotas":{
      "id":"262b117d-e406-4209-8964-004b19a8d422",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
      "allowed_value":5,
      "high_watermark":4,
      "version":"4.0.115",
      "quota_type_name":"Workloads"
   }
}

Update allowed Quota

PUT https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>

Updates an allowed Quota with the given parameters

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to work in

<allowed_quota_id>

string

ID of the allowed Quota to update

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:24:04 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2

{"allowed_quotas": [{}]}

Body Format

{
   "allowed_quotas":{
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "allowed_value":"20000",
      "high_watermark":"18000"
   }
}
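
A minimal curl sketch of this request, assuming $tvm_address, $tenant_id and $TOKEN are set and the placeholders are replaced; the body follows the format shown above.

curl -k -X PUT "https://$tvm_address:8780/v1/$tenant_id/update_allowed_quota/<allowed_quota_id>" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas": {"project_id": "<project_id>", "allowed_value": "20000", "high_watermark": "18000"}}'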

Delete allowed Quota

DELETE https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>

Deletes a given allowed Quota

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to work in

<allowed_quota_id>

string

ID of the allowed Quota to delete

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:33:09 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
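
A minimal curl sketch of this request, assuming $tvm_address, $tenant_id and $TOKEN are set and <allowed_quota_id> is replaced with a real ID.

curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/<allowed_quota_id>" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"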

Snapshots

List Snapshots

GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots

Lists all Snapshots.

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Projects to fetch the Snapshots from

Query Parameters

Name
Type
Description

host

string

host name of the TVM that took the Snapshot

workload_id

string

ID of the Workload to list the Snapshots of

date_from

string

starting date of Snapshots to show (Format: YYYY-MM-DDTHH:MM:SS)

date_to

string

ending date of Snapshots to show (Format: YYYY-MM-DDTHH:MM:SS)

all

boolean

admin role required - True lists all Snapshots of all Workloads

Headers

Name
Type
Description

X-Auth-Project-Id

string

project to run the authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 12:58:38 GMT
Content-Type: application/json
Content-Length: 266
Connection: keep-alive
X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9

{
   "snapshots":[
      {
         "id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
         "created_at":"2020-11-02T14:03:18.000000",
         "status":"available",
         "snapshot_type":"full",
         "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
         "name":"snapshot",
         "description":"-",
         "host":"TVM1"
      }
   ]
}
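
A minimal curl sketch of this request with the optional query parameters, assuming $tvm_address, $tenant_id and $TOKEN are set; the workload ID and the date values are illustrative placeholders.

curl -k "https://$tvm_address:8780/v1/$tenant_id/snapshots?workload_id=<workload_id>&date_from=2020-11-01T00:00:00&date_to=2020-11-30T23:59:59" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"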

Take Snapshot

POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of the Trilio Service

tenant_id

string

ID of the Tenant/Project to take the Snapshot in

workload_id

string

ID of the Workload to take the Snapshot in

Query Parameters

Name
Type
Description

full

boolean

True creates a full Snapshot

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run authentication against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 13:58:38 GMT
Content-Type: application/json
Content-Length: 283
Connection: keep-alive
X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c

{
   "snapshot":{
      "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
      "created_at":"2020-11-04T13:58:37.694637",
      "status":"creating",
      "snapshot_type":"full",
      "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
      "name":"API taken 2",
      "description":"API taken description 2",
      "host":""
   }
}

Body format

When creating a Snapshot it is possible to provide additional information

This Body is completely optional

{
   "snapshot":{
      "is_scheduled":<true/false>,
      "name":"<name>",
      "description":"<description>"
   }
}
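
A minimal curl sketch combining the endpoint, the full query parameter and the optional body, assuming $tvm_address, $tenant_id and $TOKEN are set and <workload_id> is replaced; the name and description values are illustrative only.

curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/<workload_id>?full=True" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"snapshot": {"is_scheduled": false, "name": "api-snapshot-1", "description": "snapshot taken via the API"}}'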

Show Snapshot

GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

Shows the details of a specified Snapshot

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of the Trilio Service

tenant_id

string

ID of the Tenant/Project to take the Snapshot from

snapshot_id

string

ID of the Snapshot to show

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:07:18 GMT
Content-Type: application/json
Content-Length: 6609
Connection: keep-alive
X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd

{
   "snapshot":{
      "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
      "created_at":"2020-11-04T13:58:37.000000",
      "updated_at":"2020-11-04T14:06:03.000000",
      "finished_at":"2020-11-04T14:06:03.000000",
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "status":"available",
      "snapshot_type":"full",
      "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
      "instances":[
         {
            "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
            "name":"cirros-2",
            "status":"available",
            "metadata":{
               "availability_zone":"nova",
               "config_drive":"",
               "data_transfer_time":"0",
               "object_store_transfer_time":"0",
               "root_partition_type":"Linux",
               "trilio_ordered_interfaces":"192.168.100.80",
               "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "workload_name":"Workload_1"
            },
            "flavor":{
               "vcpus":"1",
               "ram":"512",
               "disk":"1",
               "ephemeral":"0"
            },
            "security_group":[
               {
                  "name":"default",
                  "security_group_type":"neutron"
               }
            ],
            "nics":[
               {
                  "mac_address":"fa:16:3e:cf:10:91",
                  "ip_address":"192.168.100.80",
                  "network":{
                     "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                     "name":"robert_internal",
                     "cidr":null,
                     "network_type":"neutron",
                     "subnet":{
                        "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                        "name":"robert_internal",
                        "cidr":"192.168.100.0/24",
                        "ip_version":4,
                        "gateway_ip":"192.168.100.1"
                     }
                  }
               }
            ],
            "vdisks":[
               {
                  "label":null,
                  "resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
                  "restore_size":1073741824,
                  "vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                  "volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
                  "volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
                  "volume_size":"1",
                  "volume_type":"iscsi",
                  "volume_mountpoint":"/dev/vda",
                  "availability_zone":"nova",
                  "metadata":{
                     "readonly":"False",
                     "attached_mode":"rw"
                  }
               }
            ]
         },
         {
            "id":"e33c1eea-c533-4945-864d-0da1fc002070",
            "name":"cirros-1",
            "status":"available",
            "metadata":{
               "availability_zone":"nova",
               "config_drive":"",
               "data_transfer_time":"0",
               "object_store_transfer_time":"0",
               "root_partition_type":"Linux",
               "trilio_ordered_interfaces":"192.168.100.176",
               "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
               "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
               "workload_name":"Workload_1"
            },
            "flavor":{
               "vcpus":"1",
               "ram":"512",
               "disk":"1",
               "ephemeral":"0"
            },
            "security_group":[
               {
                  "name":"default",
                  "security_group_type":"neutron"
               }
            ],
            "nics":[
               {
                  "mac_address":"fa:16:3e:cf:4d:27",
                  "ip_address":"192.168.100.176",
                  "network":{
                     "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                     "name":"robert_internal",
                     "cidr":null,
                     "network_type":"neutron",
                     "subnet":{
                        "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                        "name":"robert_internal",
                        "cidr":"192.168.100.0/24",
                        "ip_version":4,
                        "gateway_ip":"192.168.100.1"
                     }
                  }
               }
            ],
            "vdisks":[
               {
                  "label":null,
                  "resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
                  "restore_size":1073741824,
                  "vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                  "volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                  "volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                  "volume_size":"1",
                  "volume_type":"iscsi",
                  "volume_mountpoint":"/dev/vda",
                  "availability_zone":"nova",
                  "metadata":{
                     "readonly":"False",
                     "attached_mode":"rw"
                  }
               }
            ]
         }
      ],
      "name":"API taken 2",
      "description":"API taken description 2",
      "host":"TVM1",
      "size":44171264,
      "restore_size":2147483648,
      "uploaded_size":44171264,
      "progress_percent":100,
      "progress_msg":"Snapshot of workload is complete",
      "warning_msg":null,
      "error_msg":null,
      "time_taken":428,
      "pinned":false,
      "metadata":[
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"backup_media_target",
            "value":"10.10.2.20:/upstream"
         },
         {
            "created_at":"2020-11-04T13:58:37.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"cancel_requested",
            "value":"0"
         },
         {
            "created_at":"2020-11-04T14:05:29.000000",
            "updated_at":"2020-11-04T14:05:45.000000",
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"data_transfer_time",
            "value":"0"
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"hostnames",
            "value":"[\"cirros-1\", \"cirros-2\"]"
         },
         {
            "created_at":"2020-11-04T14:05:29.000000",
            "updated_at":"2020-11-04T14:05:45.000000",
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"object_store_transfer_time",
            "value":"0"
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"pause_at_snapshot",
            "value":"0"
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"572a0b21-a415-498f-b7fa-6144d850ef56",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"policy_id",
            "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"preferredgroup",
            "value":"[]"
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"topology",
            "value":"\"\\\"\\\"\""
         },
         {
            "created_at":"2020-11-04T14:05:57.000000",
            "updated_at":null,
            "deleted_at":null,
            "deleted":false,
            "version":"4.0.115",
            "id":"33762790-8743-4e20-9f50-3505a00dbe76",
            "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
            "key":"workload_approx_backup_size",
            "value":"6"
         }
      ],
      "restores_info":""
   }
}

Delete Snapshot

DELETE https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

Deletes a specified Snapshot

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to find the Snapshot in

snapshot_id

string

ID of the Snapshot to delete

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:18:36 GMT
Content-Type: application/json
Content-Length: 56
Connection: keep-alive
X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc

{"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
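
A minimal curl sketch of this request, assuming $tvm_address, $tenant_id and $TOKEN are set and <snapshot_id> is replaced with a real ID; the returned task ID can be used to track the deletion.

curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/snapshots/<snapshot_id>" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"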

Cancel Snapshot

GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel

Cancels the Snapshot process of a given Snapshot

Path Parameters

Name
Type
Description

tvm_address

string

IP or FQDN of Trilio service

tenant_id

string

ID of the Tenant/Project to find the Snapshot in

snapshot_id

string

ID of the Snapshot to cancel

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run authentication against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:26:44 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d

Restores

Definition

A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.

List of Restores

Using Horizon

To reach the list of Restores for a Snapshot follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to show

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the Snapshot Name

  9. Navigate to the Restores tab

Using CLI

workloadmgr restore-list [--snapshot_id <snapshot_id>]
  • --snapshot_id <snapshot_id> ➡️ ID of the Snapshot to show the restores of

Restores overview

Using Horizon

To reach the detailed Restore overview follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to show

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the Snapshot Name

  9. Navigate to the Restores tab

  10. Identify the restore to show

  11. Click the restore name

Details Tab

The Restore Details Tab shows the most important information about the Restore.

  • Name

  • Description

  • Restore Type

  • Status

  • Time taken

  • Size

  • Progress Message

  • Progress

  • Host

  • Restore Options

The Restore Options are the restore.json provided to Trilio.

  • List of VMs restored

    • restored VM Name

    • restored VM Status

    • restored VM ID

Misc Tab

The Misc tab provides additional Metadata information.

  • Creation Time

  • Restore ID

  • Snapshot ID containing the Restore

  • Workload

Using CLI

workloadmgr restore-show [--output <output>] <restore_id>
  • <restore_id> ➡️ ID of the restore to be shown

  • --output <output> ➡️ Option to get additional restore details. Specify --output metadata for restore metadata, or --output networks, --output subnets, --output routers, --output flavors for the respective restored resources. A usage example follows.
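
For example, assuming a valid restore ID, the following calls show the basic restore details and the additional metadata respectively:

workloadmgr restore-show <restore_id>
workloadmgr restore-show --output metadata <restore_id>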

Delete a Restore

Once a Restore is no longer needed, it can be safely deleted from a Workload.

Deleting a Restore will only delete the Trilio information about this Restore. No OpenStack resources are getting deleted.

Using Horizon

There are 2 possibilities to delete a Restore.

Possibility 1: Single Restore deletion through the submenu

To delete a single Restore through the submenu follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to delete

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the Snapshot Name

  9. Navigate to the Restore tab

  10. Click "Delete Restore" in the line of the restore in question

  11. Confirm by clicking "Delete Restore"

Possibility 2: Multiple Restore deletion through a checkbox in Snapshot overview

To delete one or more Restores through the Restore list do the following:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to show

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshots in the Snapshot list

  8. Enter the Snapshot by clicking the Snapshot name

  9. Navigate to the Restore tab

  10. Check the checkbox for each Restore that shall be deleted

  11. Click "Delete Restore" in the menu above

  12. Confirm by clicking "Delete Restore"

Using CLI

workloadmgr restore-delete <restore_id>
  • <restore_id> ➡️ ID of the restore to be deleted

Cancel a Restore

Ongoing Restores can be canceled.

Using Horizon

To cancel a Restore in Horizon follow these steps:

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to delete

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the searched Snapshot in the Snapshot list

  8. Click the Snapshot Name

  9. Navigate to the Restore tab

  10. Identify the ongoing Restore

  11. Click "Cancel Restore" in the line of the restore in question

  12. Confirm by clicking "Cancel Restore"

Using CLI

workloadmgr restore-cancel <restore_id>
  • <restore_id> ➡️ ID of the restore to be canceled

One Click Restore

The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

  • be located in the same cluster in the same datacenter

  • use the same storage domain

  • connect to the same network

  • have the same flavor

The user can't change any Metadata.

The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one of these VMs still exists, the One Click Restore will fail.

The One Click Restore will automatically update the Workload to protect the restored VMs.

Using Horizon

There are 2 possibilities to start a One Click Restore.

Possibility 1: From the Snapshot list

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click "One Click Restore" in the same line as the identified Snapshot

  9. (Optional) Provide a name / description

  10. Click "Create"

Possibility 2: From the Snapshot overview

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click the Snapshot Name

  9. Navigate to the "Restores" tab

  10. Click "One Click Restore"

  11. (Optional) Provide a name / description

  12. Click "Create"

Using CLI

workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
                                      [--display-description <display-description>]
                                      <snapshot_id>
  • <snapshot_id> ➡️ ID of the snapshot to restore.

  • --display-name <display-name> ➡️ Optional name for the restore.

  • --display-description <display-description> ➡️ Optional description for restore.
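
For example, assuming a valid snapshot ID, the following call starts a One Click Restore with an illustrative name and description:

workloadmgr snapshot-oneclick-restore --display-name "oneclick-restore-1" \
                                      --display-description "Restore after loss of the original VMs" \
                                      <snapshot_id>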

Selective Restore

The Selective Restore is the most complex restore Trilio has to offer. It allows the user to adapt the restored VMs to their exact needs.

With the selective restore the following things can be changed:

  • Which VMs are getting restored

  • Name of the restored VMs

  • Which networks to connect with

  • Which Storage domain to use

  • Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

The Selective Restore is always available and does not have any prerequisites.

The Selective Restore will automatically update the Workload to protect the created instance if the original instance no longer exists.

Using Horizon

There are 2 possibilities to start a Selective Restore.

Possibility 1: From the Snapshot list

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  9. Click on "Selective Restore"

  10. Configure the Selective Restore as desired

  11. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click the Snapshot Name

  9. Navigate to the "Restores" tab

  10. Click "Selective Restore"

  11. Configure the Selective Restore as desired

  12. Click "Restore"

Using CLI

workloadmgr snapshot-selective-restore [--display-name <display-name>]
                                       [--display-description <display-description>]
                                       [--filename <filename>]
                                       <snapshot_id>
  • <snapshot_id> ➡️ ID of the snapshot to restore.

  • --display-name <display-name> ➡️ Optional name for the restore.

  • --display-description <display-description> ➡️ Optional description for restore.

  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in it.
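
For example, assuming a valid snapshot ID and a prepared restore file at the hypothetical path /home/stack/restore.json (see the restore.json section below), a selective restore can be started with:

workloadmgr snapshot-selective-restore --display-name "selective-restore-1" \
                                       --filename /home/stack/restore.json \
                                       <snapshot_id>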

Inplace Restore

The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs to be rolled back for other reasons.

It allows the user to restore only the data of a selected Volume, which is part of a backup.

The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio verifies this using the saved Object ID.

The Inplace Restore will not create any new OpenStack resources. Please use one of the other restore options if new Volumes or VMs are required.

Using Horizon

There are 2 possibilities to start an Inplace Restore.

Possibility 1: From the Snapshot list

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  9. Click on "Inplace Restore"

  10. Configure the Inplace Restore as desired

  11. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to Horizon

  2. Navigate to Backups

  3. Navigate to Workloads

  4. Identify the workload that contains the Snapshot to be restored

  5. Click the workload name to enter the Workload overview

  6. Navigate to the Snapshots tab

  7. Identify the Snapshot to be restored

  8. Click the Snapshot Name

  9. Navigate to the "Restores" tab

  10. Click "Inplace Restore"

  11. Configure the Inplace Restore as desired

  12. Click "Restore"

Using CLI

workloadmgr snapshot-inplace-restore [--display-name <display-name>]
                                     [--display-description <display-description>]
                                     [--filename <filename>]
                                     <snapshot_id>
  • <snapshot_id> ➡️ ID of the snapshot to restore.

  • --display-name <display-name> ➡️ Optional name for the restore.

  • --display-description <display-description> ➡️ Optional description for restore.

  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in it.

required restore.json for CLI

The workloadmgr client CLI is using a restore.json file to define the restore parameters for the selective and the inplace restore.

An example for a selective restore of this restore.json is shown below. A detailed analysis and explanation is given afterwards.

The restore.json requires a lot of information about the backed up resources. All required information can be gathered in the Snapshot overview.

{
    'oneclickrestore': False,
    'restore_type': 'selective',
    'type': 'openstack',
    'openstack':
        {
            'instances':
                [
                    {
                        'include': True,
                        'id': '890888bc-a001-4b62-a25b-484b34ac6e7e',
                        'name': 'cdcentOS-1',
                        'availability_zone': '',
                        'nics': [],
                        'vdisks':
                            [
                                {
                                    'id': '4cc2b474-1f1b-4054-a922-497ef5564624',
                                    'new_volume_type': '',
                                    'availability_zone': 'nova'
                                }
                            ],
                        'flavor':
                            {
                                'ram': 512,
                                'ephemeral': 0,
                                'vcpus': 1,
                                'swap': '',
                                'disk': 1,
                                'id': 1
                            }
                    }
                ],
            'restore_topology': True,
            'networks_mapping':
                {
                    'networks': []
                }
        }
}

General required information

Before the exact details of the restore are to be provided it is necessary to provide the general metadata for the restore.

  • name➡️the name of the restore

  • description➡️the description of the restore

  • oneclickrestore <True/False>➡️If the restore is a oneclick restore. Setting this to True will override all other settings and a One Click Restore is started.

  • restore_type <oneclick/selective/inplace>➡️defines the restore that is intended

  • type openstack➡️defines that the restore is into an openstack cloud.

  • openstack ➡️ starts the exact definition of the restore

Selective Restore required information

The Selective Restore requires a lot of information to be able to execute the restore as desired.

This information is divided into 3 components:

  • instances

  • restore_topology

  • networks_mapping

Information required in instances

This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.

Even VMs that are not to be restored must be included in the restore.json to allow a clean execution of the restore.

Each instance requires the following information

  • id ➡️ original id of the instance

  • include <True/False> ➡️ Set True when the instance shall be restored

All further information is only required when the instance is part of the restore.

  • name ➡️ new name of the instance

  • availability_zone ➡️ Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"

  • Nics ➡️ list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:

    • id ➡️ ID of the Neutron port to use

    • mac_address ➡️ Mac Address of the Neutron port

    • ip_address ➡️ IP Address of the Neutron port

    • network ➡️ network the port is assigned to. Contains the following information:

      • id ➡️ ID of the network the Neutron port is part of

      • subnet➡️subnet the port is assigned to. Contains the following information:

        • id ➡️ ID of the network the Neutron port is part of

To use the next free IP available, set Nics to an empty list [ ]

Using an empty list for Nics combined with the Network Topology Restore will make the restore automatically restore the original IP address of the instance.

  • vdisks ➡️ List of all Volumes that are part of the instance. Each Volume requires the following information:

    • id ➡️ Original ID of the Volume

    • new_volume_type ➡️ The Volume Type to use for the restored Volume. Leave empty for Volume Type None

    • availability_zone ➡️ The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova

  • flavor➡️Defines the Flavor to use for the restored instance. Contains the following information:

    • ram➡️How much RAM the restored instance will have (in MB)

    • ephemeral➡️How big the ephemeral disk of the instance will be (in GB)

    • vcpus➡️How many vcpus the restored instance will have available

    • swap➡️How big the Swap of the restored instance will be (in MB). Leave empty for none.

    • disk➡️Size of the root disk the instance will boot with

    • id➡️ID of the flavor that matches the provided information

The root disk needs to be at least as big as the root disk of the backed up instance was.

The following example describes a single instance with all values.

'instances':[
  {
     'name':'cdcentOS-1-selective',
     'availability_zone':'US-East',
     'nics':[
       {
          'mac_address':'fa:16:3e:00:bd:60',
          'ip_address':'192.168.0.100',
          'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
          'network':{
             'subnet':{
                'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
             },
             'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
          }
       }
     ],
     'vdisks':[
       {
          'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
          'new_volume_type':'ceph',
          'availability_zone':'nova'
       }
     ],
     'flavor':{
        'ram':2048,
        'ephemeral':0,
        'vcpus':1,
        'swap':'',
        'disk':20,
        'id':'2'
     },
     'include':True,
     'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
  }
]

Information required in network topology restore or network mapping

Do not mix network topology restore together with network mapping.

To activate a network topology restore set:

restore_topology:True

To activate network mapping set:

restore_topology:False

When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:

  • networks ➡️ list of snapshot_network and target_network pairs

    • snapshot_network ➡️ the network backed up in the snapshot, contains the following:

      • id ➡️ Original ID of the network backed up

      • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

        • id ➡️ Original ID of the subnet backed up

    • target_network ➡️ the existing network to map to, contains the following

      • id ➡️ ID of the network to map to

      • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

        • id ➡️ ID of the subnet to map to

Full selective restore example

{
   'oneclickrestore':False,
   'openstack':{
      'instances':[
         {
            'name':'cdcentOS-1-selective',
            'availability_zone':'US-East',
            'nics':[
               {
                  'mac_address':'fa:16:3e:00:bd:60',
                  'ip_address':'192.168.0.100',
                  'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
                  'network':{
                     'subnet':{
                        'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                     },
                     'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
                  }
               }
            ],
            'vdisks':[
               {
                  'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
                  'new_volume_type':'ceph',
                  'availability_zone':'nova'
               }
            ],
            'flavor':{
               'ram':2048,
               'ephemeral':0,
               'vcpus':1,
               'swap':'',
               'disk':20,
               'id':'2'
            },
            'include':True,
            'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
         }
      ],
      'restore_topology':False,
      'networks_mapping':{
         'networks':[
            {
               'snapshot_network':{
                  'subnet':{
                     'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
                  },
                  'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
               },
               'target_network':{
                  'subnet':{
                     'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                  },
                  'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
                  'name':'internal'
               }
            }
         ]
      }
   },
   'restore_type':'selective',
   'type':'openstack'
}

Inplace Restore required information

The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.

Information required in instances

  • id ➡️ ID of the instance inside the Snapshot

  • restore_boot_disk ➡️ Set to True if the boot disk of that VM shall be restored.

When the boot disk is at the same time a Cinder Disk, both values need to be set true.

  • include ➡️ Set to True if at least one Volume from this instance shall be restored

  • vdisks ➡️ List of disks, that are connected to the instance. Each disk contains:

    • id ➡️ Original ID of the Volume

    • restore_cinder_volume ➡️ set to true if the Volume shall be restored

Network mapping information required

No network information is required, but the fields have to exist as empty values for the restore to work.

Full Inplace restore example

{
   'oneclickrestore':False,
   'restore_type':'inplace',
   'type':'openstack',   
   'openstack':{
      'instances':[
         {
            'restore_boot_disk':True,
            'include':True,
            'id':'ba8c27ab-06ed-4451-9922-d919171078de',
            'vdisks':[
               {
                  'restore_cinder_volume':True,
                  'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6',
               }
            ]
         }
      ]
   }
}

Getting started with Trilio on Red-Hat OpenStack Platform (RHOSP)

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.

1. Prepare for deployment

Refer to the Resources link to get the release-specific values of the placeholders used in this document, viz. Container URLs, trilio_branch, RHOSP Version and CONTAINER-TAG-VERSION, as per the OpenStack environment:

1.1] Select 'backup target' type

Backup target storage is used to store backup images taken by Trilio. The details needed to configure each type are listed below.

The following backup target types are supported by Trilio:

a) NFS

  • NFS share path

b) Amazon S3

  • S3 Access Key

  • Secret Key

  • Region

  • Bucket name

c) Other S3 compatible storage (like Ceph based S3)

  • S3 Access Key

  • Secret Key

  • Region

  • Endpoint URL (valid for S3 other than Amazon S3)

  • Bucket name

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be done on the undercloud node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.

All commands need to be run as a stack user on the undercloud node

In the sections below, <RHOSP_RELEASE_DIRECTORY> refers to rhosp17

The following command clones the triliovault-cfg-scripts github repository.

cd /home/stack
source stackrc
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>

1.3] Set executable permissions for all shell scripts

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
chmod +x *.sh

1.4] If the backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For every S3 backup target with self-signed TLS certificates, the user needs to copy the CA chain file into the Trilio puppet module at the following location and with the given file name format. Edit the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> parameters in the following command.

 cp <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-<S3_BACKUP_TARGET_NAME>.pem

For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE = 's3-ca.pem', then the command to copy this CA chain file into the Trilio puppet module would be:

cp s3-ca.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-BT2_S3.pem

2] Update the overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.

In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

/usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Add the following services to the roles_data.yaml

All commands need to be run as a 'stack' user

2.1] Add Trilio Datamover Api and Trilio Workload Manager services to role data file

These services need to share the same role as the keystone and database services. In the case of the pre-defined roles, these services will run on the Controller role. In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

Add the following lines to the identified role:

- OS::TripleO::Services::TrilioDatamoverApi
- OS::TripleO::Services::TrilioWlmApi
- OS::TripleO::Services::TrilioWlmWorkloads
- OS::TripleO::Services::TrilioWlmScheduler
- OS::TripleO::Services::TrilioWlmCron
- OS::TripleO::Services::TrilioObjectStore

2.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the Compute role. In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.

Add the following lines to the identified role:

- OS::TripleO::Services::TrilioDatamover
- OS::TripleO::Services::TrilioObjectStore

3] Prepare Trilio container images

All commands need to be run as a 'stack' user

Trilio containers are pushed to the RedHat Container Registry. Registry URL: 'registry.connect.redhat.com'. For Container URLs, please refer Resources

There are three registry methods available in the RedHat OpenStack Platform.

  1. Remote Registry

  2. Local Registry

  3. Satellite Server

3.1] Remote Registry

Follow this section when 'Remote Registry' is used.

In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to a redhat registry or any other private registry that they want to use. The user needs to provide credentials for the registry in containers-prepare-parameter.yaml file.

  1. Make sure other OpenStack service images are also using the same method to pull container images. If it's not the case you can not use this method.

  2. Populate containers-prepare-parameter.yaml with content like the following. Important parameters are push_destination: false, ContainerImageRegistryLogin: true and registry credentials. Trilio container images are published to registry registry.connect.redhat.com. Credentials of registry 'registry.redhat.io' will work for registry.connect.redhat.com registry too.

Note: An example of this containers-prepare-parameter.yaml file is shown below.

File Name: containers-prepare-parameter.yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
      ...
  ...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.connect.redhat.com:
      myuser: 'p@55w0rd!'
  ContainerImageRegistryLogin: true

Redhat document for remote registry method: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters

Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat document above.

3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise image pull operation will fail.

4. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below:

trilio_env.yaml file path:

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/

$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
   ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have configured Trilio image URLs in the necessary environment file.

3.2] Local Registry

Follow this section when 'local registry' is used on the undercloud.

In this case, it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that will pull the containers from registry.connect.redhat.com and push them to the undercloud and update the trilio_env.yaml.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/

sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp17.1

## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa17.ctlplane.trilio.local 6.0.0-rhosp17.1


## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp17.1' ../environments/trilio_env.yaml
   ContainerTriliovaultDatamoverImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

$  openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp17.1
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1                |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1               |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1                    |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1                          |

At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.

3.3] Red Hat Satellite Server

Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

Populate the trilio_env.yaml with container URLs.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments

$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
   ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.

4] Provide environment details in trilio-env.yaml

Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide the backup target details and other necessary parameters. This environment file will be used in the overcloud deployment to configure Trilio components. The container image names have already been populated during the preparation of the container images; still, it is recommended to verify the container URLs.

You don't need to provide anything for resource_registry, keep it as it is.

Parameter
Description

CloudAdminUserName

Default value is admin.

Provide the cloudadmin user name of your overcloud

CloudAdminProjectName

Default value is admin.

Provide the cloudadmin project name of your overcloud

CloudAdminDomainName

Default value is default.

Provide the cloudadmin domain name of your overcloud

CloudAdminPassword

Provide the cloudadmin user's password of your overcloud

ContainerTriliovaultDatamoverImage

Trilio Datamover Container image name has already been populated in the preparation of the container images.

Still it is recommended to verify the container URL.

ContainerTriliovaultDatamoverApiImage

Trilio DatamoverApi Container image name has already been populated in the preparation of the container images.

Still it is recommended to verify the container URL.

ContainerTriliovaultWlmImage

Trilio WLM Container image name has already been populated in the preparation of the container images.

Still it is recommended to verify the container URL.

ContainerHorizonImage

Horizon Container image name has already been populated in the preparation of the container images.

Still it is recommended to verify the container URL.

TrilioBackupTargets

List of Backup Targets for TrilioVault. These backup targets will be used to store backups taken by TrilioVault. Backup target examples and the format of the NFS and S3 types are already provided in the trilio_env.yaml file. Details of the respective parameters under TrilioBackupTargets are given in the next section.

TrilioDatamoverOptVolumes

Users can specify a list of extra volumes that they want to mount on the 'triliovault_datamover' container.

Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.

4.1] TrilioBackupTargets Details

T4O supports setting up multiple target backends for storing snapshots. Users can define any number of storage backends as required. At a high level, NFS and S3 are supported.

The following table provides the details of the parameters to be set in the trilio_env.yaml file for each S3 target backend.

Parameters
Description

backup_target_name

User-defined name of the target backend. Can be any name that helps to quickly identify the respective target.

backup_target_type

s3

is_default

Can be true or false. Exactly one of the target backends specified in the trilio_env.yaml file should be marked as true.

s3_type

Either amazon s3 or ceph_s3, depending upon which S3 is to be configured with T4O.

s3_access_key

S3 Access Key

s3_secret_key

S3 Secret Key

s3_region_name

S3 Region name

s3_bucket

S3 Bucket

s3_endpoint_url

S3 endpoint url

s3_signature_version

Provide S3 signature version

s3_auth_version

Provide S3 auth version

s3_ssl_enabled

true

s3_ssl_verify

true

s3_self_signed_cert

true

s3_bucket_object_lock_enabled

If the S3 bucket has object lock enabled, this should be set to true; otherwise false.

The following table provides the details of the parameters to be set in the trilio_env.yaml file for each NFS target backend.

| Parameter | Description |
| --- | --- |
| backup_target_name | User-defined name of the target backend. Can be any name that helps to quickly identify the respective target. |
| backup_target_type | nfs |
| is_default | Can be true or false. Exactly one of the target backends specified in the trilio_env.yaml file should be marked as true. |
| nfs_options | 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'. These parameters set the NFS mount options. Keep the default values unless a special requirement exists. |
| is_multi_ip_nfs | true or false, depending on whether the NFS storage backend has a single IP or multiple IPs. |
| nfs_shares | NFS IP and share path. To be used in case of a single-IP NFS, e.g. 11.30.1.10:/mnt/share |
| multi_ip_nfs_map | NFS IPs and share paths. To be used in case of multiple NFS IPs. See the sample mapping below. |

Sample multi_ip_nfs_map:

multi_ip_nfs_map:
  controller1: 192.168.2.3:/var/nfsshare
  controller2: 192.168.2.4:/var/nfsshare
  compute0: 192.168.3.2:/var/nfsshare
  compute1: 192.168.3.4:/var/nfsshare
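
For orientation, below is a minimal illustrative sketch of how one NFS and one S3 entry under TrilioBackupTargets might look. All names, IPs, and credentials are placeholders, and the exact key layout should be cross-checked against the commented examples shipped in trilio_env.yaml itself:

TrilioBackupTargets:
  - backup_target_name: nfs-target-1
    backup_target_type: nfs
    is_default: true
    is_multi_ip_nfs: false
    nfs_shares: 11.30.1.10:/mnt/share
    nfs_options: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'
  - backup_target_name: ceph-s3-target-1
    backup_target_type: s3
    is_default: false
    s3_type: ceph_s3
    s3_access_key: <access-key>
    s3_secret_key: <secret-key>
    s3_region_name: <region>
    s3_bucket: trilio-backups
    s3_endpoint_url: https://s3.example.local
    s3_signature_version: <signature-version>
    s3_auth_version: <auth-version>
    s3_ssl_enabled: true
    s3_ssl_verify: true
    s3_self_signed_cert: false
    s3_bucket_object_lock_enabled: false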

4.2] Update triliovault object store yaml

After you fill in the backup target details in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script updates the 'services/triliovault-object-store.yaml' file; you do not need to edit or verify that file manually.

cd redhat-director-scripts/rhosp17/scripts/
dnf install python3-ruamel-yaml
python3 update_object_store_yaml.py

5] Generate passwords for the Trilio services

5.1] Change the directory and run the script

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./generate_passwords.sh

5.2] The output will be written to the file below.

Include this file in your overcloud deploy command as an environment file with the "-e" option:

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml

6] Fetch ids of required OpenStack resources

6.1] Source the 'overcloudrc' file of the cloud admin user. This is needed to run the OpenStack CLI.

For this section only, source the rc file of the overcloud (not the undercloud stackrc).

source <OVERCLOUD_RC_FILE>

6.2] The cloud admin user details of the overcloud must already be filled in the 'trilio_env.yaml' file, as described in the 'Provide environment details in trilio_env.yaml' section. If not, please do so now.

vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

6.3] The cloud admin user should have the admin role on the cloud admin domain

openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin

# Example
openstack role add --user admin --domain default admin

6.4] After this, run the following script.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts
./create_wlm_ids_conf.sh

The output will be written to

cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/triliovault_wlm_ids.conf

7] Load necessary Linux drivers on all Controller and Compute nodes

For TrilioVault functionality to work, the following Linux kernel modules must be loaded on all Controller and Compute nodes (where the Trilio WLM and Datamover services are going to be installed).

7.1] Load the nbd module

modprobe nbd nbds_max=128
lsmod | grep nbd

7.2] Load the fuse module

modprobe fuse
lsmod | grep fuse
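
The modprobe commands above load the modules only until the next reboot. A minimal sketch to also load them automatically at boot (assuming systemd-based overcloud nodes; the file names below are illustrative):

cat <<'EOF' | sudo tee /etc/modules-load.d/trilio.conf
nbd
fuse
EOF
echo "options nbd nbds_max=128" | sudo tee /etc/modprobe.d/trilio-nbd.conf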

8] Upload Trilio puppet module

All commands need to be run as the 'stack' user on the undercloud node

8.1] Source the stackrc

source stackrc

8.2] The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
./upload_puppet_module.sh

## Output of above command looks like following
Creating tarball...
Tarball created.
renamed '/tmp/puppet-modules-MUIyvXI/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
[stack@uc17-1 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
parameter_defaults:
  DeployArtifactFILEs:
  - /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz

## Above command creates following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml

9] Deploy overcloud with Trilio environment

9.1] Include the environment file trilio_defaults.yaml in overcloud deploy command with `-e` option as shown below.

This YAML file holds the default values, such as the default Trustee Role (creator) and the Keystone endpoint interface (internal). It contains some other parameters as well, which users can update as per their requirements.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_defaults.yaml 

9.2] Additionally, include the following heat environment files and the roles data file mentioned in the sections above in the overcloud deploy command:

  1. trilio_env.yaml

  2. roles_data.yaml

  3. trilio_passwords.yaml

  4. trilio_defaults.yaml

  5. Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration. Remove your OpenStack endpoint map file from the overcloud deploy command and use the corresponding Trilio endpoint map file instead.

    1. Instead of the tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml

    2. Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_ip.yaml

    3. Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_everywhere_dns.yaml

    4. Instead of the no-tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_non_tls_endpoints_ip.yaml

To include new environment files, use the -e option; for roles data files, use the -r option.

Below is an example of an overcloud deploy command with Trilio environment:

openstack overcloud deploy --stack overcloudtrain5 --templates \
  --libvirt-type qemu \
  --ntp-server 192.168.1.34 \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  -e /home/stack/templates/cinder_size.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
  -e /home/stack/templates/configure-barbican.yaml \
  -e /home/stack/templates/multidomain_horizon.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
  -e /home/stack/templates/tls-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_env_tls_everywhere_dns.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_defaults.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_passwords.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Post-deployment, for multipath-enabled environments, log into each Datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, and restart the Datamover container.

10] Verify deployment

Please follow this documentation to verify the deployment.

11] Troubleshooting for overcloud deployment failures

Trilio components will be deployed using puppet scripts.

If the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html

openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

If any Trilio container does not start correctly or is stuck in a restarting state on a Controller/Compute node, use the following logs to debug.

podman logs <trilio-container-name>

tail -f /var/log/containers/<trilio-container-name>/<trilio-container-name>.log

12] Advanced Settings/Configuration

12.1] Configure Multi-IP NFS

This section is only required when the Multi-IP feature for NFS is required.

This feature allows setting the IP used to access the NFS volume per Datamover instead of globally.

i] On Undercloud node, change the directory

cd triliovault-cfg-scripts/common/

ii] Edit the file triliovault_nfs_map_input.yml in the current directory and provide the Compute host to NFS share/IP map.

Get the overcloud Controller and Compute hostnames from the following command; check the Name column and use the exact host names in the triliovault_nfs_map_input.yml file.

Run this command on the undercloud after sourcing stackrc.

(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+

Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to this page for details about the structure.

Below is an example of how you can set the multi-IP NFS details:

You cannot configure different IPs for the Controller/WLM nodes; the same share must be used on all controller nodes. Different IPs can be configured for the Compute/Datamover nodes.

$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap represents datamover, WLM nodes (compute and controller nodes) and it's NFS share mapping.
parameter_defaults:
  TriliovaultMultiIPNfsMap:
    overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
    overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
    overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs

iii] Update pyyaml on the undercloud node only

If pip is not available, please install it on the undercloud first.

sudo pip3 install PyYAML==5.1

Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

python3 ./generate_nfs_map.py

iv] Validate output map file

The result will be stored in the triliovault_nfs_map_output.yml file

Open file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

v] Append this output map file to triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

vi] Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

vii] Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as a backup target.

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

viii] After this you need to run the overcloud deployment.

12.2] Customized HAProxy configuration for the Trilio Datamover API service

The existing default HAProxy configuration works fine in most environments. Change the configuration as described here only when timeout issues with the Trilio Datamover API are observed or other specific reasons are known.

The following is the HAProxy configuration file location on the HAProxy nodes of the overcloud. The Trilio Datamover API service HAProxy configuration gets added to this file.

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

The Trilio Datamover API default HAProxy configuration in the above file looks as follows:

listen triliovault_datamover_api
  bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
  balance roundrobin
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  maxconn 50000
  option httpchk
  option httplog
  option forwardfor
  retries 5
  timeout check 10m
  timeout client 10m
  timeout connect 10m
  timeout http-request 10m
  timeout queue 10m
  timeout server 10m
  server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local

The user can change the following configuration parameter values.

retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000

To change these default values, perform the following steps.

i) On the undercloud node, open the following file for editing.

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/services/triliovault-datamover-api.yaml

ii) Search for the following entries and edit them as required.

          tripleo::haproxy::trilio_datamover_api::options:
             'retries': '5'
             'maxconn': '50000'
             'balance': 'roundrobin'
             'timeout http-request': '10m'
             'timeout queue': '10m'
             'timeout connect': '10m'
             'timeout client': '10m'
             'timeout server': '10m'
             'timeout check': '10m'

iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.

12.3] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

i) To add one or more extra volume/directory mounts to the Trilio Datamover Service container, use the variable 'TrilioDatamoverOptVolumes', which is available in the file below.

triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

To add an extra volume/directory mount to the Trilio Datamover Service container, the volume/directory must already be mounted on the Compute host.

ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. Edit the file and add your volume mounts in the format below.

TrilioDatamoverOptVolumes:
   - <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>

## For example, the `/mnt/mount-on-host` directory below is mounted on the Compute host, and you want to mount it at `/mnt/mount-inside-container` inside the Datamover container

[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436   2.5T  2.3T  234G  91% /mnt/mount-on-host

## Then provide that mount in the below format

TrilioDatamoverOptVolumes:
   - /mnt/mount-on-host:/mnt/mount-inside-container

iii) Lastly, run the overcloud deploy/update.

After a successful deployment, you will see that the volume/directory is mounted inside the Trilio Datamover Service container.

[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436   2.5T  2.3T  234G  91% /mnt/mount-inside-container

12.4] Advanced Ceph Configuration (Optional)

Trilio uses Cinder's Ceph user for interacting with the Ceph storage backing Cinder. This user name is defined using the parameter 'ceph_cinder_user'.

Details about configuring multiple Ceph backends can be found here.

Restores

List Restores

GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail

Lists Restores with details

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restores from |

Query Parameters

| Name | Type | Description |
| --- | --- | --- |
| snapshot_id | string | ID of the Snapshot to fetch the Restores from |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
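
For reference, a minimal curl sketch of this call (assuming $tvm_address, $tenant_id, $project_name, and a valid Keystone token in $token; the snapshot_id query parameter is optional):

curl -s "https://$tvm_address:8780/v1/$tenant_id/restores/detail?snapshot_id=$snapshot_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"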

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 11:28:43 GMT
Content-Type: application/json
Content-Length: 4308
Connection: keep-alive
X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463

{
   "restores":[
      {
         "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
         "created_at":"2020-11-05T10:17:40.000000",
         "updated_at":"2020-11-05T10:17:40.000000",
         "finished_at":"2020-11-05T10:27:20.000000",
         "user_id":"ccddc7e7a015487fa02920f4d4979779",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "status":"available",
         "restore_type":"restore",
         "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
            }
         ],
         "name":"OneClick Restore",
         "description":"-",
         "host":"TVM2",
         "size":2147483648,
         "uploaded_size":2147483648,
         "progress_percent":100,
         "progress_msg":"Restore from snapshot is complete",
         "warning_msg":null,
         "error_msg":null,
         "time_taken":580,
         "restore_options":{
            "name":"OneClick Restore",
            "oneclickrestore":true,
            "restore_type":"oneclick",
            "openstack":{
               "instances":[
                  {
                     "name":"cirros-2",
                     "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                     "availability_zone":"nova"
                  },
                  {
                     "name":"cirros-1",
                     "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                     "availability_zone":"nova"
                  }
               ]
            },
            "type":"openstack",
            "description":"-"
         },
         "metadata":[
            {
               "created_at":"2020-11-05T10:27:20.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"91ab2495-1903-4d75-982b-08a4e480835b",
               "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
               "key":"data_transfer_time",
               "value":"0"
            },
            {
               "created_at":"2020-11-05T10:27:20.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"e0e01eec-24e0-4abd-9b8c-19993a320e9f",
               "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
               "key":"object_store_transfer_time",
               "value":"0"
            },
            {
               "created_at":"2020-11-05T10:27:20.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"eb909267-ba9b-41d1-8861-a9ec22d6fd84",
               "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
               "key":"restore_user_selected_value",
               "value":"Oneclick Restore"
            }
         ]
      },
      {
         "id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
         "created_at":"2020-11-04T14:37:31.000000",
         "updated_at":"2020-11-04T14:37:31.000000",
         "finished_at":"2020-11-04T14:45:27.000000",
         "user_id":"ccddc7e7a015487fa02920f4d4979779",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "status":"error",
         "restore_type":"restore",
         "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
            }
         ],
         "name":"OneClick Restore",
         "description":"-",
         "host":"TVM2",
         "size":2147483648,
         "uploaded_size":2147483648,
         "progress_percent":100,
         "progress_msg":"",
         "warning_msg":null,
         "error_msg":"Failed restoring snapshot: Error creating instance e271bd6e-f53e-4ebc-875a-5787cc4dddf7",
         "time_taken":476,
         "restore_options":{
            "name":"OneClick Restore",
            "oneclickrestore":true,
            "restore_type":"oneclick",
            "openstack":{
               "instances":[
                  {
                     "name":"cirros-2",
                     "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                     "availability_zone":"nova"
                  },
                  {
                     "name":"cirros-1",
                     "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                     "availability_zone":"nova"
                  }
               ]
            },
            "type":"openstack",
            "description":"-"
         },
         "metadata":[
            {
               "created_at":"2020-11-04T14:45:27.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"be6dc7e2-1be2-476b-9338-aed986be3b55",
               "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
               "key":"data_transfer_time",
               "value":"0"
            },
            {
               "created_at":"2020-11-04T14:45:27.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"2e4330b7-6389-4e21-b31b-2503b5441c3e",
               "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
               "key":"object_store_transfer_time",
               "value":"0"
            },
            {
               "created_at":"2020-11-04T14:45:27.000000",
               "updated_at":null,
               "deleted_at":null,
               "deleted":false,
               "version":"4.0.115",
               "id":"561c806b-e38a-496c-a8de-dfe96cb3e956",
               "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
               "key":"restore_user_selected_value",
               "value":"Oneclick Restore"
            }
         ]
      }
   ]
}

Get Restore

GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

Provides all details about the specified Restore

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the restore from |
| restore_id | string | ID of the restore to show |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:04:45 GMT
Content-Type: application/json
Content-Length: 2639
Connection: keep-alive
X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f

{
   "restore":{
      "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
      "created_at":"2020-11-05T10:17:40.000000",
      "updated_at":"2020-11-05T10:17:40.000000",
      "finished_at":"2020-11-05T10:27:20.000000",
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "status":"available",
      "restore_type":"restore",
      "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
      "snapshot_details":{
         "created_at":"2020-11-04T13:58:37.000000",
         "updated_at":"2020-11-05T10:27:22.000000",
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
         "user_id":"ccddc7e7a015487fa02920f4d4979779",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
         "snapshot_type":"full",
         "display_name":"API taken 2",
         "display_description":"API taken description 2",
         "size":44171264,
         "restore_size":2147483648,
         "uploaded_size":44171264,
         "progress_percent":100,
         "progress_msg":"Creating Instance: cirros-2",
         "warning_msg":null,
         "error_msg":null,
         "host":"TVM1",
         "finished_at":"2020-11-04T14:06:03.000000",
         "data_deleted":false,
         "pinned":false,
         "time_taken":428,
         "vault_storage_id":null,
         "status":"available"
      },
      "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
      "instances":[
         {
            "id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
            "name":"cirros-2",
            "status":"available",
            "metadata":{
               "config_drive":"",
               "instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
               "production":"1"
            }
         },
         {
            "id":"b083bb70-e384-4107-b951-8e9e7bbac380",
            "name":"cirros-1",
            "status":"available",
            "metadata":{
               "config_drive":"",
               "instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
               "production":"1"
            }
         }
      ],
      "networks":[
         
      ],
      "subnets":[
         
      ],
      "routers":[
         
      ],
      "links":[
         {
            "rel":"self",
            "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
         },
         {
            "rel":"bookmark",
            "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
         }
      ],
      "name":"OneClick Restore",
      "description":"-",
      "host":"TVM2",
      "size":2147483648,
      "uploaded_size":2147483648,
      "progress_percent":100,
      "progress_msg":"Restore from snapshot is complete",
      "warning_msg":null,
      "error_msg":null,
      "time_taken":580,
      "restore_options":{
         "name":"OneClick Restore",
         "oneclickrestore":true,
         "restore_type":"oneclick",
         "openstack":{
            "instances":[
               {
                  "name":"cirros-2",
                  "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                  "availability_zone":"nova"
               },
               {
                  "name":"cirros-1",
                  "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                  "availability_zone":"nova"
               }
            ]
         },
         "type":"openstack",
         "description":"-"
      },
      "metadata":[
         
      ]
   }
}

Delete Restore

DELETE https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

Deletes the specified Restore

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
| restore_id | string | ID of the Restore to be deleted |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
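
A minimal curl sketch of this call (assuming the same shell variables as in the List Restores example, plus the ID of the restore in $restore_id):

curl -s -X DELETE "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"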

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:21:07 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d

Cancel Restore

GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel

Cancels an ongoing Restore

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of the Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
| restore_id | string | ID of the Restore to cancel |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 15:13:30 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840

One Click Restore

POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

Starts a restore according to the provided information

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:30:56 GMT
Content-Type: application/json
Content-Length: 992
Connection: keep-alive
X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae

{
   "restore":{
      "id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
      "created_at":"2020-11-05T14:30:56.048656",
      "updated_at":"2020-11-05T14:30:56.048656",
      "finished_at":null,
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "status":"restoring",
      "restore_type":"restore",
      "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
      "links":[
         {
            "rel":"self",
            "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
         },
         {
            "rel":"bookmark",
            "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
         }
      ],
      "name":"One Click Restore",
      "description":"One Click Restore",
      "host":"",
      "size":0,
      "uploaded_size":0,
      "progress_percent":0,
      "progress_msg":null,
      "warning_msg":null,
      "error_msg":null,
      "time_taken":0,
      "restore_options":{
         "openstack":{
            
         },
         "type":"openstack",
         "oneclickrestore":true,
         "vmware":{
            
         },
         "restore_type":"oneclick"
      },
      "metadata":[
         
      ]
   }
}

Body Format

The One-Click restore requires a body to provide all necessary information in json format.

{
   "restore":{
      "options":{
         "openstack":{
            
         },
         "type":"openstack",
         "oneclickrestore":true,
         "vmware":{},
         "restore_type":"oneclick"
      },
      "name":"One Click Restore",
      "description":"One Click Restore"
   }
}
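
A minimal curl sketch that submits the body above (assuming it is saved as oneclick_restore.json and the usual $tvm_address, $tenant_id, $snapshot_id, $project_name, and $token shell variables):

curl -s -X POST "https://$tvm_address:8780/v1/$tenant_id/snapshots/$snapshot_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d @oneclick_restore.json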

Selective Restore

POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

Starts a restore according to the provided information.

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 09:53:31 GMT
Content-Type: application/json
Content-Length: 1713
Connection: keep-alive
X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7

{
   "restore":{
      "id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
      "created_at":"2020-11-09T09:53:31.037588",
      "updated_at":"2020-11-09T09:53:31.037588",
      "finished_at":null,
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "status":"restoring",
      "restore_type":"restore",
      "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
      "links":[
         {
            "rel":"self",
            "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
         },
         {
            "rel":"bookmark",
            "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
         }
      ],
      "name":"API",
      "description":"API Created",
      "host":"",
      "size":0,
      "uploaded_size":0,
      "progress_percent":0,
      "progress_msg":null,
      "warning_msg":null,
      "error_msg":null,
      "time_taken":0,
      "restore_options":{
         "openstack":{
            "instances":[
               {
                  "vdisks":[
                     {
                        "new_volume_type":"iscsi",
                        "id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                        "availability_zone":"nova"
                     }
                  ],
                  "name":"cirros-1-selective",
                  "availability_zone":"nova",
                  "nics":[
                     
                  ],
                  "flavor":{
                     "vcpus":1,
                     "disk":1,
                     "swap":"",
                     "ram":512,
                     "ephemeral":0,
                     "id":"1"
                  },
                  "include":true,
                  "id":"e33c1eea-c533-4945-864d-0da1fc002070"
               },
               {
                  "include":false,
                  "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
               }
            ],
            "restore_topology":false,
            "networks_mapping":{
               "networks":[
                  {
                     "snapshot_network":{
                        "subnet":{
                           "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                        },
                        "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
                     },
                     "target_network":{
                        "subnet":{
                           "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                        },
                        "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                        "name":"internal"
                     }
                  }
               ]
            }
         },
         "restore_type":"selective",
         "type":"openstack",
         "oneclickrestore":false
      },
      "metadata":[
         
      ]
   }
}

Body Format

The Selective restore requires a body to provide all necessary information in JSON format.

{
   "restore":{
    "name":"<restore name>",
    "description":"<restore description>",
	  "options":{
         "openstack":{
            "instances":[
               {
                  "name":"<new name of instance>",
                  "include":<true/false>,
                  "id":"<original id of instance to be restored>"
				  "availability_zone":"<availability zone>",
				  "vdisks":[
                     {
                        "id":"<original ID of Volume>",
                        "new_volume_type":"<new volume type>",
                        "availability_zone":"<Volume availability zone>"
                     }
                  ],
                  "nics":[
                     {
                         'mac_address':'<mac address of the pre-created port>',
                         'ip_address':'<IP of the pre-created port>',
                         'id':'<ID of the pre-created port>',
                         'network':{
                            'subnet':{
                               'id':'<ID of the subnet of the pre-created port>'
                            },
                         'id':'<ID of the network of the pre-created port>'
                      }
                  ],
                  "flavor":{
                     "vcpus":<Integer>,
                     "disk":<Integer>,
                     "swap":<Integer>,
                     "ram":<Integer>,
                     "ephemeral":<Integer>,
                     "id":<Integer>
                  }
               }
            ],
            "restore_topology":<true/false>,
            "networks_mapping":{
               "networks":[
                  {
                     "snapshot_network":{
                        "subnet":{
                           "id":"<ID of the original Subnet ID>"
                        },
                        "id":"<ID of the original Network ID>"
                     },
                     "target_network":{
                        "subnet":{
                           "id":"<ID of the target Subnet ID>"
                        },
                        "id":"<ID of the target Network ID>",
                        "name":"<name of the target network>"
                     }
                  }
               ]
            }
         },
         "restore_type":"selective",
         "type":"openstack",
         "oneclickrestore":false
      }
   }
}

Inplace Restore

POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

Starts a restore according to the provided information

Path Parameters

| Name | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

Headers

| Name | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 12:53:03 GMT
Content-Type: application/json
Content-Length: 1341
Connection: keep-alive
X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743

{
   "restore":{
      "id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
      "created_at":"2020-11-09T12:53:02.726757",
      "updated_at":"2020-11-09T12:53:02.726757",
      "finished_at":null,
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "status":"restoring",
      "restore_type":"restore",
      "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
      "links":[
         {
            "rel":"self",
            "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
         },
         {
            "rel":"bookmark",
            "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
         }
      ],
      "name":"API",
      "description":"API description",
      "host":"",
      "size":0,
      "uploaded_size":0,
      "progress_percent":0,
      "progress_msg":null,
      "warning_msg":null,
      "error_msg":null,
      "time_taken":0,
      "restore_options":{
         "restore_type":"inplace",
         "type":"openstack",
         "oneclickrestore":false,
         "openstack":{
            "instances":[
               {
                  "restore_boot_disk":true,
                  "include":true,
                  "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                  "vdisks":[
                     {
                        "restore_cinder_volume":true,
                        "id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
                     }
                  ]
               },
               {
                  "restore_boot_disk":true,
                  "include":true,
                  "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                  "vdisks":[
                     {
                        "restore_cinder_volume":true,
                        "id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
                     }
                  ]
               }
            ]
         }
      },
      "metadata":[
         
      ]
   }
}

Body Format

The In-place restore requires a body to provide all necessary information in JSON format.

{
   "restore":{
      "name":"<restore-name>",
      "description":"<restore-description>",
      "options":{
         "restore_type":"inplace",
         "type":"openstack",
         "oneclickrestore":false,
         "openstack":{
            "instances":[
               {
                  "restore_boot_disk":<Boolean>,
                  "include":<Boolean>,
                  "id":"<ID of the instance the volumes are attached to>",
                  "vdisks":[
                     {
                        "restore_cinder_volume":<boolean>,
                        "id":"<ID of the Volume to restore>"
                     }
                  ]
               }
            ]
         }
      }
   }
}

Backup Targets

This document explains the concepts of Backup Targets and Backup Target Types in Trilio, their purpose, and how they provide additional flexibility and control for backup storage management.


1. Backup Targets (BTs)

Definition:

Backup Targets are storage backends where backups are stored. These can be any of the supported storage systems such as NFS (Network File System) or S3 (Simple Storage Service). Backup Targets act as the foundational layer for storing backup data.

Key Characteristics of Backup Targets:

  • They are storage systems connected to Trilio.

  • Supported storage types include:

    • NFS: A shared file system accessible over a network.

    • S3: An object storage service typically offered by cloud providers.

  • Multiple Backup Targets can be defined within a single environment to provide storage flexibility.

How to configure Backup Target(s):

The Admin user has to plan and make the desired Backup Targets available before deploying Trilio. Backup Targets need to be configured during the deployment of Trilio and will get mounted on the required hosts.

The entries mentioned below need to be defined in the configuration file of the workloadmgr services to enable the required Backup Targets; however, these parameters need not be populated manually. The deployment scripts take care of it.

  • Define all the enabled backup target names, separated by commas, under the DEFAULT section of the config file using the config parameter enabled_backends

    • Example:

[DEFAULT]
.
.
enabled_backends = NFS_BT1, S3_BT1, S3_BT2, S3_BT3
.
.

where NFS_BT1, S3_BT1, S3_BT2, and S3_BT3 are unique names of the Backup Targets; each will have its own config section with the same name in the configuration file, as shown below.

  • Define each of the Backup Target sections using all required parameters. These required parameters are based on the type of storage.

    • For NFS storage:

      • vault_storage_type = nfs

      • vault_storage_filesystem_export = <NFS_SHARE>

      • vault_storage_nfs_options = nolock,soft,timeo=600,intr,lookupcache=none,retrans=10

    • For S3 storage:

      • vault_storage_type = s3

      • vault_s3_endpoint_url = <S3_ENDPOINT_URL>

        • In the case of AWS S3, this parameter should be kept blank.

      • vault_s3_bucket = <S3_BUCKET_NAME>

      • vault_storage_filesystem_export = <S3_ENDPOINT_HOSTNAME>/<S3_BUCKET_NAME>

        • In the case of AWS S3, only the bucket name should be provided.

      • immutable = 1

        • Set immutable = 1 if Object Lock is enabled on the S3 bucket; otherwise this parameter can be set to 0

    • Apart from the above-mentioned storage-specific parameters, the user must mention which among these Backup Targets should be the default one by defining the below parameter under that storage section:

      • is_default = 1

    • Example:

# NFS Backup Target-1 as a default backup target
[NFS_BT1]
vault_storage_type = nfs
vault_storage_filesystem_export = 192.168.1.35:/mnt/trilio/share1
vault_storage_nfs_options = nolock,soft,timeo=600,intr,lookupcache=none,retrans=10
is_default = 1

# Ceph S3 Backup Target-2
[S3_BT2]
vault_storage_type = s3
vault_s3_endpoint_url = https://cephs3.triliodata.demo
vault_s3_bucket = trilio-test-bucket
vault_storage_filesystem_export = cephs3.triliodata.demo/trilio-test-bucket
immutable = 0
is_default = 0

# Ceph S3 Backup Target-3 with Object-lock enabled bucket
[S3_BT3]
vault_storage_type = s3
vault_s3_endpoint_url = https://cephs3.triliodata.demo
vault_s3_bucket = object-locked-cephs3-bucket
immutable = 1
vault_storage_filesystem_export = cephs3.triliodata.demo/object-locked-cephs3-bucket

# AWS S3 Backup Target-4 with Object-lock enabled bucket
[S3_BT4]
vault_storage_type = s3
vault_s3_endpoint_url =
vault_s3_bucket = object-locked-aws-s3-01
immutable = 1
vault_storage_filesystem_export = object-locked-aws-s3-01

List & Show Configured BTs

Using Horizon Dashboard

  1. Log in to the OpenStack Horizon Dashboard as an Admin user.

  2. Navigate to Admin -> Backups-Admin -> Backup Targets

  3. On the page, click the Backup Targets tab to see the list of Backup Targets.

Using CLI

Create Backup Target:

  • Command:

workloadmgr backup-target-create
  • Alias:

openstack workloadmgr backup target create
  • Options:

# workloadmgr help backup-target-create
usage: workloadmgr backup-target-create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent]
                                        [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]
                                        [--s3-endpoint-url <s3_endpoint_url>] [--s3-bucket <s3_bucket>]
                                        [--filesystem-export <filesystem_export>] [--type <type>]
                                        [--btt-name <btt_name>] [--default] [--immutable]
                                        [--metadata metadata [metadata ...]]

Create backup target

optional arguments:
  -h, --help            show this help message and exit
  --s3-endpoint-url <s3_endpoint_url>
                        S3 endpoint URL.
  --s3-bucket <s3_bucket>
                        S3 bucket. Required for s3 backup target type
  --filesystem-export <filesystem_export>
                        BT filesystem export path. Required for nfs backup target only. For s3 it's handled internally
  --type <type>
                        Required BT type. Eg. nfs, s3
  --btt-name <btt_name>
                        optional Backup Target Type name. If not provided, then we use filesystem export value to generate BTT name
  --default             denotes whether Backup Target is default
  --immutable           denotes whether Backup Target is immutable
  --metadata metadata [metadata ...]
                        Specify a key value pairs to include in the BT metadata Eg. --metadata key1=value1 key2=value2 keyN=valueN
  • Example:
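
A hypothetical invocation for an NFS and an S3 target (the export path, endpoint URL, and bucket name are placeholders):

workloadmgr backup-target-create --type nfs \
    --filesystem-export 192.168.1.35:/mnt/trilio/share1 --default

workloadmgr backup-target-create --type s3 \
    --s3-endpoint-url https://cephs3.triliodata.demo \
    --s3-bucket trilio-test-bucket --immutable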

Delete Backup Target:

  • Command:

workloadmgr backup-target-delete
  • Alias:

openstack workloadmgr backup target delete
  • Options:

# workloadmgr help backup-target-delete
usage: workloadmgr backup-target-delete [-h] <backup_target_id>
Delete existing backup target
positional arguments:
  <backup_target_id>
                        ID of the backup target to delete.
  • Example:
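
A hypothetical invocation (the ID is a placeholder; use the ID reported by backup-target-list):

workloadmgr backup-target-delete 1b6c2a1e-7d4f-4f0a-9c3e-0a1b2c3d4e5f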

List the available Backup Targets:

  • Command:

workloadmgr backup-target-list
  • Alias:

openstack workloadmgr backup target list
  • Options:

# workloadmgr help backup-target-list
usage: workloadmgr backup-target-list [-h]
                                      [-f {csv,json,table,value,yaml}]
                                      [-c COLUMN]
                                      [--quote {all,minimal,none,nonnumeric}]
                                      [--noindent]
                                      [--max-width <integer>]
                                      [--fit-width] [--print-empty]
                                      [--sort-column SORT_COLUMN]
                                      [--sort-ascending | --sort-descending]

List all the backup targets.

options:
  -h, --help            show this help message and exit

output formatters:
  output formatter options

  -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns
  --sort-column SORT_COLUMN
                        specify the column(s) to sort the data (columns
                        specified first have a priority, non-existing columns
                        are ignored), can be repeated
  --sort-ascending      sort the column(s) in ascending order
  --sort-descending     sort the column(s) in descending order

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
                        when to include quotes, defaults to nonnumeric

json formatter:
  --noindent            whether to disable indenting the JSON

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  
  • Example:
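
A hypothetical invocation, once with the default table output and once as JSON:

workloadmgr backup-target-list
workloadmgr backup-target-list -f json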

Show Details of a Backup Target:

  • Command:

workloadmgr backup-target-show
  • Alias:

openstack workloadmgr backup target show
  • Options:

# workloadmgr help backup-target-show
usage: workloadmgr backup-target-show [-h]
                                      [-f {json,shell,table,value,yaml}]
                                      [-c COLUMN] [--noindent]
                                      [--prefix PREFIX]
                                      [--max-width <integer>]
                                      [--fit-width] [--print-empty]
                                      <backup_target_id>

Show details about backup targets

positional arguments:
  <backup_target_id>
                        ID of the backup target.

options:
  -h, --help            show this help message and exit

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
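
A hypothetical invocation (placeholder ID), once as a table and once in shell-parseable form:

workloadmgr backup-target-show 1b6c2a1e-7d4f-4f0a-9c3e-0a1b2c3d4e5f
workloadmgr backup-target-show 1b6c2a1e-7d4f-4f0a-9c3e-0a1b2c3d4e5f -f shell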

Backup Target Set Default:

  • Command:

workloadmgr backup-target-set-default
  • Alias:

openstack workloadmgr backup target set default
  • Options:

# workloadmgr help backup-target-set-default
usage: workloadmgr backup-target-set-default [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent]
                                             [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]
                                             <backup_target_id>

Set existing backup target as default, it's respective one of BTT will be set as default, If BTT doesn't exists then it will be created and
set as default BTT

positional arguments:
  <backup_target_id>
                        ID of the backup target which needs to be set as default
  • Example:
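
A hypothetical invocation (placeholder ID of the backup target to promote to default):

workloadmgr backup-target-set-default 1b6c2a1e-7d4f-4f0a-9c3e-0a1b2c3d4e5f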


2. Backup Target Types (BTTs)

Definition:

Backup Target Types are an abstraction layer over Backup Targets. They provide additional administrative controls and can be categorized based on their scope and access permissions.

Types of Backup Target Types:

  1. Public:

    • Accessible by all users and projects in the system.

    • Suitable for shared storage scenarios where multiple teams or tenants use the same backup infrastructure.

  2. Private:

    • Restricted to specific projects.

    • Private Backup Target Types can be assigned to one or multiple projects, allowing project-specific control over backup storage.

Relationship Between Backup Targets and Backup Target Types:

  • A many-to-one relationship exists between Backup Target Types and Backup Targets.

    • Multiple Backup Target Types can map to a single Backup Target.

    • This allows administrators to define different policies or access levels for a shared storage backend.

Pre-created Backup Target Types

  • During deployment, Trilio creates a BTT for every configured Backup Target, using the same name as the Backup Target.

  • Each pre-created BTT inherits the configuration options provided for its Backup Target and is created as a Public Backup Target Type by default.

List Available BTTs

Using Horizon Dashboard

  1. Log in to the OpenStack Horizon Dashboard as an Admin user.

  2. Navigate to Admin -> Backups-Admin -> Backup Targets

  3. On the page, click the Backup Target Types tab to see the list of Backup Target Types.

Using CLI

  • Command:

workloadmgr backup-target-type-list
  • Alias:

openstack workloadmgr backup target type list
  • Options:

# workloadmgr help backup-target-type-list
usage: workloadmgr backup-target-type-list [-h]
                                           [-f {csv,json,table,value,yaml}]
                                           [-c COLUMN]
                                           [--quote {all,minimal,none,nonnumeric}]
                                           [--noindent]
                                           [--max-width <integer>]
                                           [--fit-width] [--print-empty]
                                           [--sort-column SORT_COLUMN]
                                           [--sort-ascending | --sort-descending]
                                           [--detail {True,False}]
                                           [--project-id <project_id>]

List all the backup target types.

options:
  -h, --help            show this help message and exit
  --detail {True,False}
                        List detail backup target types
  --project-id <project_id>
                        ID of the project.

output formatters:
  output formatter options

  -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns
  --sort-column SORT_COLUMN
                        specify the column(s) to sort the data (columns
                        specified first have a priority, non-existing columns
                        are ignored), can be repeated
  --sort-ascending      sort the column(s) in ascending order
  --sort-descending     sort the column(s) in descending order

CSV Formatter:
  --quote {all,minimal,none,nonnumeric}
                        when to include quotes, defaults to nonnumeric

json formatter:
  --noindent            whether to disable indenting the JSON

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
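
An illustrative invocation that lists all BTTs with their details:

# workloadmgr backup-target-type-list --detail True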

Show Details of a BTT

Using Horizon Dashboard

  • Most of the relevant information about a BTT can be seen while listing the BTTs.

  • Trilio does not provide a separate GUI page for showing the additional details of a BTT.

  • However, Trilio does provide a CLI command to get the additional details.

Using CLI

  • Command:

workloadmgr backup-target-type-show
  • Alias:

openstack workloadmgr backup target type show
  • Options:

# workloadmgr help  backup-target-type-show
usage: workloadmgr backup-target-type-show [-h] [-f {json,shell,table,value,yaml}]
                                           [-c COLUMN] [--noindent]
                                           [--prefix PREFIX]
                                           [--max-width <integer>]
                                           [--fit-width] [--print-empty]
                                           <backup_target_id>

Show details about backup target types

positional arguments:
  <backup_target_id>
                        ID of the backup target.

options:
  -h, --help            show this help message and exit

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
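
An illustrative invocation, using a placeholder BTT ID:

# workloadmgr backup-target-type-show 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d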

Create a BTT

Using Horizon Dashboard

  1. Log in to the OpenStack Horizon Dashboard as an Admin user.

  2. Navigate to Admin -> Backups-Admin -> Backup Targets.

  3. On the page, click the Backup Target Types tab to see the list of Backup Target Types.

  4. Click on the button to open the Backup Target Type Create wizard, and follow the instructions to create the BTT.

Using CLI

  • Command:

workloadmgr backup-target-type-create
  • Alias:

openstack workloadmgr backup target type create
  • Options:

# workloadmgr help  backup-target-type-create
usage: workloadmgr backup-target-type-create [-h]
                                             [-f {json,shell,table,value,yaml}]
                                             [-c COLUMN] [--noindent]
                                             [--prefix PREFIX]
                                             [--max-width <integer>]
                                             [--fit-width] [--print-empty]
                                             [--default]
                                             [--description <description>]
                                             (--public | --project-ids <project-ids> [<project-ids> ...])
                                             [--metadata <key=key-name>]
                                             [--backup-targets-id <backup_targets_id>]
                                             <name>

Create backup target type

positional arguments:
  <name>        required BTT name.

options:
  -h, --help            show this help message and exit
  --default             denotes whether BTT is default
  --description <description>
                        Optional BTT description. (Default=None)
  --public              denotes whether BTT is of public type
  --project-ids <project-ids> [<project-ids> ...]
                        Required to assign BTT to projects
  --metadata <key=key-name>
                        Specify a key value pairs to include in the BTT metadata
                        Specify option multiple times to include multiple keys.
                        key=value
  --backup-targets-id <backup_targets_id>
                        ID of the backup target for which BTT would be created

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
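
An illustrative invocation that creates a public BTT for a placeholder backup target ID:

# workloadmgr backup-target-type-create --public --description "Shared NFS backup target type" --backup-targets-id 2e3c9f4a-1b6d-4f2e-9a7c-5d8e1f0a2b3c nfs-public-btt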

Modify a BTT

Modification of the Default Backup Target Type is not allowed.

Using Horizon Dashboard

  1. Log in to the OpenStack Horizon Dashboard as an Admin user.

  2. Navigate to Admin -> Backups-Admin -> Backup Targets.

  3. On the page, click the Backup Target Types tab to see the list of Backup Target Types.

  4. Click on the button under the Actions column of the BTT List table of the desired BTT to open the Edit Backup Target Type wizard.

  5. Once the required changes are done, click on the Edit button on the wizard to save the changes.

Using CLI

  • Command:

workloadmgr backup-target-type-modify
  • Alias:

openstack workloadmgr backup target type modify
  • Options:

# workloadmgr help backup-target-type-modify
usage: workloadmgr backup-target-type-modify [-h]
                                             [-f {json,shell,table,value,yaml}]
                                             [-c COLUMN] [--noindent]
                                             [--prefix PREFIX]
                                             [--max-width <integer>]
                                             [--fit-width] [--print-empty]
                                             [--name <name>] [--default]
                                             [--description <description>]
                                             (--public | --project-ids <project-ids> [<project-ids> ...])
                                             [--metadata <key=key-name>]
                                             [--backup-target-type-id <backup_target_type_id>]

Modify existing backup target type

options:
  -h, --help            show this help message and exit
  --name <name>
                        Optional BTT name. (Default=None)
  --default             denotes whether BTT is default
  --description <description>
                        Optional BTT description. (Default=None)
  --public              denotes whether BTT is of public type
  --project-ids <project-ids> [<project-ids> ...]
                        Required to assign BTT to projects
  --metadata <key=key-name>
                        Specify a key value pairs to include in the BTT metadata
                        Specify option multiple times to include multiple keys.
                        key=value
  --backup-target-type-id <backup_target_type_id>
                        ID of the backup target type for which given projects
                        will be assigned

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
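
An illustrative invocation that assigns a placeholder private BTT to a single project:

# workloadmgr backup-target-type-modify --project-ids 01fca51462a44bfa821130dce9baac1a --backup-target-type-id 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d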

Assign/Unassign Project(s) to/from a BTT

Project assignment is allowed only to the Private Backup Target Types.

Using Horizon Dashboard

  • Log in to the OpenStack Horizon Dashboard as an Admin user.

  • Navigate to Admin -> Backups-Admin -> Backup Targets.

  • On the page, click the Backup Target Types tab to see the list of Backup Target Types.

  • Click the dropdown button under the Actions column of the BTT List table of the desired Private BTT and click on the button to open the Edit Backup Target Type Access wizard.

  • Select the Projects to be assigned, unselect the projects to be unassigned, and click on the Save button on the wizard to save the changes.

Using CLI

Assigning Projects:

  • Command:

workloadmgr backup-target-type-add-projects
  • Alias:

openstack workloadmgr backup target type add projects

Options:

# workloadmgr help  backup-target-type-add-projects
usage: workloadmgr backup-target-type-add-projects [-h]
                                                   [-f {json,shell,table,value,yaml}]
                                                   [-c COLUMN]
                                                   [--noindent]
                                                   [--prefix PREFIX]
                                                   [--max-width <integer>]
                                                   [--fit-width] [--print-empty]
                                                   [--backup-target-type-id <backup_target_type_id>]
                                                   [--project-ids <project-ids> [<project-ids> ...]]

Assign projects to existing backup target type

options:
  -h, --help            show this help message and exit
  --backup-target-type-id <backup_target_type_id>
                        ID of the backup target type for which given projects
                        will be assigned
  --project-ids <project-ids> [<project-ids> ...]
                        Required to assign BTT to projects

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.

Example:
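
An illustrative invocation, using placeholder BTT and project IDs:

# workloadmgr backup-target-type-add-projects --backup-target-type-id 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d --project-ids 01fca51462a44bfa821130dce9baac1a 33b4db1099ff4a65a4c1f69a14f932ee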

Unassigning Projects:

  • Command:

workloadmgr backup-target-type-remove-projects
  • Alias:

openstack workloadmgr backup target type remove projects

Options:

# workloadmgr help  backup-target-type-remove-projects
usage: workloadmgr backup-target-type-remove-projects [-h]
                                                      [-f {json,shell,table,value,yaml}]
                                                      [-c COLUMN]
                                                      [--noindent]
                                                      [--prefix PREFIX]
                                                      [--max-width <integer>]
                                                      [--fit-width]
                                                      [--print-empty]
                                                      [--backup-target-type-id <backup_target_type_id>]
                                                      [--project-ids <project-ids> [<project-ids> ...]]

Remove already assigned projects from backup target types

options:
  -h, --help            show this help message and exit
  --backup-target-type-id <backup_target_type_id>
                        ID of the backup target type for which given projects
                        will be assigned
  --project-ids <project-ids> [<project-ids> ...]
                        Required to assign BTT to projects

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX
                        add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.

Example:
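
An illustrative invocation, using placeholder BTT and project IDs:

# workloadmgr backup-target-type-remove-projects --backup-target-type-id 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d --project-ids 33b4db1099ff4a65a4c1f69a14f932ee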

Add/Remove BTT Metadata

Using Horizon Dashboard

Metadata updates are only possible through the CLI; Horizon does not provide this option.

Using CLI

Adding Metadata:

  • Command:

workloadmgr backup-target-type-add-metadata
  • Alias:

openstack workloadmgr backup target type add metadata

Options:

# workloadmgr help backup-target-type-add-metadata
usage: workloadmgr backup-target-type-add-metadata [-h]
                                                   [-f {json,shell,table,value,yaml}]
                                                   [-c COLUMN] [--noindent]
                                                   [--prefix PREFIX]
                                                   [--max-width <integer>]
                                                   [--fit-width]
                                                   [--print-empty]
                                                   [--backup-target-type-id <backup_target_type_id>]
                                                   [--metadata <key=key-name>]

Add metadata to existing backup target type

optional arguments:
  -h, --help            show this help message and exit
  --backup-target-type-id <backup_target_type_id>
                        ID of the backup target type for which given metadata
                        will be created
  --metadata <key=key-name>
                        Specify a key value pairs to include in the BTT
                        metadata Specify option multiple times to include
                        multiple keys. key=value

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX       add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.
  • Example:
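
An illustrative invocation that adds a placeholder key/value pair to a placeholder BTT:

# workloadmgr backup-target-type-add-metadata --backup-target-type-id 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d --metadata tier=gold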

Removing Metadata:

  • Command:

workloadmgr backup-target-type-remove-metadata
  • Alias:

openstack workloadmgr backup target type remove metadata
  • Options:

# workloadmgr help backup-target-type-remove-metadata
usage: workloadmgr backup-target-type-remove-metadata [-h]
                                                      [-f {json,shell,table,value,yaml}]
                                                      [-c COLUMN] [--noindent]
                                                      [--prefix PREFIX]
                                                      [--max-width <integer>]
                                                      [--fit-width]
                                                      [--print-empty]
                                                      [--backup-target-type-id <backup_target_type_id>]
                                                      [--metadata-keys <metadata-keys> [<metadata-keys> ...]]

Remove metadata from existing backup target type

optional arguments:
  -h, --help            show this help message and exit
  --backup-target-type-id <backup_target_type_id>
                        ID of the backup target type for which given projects
                        will be assigned
  --metadata-keys <metadata-keys> [<metadata-keys> ...]
                        Required to remove metadata of BTT

output formatters:
  output formatter options

  -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml}
                        the output format, defaults to table
  -c COLUMN, --column COLUMN
                        specify the column(s) to include, can be repeated to
                        show multiple columns

json formatter:
  --noindent            whether to disable indenting the JSON

shell formatter:
  a format a UNIX shell can parse (variable="value")

  --prefix PREFIX       add a prefix to all variable names

table formatter:
  --max-width <integer>
                        Maximum display width, <1 to disable. You can also use
                        the CLIFF_MAX_TERM_WIDTH environment variable, but the
                        parameter takes precedence.
  --fit-width           Fit the table to the display width. Implied if --max-
                        width greater than 0. Set the environment variable
                        CLIFF_FIT_WIDTH=1 to always enable
  --print-empty         Print empty table if there is no data to show.

Example:
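
An illustrative invocation that removes a placeholder metadata key from a placeholder BTT:

# workloadmgr backup-target-type-remove-metadata --backup-target-type-id 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d --metadata-keys tier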

Delete a BTT

Removing Backup Target Types from an active Workload can lead to inconsistent behavior and potential backup operation failures.

Using Horizon Dashboard

  • Log in to the OpenStack Horizon Dashboard as an Admin user.

  • Navigate to Admin -> Backups-Admin -> Backup Targets.

  • On the page, click the Backup Target Types tab to see the list of Backup Target Types.

  • Click the dropdown button under the Actions column of the BTT List table of the desired BTT, click on the button, and confirm the deletion once prompted.

  • Deletion of multiple BTTs can be done by selecting the check boxes of the desired BTTs and then clicking the button at the top-right corner.

Using CLI

  • Command:

workloadmgr backup-target-type-delete
  • Alias:

openstack workloadmgr backup target type delete
  • Options:

# workloadmgr help backup-target-type-delete
usage: workloadmgr backup-target-type-delete [-h] <backup_target_type_id>

Delete existing backup target type

positional arguments:
  <backup_target_type_id>
                        ID of the backup target type to delete.

optional arguments:
  -h, --help            show this help message and exit
  • Example:
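
An illustrative invocation, using a placeholder BTT ID:

# workloadmgr backup-target-type-delete 7a1d2b3c-4e5f-46a7-8c9d-0e1f2a3b4c5d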


3. User Interaction with Backup Target Types

How Users Choose Backup Storage:

  • Any user can select a Public Backup Target Type for storing backups, as these are universally accessible.

  • For Private Backup Target Types, users can only select them if the Backup Target Type is explicitly assigned to their project.

  • The user will have the option to select these Backup Target Types while creating a workload.

  • Please note that once the workload is created with the chosen Backup Target Type, it cannot be modified. The user has to recreate the workload if the Backup Target Type needs to be changed.


Example runbook for Disaster Recovery using NFS

This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.

The chosen scenario is following an actively used Trilio customer environment.

Scenario

There are two OpenStack clouds available, "OpenStack Cloud A" and "OpenStack Cloud B". "OpenStack Cloud B" is the Disaster Recovery restore point of "OpenStack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations are writing their Backups to NFS targets. "Trilio A" is writing to "NFS A1" and "Trilio B" is writing to "NFS B1". The NFS Volumes used are synced against another NFS Volume on the other side: "NFS A1" is syncing with "NFS B2" and "NFS B1" is syncing with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.

Disaster Recovery Scenario

This scenario will cover the Disaster Recovery of a single Workload and of a complete Cloud. All processes are done by the OpenStack administrator.

Prerequisites for the Disaster Recovery process

This runbook will assume that the following is true:

  • "OpenStack Cloud A" and "OpenStack Cloud B" both have an active Trilio installation with a valid license

  • "OpenStack Cloud A" and "OpenStack Cloud B" have free resources to host additional VMs

  • "OpenStack Cloud A" and "OpenStack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side

  • Access to a user with the admin role permissions on domain level

  • One of the OpenStack clouds is down/lost

For ease of writing, this runbook will assume from here on that "OpenStack Cloud A" is down and the Workloads are getting restored into "OpenStack Cloud B".

If shared Tenant networks are used beyond the floating IP, the following additional requirement applies: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.

Disaster Recovery of a single Workload

In this scenario, a Disaster Recovery of a single Workload can be done while both Clouds are still active. To do so, the following high-level process needs to be followed:

  1. Copy the Workload directories to the configured NFS Volume

  2. Make the right Mount-Paths available

  3. Reassign the Workload

  4. Restore the Workload

  5. Clean up

Copy the Workload directories to the configured NFS Volume

This process only shows how to get a Workload from "OpenStack Cloud A" to "OpenStack Cloud B". The vice versa process is similar.

As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".

Mount "NFS B2" Volume to a Trilio VM

It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.

# mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt

Identify the Workload on the "NFS B2" Volume

Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:

workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105

If the Workload ID is not known, the metadata available inside the Workload directories can be used to identify the correct Workload.

/…/workload_<id>/workload_db <<< Contains User ID and Project ID of the Workload owner
/…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected by the Workload

Copy the Workload

The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.

# cp -R /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105

Make the Mount-Paths available

Trilio backups use qcow2 backing files, which makes every incremental backup a synthetic full backup. These backing files can be made visible using the qemu-img tool.

# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536

backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated for each provided NFS-Share during the configuration of a Trilio installation.

This hash value is calculated based on the provided NFS-Share path: <NFS_IP>:/<path>. If even one character in the NFS-Share path differs between the provided NFS-Share paths, a completely different hash value is generated.

Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.

Afterwards, a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

Identify the base64 hash values

The hash values can be calculated using the base64 tool available in any Linux distribution. In this example, the NFS A1 export path is 10.10.2.20:/upstream_source and the NFS B2 export path is 10.20.3.22:/upstream_target.

# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

Create and bind the paths

Based on the identified base64 hash values the following paths are required on each Compute node.

/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

and

/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

In the scenario of this runbook, the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

To keep the desired mount across reboots, it is recommended to edit the fstab of all compute nodes accordingly.

# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl    none    bind    0 0

Reassign the workload

Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by OpenStack administrators.

Add admin-user to required domains and projects

To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

# source {customer admin rc file}  
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
# openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

Discover orphaned Workloads from NFS-Storage of Target Cloud

Each Trilio installation maintains a database of workloads that are known to it. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.

# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
+------------+--------------------------------------+----------------------------------+----------------------------------+  
|     Name   |                  ID                  |            Project ID            |  User ID                         |  
+------------+--------------------------------------+----------------------------------+----------------------------------+  
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
+------------+--------------------------------------+----------------------------------+----------------------------------+

List available projects on Target Cloud in the Target Domain

The identified orphaned workloads need to be assigned to their new projects. The following command lists all projects in the target_domain that are visible to the admin user in use.

# openstack project list --domain <target_domain>  
+----------------------------------+----------+  
| ID                               | Name     |  
+----------------------------------+----------+  
| 01fca51462a44bfa821130dce9baac1a | project1 |  
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
| 9139e694eb984a4a979b5ae8feb955af | project3 |  
+----------------------------------+----------+ 

List available users on the Target Cloud in the Target Project that have the right backup trustee role

To allow project owners to work with the workloads as well, the workloads are assigned to a user with the backup trustee role that exists in the target project.

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

Reassign the workload to the target project

Now that all information has been gathered, the workload can be reassigned to the target project.

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
+-----------+--------------------------------------+----------------------------------+----------------------------------+  
|    Name   |                  ID                  |            Project ID            |  User ID                         |  
+-----------+--------------------------------------+----------------------------------+----------------------------------+  
| project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
+-----------+--------------------------------------+----------------------------------+----------------------------------+ 

Verify the workload is available at the desired target_project

After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property          | Value                                                                                                |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova                                                                                                 |
| created_at        | 2019-04-18T02:19:39.000000                                                                           |
| description       | Test Linux VMs                                                                                       |
| error_msg         | None                                                                                                 |
| id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
| instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
|                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
| interval          | None                                                                                                 |
| jobschedule       | True                                                                                                 |
| name              | Test Linux                                                                                           |
| project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
| scheduler_trust   | None                                                                                                 |
| status            | available                                                                                            |
| storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
|                   | "snap_count": 13}}                                                                                   |
| updated_at        | 2019-11-15T02:32:43.000000                                                                           |
| user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
| workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
+-------------------+------------------------------------------------------------------------------------------------------+

Restore the workload

The reassigned workload can be restored using Horizon following the procedure described here.

This runbook will continue on the CLI only path.

Prepare the selective restore by getting the snapshot information

To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

List all Snapshots of the workload to restore to identify the snapshot to restore

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True

+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
|         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

Get Snapshot Details with network details for the desired snapshot

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9

+-------------------+--------------------------------------+
| Snapshot property | Value                                |
+-------------------+--------------------------------------+
| description       | None                                 |
| host              | Upstream2                            |
| id                | 7e39e544-537d-4417-853d-11463e7396f9 |
| name              | jobscheduler                         |
| progress_percent  | 100                                  |
| restore_size      | 44040192 Bytes or Approx (42.0MB)    |
| restores_info     |                                      |
| size              | 1310720 Bytes or Approx (1.2MB)      |
| snapshot_type     | incremental                          |
| status            | available                            |
| time_taken        | 154 Seconds                          |
| uploaded_size     | 1310720                              |
| workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+

+----------------+---------------------------------------------------------------------------------------------------------------------+
|   Instances    |                                                        Value                                                        |
+----------------+---------------------------------------------------------------------------------------------------------------------+
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-1                                                    |
|       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
|                |                                                                                                                     |
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-2                                                    |
|       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
|                |                                                                                                                     |
+----------------+---------------------------------------------------------------------------------------------------------------------+

+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
|   Networks  | Value                                                                                                                                        |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
|  ip_address | 172.20.20.20                                                                                                                                 |
|    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
|   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
| mac_address | fa:16:3e:74:58:bb                                                                                                                            |
|             |                                                                                                                                              |
|  ip_address | 172.20.20.13                                                                                                                                 |
|    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
|   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
| mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

Get Snapshot Details with disk details for the desired Snapshot

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9

+-------------------+--------------------------------------+
| Snapshot property | Value                                |
+-------------------+--------------------------------------+
| description       | None                                 |
| host              | Upstream2                            |
| id                | 7e39e544-537d-4417-853d-11463e7396f9 |
| name              | jobscheduler                         |
| progress_percent  | 100                                  |
| restore_size      | 44040192 Bytes or Approx (42.0MB)    |
| restores_info     |                                      |
| size              | 1310720 Bytes or Approx (1.2MB)      |
| snapshot_type     | incremental                          |
| status            | available                            |
| time_taken        | 154 Seconds                          |
| uploaded_size     | 1310720                              |
| workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+

+----------------+---------------------------------------------------------------------------------------------------------------------+
|   Instances    |                                                        Value                                                        |
+----------------+---------------------------------------------------------------------------------------------------------------------+
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-1                                                    |
|       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
|                |                                                                                                                     |
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-2                                                    |
|       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
|                |                                                                                                                     |
+----------------+---------------------------------------------------------------------------------------------------------------------+

+-------------------+--------------------------------------------------+
|       Vdisks      |                      Value                       |
+-------------------+--------------------------------------------------+
| volume_mountpoint |                     /dev/vda                     |
|    restore_size   |                     22020096                     |
|    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
|    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
|    volume_type    |                       None                       |
|       label       |                       None                       |
|    volume_size    |                        1                         |
|     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
| availability_zone |                       nova                       |
|       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
|      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
|                   |                                                  |
| volume_mountpoint |                     /dev/vda                     |
|    restore_size   |                     22020096                     |
|    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
|    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
|    volume_type    |                       None                       |
|       label       |                       None                       |
|    volume_size    |                        1                         |
|     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
| availability_zone |                       nova                       |
|       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
|      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
|                   |                                                  |
+-------------------+--------------------------------------------------+

Prepare the selective restore by creating the restore.json file

The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

{
   u'description':u'<description of the restore>',
   u'oneclickrestore':False,
   u'restore_type':u'selective',
   u'type':u'openstack',
   u'name':u'<name of the restore>',
   u'openstack':{
      u'instances':[
         {
            u'name':u'<name instance 1>',
            u'availability_zone':u'<AZ instance 1>',
            u'nics':[ #####Leave empty for network topology restore
            ],
            u'vdisks':[
               {
                  u'id':u'<old disk id>',
                  u'new_volume_type':u'<new volume type name>',
                  u'availability_zone':u'<new cinder volume AZ>'
               }
            ],
            u'flavor':{
               u'ram':<RAM in MB>,
               u'ephemeral':<GB of ephemeral disk>,
               u'vcpus':<# vCPUs>,
               u'swap':u'<GB of Swap disk>',
               u'disk':<GB of boot disk>,
               u'id':u'<id of the flavor to use>'
            },
            u'include':<True/False>,
            u'id':u'<old id of the instance>'
         } #####Repeat for each instance in the snapshot
      ],
      u'restore_topology':<True/False>,
      u'networks_mapping':{
         u'networks':[ #####Leave empty for network topology restore
            
         ]
      }
   }
}

Run the selective restore

To do the actual restore use the following command:

# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

Verify the restore

To verify the success of the restore from a Trilio perspective the restore status is checked.

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af

+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
|         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property         | Value                                                                                                |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at       | 2019-09-24T12:44:38.000000                                                                           |
| description      | -                                                                                                    |
| error_msg        | None                                                                                                 |
| finished_at      | 2019-09-24T12:46:07.000000                                                                           |
| host             | Upstream2                                                                                            |
| id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
| instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
|                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
| name             | OneClick Restore                                                                                     |
| progress_msg     | Restore from snapshot is complete                                                                    |
| progress_percent | 100                                                                                                  |
| project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
| restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
|                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
|                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
| restore_type     | restore                                                                                              |
| size             | 41126400                                                                                             |
| snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
| status           | available                                                                                            |
| time_taken       | 89                                                                                                   |
| updated_at       | 2019-09-24T12:44:38.000000                                                                           |
| uploaded_size    | 41126400                                                                                             |
| user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
| warning_msg      | None                                                                                                 |
| workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
+------------------+------------------------------------------------------------------------------------------------------+

Clean up

After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.

Delete the workload

Delete the workload that got restored.

# workloadmgr workload-delete <workload_id>

Remove the database entry

The Trilio database follows the OpenStack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.

To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.

Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.

This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

Remove the admin user from the project

After all restores for the target project have been completed, it is recommended to remove the admin user from the project again.

# source {customer admin rc file}  
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

Disaster Recovery of a complete cloud

This scenario covers the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery, the following high-level process needs to be followed:

  1. Reconfigure the Target Trilio installation

  2. Make the right Mount-Paths available

  3. Reassign the Workload

  4. Restore the Workload

  5. Reconfigure the Target Trilio installation back to the original one

  6. Clean up

Reconfigure the Target Trilio installation

Before the Disaster Recovery Process can start, it is necessary to make the backups to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.

During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

Add NFS B2 to the Trilio Appliance Cluster

To add NFS B2 to the Trilio Appliance cluster, Trilio can either be fully reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. It needs to be repeated on every Trilio Appliance.

Edit the workloadmgr.conf

# vi /etc/workloadmgr/workloadmgr.conf

Look for the line defining the NFS mounts

vault_storage_nfs_export = <NFS_B1/NFS_B1-FQDN>:/<VOL-B1-Path>

Add NFS B2 to it as a comma-separated list. A space after the comma is not necessary, but can be added.

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Write and close the workloadmgr.conf

Restart the wlm-workloads service

# systemctl restart wlm-workloads
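
As an alternative to editing the file by hand, the second share can be appended non-interactively. The following one-liner is only a sketch; the share addresses are placeholders and must match the values used in your environment.

# sed -i 's|^vault_storage_nfs_export = .*|vault_storage_nfs_export = <NFS_B1-IP>:/<VOL-B1-Path>,<NFS_B2-IP>:/<VOL-B2-Path>|' /etc/workloadmgr/workloadmgr.conf
# systemctl restart wlm-workloads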

Add NFS B2 to the Trilio Datamovers

Trilio integrates natively with the OpenStack deployment tools. When using the Red Hat director or JuJu charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

To add NFS B2 to the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

Edit the tvault-contego.conf

# vi /etc/tvault-contego/tvault-contego.conf

Look for the line defining the NFS mounts

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

Add NFS B2 to it as a comma-separated list. A space after the comma is not necessary, but can be added.

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Write and close the tvault-contego.conf

Restart the tvault-contego service

# systemctl restart tvault-contego

Make the Mount-Paths available

Trilio backups use qcow2 backing files, which make every incremental backup a synthetic full backup. These backing files can be made visible using the qemu-img tool.

# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536

backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated during the configuration of a Trilio installation for each provided NFS-Share.

This hash value is calculated from the provided NFS-Share path (<NFS_IP>:/<path>). If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.
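
A mount directory can also be mapped back to its NFS-Share by decoding the hash. A quick check, using the hash from the qemu-img output above:

# echo -n MTAuMTAuMi4yMDovdXBzdHJlYW0= | base64 -d
10.10.2.20:/upstream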

Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same backing-file path as on their original Source Cloud. To achieve this, it is necessary to create the original mount path on all compute nodes of the Target Cloud.

Afterwards, a bind mount is used to make the workload data accessible through both the old and the new mount path. The following example shows how to identify the necessary mount points and create the bind mount.

Identify the base64 hash values

The used hash values can be calculated using the base64 tool in any Linux distribution.

# echo -n 10.10.2.20:/NFS_A1 | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# echo -n 10.20.3.22:/NFS_B2 | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

Create and bind the paths

Based on the identified base64 hash values the following paths are required on each Compute node.

/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

and

/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

In the scenario of this runbook, the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

To keep the desired bind mount across reboots, it is recommended to edit the fstab on all compute nodes accordingly.

# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl    none    bind    0 0
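
Before starting a restore, it is worth verifying on the compute nodes that the bind mount is active. A quick check, using the example hash from this runbook:

# findmnt /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl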

Reassign the workload

Trilio workloads have clear ownership. When a workload is moved to a different cloud, it is necessary to change the ownership. The ownership can only be changed by OpenStack administrators.

Add admin-user to required domains and projects

To fulfill the required tasks, a user with the admin role is used. This user is needed until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

# source {customer admin rc file}  
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
# openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

Discover orphaned Workloads from NFS-Storage of Target Cloud

Each Trilio installation maintains a database of workloads that are known to that installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.

# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
+------------+--------------------------------------+----------------------------------+----------------------------------+  
|     Name   |                  ID                  |            Project ID            |  User ID                         |  
+------------+--------------------------------------+----------------------------------+----------------------------------+  
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
+------------+--------------------------------------+----------------------------------+----------------------------------+

List available projects on Target Cloud in the Target Domain

The identified orphaned workloads need to be assigned to their new projects. The following command lists all projects in the target_domain that are visible to the admin user.

# openstack project list --domain <target_domain>  
+----------------------------------+----------+  
| ID                               | Name     |  
+----------------------------------+----------+  
| 01fca51462a44bfa821130dce9baac1a | project1 |  
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
| 9139e694eb984a4a979b5ae8feb955af | project3 |  
+----------------------------------+----------+ 

List available users on the Target Cloud in the Target Project that have the right backup trustee role

To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role                             | User                             | Group | Project                          | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

Reassign the workload to the target project

Now that all information has been gathered, the workload can be reassigned to the target project.

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
+-----------+--------------------------------------+----------------------------------+----------------------------------+  
|    Name   |                  ID                  |            Project ID            |  User ID                         |  
+-----------+--------------------------------------+----------------------------------+----------------------------------+  
| project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
+-----------+--------------------------------------+----------------------------------+----------------------------------+ 

Verify the workload is available at the desired target_project

After the workload has been assigned to the new project, it is recommended to verify that the workload is managed by the Target Trilio and assigned to the right project and user.

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property          | Value                                                                                                |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova                                                                                                 |
| created_at        | 2019-04-18T02:19:39.000000                                                                           |
| description       | Test Linux VMs                                                                                       |
| error_msg         | None                                                                                                 |
| id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
| instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
|                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
| interval          | None                                                                                                 |
| jobschedule       | True                                                                                                 |
| name              | Test Linux                                                                                           |
| project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
| scheduler_trust   | None                                                                                                 |
| status            | available                                                                                            |
| storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
|                   | "snap_count": 13}}                                                                                   |
| updated_at        | 2019-11-15T02:32:43.000000                                                                           |
| user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
| workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
+-------------------+------------------------------------------------------------------------------------------------------+

Restore the workload

The reassigned workload can be restored using Horizon, following the procedure described here.

This runbook will continue on the CLI-only path.

Prepare the selective restore by getting the snapshot information

To be able to do the necessary selective restore, a few pieces of information about the snapshot to be restored are required. The following process provides all necessary information.

List all Snapshots of the workload to restore to identify the snapshot to restore

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True

+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
|         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

Get Snapshot Details with network details for the desired snapshot

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9

+-------------------+--------------------------------------+
| Snapshot property | Value                                |
+-------------------+--------------------------------------+
| description       | None                                 |
| host              | Upstream2                            |
| id                | 7e39e544-537d-4417-853d-11463e7396f9 |
| name              | jobscheduler                         |
| progress_percent  | 100                                  |
| restore_size      | 44040192 Bytes or Approx (42.0MB)    |
| restores_info     |                                      |
| size              | 1310720 Bytes or Approx (1.2MB)      |
| snapshot_type     | incremental                          |
| status            | available                            |
| time_taken        | 154 Seconds                          |
| uploaded_size     | 1310720                              |
| workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+

+----------------+---------------------------------------------------------------------------------------------------------------------+
|   Instances    |                                                        Value                                                        |
+----------------+---------------------------------------------------------------------------------------------------------------------+
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-1                                                    |
|       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
|                |                                                                                                                     |
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-2                                                    |
|       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
|                |                                                                                                                     |
+----------------+---------------------------------------------------------------------------------------------------------------------+

+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
|   Networks  | Value                                                                                                                                        |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
|  ip_address | 172.20.20.20                                                                                                                                 |
|    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
|   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
| mac_address | fa:16:3e:74:58:bb                                                                                                                            |
|             |                                                                                                                                              |
|  ip_address | 172.20.20.13                                                                                                                                 |
|    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
|   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
| mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

Get Snapshot Details with disk details for the desired Snapshot

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9

+-------------------+--------------------------------------+
| Snapshot property | Value                                |
+-------------------+--------------------------------------+
| description       | None                                 |
| host              | Upstream2                            |
| id                | 7e39e544-537d-4417-853d-11463e7396f9 |
| name              | jobscheduler                         |
| progress_percent  | 100                                  |
| restore_size      | 44040192 Bytes or Approx (42.0MB)    |
| restores_info     |                                      |
| size              | 1310720 Bytes or Approx (1.2MB)      |
| snapshot_type     | incremental                          |
| status            | available                            |
| time_taken        | 154 Seconds                          |
| uploaded_size     | 1310720                              |
| workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+

+----------------+---------------------------------------------------------------------------------------------------------------------+
|   Instances    |                                                        Value                                                        |
+----------------+---------------------------------------------------------------------------------------------------------------------+
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-1                                                    |
|       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
|                |                                                                                                                     |
|     Status     |                                                      available                                                      |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
|     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
|      Name      |                                                     Test-Linux-2                                                    |
|       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
|                |                                                                                                                     |
+----------------+---------------------------------------------------------------------------------------------------------------------+

+-------------------+--------------------------------------------------+
|       Vdisks      |                      Value                       |
+-------------------+--------------------------------------------------+
| volume_mountpoint |                     /dev/vda                     |
|    restore_size   |                     22020096                     |
|    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
|    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
|    volume_type    |                       None                       |
|       label       |                       None                       |
|    volume_size    |                        1                         |
|     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
| availability_zone |                       nova                       |
|       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
|      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
|                   |                                                  |
| volume_mountpoint |                     /dev/vda                     |
|    restore_size   |                     22020096                     |
|    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
|    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
|    volume_type    |                       None                       |
|       label       |                       None                       |
|    volume_size    |                        1                         |
|     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
| availability_zone |                       nova                       |
|       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
|      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
|                   |                                                  |
+-------------------+--------------------------------------------------+

Prepare the selective restore by creating the restore.json file

The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

{
   u'description':u'<description of the restore>',
   u'oneclickrestore':False,
   u'restore_type':u'selective',
   u'type':u'openstack',
   u'name':u'<name of the restore>'
   u'openstack':{
      u'instances':[
         {
            u'name':u'<name instance 1>',
            u'availability_zone':u'<AZ instance 1>',
            u'nics':[ #####Leave empty for network topology restore
            ],
            u'vdisks':[
               {
                  u'id':u'<old disk id>',
                  u'new_volume_type':u'<new volume type name>',
                  u'availability_zone':u'<new cinder volume AZ>'
               }
            ],
            u'flavor':{
               u'ram':<RAM in MB>,
               u'ephemeral':<GB of ephemeral disk>,
               u'vcpus':<# vCPUs>,
               u'swap':u'<GB of Swap disk>',
               u'disk':<GB of boot disk>,
               u'id':u'<id of the flavor to use>'
            },
            u'include':<True/False>,
            u'id':u'<old id of the instance>'
         } #####Repeat for each instance in the snapshot
      ],
      u'restore_topology':<True/False>,
      u'networks_mapping':{
         u'networks':[ #####Leave empty for network topology restore
            
         ]
      }
   }
}
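
The following is a minimal filled-in example based on the snapshot details shown above, restoring only Test-Linux-1 with the original network topology. The flavor ID, volume type, and availability zones are placeholders that must be adjusted to the Target Cloud.

{
   u'description':u'DR restore of Test-Linux-1',
   u'oneclickrestore':False,
   u'restore_type':u'selective',
   u'type':u'openstack',
   u'name':u'DR restore',
   u'openstack':{
      u'instances':[
         {
            u'name':u'Test-Linux-1',
            u'availability_zone':u'nova',
            u'nics':[
            ],
            u'vdisks':[
               {
                  u'id':u'0027b140-a427-46cb-9ccf-7895c7624493',
                  u'new_volume_type':u'<volume type in the Target Cloud>',
                  u'availability_zone':u'nova'
               }
            ],
            u'flavor':{
               u'ram':512,
               u'ephemeral':0,
               u'vcpus':1,
               u'swap':u'',
               u'disk':1,
               u'id':u'<flavor id in the Target Cloud>'
            },
            u'include':True,
            u'id':u'38b620f1-24ae-41d7-b0ab-85ffc2d7958b'
         }
      ],
      u'restore_topology':True,
      u'networks_mapping':{
         u'networks':[
         ]
      }
   }
}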

Run the selective restore

To do the actual restore use the following command:

# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

Verify the restore

To verify the success of the restore from a Trilio perspective, check the restore status.

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af

+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
|         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property         | Value                                                                                                |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at       | 2019-09-24T12:44:38.000000                                                                           |
| description      | -                                                                                                    |
| error_msg        | None                                                                                                 |
| finished_at      | 2019-09-24T12:46:07.000000                                                                           |
| host             | Upstream2                                                                                            |
| id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
| instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
|                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
| name             | OneClick Restore                                                                                     |
| progress_msg     | Restore from snapshot is complete                                                                    |
| progress_percent | 100                                                                                                  |
| project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
| restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
|                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
|                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
| restore_type     | restore                                                                                              |
| size             | 41126400                                                                                             |
| snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
| status           | available                                                                                            |
| time_taken       | 89                                                                                                   |
| updated_at       | 2019-09-24T12:44:38.000000                                                                           |
| uploaded_size    | 41126400                                                                                             |
| user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
| warning_msg      | None                                                                                                 |
| workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
+------------------+------------------------------------------------------------------------------------------------------+

Reconfigure the Target Trilio installation back to the original one

After the Disaster Recovery Process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.

During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

Delete NFS B2 from the Trilio Appliance Cluster

To remove NFS B2 from the Trilio Appliance cluster, Trilio can either be fully reconfigured to use only NFS B1, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. It needs to be repeated on every Trilio Appliance.

Edit the workloadmgr.conf

# vi /etc/workloadmgr/workloadmgr.conf

Look for the line defining the NFS mounts

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Delete NFS B2 from the comma-separated list

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

Write and close the workloadmgr.conf

Restart the wlm-workloads service

# systemctl restart wlm-workloads

Delete NFS B2 from the Trilio Datamovers

Trilio integrates natively with the OpenStack deployment tools. When using the Red Hat director or JuJu charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

Edit the tvault-contego.conf

# vi /etc/tvault-contego/tvault-contego.conf

Look for the line defining the NFS mounts

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Delete NFS B2 from the comma-separated list

vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

Write and close the tvault-contego.conf

Restart the tvault-contego service

# systemctl restart tvault-contego

Clean up

After the Disaster Recovery Process has been successfully completed and the Trilio installation has been reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.

Remove the database entry

The Trilio database follows the OpenStack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.

To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.

Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.

This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

Remove the admin user from the project

After all restores for the target project have been completed, it is recommended to remove the admin user from the project again.

# source {customer admin rc file}  
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

Workloads

Field Descriptions

Below are some of the most common fields you will find in the request and response bodies while working with these APIs.

Workload Fields

| Field | Type | Description |
|---|---|---|
| name | String | Name of the workload. |
| description | String | Description of the workload. |
| workload_type_id | String | Unique identifier for the selected workload type. |
| source_platform | String | Specifies the source platform (e.g., openstack). |

Instance List

| Field | Type | Description |
|---|---|---|
| instance-id | String | Unique identifier of the instance to be included in the workload. |

Job Schedule

| Field | Type | Description |
|---|---|---|
| timezone | String | Time zone for the job schedule. |
| start_date | String | Start date of the schedule (Format: MM/DD/YYYY). |
| end_date | String | End date of the schedule (Format: MM/DD/YYYY). |
| start_time | String | Time when the schedule begins (Format: HH:MM AM/PM). |
| enabled | Boolean | True if scheduling is enabled, False otherwise. |

Scheduling Types

| Schedule Type | Field | Type | Description | Dependencies |
|---|---|---|---|---|
| Hourly | interval | Integer | Backup interval in hours (1, 2, 3, 4, 6, 12, 24). | If schedule enabled is set to true, the Hourly field must be provided. |
| Hourly | retention | Integer | Retention period in backups. | |
| Hourly | snapshot_type | String | Snapshot type (incremental or full). | |
| Daily | backup_time | List of String | List of specific times (HH:MM, 24-hour format). | Requires hourly |
| Daily | retention | Integer | Retention period in backups. | |
| Daily | snapshot_type | String | Snapshot type (incremental or full). | |
| Weekly | backup_day | List of String | Days of the week (mon, tue, wed, thu, fri, sat, sun). | Requires daily |
| Weekly | retention | Integer | Retention period in backups. | |
| Weekly | snapshot_type | String | Only supports full backups. | |
| Monthly | month_backup_day | List of Integer | Days of the month (1-31). | Requires daily |
| Monthly | retention | Integer | Retention period in backups. | |
| Monthly | snapshot_type | String | Only supports full backups. | |
| Yearly | backup_month | List of String | List of months (jan, feb, mar, ... dec). | Requires monthly |
| Yearly | retention | Integer | Retention period in backups. | |
| Yearly | snapshot_type | String | Only supports full backups. | |
| Manual | retention | Integer | Retention period in backups. | |
| Manual | retention_days_to_keep | Integer | Number of days to keep manually triggered backups. | |

Metadata

| Field | Type | Description |
|---|---|---|
| <key> | String | Custom metadata key-value pairs. |
| policy_id | String | ID of the backup policy associated with the workload. |

Backup Target Types

| Field | Type | Description |
|---|---|---|
| backup_target_types | String | Backup target type ID specifying where the backups will be stored. |

List Workloads

GET https://<wlm_api_endpoint>/workloads

Provides the list of all workloads for the given tenant/project ID

Path Parameters

| Parameter Name | Description |
|---|---|
| wlm_api_endpoint | The endpoint URL of the Workloadmgr service |

Query Parameters

| Name | Type | Description |
|---|---|---|
| nfs_share | string | Lists workloads located on a specific nfs-share |
| all_workloads | boolean | Admin role required - True lists workloads of all tenants/projects |

Headers

| Name | Type | Description |
|---|---|---|
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
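
The same request can be issued with curl. A minimal sketch; the endpoint, token, and project ID are placeholders:

# curl -s "https://<wlm_api_endpoint>/workloads" \
       -H "X-Auth-Token: <token>" \
       -H "X-Auth-Project-Id: <project_id>" \
       -H "Accept: application/json"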

Sample Response
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 14:55:40 GMT
Content-Type: application/json
Content-Length: 3480
Connection: keep-alive
X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948

{
   "workloads":[
      {
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":"adfa32d7746a4341b27377d6f7c61adb",
         "id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
         "name":"worklaod1",
         "snapshots_info":"",
         "description":"no-description",
         "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
         "status":"available",
         "created_at":"2020-10-26T12:07:01.000000",
         "updated_at":"2020-10-29T12:22:26.000000",
         "scheduler_trust":null,
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
            }
         ]
      },
      {
         "project_id":"4dfe98a43bfa404785a812020066b4d6",
         "user_id":"adfa32d7746a4341b27377d6f7c61adb",
         "id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
         "name":"workload2",
         "snapshots_info":"",
         "description":"no-description",
         "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
         "status":"available",
         "created_at":"2020-10-20T09:51:15.000000",
         "updated_at":"2020-10-29T10:03:33.000000",
         "scheduler_trust":null,
         "links":[
            {
               "rel":"self",
               "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
            },
            {
               "rel":"bookmark",
               "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
            }
         ]
      }
   ]
}

Create Workload

POST https://<wlm_api_endpoint>/workloads

Creates a workload in the provided Tenant/Project with the given details.

Path Parameters

| Parameter Name | Description |
|---|---|
| wlm_api_endpoint | The endpoint URL of the Workloadmgr service |

Headers

| Name | Type | Description |
|---|---|---|
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

Body format

Workload Create requires a Body in JSON format to provide the requested information.

Using a policy_id will pull the following information from the policy. Values provided in the Body will be overwritten by the values from the Policy.

hourly
daily
weekly
monthly
yearly
{
  "workload": {
    "name": "<name of the Workload>",
    "description": "<description of workload>",
    "workload_type_id": "<ID of the chosen Workload Type>",
    "source_platform": "openstack",
    "instances": [
      {
        "instance-id": "<Instance ID>"
      },
      {
        "instance-id": "<Instance ID>"
      }
    ],
    "jobschedule": {
      "timezone": "<timezone>",
      "start_date": "<Date format: MM/DD/YYYY>",
      "end_date": "<Date format: MM/DD/YYYY>",
      "start_time": "<Time format: HH:MM AM/PM>",
      "enabled": "<True/False>",
      "hourly": {
        "interval": "<1, 2, 3, 4, 6, 12, 24 hours>",
        "retention": "<Integer>",
        "snapshot_type": "incremental/full"
      },
      "daily": {
        "depends_on": "hourly",
        "backup_time": [
          "<HH:MM 24-hour format>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "incremental/full"
      },
      "weekly": {
        "depends_on": "daily",
        "backup_day": [
          "<mon, tue, wed, thu, fri, sat, sun>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "monthly": {
        "depends_on": "daily",
        "month_backup_day": [
          "<Integer: day of the month (1-31)>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "yearly": {
        "depends_on": "monthly",
        "backup_month": [
          "<jan, feb, mar, ... dec>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "manual": {
        "retention": "<Integer>",
        "retention_days_to_keep": "<Integer>"
      }
    },
    "metadata": {
      "<key>": "<value>",
      "policy_id": "<policy_id>"
    },
    "backup_target_types": "<backup_target_type_id>"
  }
}

Sample Request Body
{
  "workload": {
    "name": "workload_cli",
    "description": null,
    "source_platform": null,
    "instances": [
      {
        "instance-id": "14309d25-23dd-47da-bf60-febc8c25b636"
      }
    ],
    "jobschedule": {
      "start_date": "01/28/2025",
      "enabled": true,
      "start_time": "02:15 PM",
      "timezone": "Etc/UTC",
      "hourly": {
        "interval": 1,
        "retention": 2,
        "snapshot_type": "incremental"
      },
      "daily": {
        "depends_on": "hourly",
        "backup_time": [
          "14:15"
        ],
        "retention": 2,
        "snapshot_type": "incremental"
      },
      "weekly": {
        "depends_on": "daily",
        "backup_day": [
          "wed"
        ],
        "retention": 2,
        "snapshot_type": "incremental"
      },
      "monthly": {
        "depends_on": "daily",
        "month_backup_day": [
          20
        ],
        "retention": 2,
        "snapshot_type": "full"
      },
      "yearly": {
        "depends_on": "monthly",
        "backup_month": [
          "mar"
        ],
        "retention": 1,
        "snapshot_type": "full"
      },
      "manual": {
        "retention": 21
      }
    },
    "metadata": {},
    "encryption": false,
    "secret_uuid": null,
    "backup_target_types": "6ba9fd82-151b-4f5a-bdbf-44504c2e210e"
  }
}
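
Assuming the request body above is saved as workload_create.json (a hypothetical file name), the request can be sent with curl as a minimal sketch:

# curl -s -X POST "https://<wlm_api_endpoint>/workloads" \
       -H "X-Auth-Token: <token>" \
       -H "X-Auth-Project-Id: <project_id>" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -d @workload_create.json
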
Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 15:42:02 GMT
Content-Type: application/json
Content-Length: 703
Connection: keep-alive
X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede

{
   "workload":{
      "project_id":"c76b3355a164498aa95ddbc960adc238",
      "user_id":"ccddc7e7a015487fa02920f4d4979779",
      "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
      "name":"API created",
      "snapshots_info":"",
      "description":"API description",
      "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
      "status":"creating",
      "created_at":"2020-10-29T15:42:01.000000",
      "updated_at":"2020-10-29T15:42:01.000000",
      "scheduler_trust":null,
      "links":[
         {
            "rel":"self",
            "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
         },
         {
            "rel":"bookmark",
            "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
         }
      ]
   }
}

Show Workload

GET https://<wlm_api_endpoint>/workloads/<workload_id>

Shows all details of a specified workload

Path Parameters

| Parameter Name | Description |
|---|---|
| wlm_api_endpoint | The endpoint URL of the Workloadmgr service |
| workload_id | ID of the Workload to show |

Headers

| Name | Type | Description |
|---|---|---|
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
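
A minimal curl sketch for this request; the endpoint, workload ID, token, and project ID are placeholders:

# curl -s "https://<wlm_api_endpoint>/workloads/<workload_id>" \
       -H "X-Auth-Token: <token>" \
       -H "X-Auth-Project-Id: <project_id>" \
       -H "Accept: application/json"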

Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-3bd5dd1e-3064-4859-a530-7a5bf9f4278f
content-type: application/json
content-length: 1941
date: Wed, 29 Jan 2025 06:28:36 GMT

{
  "workload": {
    "created_at": "2025-01-28T12:23:49.000000",
    "updated_at": "2025-01-28T14:15:06.000000",
    "id": "4ddf0e47-618d-47d1-9f2f-48f342e1d9af",
    "encryption": false,
    "secret_uuid": null,
    "user_id": "6bbb210a29a043af86b7b0c667747187",
    "project_id": "dee550d3df5b497ca2e05044616bc8b1",
    "availability_zone": "nova",
    "workload_type_id": "f82ce76f-17fe-438b-aa37-7a023058e50d",
    "name": "workload_API",
    "description": "no-description",
    "interval": null,
    "storage_usage": {
      "usage": 0,
      "full": {
        "snap_count": 0,
        "usage": 0
      },
      "incremental": {
        "snap_count": 1,
        "usage": 0
      }
    },
    "instances": [
      {
        "id": "14309d25-23dd-47da-bf60-febc8c25b636",
        "name": "PM",
        "metadata": {}
      }
    ],
    "metadata": {
      "hostnames": "[]",
      "preferredgroup": "[]",
      "workload_approx_backup_size": "2.1",
      "backup_media_target": "192.168.1.34:/mnt/tvault/42436",
      "backup_target_types": "nfs_1",
      "backup_target_type": "nfs_1"
    },
    "jobschedule": {
      "start_date": "01/28/2025",
      "enabled": true,
      "start_time": "02:15 PM",
      "hourly": {
        "interval": "1",
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "daily": {
        "backup_time": ["14:15"],
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "weekly": {
        "backup_day": ["wed"],
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "monthly": {
        "month_backup_day": ["20"],
        "retention": "2",
        "snapshot_type": "full"
      },
      "yearly": {
        "backup_month": ["mar"],
        "retention": "1",
        "snapshot_type": "full"
      },
      "manual": {
        "retention": "21",
        "retention_days_to_keep": "5"
      },
      "timezone": "UTC",
      "global_jobscheduler": true,
      "nextrun": 2783.561769
    },
    "status": "locked",
    "error_msg": null,
    "links": [
      {
        "rel": "self",
        "href": "http://kolla-external-wallaby-dev4.triliodata.demo:8781/v1/dee550d3df5b497ca2e05044616bc8b1/workloads/4ddf0e47-618d-47d1-9f2f-48f342e1d9af"
      },
      {
        "rel": "bookmark",
        "href": "http://kolla-external-wallaby-dev4.triliodata.demo:8781/dee550d3df5b497ca2e05044616bc8b1/workloads/4ddf0e47-618d-47d1-9f2f-48f342e1d9af"
      }
    ],
    "scheduler_trust": null,
    "policy_id": null
  }
}

Modify Workload

PUT https://<wlm_api_endpoint>/workloads/<workload_id>

Modifies a workload in the provided Tenant/Project with the given details.

Path Parameters

| Parameter Name | Description |
|---|---|
| wlm_api_endpoint | The endpoint URL of the Workloadmgr service |
| workload_id | ID of the Workload |

Headers

| Name | Type | Description |
|---|---|---|
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |

Body format

Workload modify requires a Body in JSON format to provide the information about the values to modify.

All values in the body are optional.

Using a policy_id will pull the following information from the policy. Values provided in the Body will be overwritten by the values from the Policy.

hourly
daily
weekly
monthly
yearly
{
  "workload": {
    "name": "<name of the Workload>",
    "description": "<description of workload>",
    "instances": [
      {
        "instance-id": "<Instance ID>"
      },
      {
        "instance-id": "<Instance ID>"
      }
    ],
    "jobschedule": {
      "timezone": "<timezone>",
      "start_date": "<Date format: MM/DD/YYYY>",
      "end_date": "<Date format: MM/DD/YYYY>",
      "start_time": "<Time format: HH:MM AM/PM>",
      "enabled": "<True/False>",
      "hourly": {
        "interval": "<1, 2, 3, 4, 6, 12, 24 hours>",
        "retention": "<Integer>",
        "snapshot_type": "incremental/full"
      },
      "daily": {
        "depends_on": "hourly",
        "backup_time": [
          "<HH:MM 24-hour format>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "incremental/full"
      },
      "weekly": {
        "depends_on": "daily",
        "backup_day": [
          "<mon, tue, wed, thu, fri, sat, sun>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "monthly": {
        "depends_on": "daily",
        "month_backup_day": [
          "<Integer: day of the month (1-31)>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "yearly": {
        "depends_on": "monthly",
        "backup_month": [
          "<jan, feb, mar, ... dec>"
        ],
        "retention": "<Integer>",
        "snapshot_type": "full"
      },
      "manual": {
        "retention": "<Integer>",
        "retention_days_to_keep": "<Integer>"
      }
    },
    "metadata": {
      "<key>": "<value>",
      "policy_id": "<policy_id>"
    },
    
  }
}
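
Assuming the modified values are saved as workload_modify.json (a hypothetical file name), the request can be sent with curl as a minimal sketch:

# curl -s -X PUT "https://<wlm_api_endpoint>/workloads/<workload_id>" \
       -H "X-Auth-Token: <token>" \
       -H "X-Auth-Project-Id: <project_id>" \
       -H "Content-Type: application/json" \
       -H "Accept: application/json" \
       -d @workload_modify.json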

Field Descriptions

Workload Fields

| Field | Type | Description |
|---|---|---|
| name | String | Name of the workload. |
| description | String | Description of the workload. |
| workload_type_id | String | Unique identifier for the selected workload type. |
| source_platform | String | Specifies the source platform (e.g., openstack). |

Instance List

| Field | Type | Description |
|---|---|---|
| instance-id | String | Unique identifier of the instance to be included in the workload. |

Job Schedule

| Field | Type | Description |
|---|---|---|
| timezone | String | Time zone for the job schedule. |
| start_date | String | Start date of the schedule (Format: MM/DD/YYYY). |
| end_date | String | End date of the schedule (Format: MM/DD/YYYY). |
| start_time | String | Time when the schedule begins (Format: HH:MM AM/PM). |
| enabled | Boolean | True if scheduling is enabled, False otherwise. |

Scheduling Types

| Schedule Type | Field | Type | Description | Dependencies |
|---|---|---|---|---|
| Hourly | interval | Integer | Backup interval in hours (1, 2, 3, 4, 6, 12, 24). | If schedule enabled is set to true, the Hourly field must be provided. |
| Hourly | retention | Integer | Retention period in backups. | |
| Hourly | snapshot_type | String | Snapshot type (incremental or full). | |
| Daily | backup_time | List of String | List of specific times (HH:MM, 24-hour format). | Requires hourly |
| Daily | retention | Integer | Retention period in backups. | |
| Daily | snapshot_type | String | Snapshot type (incremental or full). | |
| Weekly | backup_day | List of String | Days of the week (mon, tue, wed, thu, fri, sat, sun). | Requires daily |
| Weekly | retention | Integer | Retention period in backups. | |
| Weekly | snapshot_type | String | Only supports full backups. | |
| Monthly | month_backup_day | List of Integer | Days of the month (1-31). | Requires daily |
| Monthly | retention | Integer | Retention period in backups. | |
| Monthly | snapshot_type | String | Only supports full backups. | |
| Yearly | backup_month | List of String | List of months (jan, feb, mar, ... dec). | Requires monthly |
| Yearly | retention | Integer | Retention period in backups. | |
| Yearly | snapshot_type | String | Only supports full backups. | |
| Manual | retention | Integer | Retention period in backups. | |
| Manual | retention_days_to_keep | Integer | Number of days to keep manually triggered backups. | |

Metadata

| Field | Type | Description |
|---|---|---|
| <key> | String | Custom metadata key-value pairs. |
| policy_id | String | ID of the backup policy associated with the workload. |

Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 12:31:42 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137

Delete Workload

DELETE https://<wlm_api_endpoint>/workloads/<workload_id>

Deletes the specified Workload.

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

workload_id

ID of the Workload

Query Parameters

Name
Type
Description

database_only

boolean

True leaves the Workload data on the Backup Target

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication Token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:31:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
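The following is a minimal sketch of the Delete Workload call using the Python requests library. The endpoint, token, project ID, and workload ID are placeholders, not real values.

```python
import requests

# Minimal sketch: delete a workload while leaving its data on the Backup Target.
# All values below are placeholders that must be replaced for your environment.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

resp = requests.delete(
    f"{wlm_api_endpoint}/workloads/<workload_id>",
    headers=headers,
    params={"database_only": "True"},  # True keeps the Workload data on the Backup Target
)
print(resp.status_code)  # 202 Accepted on success
```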

Unlock Workload

POST https://<wlm_api_endpoint>/workloads/<workload_id>/unlock

Unlocks the specified Workload

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

workload_id

ID of the Workload

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication Token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:41:55 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
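A minimal sketch of the Unlock Workload call is shown below; the Reset Workload call that follows uses the same pattern with /reset as the path suffix. All values are placeholders.

```python
import requests

# Minimal sketch: unlock a workload. Reset uses the same request with /reset.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

resp = requests.post(
    f"{wlm_api_endpoint}/workloads/<workload_id>/unlock",
    headers=headers,
)
print(resp.status_code)  # 202 Accepted on success
```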

Reset Workload

POST https://<wlm_api_endpoint>/workloads/<workload_id>/reset

Resets the specified Workload

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

workload_id

ID of the Workload

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to run the authentication against

X-Auth-Token

string

Authentication Token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:52:30 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

Workload Policies

Field Descriptions

Below are some of the most common fields you will find in the request and response bodies while working with these APIs.

| Schedule Type | Field | Type | Description | Dependencies |
| --- | --- | --- | --- | --- |
| Hourly | interval | Integer | Backup interval in hours (1, 2, 3, 4, 6, 12, 24). | Required when the job schedule is enabled (enabled set to true). |
| Hourly | retention | Integer | Retention period in backups. | |
| Hourly | snapshot_type | String | Snapshot type (incremental or full). | |
| Daily | backup_time | List of String | List of specific times (HH:MM, 24-hour format). | Requires hourly |
| Daily | retention | Integer | Retention period in backups. | |
| Daily | snapshot_type | String | Snapshot type (incremental or full). | |
| Weekly | backup_day | List of String | Days of the week (mon, tue, wed, thu, fri, sat, sun). | Requires daily |
| Weekly | retention | Integer | Retention period in backups. | |
| Weekly | snapshot_type | String | Only supports full backups. | |
| Monthly | month_backup_day | List of Integer | Days of the month (1-31). | Requires daily |
| Monthly | retention | Integer | Retention period in backups. | |
| Monthly | snapshot_type | String | Only supports full backups. | |
| Yearly | backup_month | List of String | List of months (jan, feb, mar, ... dec). | Requires monthly |
| Yearly | retention | Integer | Retention period in backups. | |
| Yearly | snapshot_type | String | Only supports full backups. | |
| Manual | retention | Integer | Retention period in backups. | |
| Manual | retention_days_to_keep | Integer | Number of days to keep manually triggered backups. | |

List Policies

GET https://<wlm_api_endpoint>/workload_policy

Requests the list of available Workload Policies

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-199f171f-b6fe-4172-8408-b069da3cfe19
content-type: application/json
content-length: 7615
date: Wed, 29 Jan 2025 09:39:36 GMT
{
  "policy_list": [
    {
      "id": "d29cb349-1953-405d-8f16-301da9c7bc84",
      "created_at": "2025-01-29T09:45:47.000000",
      "updated_at": "2025-01-29T09:45:47.000000",
      "status": "available",
      "name": "policy_api",
      "description": "",
      "metadata": [
        
      ],
      "field_values": [
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "134a9886-1621-4c51-9951-456b8ed578af",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "start_time",
          "value": "12:00 AM"
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "3de049cf-cb50-4c7f-82ff-88b5c256251f",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "daily",
          "value": {
            "backup_time": "['01:00']",
            "retention": 7,
            "snapshot_type": "incremental"
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "5e7146cc-4bdf-4b69-8c0d-77146b9b432c",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "yearly",
          "value": {
            "backup_month": "['mar']",
            "retention": 1,
            "snapshot_type": "full"
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "6ecaea3d-206d-4083-8d00-8fdea340d198",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "manual",
          "value": {
            "retention": 30
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "7f85955b-2079-4408-95a4-339e235526a9",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "retentionmanual",
          "value": {
            "retentionmanual": 30
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "b87eb463-2ed1-4869-92bc-256d09767d4d",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "monthly",
          "value": {
            "month_backup_day": "['3']",
            "retention": 12,
            "snapshot_type": "full"
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "ce970c60-b38d-4b6a-82d6-a2d1b9948947",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "hourly",
          "value": {
            "interval": "1",
            "retention": 3,
            "snapshot_type": "incremental"
          }
        },
        {
          "created_at": "2025-01-29T09:45:47.000000",
          "updated_at": null,
          "deleted_at": null,
          "deleted": false,
          "version": "6.0.20",
          "id": "fadef33d-9565-47f2-8180-37fadd967203",
          "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
          "policy_field_name": "weekly",
          "value": {
            "backup_day": "['mon']",
            "retention": 7,
            "snapshot_type": "full"
          }
        }
      ]
    }
  ]
}
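The following is a minimal Python sketch of the List Policies call that prints the ID, name, and status of each returned policy. The endpoint and credential values are placeholders.

```python
import requests

# Minimal sketch: list the available workload policies and print a summary.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

resp = requests.get(f"{wlm_api_endpoint}/workload_policy", headers=headers)
resp.raise_for_status()
for policy in resp.json().get("policy_list", []):
    print(policy["id"], policy["name"], policy["status"])
```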

Show Policy

GET https://<wlm_api_endpoint>/workload_policy/<policy_id>

Requests the details of a given policy

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

policy_id

ID of the Policy

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-d4ffd8c4-5f20-4b74-bba9-9243964b0a61
content-type: application/json
content-length: 3237
date: Wed, 29 Jan 2025 09:54:09 GMT

{
  "policy": {
    "id": "d29cb349-1953-405d-8f16-301da9c7bc84",
    "created_at": "2025-01-29T09:45:47.000000",
    "updated_at": "2025-01-29T09:45:47.000000",
    "user_id": "6bbb210a29a043af86b7b0c667747187",
    "project_id": "dee550d3df5b497ca2e05044616bc8b1",
    "status": "available",
    "name": "policy_api",
    "description": "",
    "field_values": [
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "134a9886-1621-4c51-9951-456b8ed578af",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "start_time",
        "value": "12:00 AM"
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "3de049cf-cb50-4c7f-82ff-88b5c256251f",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "daily",
        "value": {
          "backup_time": "['01:00']",
          "retention": 7,
          "snapshot_type": "incremental"
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "5e7146cc-4bdf-4b69-8c0d-77146b9b432c",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "yearly",
        "value": {
          "backup_month": "['mar']",
          "retention": 1,
          "snapshot_type": "full"
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "6ecaea3d-206d-4083-8d00-8fdea340d198",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "manual",
        "value": {
          "retention": 30
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "7f85955b-2079-4408-95a4-339e235526a9",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "retentionmanual",
        "value": {
          "retentionmanual": 30
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "b87eb463-2ed1-4869-92bc-256d09767d4d",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "monthly",
        "value": {
          "month_backup_day": "['3']",
          "retention": 12,
          "snapshot_type": "full"
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "ce970c60-b38d-4b6a-82d6-a2d1b9948947",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "hourly",
        "value": {
          "interval": "1",
          "retention": 3,
          "snapshot_type": "incremental"
        }
      },
      {
        "created_at": "2025-01-29T09:45:47.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "fadef33d-9565-47f2-8180-37fadd967203",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "policy_field_name": "weekly",
        "value": {
          "backup_day": "['mon']",
          "retention": 7,
          "snapshot_type": "full"
        }
      }
    ],
    "metadata": [
      
    ],
    "policy_assignments": [
      {
        "created_at": "2025-01-29T09:49:41.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "0d3f0678-4cce-4e6c-9b47-9a4a475a9b1d",
        "policy_id": "d29cb349-1953-405d-8f16-301da9c7bc84",
        "project_id": "dee550d3df5b497ca2e05044616bc8b1",
        "policy_name": "policy_api",
        "project_name": "cloudproject"
      }
    ]
  }
}

List Assigned Policies

GET https://<wlm_api_endpoint>/workload_policy/assigned/<project_id>

Requests the list of Policies assigned to a Project.

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

project_id

ID of the Project to fetch assigned policies from

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:14:01 GMT
Content-Type: application/json
Content-Length: 338
Connection: keep-alive
X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2

{
   "policies":[
      {
         "created_at":"2020-10-29T15:39:13.000000",
         "updated_at":null,
         "deleted_at":null,
         "deleted":false,
         "version":"4.0.115",
         "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
         "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
         "project_id":"c76b3355a164498aa95ddbc960adc238",
         "policy_name":"Gold",
         "project_name":"robert"
      }
   ]
}

Create Policy

POST https://<wlm_api_endpoint>/workload_policy

Creates a Policy with the given parameters

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Body Format

{
   "workload_policy":{
      "field_values":{
        "start_time": "<Time format: HH:MM AM/PM>",
        "hourly": {
          "interval": "<1, 2, 3, 4, 6, 12, 24 hours>",
          "retention": "<Integer>",
          "snapshot_type": "incremental/full"
        },
        "daily": {
          "depends_on": "hourly",
          "backup_time": [
            "<HH:MM 24-hour format>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "incremental/full"
        },
        "weekly": {
          "depends_on": "daily",
          "backup_day": [
            "<mon, tue, wed, thu, fri, sat, sun>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "monthly": {
          "depends_on": "daily",
          "month_backup_day": [
            "<Integer: day of the month (1-31)>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "yearly": {
          "depends_on": "monthly",
          "backup_month": [
            "<jan, feb, mar, ... dec>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "manual": {
          "retention": "<Integer>",
          "retention_days_to_keep": "<Integer>"
        }
      },
      "display_name":"<String>",
      "display_description":"<String>",
      "metadata":{
         <key>:<value>
      }
   }
}
Sample Request
{
  "workload_policy": {
    "display_name": "Api_policy_test",
    "display_description": "No description",
    "field_values": {
      "start_time": "10:00 AM",
      "hourly": {
        "interval": "1",
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "daily": {
        "backup_time": [
          "11:40"
        ],
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "weekly": {
        "backup_day": [
          "fri"
        ],
        "retention": "2",
        "snapshot_type": "incremental"
      },
      "monthly": {
        "month_backup_day": [
          "1"
        ],
        "snapshot_type": "full"
      },
      "yearly": {
        "backup_month": [
          "mar"
        ],
        "retention": "1",
        "snapshot_type": "full"
      },
      "manual": "4",
      "retentionmanual": "4"
    },
    "metadata": {
      
    }
  }
}
Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-538517fb-aca0-4abc-9dc7-ef1ee2af1cd7
content-type: application/json
content-length: 2943
date: Wed, 29 Jan 2025 10:23:38 GMT

{
  "policy": {
    "id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
    "created_at": "2025-01-29T10:23:38.000000",
    "updated_at": "2025-01-29T10:23:38.000000",
    "status": "available",
    "name": "Api_policy_test",
    "description": "No description",
    "metadata": [
      
    ],
    "field_values": [
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "0492f64e-175b-4f8f-91da-e5986d3b9118",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "retentionmanual",
        "value": "V4\np0\n."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "273f07f1-fa68-4883-aec9-8873095d9e0e",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "daily",
        "value": "(dp0\nVbackup_time\np1\n(lp2\nV11:40\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "31432273-d82c-4ea7-802b-353f818b6926",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "yearly",
        "value": "(dp0\nVbackup_month\np1\n(lp2\nVmar\np3\nasVretention\np4\nV1\np5\nsVsnapshot_type\np6\nVfull\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "4ef22bcd-4b7a-4dac-9c02-1b41ec443778",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "start_time",
        "value": "V10:00 AM\np0\n."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "58a13a57-08c3-457a-a912-0217c58ac351",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "hourly",
        "value": "(dp0\nVinterval\np1\nV1\np2\nsVretention\np3\nV2\np4\nsVsnapshot_type\np5\nVincremental\np6\ns."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "979fc748-9434-46a8-8613-24524954ba6e",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "weekly",
        "value": "(dp0\nVbackup_day\np1\n(lp2\nVfri\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "cad874e6-9960-47a8-a904-25161c7704ee",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "monthly",
        "value": "(dp0\nVmonth_backup_day\np1\n(lp2\nV1\np3\nasVsnapshot_type\np4\nVfull\np5\ns."
      },
      {
        "created_at": "2025-01-29T10:23:38.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "f1670b32-0a3b-4876-a51c-14c24f999eab",
        "policy_id": "43885a4d-f9c6-42fd-a8c4-2d1816dbd88d",
        "policy_field_name": "manual",
        "value": "V4\np0\n."
      }
    ]
  }
}
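Below is a minimal Python sketch that creates a policy with a body shaped like the Body Format above. The policy name, schedule values, and credentials are illustrative placeholders only.

```python
import requests

# Minimal sketch: create a workload policy. All values are placeholders.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Content-Type": "application/json",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

body = {
    "workload_policy": {
        "display_name": "daily-gold",
        "display_description": "Daily incremental, weekly full",
        "field_values": {
            "start_time": "10:00 PM",
            "hourly": {"interval": "6", "retention": "12", "snapshot_type": "incremental"},
            "daily": {"backup_time": ["23:00"], "retention": "7", "snapshot_type": "incremental"},
            "weekly": {"backup_day": ["sun"], "retention": "4", "snapshot_type": "full"},
            "monthly": {"month_backup_day": ["1"], "retention": "12", "snapshot_type": "full"},
            "yearly": {"backup_month": ["jan"], "retention": "1", "snapshot_type": "full"},
            "manual": {"retention": "10", "retention_days_to_keep": "30"},
        },
        "metadata": {},
    }
}

resp = requests.post(f"{wlm_api_endpoint}/workload_policy", headers=headers, json=body)
print(resp.status_code, resp.json().get("policy", {}).get("id"))
```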

Update Policy

PUT https://<wlm_api_endpoint>/workload_policy/<policy-id>

Updates a Policy with the given information

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

policy_id

ID of the Policy

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Body Format

{
   "workload_policy":{
      "field_values":{
        "start_time": "<Time format: HH:MM AM/PM>",
        "hourly": {
          "interval": "<1, 2, 3, 4, 6, 12, 24 hours>",
          "retention": "<Integer>",
          "snapshot_type": "incremental/full"
        },
        "daily": {
          "depends_on": "hourly",
          "backup_time": [
            "<HH:MM 24-hour format>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "incremental/full"
        },
        "weekly": {
          "depends_on": "daily",
          "backup_day": [
            "<mon, tue, wed, thu, fri, sat, sun>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "monthly": {
          "depends_on": "daily",
          "month_backup_day": [
            "<Integer: day of the month (1-31)>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "yearly": {
          "depends_on": "monthly",
          "backup_month": [
            "<jan, feb, mar, ... dec>"
          ],
          "retention": "<Integer>",
          "snapshot_type": "full"
        },
        "manual": {
          "retention": "<Integer>",
          "retention_days_to_keep": "<Integer>"
        }
      },
      "display_name":"<String>",
      "display_description":"<String>",
      "metadata":{
         <key>:<value>
      }
   }
}
Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-9c7473ce-468c-4688-b061-a761258f7c5e
content-type: application/json
content-length: 3013
date: Wed, 29 Jan 2025 10:37:40 GMT

{
  "policy": {
    "id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
    "created_at": "2025-01-29T10:21:11.000000",
    "updated_at": "2025-01-29T10:21:11.000000",
    "status": "available",
    "name": "Api_update_policy",
    "description": "No description",
    "metadata": [
      
    ],
    "field_values": [
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "36c76a9b-3599-409c-b226-c59a981d5693",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "hourly",
        "value": "(dp0\nVinterval\np1\nV2\np2\nsVretention\np3\ng2\nsVsnapshot_type\np4\nVincremental\np5\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "44633679-9fdf-4fe0-84e9-0e364cee8d02",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "monthly",
        "value": "(dp0\nVmonth_backup_day\np1\n(lp2\nV1\np3\nasVsnapshot_type\np4\nVfull\np5\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "8ddfa5a3-ae39-4430-a474-9074ea3fefbc",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "daily",
        "value": "(dp0\nVbackup_time\np1\n(lp2\nV13:00\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "a0944765-93bd-487a-9dd2-6a756fab1d2f",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "weekly",
        "value": "(dp0\nVbackup_day\np1\n(lp2\nVfri\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "ac6267c5-1923-4d24-b911-61427acfdc8b",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "start_time",
        "value": "V11:00 AM\np0\n."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "b945acdd-3780-4494-b63c-42474ea65c24",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "manual",
        "value": "V4\np0\n."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "bad15c15-5eb2-4b4b-8597-639a8c27b947",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "yearly",
        "value": "(dp0\nVbackup_month\np1\n(lp2\nVmar\np3\nasVretention\np4\nV1\np5\nsVsnapshot_type\np6\nVfull\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "cf900224-2b21-4b29-9118-99ec3151d891",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "retentionmanual",
        "value": "V4\np0\n."
      }
    ]
  }
}

Assign Policy

POST https://<wlm_api_endpoint>/workload_policy/<policy-id>

Assigns the Policy to, or removes it from, the given Projects

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

policy_id

ID of the Policy

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Content-Type

string

application/json

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Body Format

{
   "policy":{
      "remove_projects":[
         "<project_id>"
      ],
      "add_projects":[
         "<project_id>",
      ]
   }
}
Sample Response
HTTP/1.1 200 OK
x-compute-request-id: req-a8569cd2-05a2-45ce-bae5-41a214759ff8
content-type: application/json
content-length: 3831
date: Wed, 29 Jan 2025 10:44:56 GMT

{
  "policy": {
    "id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
    "created_at": "2025-01-29T10:21:11.000000",
    "updated_at": "2025-01-29T10:21:11.000000",
    "user_id": "6bbb210a29a043af86b7b0c667747187",
    "project_id": "dee550d3df5b497ca2e05044616bc8b1",
    "status": "available",
    "name": "Api_update_policy",
    "description": "No description",
    "field_values": [
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "36c76a9b-3599-409c-b226-c59a981d5693",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "hourly",
        "value": "(dp0\nVinterval\np1\nV2\np2\nsVretention\np3\ng2\nsVsnapshot_type\np4\nVincremental\np5\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "44633679-9fdf-4fe0-84e9-0e364cee8d02",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "monthly",
        "value": "(dp0\nVmonth_backup_day\np1\n(lp2\nV1\np3\nasVsnapshot_type\np4\nVfull\np5\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "8ddfa5a3-ae39-4430-a474-9074ea3fefbc",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "daily",
        "value": "(dp0\nVbackup_time\np1\n(lp2\nV13:00\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "a0944765-93bd-487a-9dd2-6a756fab1d2f",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "weekly",
        "value": "(dp0\nVbackup_day\np1\n(lp2\nVfri\np3\nasVretention\np4\nV2\np5\nsVsnapshot_type\np6\nVincremental\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": "2025-01-29T10:36:59.000000",
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "ac6267c5-1923-4d24-b911-61427acfdc8b",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "start_time",
        "value": "V11:00 AM\np0\n."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "b945acdd-3780-4494-b63c-42474ea65c24",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "manual",
        "value": "V4\np0\n."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "bad15c15-5eb2-4b4b-8597-639a8c27b947",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "yearly",
        "value": "(dp0\nVbackup_month\np1\n(lp2\nVmar\np3\nasVretention\np4\nV1\np5\nsVsnapshot_type\np6\nVfull\np7\ns."
      },
      {
        "created_at": "2025-01-29T10:21:11.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "cf900224-2b21-4b29-9118-99ec3151d891",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "policy_field_name": "retentionmanual",
        "value": "V4\np0\n."
      }
    ],
    "metadata": [
      
    ],
    "policy_assignments": [
      {
        "created_at": "2025-01-29T10:44:56.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "2177bf6e-116f-45fc-986b-3d20f7084fc7",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "project_id": "c532c5b669304d20bbe9e2157986757c",
        "policy_name": "Api_update_policy",
        "project_name": "AP_test"
      },
      {
        "created_at": "2025-01-29T10:43:28.000000",
        "updated_at": null,
        "deleted_at": null,
        "deleted": false,
        "version": "6.0.20",
        "id": "5b3b445b-e4dd-4381-84cd-c90bdda0b7e7",
        "policy_id": "d3b638c6-d26e-4949-8493-18a4df3123bf",
        "project_id": "dee550d3df5b497ca2e05044616bc8b1",
        "policy_name": "Api_update_policy",
        "project_name": "cloudproject"
      }
    ]
  },
  "failed_ids": [
    
  ]
}
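The following is a minimal Python sketch of the Assign Policy call that adds the policy to one project and removes it from another. The policy and project IDs are hypothetical placeholders.

```python
import requests

# Minimal sketch: assign a policy to one project and unassign it from another.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Content-Type": "application/json",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

body = {
    "policy": {
        "add_projects": ["<project-id-to-assign>"],
        "remove_projects": ["<project-id-to-unassign>"],
    }
}

resp = requests.post(
    f"{wlm_api_endpoint}/workload_policy/<policy_id>",
    headers=headers, json=body,
)
resp.raise_for_status()
print(resp.json().get("failed_ids"))  # project IDs the assignment failed for
```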

Delete Policy

DELETE https://<wlm_api_endpoint>/workload_policy/<policy_id>

Deletes a given Policy

Path Parameters

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

policy_id

ID of the Policy

Headers

Name
Type
Description

X-Auth-Project-Id

string

Project to authenticate against

X-Auth-Token

string

Authentication token to use

Accept

string

application/json

User-Agent

string

python-workloadmgrclient

Sample Response
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:56:03 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

Backup Targets

This page contains the API guide for various operations on Backup Targets and Backup Target Types.


Backup Targets

Create Backup Target

Creates the backup target

POST https://<wlm_api_endpoint>/backup_targets

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target" :
    {
        "s3_endpoint_url": <s3 endpoint url>,
        "s3_bucket": <s3 bucket name>,
        "filesystem_export": <filesystem export, required for NFS type>,
        "type": <Backup Target type, e.g. nfs, s3>,
        "is_default": <integer value 0|1 to specify if default or non-default>,
        "btt_name": <Backup Target Type name, defaults to filesystem_export if not provided>,
        "immutable": <integer value 0|1 to specify if the s3 Backup Target has object lock enabled>,
        "metadata": <dictionary of key-value pairs denoting metadata of the backup target>
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:25:14 GMT'
'Content-Type': 'application/json'
'Content-Length': '430'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-5730cced-949b-4567-827e-8349a023e716'

{"backup_targets": {"id": "4720baae-5422-466d-a4b2-a4a2c94400f3", "type": "s3", "created_at": "2025-01-30T12:32:27.000000", "updated_at": "2025-01-30T12:32:27.000000", "version": "5.2.8.15", "filesystem_export": "cephs3.triliodata.demo/object-locked-cephs3-2", "filesystem_export_mount_path": "/var/trilio/triliovault-mounts/Y2VwaHMzLnRyaWxpb2RhdGEuZGVtby9vYmplY3QtbG9ja2VkLWNlcGhzMy0y", "is_default": false, "capacity": null, "used": null, "status": "offline", "backup_target_types": [{"created_at": "2025-01-30T12:32:27.000000", "updated_at": null, "version": "5.2.8.15", "user_id": "a62bf1546cdf4b02a3fc08b7aad79acb", "name": "cephs3.triliodata.demo/object-locked-cephs3-2", "description": null, "is_public": true, "is_default": false}], "backup_target_metadata": []}}

Delete a Backup Target

Deleting an existing backup target

Removing a Backup Target that is still used by an active Workload can lead to inconsistent behavior and backup operation failures.

DELETE https://<wlm_api_endpoint>/backup_targets/<backup_target_id>

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_id

Id of the required Backup Target

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 202 OK
'Server': 'nginx/1.20.1'
'Date': 'Thu, 30 Jan 2025 13:04:32 GMT'
'Content-Type': 'text/html; charset=UTF-8'
'Content-Length': '0'
'Connection': 'keep-alive'

List Backup Targets

Provides the list of all backup targets

GET https://<wlm_api_endpoint>/backup_targets

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:15:16 GMT',
'Content-Type': 'application/json',
'Content-Length': '1808',
'Connection': 'keep-alive',
'X-Compute-Request-Id': 'req-739464e0-3b84-4e94-866a-f34476915a38'
{
  "backup_targets": [
    {
      "id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "type": "nfs",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "192.168.1.34:/mnt/tvault/162",
      "nfs_export_mount_path": "/var/triliovault-mounts/L21udC90dmF1bHQvMTYy",
      "is_default": true,
      "capacity": "2.4 TB",
      "used": "1.2 TB",
      "status": "available",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "nfs_1",
          "description": null,
          "is_public": true,
          "is_default": true
        }
      ]
    },
    {
      "id": "2af5f2db-3267-453f-bc57-19884837e274",
      "type": "s3",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "https://cephs3.triliodata.demo/trilio-qamanual",
      "nfs_export_mount_path": "/var/triliovault-mounts/Y2VwaHMzLnRyaWxpb2RhdGEuZGVtby90cmlsaW8tcWFtYW51YWw=",
      "is_default": false,
      "capacity": null,
      "used": null,
      "status": "offline",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "s3_2",
          "description": null,
          "is_public": true,
          "is_default": false
        }
      ]
    },
    {
      "id": "1fd7af34-d723-428d-90f5-35d31bf24884",
      "type": "nfs",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "192.168.1.34:/mnt/tvault/tvm",
      "nfs_export_mount_path": "/var/triliovault-mounts/L21udC90dmF1bHQvdHZt",
      "is_default": false,
      "capacity": "2.4 TB",
      "used": "1.2 TB",
      "status": "available",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "nfs_2",
          "description": null,
          "is_public": true,
          "is_default": false
        }
      ]
    }
  ]
}
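The following is a minimal Python sketch of the List Backup Targets call that prints the ID, type, export, and status of each target. The endpoint and credentials are placeholders.

```python
import requests

# Minimal sketch: list backup targets and print a short summary of each.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

resp = requests.get(f"{wlm_api_endpoint}/backup_targets", headers=headers)
resp.raise_for_status()
for bt in resp.json().get("backup_targets", []):
    print(bt["id"], bt["type"], bt.get("nfs_export"), bt["status"])
```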

Detailed List of all Backup Targets

Provides a detailed list of all backup targets

GET https://<wlm_api_endpoint>/backup_targets/detail

Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:15:16 GMT',
'Content-Type': 'application/json',
'Content-Length': '1808',
'Connection': 'keep-alive',
'X-Compute-Request-Id': 'req-739464e0-3b84-4e94-866a-f34476915a38'
{
  "backup_targets": [
    {
      "id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "type": "nfs",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "192.168.1.34:/mnt/tvault/162",
      "nfs_export_mount_path": "/var/triliovault-mounts/L21udC90dmF1bHQvMTYy",
      "is_default": true,
      "capacity": "2.4 TB",
      "used": "1.2 TB",
      "status": "available",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "nfs_1",
          "description": null,
          "is_public": true,
          "is_default": true
        }
      ]
    },
    {
      "id": "2af5f2db-3267-453f-bc57-19884837e274",
      "type": "s3",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "https://cephs3.triliodata.demo/trilio-qamanual",
      "nfs_export_mount_path": "/var/triliovault-mounts/Y2VwaHMzLnRyaWxpb2RhdGEuZGVtby90cmlsaW8tcWFtYW51YWw=",
      "is_default": false,
      "capacity": null,
      "used": null,
      "status": "offline",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "s3_2",
          "description": null,
          "is_public": true,
          "is_default": false
        }
      ]
    },
    {
      "id": "1fd7af34-d723-428d-90f5-35d31bf24884",
      "type": "nfs",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "nfs_export": "192.168.1.34:/mnt/tvault/tvm",
      "nfs_export_mount_path": "/var/triliovault-mounts/L21udC90dmF1bHQvdHZt",
      "is_default": false,
      "capacity": "2.4 TB",
      "used": "1.2 TB",
      "status": "available",
      "backup_target_types": [
        {
          "created_at": "2024-03-04T10:59:51.000000",
          "updated_at": null,
          "version": "5.0.204",
          "user_id": null,
          "name": "nfs_2",
          "description": null,
          "is_public": true,
          "is_default": false
        }
      ]
    }
  ]
}

Show Details of a Backup Target

Provides all details of a specific backup target

GET https://<wlm_api_endpoint>/backup_targets/<backup_target_id>

Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_id

ID of the Backup Target to be fetched

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:21:14 GMT'
'Content-Type': 'application/json'
'Content-Length': '600'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-0b298919-f2e8-4c50-aa55-ce88b4569f2a'
{
  "backup_targets": {
    "id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
    "type": "nfs",
    "created_at": "2024-03-04T10:59:51.000000",
    "updated_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "nfs_export": "192.168.1.34:/mnt/tvault/162",
    "nfs_export_mount_path": "/var/triliovault-mounts/L21udC90dmF1bHQvMTYy",
    "is_default": true,
    "capacity": "2.4 TB",
    "used": "1.2 TB",
    "status": "available",
    "backup_target_types": [
      {
        "created_at": "2024-03-04T10:59:51.000000",
        "updated_at": null,
        "version": "5.0.204",
        "user_id": null,
        "name": "nfs_1",
        "description": null,
        "is_public": true,
        "is_default": true
      }
    ]
  }
}

Backup Target Set Default

Sets the given Backup target as default

GET https://<wlm_api_endpoint>/backup_targets/<backup_target_id>/set_default

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_id

Id of the required Backup Target

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
'x-compute-request-id': 'req-295c03f7-8dfb-4b72-a654-9f06a51d9b0c'
'content-type': 'application/json'
'content-length': '911'
'date': 'Tue, 04 Feb 2025 14:19:05 GMT'

{"backup_targets": {"id": "0dcc632a-aed7-45ab-a40e-7cc2deae3994", "type": "s3", "created_at": "2025-02-04T10:43:37.000000", "updated_at": "2025-02-04T10:43:37.000000", "version": "5.2.8.15", "filesystem_export": "cephs3.triliodata.demo/object-locked-cephs3-2", "filesystem_export_mount_path": "/var/trilio/triliovault-mounts/Y2VwaHMzLnRyaWxpb2RhdGEuZGVtby9vYmplY3QtbG9ja2VkLWNlcGhzMy0y", "is_default": true, "capacity": null, "used": null, "status": "offline", "backup_target_types": [{"created_at": "2025-02-04T10:43:37.000000", "updated_at": "2025-02-04T14:19:05.000000", "version": "5.2.8.15", "user_id": "a62bf1546cdf4b02a3fc08b7aad79acb", "name": "cephs3.triliodata.demo/object-locked-cephs3-2", "description": null, "is_public": true, "is_default": true}], "backup_target_metadata": [{"key": "bucket", "value": "s3-object-lock"}, {"key": "immutable", "value": "1"}, {"key": "object_lock", "value": "1"}]}}```

</details>

***

## Backup Target Types: <a href="#backup-target-types" id="backup-target-types"></a>

### **List Backup Target Types**&#x20;

Provides the list of all backup target types

&#x20;<mark style="color:green;">`GET`</mark> `https://<wlm_api_endpoint>/backup_target_types`&#x20;

#### Input Parameters:

Path:

<table><thead><tr><th width="219">Parameter Name</th><th>Description</th></tr></thead><tbody><tr><td>wlm_api_endpoint</td><td>The endpoint URL of the <code>Workloadmgr</code> service</td></tr></tbody></table>

Headers:

| Header Name       | Value/Description                             |
| ----------------- | --------------------------------------------- |
| X-Auth-Project-Id |  Project ID to run the authentication against |
| X-Auth-Token      | Authentication token to use                   |
| Accept            | `application/json`                            |
| User-Agent        | `python-workloadmgrclient`                    |

<details>

<summary>Sample Response</summary>

```json5
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:29:09 GMT'
'Content-Type': 'application/json'
'Content-Length': '1628'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-dd23df0a-37a6-4769-882a-40e67431d997'
{
  "backup_target_types": [
    {
      "id": "13dd2bf2-12c5-4eb8-98a3-0f1dd9f8579f",
      "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "nfs_1",
      "is_default": true,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "51f11fc5-854b-4cb7-9b64-0a0ace33b0d5",
      "backup_targets_id": "1fd7af34-d723-428d-90f5-35d31bf24884",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "nfs_2",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
      "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "s3_2",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "fdae1b10-9852-4c68-8879-64ead0aed31b",
      "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "created_at": "2024-03-13T10:25:14.000000",
      "updated_at": "2024-03-13T10:25:14.000000",
      "version": "5.0.204",
      "user_id": "2b1189be3add4806bcb7e0c259b03597",
      "name": "BTT-name",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": [
        {
          "key": "nfs",
          "value": "secondary"
        }
      ]
    }
  ]
}

Detailed List of all Backup Target Types

Provides a detailed list of all backup target types

GET https://<wlm_api_endpoint>/backup_target_types/detail

Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:29:09 GMT'
'Content-Type': 'application/json'
'Content-Length': '1628'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-dd23df0a-37a6-4769-882a-40e67431d997'
{
  "backup_target_types": [
    {
      "id": "13dd2bf2-12c5-4eb8-98a3-0f1dd9f8579f",
      "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "nfs_1",
      "is_default": true,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "51f11fc5-854b-4cb7-9b64-0a0ace33b0d5",
      "backup_targets_id": "1fd7af34-d723-428d-90f5-35d31bf24884",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "nfs_2",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
      "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
      "created_at": "2024-03-04T10:59:51.000000",
      "updated_at": "2024-03-04T10:59:51.000000",
      "version": "5.0.204",
      "user_id": null,
      "name": "s3_2",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": []
    },
    {
      "id": "fdae1b10-9852-4c68-8879-64ead0aed31b",
      "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
      "created_at": "2024-03-13T10:25:14.000000",
      "updated_at": "2024-03-13T10:25:14.000000",
      "version": "5.0.204",
      "user_id": "2b1189be3add4806bcb7e0c259b03597",
      "name": "BTT-name",
      "is_default": false,
      "description": null,
      "is_public": true,
      "backup_target_type_projects": [],
      "backup_target_type_metadata": [
        {
          "key": "nfs",
          "value": "secondary"
        }
      ]
    }
  ]
}

Show Details of a Backup Target Type

Provides all details of a specific backup target type

GET https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>

Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

ID of the Backup Target Type to be fetched

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:30:31 GMT'
'Content-Type': 'application/json'
'Content-Length': '406'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-f20d2e28-a083-47fe-8eda-a702ac484865'
{
  "backup_target_types": {
    "id": "13dd2bf2-12c5-4eb8-98a3-0f1dd9f8579f",
    "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
    "created_at": "2024-03-04T10:59:51.000000",
    "updated_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "user_id": null,
    "name": "nfs_1",
    "is_default": true,
    "description": null,
    "is_public": true,
    "backup_target_type_projects": [],
    "backup_target_type_metadata": []
  }
}

Create a Backup Target Type

Creates the backup target type

POST https://<wlm_api_endpoint>/backup_target_types

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "name": <name of the backup target type>,
        "description": <description of backup target type>,
        "backup_targets_id": <ID of the backup target>,
        "is_default": <integer value 0|1 to specify if default or non-default>,
        "is_public": <integer value 0|1 to specify if public or non-public>,
        "project_list": [
            <list of project IDs on which backup target type will be assigned>
        ],
        "metadata": [
            <list of dictionaries of key-value pair >
            {
                "key":<meta-key>,
                "value":<meta-value>
            },
        ]
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:25:14 GMT'
'Content-Type': 'application/json'
'Content-Length': '430'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-5730cced-949b-4567-827e-8349a023e716'
{
  "backup_target_types": {
    "id": "fdae1b10-9852-4c68-8879-64ead0aed31b",
    "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
    "created_at": "2024-03-13T10:25:14.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "BTT-name",
    "is_default": false,
    "description": null,
    "is_public": true,
    "backup_target_type_projects": [],
    "backup_target_type_metadata": [
      {
        "key": "nfs",
        "value": "primary"
      }
    ]
  }
}
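Below is a minimal Python sketch that creates a backup target type on an existing backup target and shares it with a single project. The backup target ID, project ID, and metadata values are hypothetical placeholders.

```python
import requests

# Minimal sketch: create a backup target type. All IDs and values are placeholders.
wlm_api_endpoint = "https://<wlm_api_endpoint>"
headers = {
    "X-Auth-Project-Id": "<project-id>",
    "X-Auth-Token": "<keystone-token>",
    "Content-Type": "application/json",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

body = {
    "backup_target_type": {
        "name": "nfs_secondary_btt",
        "description": "Secondary NFS target for project backups",
        "backup_targets_id": "<backup_target_id>",
        "is_default": 0,
        "is_public": 0,
        "project_list": ["<project_id>"],
        "metadata": [{"key": "tier", "value": "secondary"}],
    }
}

resp = requests.post(f"{wlm_api_endpoint}/backup_target_types", headers=headers, json=body)
print(resp.status_code, resp.json())
```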

Update the backup target type

Update an existing backup target type

PUT https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the Backup Target Type to be modified

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "name": <name of the backup target type>,
        "description": <description of backup target type>,
        "is_default": <integer value 0|1 to specify if default or non-default>,
        "is_public": <integer value 0|1 to specify if public or non-public>,
        "project_list": [
            <list of project IDs on which backup target type will be assigned>
        ],
        "purge_projects": True <if All the assigned projects need to be purged>,
        "metadata": [
            <list of dictionaries of key-value pair >
            {
                "key":<meta-key>,
                "value":<meta-value>
            },
        ],
        "purge_metadata": True <if All the metadata needs to be purged>
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:27:49 GMT'
'Content-Type': 'application/json'
'Content-Length': '432'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-14b39336-bc84-414a-a128-1d71e7dfb3cf'
{
  "backup_target_types": {
    "id": "fdae1b10-9852-4c68-8879-64ead0aed31b",
    "backup_targets_id": "b39847c8-bf65-4cec-9af6-dd65303ca485",
    "created_at": "2024-03-13T10:25:14.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "BTT-name",
    "is_default": false,
    "description": null,
    "is_public": true,
    "backup_target_type_projects": [],
    "backup_target_type_metadata": [
      {
        "key": "nfs",
        "value": "secondary"
      }
    ]
  }
}

Assign Projects to a Backup Target Type

Add projects to an existing backup target type

POST https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>/add_projects

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the required Backup Target Type

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "backup_target_type_id": <ID of the backup target type>,
        "project_list": [
            <list of project IDs to which the backup target type will be assigned>
        ]
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:40:05 GMT'
'Content-Type': 'application/json'
'Content-Length': '921'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-3dd1a577-ef35-4997-83bc-b2e50afaea73'
{
  "backup_target_types": {
    "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
    "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
    "created_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "s3_2",
    "is_default": false,
    "description": null,
    "is_public": false,
    "backup_target_type_projects": [
      {
        "created_at": "2024-03-13T10:39:14.000000",
        "updated_at": null,
        "version": "5.0.204",
        "id": "5a14a688-e16f-45f9-91c2-6906fb200825",
        "backup_target_types_id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
        "project_id": "fc439373e340459fb28202b2412e26c0"
      },
      {
        "created_at": "2024-03-13T10:40:05.000000",
        "updated_at": null,
        "version": "5.0.204",
        "id": "c0fe123e-e566-465a-acf2-56e2b27ae9b2",
        "backup_target_types_id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
        "project_id": "0920f871077c4c079057ce940d8105a8"
      }
    ],
    "backup_target_type_metadata": [
      {
        "key": "dg1",
        "value": "dg2"
      }
    ]
  }
}
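
A minimal sketch of the same assignment made with Python's requests library is shown below; all identifiers are placeholders from your own deployment.

import requests

# Placeholders - substitute values from your deployment.
WLM_API_ENDPOINT = "https://<wlm_api_endpoint>"
BTT_ID = "<backup_target_type_id>"

headers = {
    "X-Auth-Project-Id": "<project_id>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

# Assign two projects to the backup target type.
body = {
    "backup_target_type": {
        "backup_target_type_id": BTT_ID,
        "project_list": [
            "<project_id_1>",
            "<project_id_2>",
        ],
    }
}

response = requests.post(
    f"{WLM_API_ENDPOINT}/backup_target_types/{BTT_ID}/add_projects",
    headers=headers,
    json=body,
)
response.raise_for_status()
assigned = response.json()["backup_target_types"]["backup_target_type_projects"]
print([entry["project_id"] for entry in assigned])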

Remove Assigned Projects from a Backup Target Type

Remove assigned projects from an existing backup target type

PUT https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>/remove_projects

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the required Backup Target Type

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "backup_target_type_id": <ID of the backup target type>,
        "project_list": [
            <list of assigned project IDs that need to be unassigned>
        ]
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:41:18 GMT'
'Content-Type': 'application/json'
'Content-Length': '671'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-9dfe92c3-5b82-4062-98b0-b00294147b02'
{
  "backup_target_types": {
    "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
    "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
    "created_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "s3_2",
    "is_default": false,
    "description": null,
    "is_public": false,
    "backup_target_type_projects": [
      {
        "created_at": "2024-03-13T10:39:14.000000",
        "updated_at": null,
        "version": "5.0.204",
        "id": "5a14a688-e16f-45f9-91c2-6906fb200825",
        "backup_target_types_id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
        "project_id": "fc439373e340459fb28202b2412e26c0"
      }
    ],
    "backup_target_type_metadata": [
      {
        "key": "dg1",
        "value": "dg2"
      }
    ]
  }
}
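
The unassignment can be scripted in the same way; the sketch below uses Python's requests library with placeholder identifiers.

import requests

# Placeholders - substitute values from your deployment.
WLM_API_ENDPOINT = "https://<wlm_api_endpoint>"
BTT_ID = "<backup_target_type_id>"

headers = {
    "X-Auth-Project-Id": "<project_id>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

# Unassign a single project from the backup target type.
body = {
    "backup_target_type": {
        "backup_target_type_id": BTT_ID,
        "project_list": ["<assigned_project_id>"],
    }
}

response = requests.put(
    f"{WLM_API_ENDPOINT}/backup_target_types/{BTT_ID}/remove_projects",
    headers=headers,
    json=body,
)
response.raise_for_status()
remaining = response.json()["backup_target_types"]["backup_target_type_projects"]
print([entry["project_id"] for entry in remaining])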

Add Metadata to Backup Target Type

Adds metadata to an existing backup target type

POST https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>/add_metadata

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the required Backup Target Type

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "backup_target_type_id": <ID of the backup target type>,
        "metadata": [
            <list of key-value pair dictionaries to be added>
            {
                "key":<meta-key>,
                "value":<meta-value>
            },
        ]
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:44:02 GMT'
'Content-Type': 'application/json'
'Content-Length': '671'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-870c776b-a160-41c0-82a9-d9e7131d77aa'
{
  "backup_target_types": {
    "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
    "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
    "created_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "s3_2",
    "is_default": false,
    "description": null,
    "is_public": false,
    "backup_target_type_projects": [
      {
        "created_at": "2024-03-13T10:39:14.000000",
        "updated_at": null,
        "version": "5.0.204",
        "id": "5a14a688-e16f-45f9-91c2-6906fb200825",
        "backup_target_types_id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
        "project_id": "fc439373e340459fb28202b2412e26c0"
      }
    ],
    "backup_target_type_metadata": [
      {
        "key": "dg1",
        "value": "dg2"
      }
    ]
  }
}
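
Below is a minimal sketch of adding metadata with Python's requests library; the identifiers and the key-value pair are placeholders.

import requests

# Placeholders - substitute values from your deployment.
WLM_API_ENDPOINT = "https://<wlm_api_endpoint>"
BTT_ID = "<backup_target_type_id>"

headers = {
    "X-Auth-Project-Id": "<project_id>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

# Attach one additional key-value pair to the backup target type.
body = {
    "backup_target_type": {
        "backup_target_type_id": BTT_ID,
        "metadata": [{"key": "<meta-key>", "value": "<meta-value>"}],
    }
}

response = requests.post(
    f"{WLM_API_ENDPOINT}/backup_target_types/{BTT_ID}/add_metadata",
    headers=headers,
    json=body,
)
response.raise_for_status()
print(response.json()["backup_target_types"]["backup_target_type_metadata"])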

Remove Metadata from a Backup Target Type

Removes metadata from an existing backup target type

PUT https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>/remove_metadata

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the required Backup Target Type

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Body Format:

{
    "backup_target_type" :
    {
        "backup_target_type_id": <ID of the backup target type>,
        "metadata": [
            <list of key-value pair dictionaries to be removed>
            {
                "key":<meta-key>,
                "value":<meta-value>
            },
        ]
    }
}
Sample Response
HTTP/1.1 200 OK
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:45:12 GMT'
'Content-Type': 'application/json'
'Content-Length': '671'
'Connection': 'keep-alive'
'X-Compute-Request-Id': 'req-7eee91f0-d02f-41e1-afe2-ebcbe39aaa85'
{
  "backup_target_types": {
    "id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
    "backup_targets_id": "2af5f2db-3267-453f-bc57-19884837e274",
    "created_at": "2024-03-04T10:59:51.000000",
    "version": "5.0.204",
    "user_id": "2b1189be3add4806bcb7e0c259b03597",
    "name": "s3_2",
    "is_default": false,
    "description": null,
    "is_public": false,
    "backup_target_type_projects": [
      {
        "created_at": "2024-03-13T10:39:14.000000",
        "updated_at": null,
        "version": "5.0.204",
        "id": "5a14a688-e16f-45f9-91c2-6906fb200825",
        "backup_target_types_id": "c65625c1-50bf-4ab7-aa26-f625001e60f1",
        "project_id": "fc439373e340459fb28202b2412e26c0"
      }
    ],
    "backup_target_type_metadata": [
      {
        "key": "dg1",
        "value": "dg2"
      }
    ]
  }
}
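
Removing metadata follows the same pattern; the sketch below uses Python's requests library with placeholder identifiers and the key-value pair to drop.

import requests

# Placeholders - substitute values from your deployment.
WLM_API_ENDPOINT = "https://<wlm_api_endpoint>"
BTT_ID = "<backup_target_type_id>"

headers = {
    "X-Auth-Project-Id": "<project_id>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

# Remove one existing key-value pair from the backup target type.
body = {
    "backup_target_type": {
        "backup_target_type_id": BTT_ID,
        "metadata": [{"key": "<meta-key>", "value": "<meta-value>"}],
    }
}

response = requests.put(
    f"{WLM_API_ENDPOINT}/backup_target_types/{BTT_ID}/remove_metadata",
    headers=headers,
    json=body,
)
response.raise_for_status()
print(response.json()["backup_target_types"]["backup_target_type_metadata"])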

Delete a Backup Target Type

Deletes an existing backup target type

Deleting a Backup Target Type that is still in use by an active Workload can lead to inconsistent behavior and potential backup operation failures.

DELETE https://<wlm_api_endpoint>/backup_target_types/<backup_target_type_id>

Input Parameters:

Path:

Parameter Name
Description

wlm_api_endpoint

The endpoint URL of the Workloadmgr service

backup_target_type_id

Id of the required Backup Target Type

Headers:

Header Name
Value/Description

X-Auth-Project-Id

Project ID to run the authentication against

X-Auth-Token

Authentication token to use

Accept

application/json

User-Agent

python-workloadmgrclient

Sample Response
HTTP/1.1 202 Accepted
'Server': 'nginx/1.20.1'
'Date': 'Wed, 13 Mar 2024 10:33:02 GMT'
'Content-Type': 'text/html; charset=UTF-8'
'Content-Length': '0'
'Connection': 'keep-alive'
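
A minimal sketch of the delete call with Python's requests library is shown below; a successful request returns HTTP 202 with an empty body. All identifiers are placeholders.

import requests

# Placeholders - substitute values from your deployment.
WLM_API_ENDPOINT = "https://<wlm_api_endpoint>"
BTT_ID = "<backup_target_type_id>"

headers = {
    "X-Auth-Project-Id": "<project_id>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}

response = requests.delete(
    f"{WLM_API_ENDPOINT}/backup_target_types/{BTT_ID}",
    headers=headers,
)
# 202 indicates the backup target type was accepted for deletion.
print(response.status_code)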