Ceph is the most common open-source solution for providing block storage through OpenStack Cinder.
Ceph is a very flexible solution, and this flexibility requires additional steps in the Trilio solution.
Many organizations are looking to modernize their data centers and migrate from technologies like VMware to OpenStack. Although VMware has historically been a dominant virtualization platform, trusted by many organizations for running mission-critical applications, the rapidly evolving technology landscape has already delivered the next generation of virtualization technologies. OpenStack and Kubernetes are setting the stage for edge computing and Hybrid & Multi-Cloud to emerge as industry standards.
Migrating from VMware to OpenStack offers compelling benefits:
Cost Savings: OpenStack significantly reduces licensing fees.
Vendor Independence: Avoid vendor lock-in and make choices based on what favors your business.
Innovation and Collaboration: Open-source fosters rapid advancements.
Scalability and Elasticity: Automatically scale resources on demand.
Multi-Cloud Support: Manage multiple clouds and data centers from one interface.
Integration with Open Source: Seamlessly integrate with Kubernetes and other open-source tools.
Community Support: An active community provides resources and expertise.
By embracing OpenStack, you gain cost savings, flexibility, scalability, and a vibrant community, empowering your cloud infrastructure.
Migrating VMs from VMware to OpenStack has traditionally been a complex, people-intensive process requiring significant expert investment.
Organizations must engage in detailed planning, consult experts, and consider utilizing tools and technologies to facilitate smooth migrations. Because each migration scenario is unique, it has been difficult to come up with best practices and/or automation. This page is about how Trilio’s Intelligent Recovery Platform can help simplify and automate the migration process.
The Trilio solution uses qcow2 backing files to provide full synthetic backups.
Especially when an NFS backup target is used, there are scenarios in which this backing file needs to be updated to a new mount path.
To make this process easier and more streamlined, Trilio provides the following rebase tool.
The following versions can be upgraded to each other:
The upgrade process involves upgrading the Trilio appliance and the OpenStack components, and is dependent on the underlying operating system up to the T4O 4.2 release.
Each T4O release includes a set of artifacts such as version-tagged containers, package repositories, and distribution packages.
To help users quickly identify the resources associated with each release, we have added dedicated sub-pages corresponding to a specific release version.
T4O, by Trilio Data, is a native OpenStack service-based solution that provides policy-based comprehensive backup and recovery for OpenStack workloads. It captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS, AWS S3, and other S3-compatible storages. With Trilio and its one-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection, and integrity.
With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes point-in-time backup of the entire workload consisting of compute resources, network configurations, and storage data as one unit. It also takes incremental backups that capture only the changes that were made since the last backup; incremental snapshots thus save a considerable amount of storage space. The benefits of using VAST for backup and restore can be summarized as follows:
This documentation serves as the end-user technical resource to accompany Trilio for OpenStack. You will learn about the architecture, installation, and vast number of operations of this product. The intended audience is anyone who wants to understand the value, operations, and nuances of protecting their cloud-native applications with Trilio for OpenStack.
From the T4O 5.0 release, the Trilio appliance is deprecated and all the WLM services are containerized and will be deployed on OpenStack itself.
| Upgrade from | Upgrade to |
| --- | --- |
| 4.1 GA (4.1.94) or 4.1.HFx | 5.x.x |
| 4.2 GA (4.2.64) or 4.2.HFx or 4.2.x | 5.x.x |
The high-level process is the same for all Distributions.
Uninstall the Horizon Plugin or the Trilio Horizon container
Uninstall the datamover-api container
Uninstall the datamover container
Uninstall the workloadmgr container
Set DISABLED = False in the following panel files.
These files can be found at <openstack_dashboard_installed_package_path>/enabled/
Example: /usr/share/openstack-dashboard/openstack_dashboard/local/enabled/ for RHOSP 17.1
Restart the Horizon container.
_1806_migration_panel.py
_1807_migration_panel_group.py

All versions of T4O-5.x releases support NFSv3 and S3 as backup targets on all the compatible distributions.
All versions of T4O-5.x releases support encryption using Barbican service on all the compatible distributions.
It is highly recommended that the QEMU guest agent is running inside the VM being backed up, to avoid data corruption during the backup process. The QEMU Guest Agent is a daemon that runs inside a virtual machine (VM) and communicates with the host system (the hypervisor) to provide enhanced management and control of the VM. It is an essential component in virtualized environments, especially OpenStack.
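As a quick health check, the agent can be pinged from the compute node via libvirt; a minimal sketch (the libvirt domain name is illustrative):

virsh qemu-agent-command instance-00000001 '{"execute":"guest-ping"}'
# A reply of {"return":{}} indicates the agent is running inside the guest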
Trilio can notify users via E-Mail upon the completion of backup and restore jobs.
The E-Mail will be sent to the owner of the Workload.
To use the E-mail notifications, two requirements need to be met.
Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.
Because the E-Mail is sent to the owner of the Workload, the OpenStack user who created the workload must have an E-Mail address associated with their account.
Trilio needs to know which E-Mail server to use to send the E-Mail notifications. Backup Administrators can configure this in the "Backup Admin" area.
E-Mail notifications are activated tenant-wide. To activate the E-Mail notification feature for a tenant, follow these steps:
Login to Horizon
Navigate to the Backups
Navigate to Settings
Check/Uncheck the box for "Enable Email Alerts"
The following screenshots show example E-Mails sent by Trilio.
A quick overview of the Architecture of T4O
Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery solutions for tenant workloads. Trilio is very similar to other OpenStack services including Nova, Cinder, Glance, etc., and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.
Trilio has three main software components which are again segregated into multiple services.
This component is registered as a keystone service of type workloads which manages all the workloads being created for utilizing the snapshot and restore functionalities. It has 4 services responsible for managing these workloads, their snapshots, and restores.
workloadmgr-api
workloadmgr-scheduler
workloadmgr-workloads
workloadmgr-cron
Similar to WorkloadManager, this component registers a keystone service of the type datamover which manages the transfer of extensive data to/from the backup targets. It has 2 services that are responsible for taking care of the data transfer and the communication with the WorkloadManager.
datamover-api
datamover
For ease of access and better user experience, T4O provides an integrated UI with the OpenStack dashboard service Horizon.
Trilio API is a Python module that is installed on all OpenStack controller nodes where the nova-api service is running.
Trilio Datamover is a Python module that is installed on every OpenStack compute node.
Trilio Horizon plugin is installed as an add-on to Horizon servers. This module is installed on every server that runs Horizon service.
Trilio is both a provider and consumer in the OpenStack ecosystem. It uses other OpenStack services such as Nova, Cinder, Glance, Neutron, and Keystone and provides its own services to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal, and admin URLs for two of its services WorkloadManager API and Datamover API.
Unlike the previous versions of Trilio for OpenStack, now it utilizes the existing network of the OpenStack deployed environment. The networks for the Trilio services can be configured as per the user's desire in the same way the user configures any other OpenStack service. Additionally, a dedicated network can be provided to Trilio services on both control and compute planes for storing and retrieving backup data from the backup target store.
A Migration Plan is a structured and pre-defined set of instructions that outlines the process and details for migrating Virtual Machines (VMs) from a VMware environment to OpenStack. It serves as a blueprint for orchestrating the migration, including the selection of VMs, and managing the overall migration workflow. Each plan may have a name and may include a description to provide additional context and easy identification. Migration Plans are created and managed by users to ensure a systematic and controlled migration process.
To initiate the migration process, users must create a Migration Plan:
Provide a name and description for the plan.
Select the vCenter from which the VMs are to be migrated.
Using the Manage VMs functionality, select VMs from the list of VMware VMs. This list does not include the VMs that are already part of a Migration Plan.
Any modification to the plan requires users to:
Revisit the plan details for updates.
Execute the "" call to fetch and update resource information in the Trilio service DB.
Failure to perform a "Discover VMs" after plan modification can lead to unexpected outcomes.
Discover VMs is an operation within the migration process that involves querying and collecting resource information from VMware for the VMs included in a Migration Plan. This operation is crucial for maintaining up-to-date details about the VMs, such as configuration, networking, and storage.
After creating or modifying a Migration Plan, users must:
Trigger the "Discover VMs" operation.
This operation collects resource information of VMs in the plan from VMware and stores it in the WLM service DB.
"Discover VMs" must be called every time the plan is updated to ensure accurate information for migration. It ensures accuracy in subsequent migration steps and helps prevent unexpected outcomes due to outdated information.
The user can also delete a Migration Plan if required, but this requires all initiated migrations to be deleted first. Refer to the section on deleting initiated migrations for more information.
Before embarking on the installation process for Trilio in your OpenStack environment, it is highly advisable to carefully consider several key elements. These considerations will not only streamline the installation procedure but also ensure the optimal setup and functionality of Trilio's solutions within your OpenStack infrastructure.
Trilio leverages Cinder snapshots to facilitate the computation of both full and incremental backups.
When executing full backups, Trilio orchestrates the generation of Cinder snapshots for all volumes included in the backup job. These Cinder snapshots remain intact for subsequent incremental backup image calculations.
During incremental backup operations, Trilio generates fresh Cinder snapshots and computes the altered blocks between these new snapshots and the earlier retained snapshots from full or previous backups. The old snapshots are subsequently deleted, while the newly generated snapshots are preserved.
Trilio is using the OpenStack Keystone Trust system, which enables the Trilio service user to act in the name of another OpenStack user.
This system is used during all backup and restore features.
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
Trusts can only be worked with via CLI
Trilio uses virt-v2v, a widely used open-source tool from Red Hat, for migration. The below-mentioned limitations come from this tool and are thus limitations for Trilio as well:
virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the VM is not changed during the conversion, even if a more optimal version of the kernel is available on the VM.
After converting a virtual machine to KVM, the name of the VM's network interface may change, and thus may require manual configuration.
Consequently, it becomes imperative for each tenant benefiting from Trilio's backup functionality to possess adequate Cinder snapshot quotas capable of accommodating these supplementary snapshots. As a guiding principle, it is recommended to append 2 snapshots for each volume incorporated into the backup quotas for the respective tenant. Additionally, a commensurate increase in volume quotas for the tenant is advisable, as Trilio briefly materializes a volume from the snapshot to access data for backup purposes.
During the restoration process, Trilio generates supplementary instances and Cinder volumes. To facilitate seamless restore operations, tenants should maintain adequate quota levels for Nova instances and Cinder volumes. Failure to meet these quota requirements may lead to disruptions in restoration procedures.
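As an illustration, a hedged sketch of raising a tenant's quotas with the standard OpenStack client (values and project name are placeholders to be sized per the guidance above):

openstack quota set --snapshots 100 --volumes 100 --instances 50 <project-name>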
The AWS S3 object consistency model includes:
Read-after-write
Read-after-update
Read-after-delete
Each of these models explains how an object becomes consistent after being created, updated, or deleted. None of these methods ensures strong consistency, leading to a delay before an object becomes fully consistent.
Although Trilio has introduced measures to address AWS S3's eventual consistency limitations, the exact time an object achieves consistency cannot be predicted through deterministic means.
There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write has a shorter time to reach consistency compared to other IO patterns. Therefore, our solution is designed to maximize the read-after-write IO pattern.
The time in which an object reaches eventual consistency also depends on the AWS region.
For instance, the AWS-standard region doesn't offer the same level of strong consistency as regions like us-east or us-west. Opting for these regions when setting up S3 buckets for Trilio is advisable. While fully avoiding the read-after-update IO pattern is complex, we've introduced significant access delays for objects to achieve consistency over longer periods. On rare occasions when this does happen, it will cause a backup failure and require a retry.
Trilio can be deployed as a single node or a three-node cluster. It is highly recommended that Trilio is deployed as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires additional IP or FQDN for the cluster and is required for both single-node and three-node deployments. Cluster IP a.k.a virtual IP is used for managing clusters and is used to register the Trilio service endpoint in the keystone service catalog.






Troubleshooting inside a complex environment like OpenStack can be very time-consuming.
The following tips will help to speed up the troubleshooting process to identify root causes.
OpenStack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.
The Trilio Workloadmgr is the Controller of Trilio. It receives all Workload related requests from the users.
Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.
During a backup process, the Trilio Workloadmgr is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls to the OpenStack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Workloadmgr writes it as JSON files on the Backup Target.
The Trilio Workloadmgr also sends the Cinder Snapshot command.
During the restore process, the Trilio Workloadmgr reads the VM metadata from its Database and uses the metadata to create the Shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.
The dmapi service is the connector between the Trilio cluster and the Datamover running on the compute nodes.
The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova API, requesting the compute host of a provided VM.
Once the compute host has been identified, the dmapi forwards the command from the Trilio Workloadmgr to the datamover running on the identified compute host.
The datamover is the Trilio service running on the compute nodes.
Each datamover is responsible for the VMs running on top of its compute node. A datamover cannot work with VMs running on a different compute node.
The datamover controls the freeze and thaw of VMs as well as the actual movement of the data.
Trilio is reading and writing on the Backup Target as nova:nova.
The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise, backup or restores may fail with permission or file not found issues.
Alternative ways to achieve this goal are possible, as long as all required nodes can fully read and write as nova:nova on the Backup Target.
It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
On Cohesity NFS, if an Input/Output error is observed, increase the timeo and retrans parameter values in your NFS options.
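For example, assuming your deployment exposes the backup-target NFS mount options as a parameter (the parameter name and values below are illustrative, not authoritative):

nfs_options = nolock,soft,intr,timeo=600,retrans=10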
Log in to all datamover containers and add uxsock_timeout with a value of 60000 (equal to 60 seconds) inside /etc/multipath.conf. Then restart the datamover container.
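A minimal sketch of the resulting /etc/multipath.conf entry (only the uxsock_timeout line is the required addition):

defaults {
    uxsock_timeout 60000
}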
Trilio uses RBAC to control which users can use Trilio features.
The trustee role is absolutely required and cannot be overwritten using the admin role.
It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during the creation of Workloads, backups, or restores.
Trilio is creating Cinder Snapshots and temporary Cinder Volumes. The OpenStack Quotas need to allow that.
Every disk that is being backed up requires one temporary Cinder Volume.
Every Cinder Volume that is being backed up requires two Cinder Snapshots; the second Cinder Snapshot is temporary and is used to calculate the incremental. For example, a backup of a VM with 3 volumes briefly requires 3 additional temporary volumes and up to 6 Cinder Snapshots.
Trilio seamlessly integrates with OpenStack, functioning exclusively through APIs utilizing the OpenStack Endpoints. Furthermore, Trilio establishes its own set of OpenStack endpoints. Additionally, both the Trilio appliance and compute nodes interact with the backup target, impacting the network strategy for a Trilio installation.
OpenStack comprises three endpoint groupings:
Public Endpoints
Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack.
Internal Endpoints
Internal endpoints are intended to be used by the OpenStack services to communicate with each other.
Admin Endpoints
Admin endpoints are meant to be used by OpenStack administrators.
Among these three endpoint categories, it's important to note that the admin endpoint occasionally hosts APIs not accessible through any other type of endpoint.
To learn more about OpenStack endpoints please visit the official OpenStack documentation.
Trilio communicates with all OpenStack services through a designated endpoint type, determined and configured during the deployment of Trilio's services.
It is recommended to configure connectivity through the admin endpoints if available.
The following network requirements can be identified this way:
Trilio services need access to the Keystone admin endpoint on the admin endpoint network if it is available.
Trilio services need access to all endpoints of the set endpoint type during deployment.
Trilio recommends granting comprehensive access to all OpenStack endpoints for all Trilio services, aligning with OpenStack's established standards and best practices.
Additionally, Trilio generates its own endpoints, which are integrated within the same network as other OpenStack API services.
To adhere to OpenStack's prescribed standards and best practices, it's advisable that Trilio containers operate on the same network as other OpenStack containers.
The public endpoint to be used by OpenStack users when using Trilio CLI or API
The internal endpoint to communicate with the OpenStack services
The admin endpoint to use the required admin-only APIs of Keystone
The Trilio solution uses backup target storage to place the backup data securely. Trilio divides its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio Workloadmgr services through communication with the OpenStack Endpoints. All metadata that is stored together with a backup is written by the Trilio Workloadmgr services to the backup target in the JSON format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the Volume Data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.
The network requirements are therefore:
Every Trilio Workloadmgr service container needs access to the backup target.
Every Trilio Datamover service container needs access to the backup target.
| T4O Release | OpenStack Version(s) | Host OS(s) | Supported |
| --- | --- | --- | --- |
| 5.2.7 | 2025.1 Epoxy, 2024.1 Caracal | Ubuntu 22.04, Rocky 9 | ✔️ |
| 5.2.4 to 5.2.6 | 2024.1 Caracal, 2023.2 Bobcat, 2023.1 Antelope, Zed | Ubuntu 22.04, Rocky 9 | ✔️ |
| 5.2.2+ | 2023.2 Bobcat, 2023.1 Antelope, Zed | Ubuntu 22.04, Rocky 9 | ✔️ |
| 5.2.1 | 2023.1 Antelope, Zed | Ubuntu 22.04, Rocky 9 | ✔️ |
| 5.1.0 | Zed | Ubuntu 22.04, Rocky 9 | ✔️ |
| 5.2.5+ | RHOSP 17.1 | RHEL 9 | ✔️ |
| 5.2.0+ | RHOSP 17.1, 16.2, 16.1 | RHEL 9, RHEL 8, RHEL 8 | ✔️ |
| 5.1.0 | RHOSP 16.2, 16.1 | RHEL 8 | ✔️ |
| 5.0.0 | RHOSP 16.2, 16.1 | RHEL 8 | ✔️ |
| 5.2.4+ | Bobcat (Jammy), Antelope (Jammy) | Ubuntu 22.04 | ✔️ |
<trust_id> ➡️ ID of the trust to show
<role_name> ➡️ Name of the role that the trust is created for
--is_cloud_trust {True,False} ➡️ Set to True if creating a cloud admin trust. While creating the cloud trust, use the same user and tenant that were used to configure Trilio, and keep the role admin.
<trust_id> ➡️ ID of the trust to be deleted
workloadmgr trust-list
workloadmgr trust-show <trust_id>
workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>
workloadmgr trust-delete <trust_id>

Trilio requires its containers to be deployed on the same plane as OpenStack, utilizing existing cluster resources.
As described in the architecture overview, Trilio requires sufficient cluster resources to deploy its components on both the Controller Plane and Compute Planes.
Valid Trilio License & Acceptance of the EULA
Sufficient resources available on the target OpenStack cluster nodes
Sufficient storage capacity and connectivity on Cinder for snapshotting operations
Sufficient network capabilities for efficient data transfer of workloads
User and Role permissions for access to required cluster objects
Optional features may have specific requirements, such as encryption, file search, snapshot mount, FRM instance, etc.
Set the hw_qemu_guest_agent=True property on the image and install qemu-guest-agent on the VM, in order to avoid any file system inconsistencies post restore.
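A hedged sketch of both steps (the image name is illustrative; the in-guest commands assume an Ubuntu guest):

openstack image set --property hw_qemu_guest_agent=True ubuntu-22.04
# inside the guest VM:
sudo apt-get install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent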
For the VMware to OpenStack migration feature, please refer to the dedicated migration pages.
Trilio does not support nested virtualization OpenStack deployments for VMware migration.
For Windows VMs, post migration, if the secondary disks are in an Offline state, you will need to log in to the guest VM and manually bring all the attached disks online by following any of the below steps:
Disk Management Console:
Open Disk Management. You can do this by searching for "Computer Management" in the taskbar and navigating to Storage > Disk Management.
Locate the disk that is listed as Offline.
Right-click on the disk and select Online.
Diskpart Command-Line Utility:
Open Command Prompt as an administrator.
Type diskpart and press Enter.
Type list disk and press Enter to display a list of all disks.
Windows VMs migrated with the Dry-Run type won't be bootable on OpenStack, as complete boot information won't be available until the guest VM is shut down.
Please refer to Post-Conversion Tasks for more information.
Other Limitations specific to VM Migration:
If any VM has an independent /usr partition (i.e. not mounted on "/"), then migration of such a VM will fail with the below error:
The workloadmgr CLI client is provided as rpm and deb packages.
The following operating systems have been verified with the installation:
CentOS 8
Ubuntu 18.04, Ubuntu 20.04
Installing the workloadmgr client will automatically install all required OpenStack clients as well.
Further, the installation of the workloadmgr client will integrate the client into the global OpenStack Python client, if available.
The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.
The following steps need to be done to prepare the installation of the workloadmgr client:
Add required repositories
epel-release
centos-release-openstack-train
Install base packages
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo
Enter the following details into the repository file:
Install the workloadmgr client issuing the following command:
yum install python-3-workloadmgrclient-el8
The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.
There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list
Enter the following details into the repository file:
Run apt update to make the new repository available.
The apt package manager is used to install the workloadmgr client package:
apt-get install python3-workloadmgrclient
Trilio employs a base64 hash to establish the mount point for NFS Backup targets, ensuring compatibility across multiple NFS Shares within a single Trilio installation. This hash is an integral component of Trilio's incremental backups, functioning as an absolute path for backing files.
Consequently, during a disaster recovery or rapid migration situation, the utilization of a mount bind becomes necessary.
In scenarios that allow for a comprehensive migration period, an alternative approach comes into play. This involves modifying the backing file, thereby enabling the accessibility of Trilio backups from a different NFS Share. The backing file is updated to correspond with the mount point of the new NFS Share.
Trilio provides a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.
Please request the shell script from your Trilio Customer Success Manager or Customer Success Engineer by opening a case from our Customer Portal. It is not publicly available for download at this time.
The following requirements need to be met before the change of the backing file can be attempted.
The Trilio Appliance has been reconfigured with the new NFS Share
The Openstack environment has been reconfigured with the new NFS Share
Please check the corresponding documentation for Red Hat OpenStack Platform.
The shell script changes one workload at a time.
The shell script has to run as the nova user; otherwise the owner will get changed and the backup can no longer be used by Trilio.
Run the following command:
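The invocation, as also listed in the command reference later on this page:

./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>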
with
/var/triliovault-mounts/<base64>/ being the new NFS mount path
workload_<workload_id> being the workload to rebase
The shell script generates a log file at /tmp/backing_file_update.log.
The log file will not get overwritten when the script is run multiple times; each run of the script appends to the existing log file.
After all Trilio components are installed, the license can be applied.
The license can be applied either through the Admin tab in Horizon or the CLI
To apply the license through Horizon follow these steps:
Login to Horizon using admin user.
Click on the Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to License
Click "Update License"
Read the license agreement
Click on "I accept the terms in the License Agreement"
Click on "Next"
Click "Choose File"
Choose license file on the client system
Click "Apply"
<license_file> ➡️ path to the license file
Read and accept the End User License Agreement to complete the license application.
Users can preview the latest EULA at our main site:
This step-by-step guide provides instructions on creating a Migration Plan for migrating Virtual Machines (VMs) from VMware to OpenStack.
Before creating a Migration Plan, ensure the following prerequisites are met:
Access to the VMware environment.
The OpenStack environment is configured and accessible.
A clear understanding of the VMs you intend to migrate.
Log in to the OpenStack Dashboard (Horizon):
Use your credentials to log in to the OpenStack Dashboard.
Navigate to the VMware Migration section:
In the OpenStack Dashboard, locate and click on the VMware Migration section.
Congratulations! You have successfully created a Migration Plan for migrating VMs from VMware to OpenStack. Ensure to review and validate the plan details before proceeding with the actual migration steps.
Frequently Asked Questions about Trilio for OpenStack
Answer: NO
Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an Inplace Restore, where we only recover the data without creating new instances.
When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.
By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.
Answer: YES
Trilio can restore the VMs MAC address, however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.
In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.
When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.
By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.
Example of Selective Restore with original MAC (and IP address):
In this example, we have taken a Trilio backup of a VM called prod-1.
The VM is deleted and we perform a Selective Restore of a VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu:
Trilio then restores the VM with the original MAC address:
If you left the option as "Choose next available IP address", it will assign a new MAC to the VM instead as Neutron maps all MAC addresses to IP addresses on the Subnet - so logically a new IP will result in a new MAC address.
Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.
As long as the Trilio Workloads are existing on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.
Install and configure Trilio for the target cloud
Notify users of Workloads being available
Trilio incremental Snapshots involve a backing file to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.
Trilio is using qcow2 backing files for this feature:
As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.
Trilio is using the base64 hashing algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.
When the path of the backing file is not available on the Trilio appliance and Compute nodes, restores of incremental backups will fail.
The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.
Running the mount --bind command will make the necessary path available until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit /etc/fstab.
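A worked sketch, assuming the old share was 10.10.2.20:/upstream and the new share is already mounted under its own hashed path (<new-hash> is a placeholder):

# Derive the old mount path's hash from the old NFS share
echo -n 10.10.2.20:/upstream | base64     # -> MTAuMTAuMi4yMDovdXBzdHJlYW0=
mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
mount --bind /var/triliovault-mounts/<new-hash> /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
# Persist the bind across reboots by adding to /etc/fstab:
# /var/triliovault-mounts/<new-hash> /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0= none bind 0 0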
The following Trilio containers get deployed on the Controller node:
triliovault_datamover_api
triliovault_wlm_api
triliovault_wlm_scheduler
The log files for above Trilio services can be found here:
/var/log/containers/triliovault-datamover-api/triliovault-datamover-api.log
/var/log/containers/triliovault-wlm-api/triliovault-wlm-api.log
/var/log/containers/triliovault-wlm-cron/triliovault-wlm-cron.log
/var/log/containers/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log
/var/log/containers/triliovault-wlm-workloads/triliovault-wlm-workloads.log
When using S3 as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.
/var/log/containers/triliovault-wlm-workloads/triliovault-object-store.log
For the file search operation, logs can be found on the Controller node at the below location:
/var/log/containers/triliovault-wlm-workloads/workloadmgr-filesearch.log
The Trilio Datamover container gets deployed on the Compute node.
The log file for the Trilio Datamover service can be found at:
/var/log/containers/triliovault-datamover/triliovault-datamover.log
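When hunting for root causes, a quick scan across these logs can save time; a minimal sketch:

grep -iE "error|traceback" /var/log/containers/triliovault-*/*.log | tail -n 50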
Migration within the same cloud to a different owner:
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create encrypted workload in Project A in Domain A via User A. Take snapshot.
Reassign workload to new owner
Load the rc file of User A and provide read-only rights through an ACL to the new owner:
openstack acl user add --user <userB_id> <secret_href> --insecure
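For step 1, a hedged sketch of creating the secret with the Barbican client (secret name and payload are illustrative):

openstack secret store --name wl-encryption-secret --payload 'give-me-a-32-char-secret-value!!' --insecure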
Migration between clouds:
Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.
Reassign workload to Cloud B - Domain B — Project B — User B
Load RC file of User B.
After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be done to verify that the Trilio installation is healthy.
Make sure the below containers are in a running state; triliovault-wlm-cron runs on only one of the controllers in a multi-controller setup.
It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types. Or to have the nova boot volumes and cinder block volumes controlled by different users.
In the case of multiple Ceph users, it is required to delete the keyring extension from the triliovault-datamover.conf inside the Ceph block by following the below-mentioned steps:
Deploy Trilio as per the documented steps.
virt-v2v: error: inspection could not detect the source guest (or physical
machine).
Assuming that you are running virt-v2v/virt-p2v on a source which is
supported (and not, for example, a blank disk), then this should not
happen.
Inspection field 'i_arch' was 'unknown'.

Identify the offline disk and note its disk number.
Type select disk <disk number> and press Enter, replacing <disk number> with the actual disk number.
Type online disk and press Enter.
Type exit to close Diskpart.
triliovault_wlm_workloads
triliovault-wlm-cron
Initiate Plan Creation:
Within VMware Migration, find the Create button to start creating a Migration Plan.
This may take some time to load as it connects to the vCenter and fetches all the accessible VM information.
Provide Plan Details:
Enter a unique and meaningful name for the Migration Plan.
Optionally, add a detailed description to provide context for the plan.
Select the vCenter:
Select one vCenter from the drop-down.
Save the Migration Plan:
Click the Create button to save the Migration Plan after providing the necessary details.
Select VMs for Migration:
Click the Manage VMs button in the drop-down Actions for that Migration Plan.
Browse the list of available VMware VMs under the Migration Plan VMs tab.
Select the VMs that you intend to migrate as part of this plan.
Click the Update button to save the Migration Plan after selecting VMs.
Optional: Modify Plan Details:
If needed, revisit the Migration Plan to modify details, add or remove VMs.
You can do that by clicking the Edit button in the drop-down Actions for that Migration Plan.
Review the Migration Plan:
To ensure all details are accurate and complete for the Migration Plan, click the created Migration Plan.
It will take you to the VMware Migration Plan Detail page, where you can review the VMs and check additional information about those VMs.
Proceed with Discovery and Migration:
After creating and finalizing the Migration Plan, users can proceed with the "Discover VMs" operation to collect resource information and subsequently initiate the actual migration.




yum -y install epel-release
yum -y install centos-release-openstack-train
The workloads are owned by the nova:nova user.
Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.
Create token via “openstack token issue --insecure”
Add the migrated workload's metadata to the new secret (provide the issued token to X-Auth-Token and the workload ID to metadata as below).
triliovault_datamover_api
triliovault_wlm_api
triliovault_wlm_scheduler
triliovault_wlm_workloads
triliovault-wlm-cron
If the containers are in a restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
After successful deployment, the triliovault-wlm-cron service gets added to the pcs cluster as a cluster resource; you can verify this through the pcs status command.
Verify the HAproxy configuration under:
Make sure the Trilio Datamover container is in running state and no other Trilio container is deployed on compute nodes.
Check that the provided backup target is mounted properly on the Compute host.
Make sure the horizon container is in a running state. Please note that the Horizon container is replaced with Trilio's Horizon container. This container will have the latest OpenStack horizon + Trilio horizon plugin.
If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround:
Post successful deployment, please modify the triliovault-datamover.conf file present at the following locations on all compute nodes.
For RHOSP : /var/lib/config-data/puppet-generated/triliovaultdm/etc/triliovault-datamover/
For Kolla : /etc/kolla/triliovault-datamover/
Modify the keyring_ext value with a valid keyring extension (e.g. .keyring). This extension is expected to be the same for all the keyring files. It will be present under the [ceph] block in the triliovault-datamover.conf file.
Sample conf entry below. This will try all files with the .keyring extension located inside /etc/ceph to access the Ceph cluster for a Trilio-related task.
Restart triliovault_datamover container on all compute nodes.
...
[ceph]
keyring_ext = .keyring
...
[trilio]
name=Trilio Repository
baseurl=https://yum.fury.io/trilio-5-0/
enabled=1
gpgcheck=0

The matching apt repository entry for Ubuntu (/etc/apt/sources.list.d/fury.list):
deb [trusted=yes] https://apt.fury.io/trilio-5-0/ /

Rebase tool invocation and its log file:
./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
/tmp/backing_file_update.log

License application via CLI:
workloadmgr license-create <license_file>

Example qemu-img output showing the backing file:
qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 21M
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

# echo -n 10.10.2.20:/upstream | base64
MTAuMTAuMi4yMDovdXBzdHJlYW0=

# mount --bind <mount-path1> <mount-path2>

# vi /etc/fstab
<mount-path1> <mount-path2> none bind 0 0

curl -i -X PUT \
-H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
-H "Content-Type:application/json" \
-d \
'{
"metadata": {
"workload_id": "c13243a3-74c8-4f23-b3ac-771460d76130",
"workload_name": "workload-c13243a3-74c8-4f23-b3ac-771460d76130"
}
}' \
'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'
curl -i -X GET \
-H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'

[root@overcloudtrain5-controller-0 /]# podman ps | grep trilio-
76511a257278 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 12 days ago Up 12 days ago horizon
5c5acec33392 cluster.common.tag/trilio-wlm:pcmklatest /bin/bash /usr/lo... 7 days ago Up 7 days ago triliovault-wlm-cron-podman-0
8dc61a674a7f undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_datamover_api
a945fbf80554 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_wlm_scheduler
402c9fdb3647 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 6 days ago triliovault_wlm_workloads
f9452e4b3d14 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 6 days ago triliovault_wlm_api

[root@overcloudtrain5-controller-0 /]# pcs status
Cluster name: tripleo_cluster
Cluster Summary:
* Stack: corosync
* Current DC: overcloudtrain5-controller-0 (version 2.0.5-9.el8_4.3-ba59be7122) - partition with quorum
* Last updated: Mon Jul 24 11:19:05 2023
* Last change: Mon Jul 17 10:38:45 2023 by root via cibadmin on overcloudtrain5-controller-0
* 4 nodes configured
* 14 resource instances configured
Node List:
* Online: [ overcloudtrain5-controller-0 ]
* GuestOnline: [ galera-bundle-0@overcloudtrain5-controller-0 rabbitmq-bundle-0@overcloudtrain5-controller-0 redis-bundle-0@overcloudtrain5-controller-0 ]
Full List of Resources:
* ip-172.30.6.27 (ocf::heartbeat:IPaddr2): Started overcloudtrain5-controller-0
* ip-172.30.6.16 (ocf::heartbeat:IPaddr2): Started overcloudtrain5-controller-0
* Container bundle: haproxy-bundle [cluster.common.tag/openstack-haproxy:pcmklatest]:
* haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
* Container bundle: galera-bundle [cluster.common.tag/openstack-mariadb:pcmklatest]:
* galera-bundle-0 (ocf::heartbeat:galera): Master overcloudtrain5-controller-0
* Container bundle: rabbitmq-bundle [cluster.common.tag/openstack-rabbitmq:pcmklatest]:
* rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloudtrain5-controller-0
* Container bundle: redis-bundle [cluster.common.tag/openstack-redis:pcmklatest]:
* redis-bundle-0 (ocf::heartbeat:redis): Master overcloudtrain5-controller-0
* Container bundle: openstack-cinder-volume [cluster.common.tag/openstack-cinder-volume:pcmklatest]:
* openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
* Container bundle: triliovault-wlm-cron [cluster.common.tag/trilio-wlm:pcmklatest]:
* triliovault-wlm-cron-podman-0 (ocf::heartbeat:podman): Started overcloudtrain5-controller-0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

[root@overcloudtrain5-novacompute-0 heat-admin]# podman ps | grep -i datamover
c750a8d0471f undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 7 days ago Up 7 days ago triliovault_datamover

[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep triliovault-mounts
172.30.1.9:/mnt/rhosptargetnfs 7.0T 5.1T 2.0T 72% /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw==

[root@overcloudtrain5-controller-0 heat-admin]# podman ps | grep horizon
76511a257278 undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 kolla_start 12 days ago Up 12 days ago horizon

Either of the below workarounds should be performed on all the controller nodes where the issue occurs for the horizon pod:
option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
option-2: Restart the memcached pod (command: podman restart memcached)

Start Day/Time
End Day
Hrs between 2 snapshots
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
Uncheck "Enabled"
Click "Update"
--workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
check "Enabled"
Click "Update"
--workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be enabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
To modify a schedule the workload itself needs to be modified.
Please follow this procedure to modify the workload.
Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.
This system is used during all backup and restore features.
As a trust is bound to a specific user, the Trilio Horizon plugin shows the status of the Scheduler for each Workload on the Workload list page.
<workload_id> ➡️ ID of the workload to validate
Please ensure the following requirements are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
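The exact commands depend on the appliance version; a hedged sketch for a pcs-managed 4.x appliance (resource/service names follow this guide's examples, verification steps are illustrative):

pcs resource disable wlm-cron
systemctl status wlm-cron            # expect inactive/disabled
ps -ef | grep [w]orkloadmgr-cron     # expect no running cron processes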
Please follow the installation documentation to install the latest Trilio release.
T4O changed the calculation of the mount point from the 4.2 release onwards. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2.
Please follow the mount-bind documentation to set up the mount bind for RHOSP.
After the upgrade, the global job scheduler will be disabled. You can enable it either through the UI or the CLI.
Login to the Dashboard and go to Admin -> Backups-Admin -> Settings tab and check the 'Job Scheduler Enabled' checkbox. By clicking the 'Change' button you can enable the Global Job Scheduler.
Log in to any WLM container, create and source the admin rc file, and execute the CLI command to enable the global job scheduler.
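A sketch of that CLI path (the rc file name is illustrative; the command name is taken from the workloadmgr client and should be verified with workloadmgr help):

source admin-rc.sh
workloadmgr enable-global-job-scheduler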
Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
Please follow the workload import documentation to import the Workloads.
This step-by-step guide provides instructions on running a migration for Virtual Machines (VMs) from VMware to OpenStack.
Before initiating the migration, ensure the following prerequisites are met:
Access to the VMware environment.
OpenStack environment configured and accessible.
A Migration Plan created with the desired VMs selected.
Log in to the OpenStack Dashboard (Horizon):
Use your credentials to log in to the OpenStack Dashboard.
Navigate to the VMware Migration section:
In the OpenStack Dashboard, locate and click on the VMware Migration section.
Verify that Instances are created with storage volumes, networking, and security groups attached.
Verify if the Instances are booting up.
As mentioned earlier, the user may have to set up the network by accessing the console of the instance.
Validate the connectivity of the VMs created on OpenStack.
The Migration Plan will enter a "Locked" state while its migration is in progress. A new migration for that plan can be initiated once the running migration has completed or encountered an error and the Plan is back in an "Available" state.
Congratulations! You have successfully run a migration for VMs from VMware to OpenStack using the specified migration type. Ensure to validate the results and perform necessary tests to confirm the success of the migration process.
Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.
The following Quotas can be set:
Number of Workloads a Project is allowed to have
Number of Snapshots a Project is allowed to have
Number of VMs a Project is allowed to protect
Amount of Storage a Project is allowed to use on the Backup Target
The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.
Workload Quotas are managed like any other Project Quotas.
Login into Horizon as user with admin role
Navigate to Identity
Navigate to Projects
Identify the Project to modify or show the quotas on
Trilio provides several different Quota Types. The following command allows listing them.
Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quotas a Tenant has to comply with.
The following command will show the details of a provided Quota Type.
<quota_type_id> ➡️ID of the Quota Type to show
The following command will create a Quota for a given project and set the provided value.
<quota_type_id> ➡️ID of the Quota Type to be created
<allowed_value>➡️ Value to set for this Quota Type
<high_watermark> ➡️ Value to set for High Watermark warnings
The following command lists all Trilio Quotas set for a given project.
<project_id>➡️ Project to list the Quotas from
The following command shows the details about a provided allowed Quota.
<allowed_quota_id> ➡️ID of the allowed Quota to show.
The following command shows how to update the value of an already existing allowed Quota.
<allowed_value>➡️ Value to set for this Quota Type
<high_watermark>➡️ Value to set for High Watermark warnings
<project_id> ➡️ Project the Quota is assigned to
The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.
<allowed_quota_id> ➡️ID of the allowed Quota to delete
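For reference, a hedged sketch of the CLI calls matching the operations above (command names and argument order are assumptions based on the parameters described here; verify with workloadmgr help):

workloadmgr project-quota-type-list
workloadmgr project-quota-type-show <quota_type_id>
workloadmgr project-allowed-quota-create <quota_type_id> <allowed_value> <high_watermark> <project_id>
workloadmgr project-allowed-quota-list <project_id>
workloadmgr project-allowed-quota-show <allowed_quota_id>
workloadmgr project-allowed-quota-update <allowed_value> <high_watermark> <project_id> <allowed_quota_id>
workloadmgr project-allowed-quota-delete <allowed_quota_id>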
Please ensure the following requirements are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
In case you are upgrading from 4.x release to 5.x, please ensure below:
wlm-cron is disabled on the Trilio Appliance
The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
Please follow the installation documentation to install the latest Trilio release.
After the upgrade, the global job scheduler will be disabled. You can enable it either through the UI or the CLI.
Login to the Dashboard and go to Admin -> Backups-Admin -> Settings tab and check the 'Job Scheduler Enabled' checkbox. By clicking the 'Change' button you can enable the Global Job Scheduler.
Log in to any WLM container, create and source the admin rc file, and execute the CLI command to enable the global job scheduler.
Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
Please follow the workload import documentation to import the Workloads.
Trilio for OpenStack generates a base64 hash value for every NFS backup target connected to the T4O solution. This enables T4O to mount multiple NFS backup targets to the same T4O installation.
The mountpoints are generated utilizing a hash value inside the mountpoint, providing a unique mount for every NFS backup target.
This mountpoint is then used inside the incremental backups to point to the qcow2 backing files. The backing file path is required to be a full path and cannot be set as a relative path; this is a limitation of the qcow2 format.
With T4O 4.2 there was a significant change in how the hash value gets calculated. T4O 4.1 and prior calculated the hash value out of the complete NFS path provided as shown in the example below.
T4O 4.2 is now only considering the NFS directory part for the hash value as shown below.
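An illustrative comparison using the example share 10.10.2.20:/upstream (the hash values can be reproduced with base64):

# T4O 4.1 and prior: hash of the full NFS path
echo -n 10.10.2.20:/upstream | base64    # -> MTAuMTAuMi4yMDovdXBzdHJlYW0=
# T4O 4.2 onwards: hash of the directory part only
echo -n /upstream | base64               # -> L3Vwc3RyZWFt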
It is therefore necessary to make older backups taken by T4O 4.1 or prior available to T4O 4.2 for restores.
This can be done by one of two methods: setting up a mount bind, or rebasing the backing files to the new mount path.
VM Migration Tool is an add-on stand alone utility provided by Trilio for effective migration of VMs from VMWare to OpenStack. This will assist users in planning the migration, creating any missing artifacts, and providing a holistic view of the migration process at an organizational level.
This document provides the steps to be followed for deploying the tool.
Filename and location:
This file exclusively comes into play when users aim to configure Trilio with an NFS backup target that employs multiple network endpoints. For all other scenarios, such as single IP NFS or S3, this file remains inactive, and in such instances, please consult the standard installation documentation.
When using an NFS backup target with multiple network endpoints, T4O will mount a single IP/endpoint on a designated compute node for a specific NFS share. This approach enables users to distribute NFS share IPs/endpoints across various compute nodes.
The 'triliovault_nfs_map_input.yml' file defines the mapping of NFS share endpoints to compute nodes, as shown in the sample mapping file below.
workloadmgr disable-scheduler --workloadids <workloadid>
workloadmgr enable-scheduler --workloadids <workloadid>
workloadmgr scheduler-trust-validate <workload_id>
Use the small arrow next to "Manage Members" to open the submenu
Choose "Modify Quotas"
Navigate to "Workload Manager"
Edit Quotas as desired
Click "Save"
<project_id>➡️ Project to assign the quota to
<allowed_quota_id> ➡️ID of the allowed Quota to update

Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload a file search shall be done in
Click the workload name to enter the Workload overview
Click File Search to enter the file search tab
A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.
To run a file search the following elements need to be decided and configured
Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.
The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.
The File Path has to start with a '/'
Example File Path for all files inside /etc : /etc/*
"Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.
There are 3 possibilities for a pre-filtering:
All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots
Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.
Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.
After the pre-filtering is done, all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.
To start a File Search the following elements need to be set:
A VM to search in has to be chosen
A valid File Path provided
At least one Snapshot to search in selected
Once those have been set click "Search" to start the file search.
Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.
After a short time the results are presented in a tabular format, grouped by Snapshot and by Volume inside the Snapshot.
For each found file or folder the following information is provided:
POSIX permissions
Amount of links pointing to the file or folder
User ID who owns the file or folder
Group ID assigned to the file or folder
Actual size in Bytes of the file or folder
Time of creation
Time of last modification
Time of last access
Full path to the found file or folder
Once the Snapshot of interest has been identified, it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.
<vm_id> ➡️ ID of the VM to be searched
<file_path> ➡️ Path of the file to search for
--snapshotids <snapshotid> ➡️ Search only in the specified snapshot IDs
--end_filter <end_filter> ➡️ Display only the last snapshots, e.g. the last 10 snapshots; the default 0 displays all snapshots
--start_filter <start_filter> ➡️ Display snapshots starting from a given position, e.g. starting from the 5th snapshot; the default 0 starts from the first snapshot
--date_from <date_from> ➡️ From date in the format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If the time is not specified, 00:00 is assumed by default
--date_to <date_to> ➡️ To date in the format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get inclusive/exclusive results for date_from and date_to within the same day
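An example invocation combining the parameters above; the VM ID and the search string are placeholders chosen for illustration.
# Search the last 10 snapshots of the given VM for files under /etc
workloadmgr filepath-search --end_filter 10 <vm_id> '/etc/*'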
Here, the user has one NFS share exposed with three IP addresses: 192.168.1.34, 192.168.1.35, and 192.168.1.33.
Share directory path is: /var/share1
So, this NFS share supports the following full paths that clients can mount:
There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern
The remaining 2 node hostnames do not follow any format/pattern.
Now the mapping file will look like this
Compute node IP range used here: 172.30.3.11-40 plus 172.30.4.40 and 172.30.4.50, a total of 32 compute nodes.
Use the following command to get compute hostnames. Check the ‘Name' column. Use these exact hostnames in 'triliovault_nfs_map_input.yml' file.
In the following command output, ‘overcloudtrain1-novacompute-0' and ‘overcloudtrain1-novacompute-1' are correct hostnames.
[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset : disabled)
Active: inactive (dead)
Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
Meta Attrs: target-role=Stopped
Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron
$ podman exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr enable-global-job-scheduler
Global job scheduler is successfully enabled
workloadmgr project-quota-type-list
workloadmgr project-quota-type-show <quota_type_id>
workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
--allowed-value allowed_value
--high-watermark high_watermark
--project-id project_id
workloadmgr project-allowed-quota-list <project_id>
workloadmgr project-allowed-quota-show <allowed_quota_id>
workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
[--high-watermark <high_watermark>]
[--project-id <project_id>]
<allowed_quota_id>
workloadmgr project-allowed-quota-delete <allowed_quota_id>
[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset : disabled)
Active: inactive (dead)
Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
Meta Attrs: target-role=Stopped
Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron
$ docker exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr enable-global-job-scheduler
Global job scheduler is successfully enabled
workloadmgr filepath-search [--snapshotids <snapshotid>]
[--end_filter <end_filter>]
[--start_filter <start_filter>]
[--date_from <date_from>]
[--date_to <date_to>]
<vm_id> <file_path>
triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml
192.168.1.33:/var/share1
192.168.1.34:/var/share1
192.168.1.35:/var/share1
prod-compute-1.trilio.demo
prod-compute-2.trilio.demo
prod-compute-3.trilio.demo
.
.
.
prod-compute-30.trilio.demo
compute_bare.trilio.demo
compute_virtual
multi_ip_nfs_shares:
- "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo']
"192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual']
"192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo']
single_ip_nfs_shares: []
multi_ip_nfs_shares:
- "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40']
"192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50']
"192.168.1.33:/var/share1": ['172.30.3.[31:40]']
single_ip_nfs_shares: []
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Initiate Migration:
If unsure whether the "Migration Plan" has initiated "Discover VMs" or not, it can be re-initiated before starting the migration.
Within the Migration Plan, look for the option Migrate to initiate migration.
It will pop up one wizard for collecting the user inputs.
Provide Migration Details:
Migration Name: Enter a unique and meaningful name for the Migration.
Migration Description: Optionally, add a detailed description to provide context for the migration.
Migration Type: Select the Migration Type from the drop-down. Refer to for more information about the migration types that Trilio supports.
Cutover Option: If the selected migration type is Warm Migration, then the user has an option to select the Cutover option to be automatic, otherwise the cutover will be user initiated by default.
Map the Networks:
This section has a comprehensive list of all the unique networks found in all the VMs that are part of that Migration Plan.
Select the OpenStack networks to which the VMs' networks need to be mapped.
Map the Volume Types:
This section has a comprehensive list of all the unique Datastores found in all the VMs that are part of that Migration Plan.
Select the OpenStack Volume types to which the VM's Datastores need to be mapped.
Select VMs:
Select the VMs of the Migration Plan that need to be migrated to OpenStack.
Verify the details of the VM to be migrated by clicking the "+" sign on the VMs.
Start Migration:
Start the migration by clicking the Migrate button.
Observe the progress and status of the migration in the OpenStack Dashboard.
If Warm migration is selected with manual cutover, then the migration will get into a Paused state where the user needs to resume it to continue the migration process.
After migration, Trilio updates the cloud-init configuration on the migrated instance to disable most cloud-init modules. This prevents the instance from reinitializing during its first boot.
If desired, users can revert these changes to re-enable and reinitialize cloud-init on the migrated instances.
On Linux VMs
A new configuration file is created at:
On Windows VMs
The cloud-init configuration file is modified at:
A backup of the original configuration file is saved at:
RHOSP 17.1
RHOSP 16.2
RHOSP 16.1
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
https://pypi.fury.io/trilio-5-2/
contegoclient
5.2.8
s3fuse
5.2.8.1
tvault-horizon-plugin
5.2.8.2
workloadmgr
5.2.8.4
workloadmgrclient
5.2.8.1
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
RHOSP 16.2
RHOSP 16.1
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
https://pypi.fury.io/trilio-5-0/
contegoclient
5.0.16
s3fuse
5.0.16
tvault-horizon-plugin
5.0.16
workloadmgr
5.0.16
workloadmgrclient
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
git clone -b 5.0.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
Trilio’s backup and recovery solution is an OpenStack native service similar to Nova, Cinder, Glance, etc. This software-only solution can be deployed using OpenStack distribution-specific DevOps scripts. For example, Trilio’s solution is native to the RHOSP platform and can be deployed and managed using Red Hat Director.
We extended Trilio’s architecture to support migration at scale. Since Trilio migrates the VM from VMware ESXi to the OpenStack compute node, the solution scales with your VMware/OpenStack without suffering from performance bottlenecks.
The process of migration usually involves moving virtual machines from a source platform to a target platform; however, the strategy to achieve this should be considered carefully.
If the migration process is initiated from the target platform, it is called inline migration. Trilio advocates inline migration as it helps map organizational structures such as data centers/clusters/folders in VMware to regions/domains/projects in OpenStack.
A project owner with sufficient privileges to their VMware resources can migrate the VMs to their project. If every project owner can migrate their resources from VMware, it provides better control to cloud architects to orchestrate the migration.
If the migration process is initiated from an outside source or target platform, it is called out-of-band migration. Out-of-band demands a centralized approach and places constraints on how cloud architects can design their OpenStack clouds.
Trilio also provides a very intuitive tenant dashboard for managing migrations. If you want to migrate to KubeVirt, Red Hat’s MTV solution can be helpful.
The sub-commands for the migration feature are available as:
Migration APIs look and feel like OpenStack API. Each tenant can source their cloud rc and start using migration cli commands. CLI also helps users to extend their Ansible scripting for migration.
Trilio migration functionality supports three types of migrations
Dry-run lets users create the VMs on OpenStack with storage volumes, networking, and security groups attached. It migrates VM data from VMware to OpenStack by taking a snapshot of the VM without shutting it down and hence does not guarantee data/application consistency. Unlike Cold and Warm migration, there is no disruption to the VMware VMs during this migration. It is a handy feature for users to test VMs and their interconnectivity before completely migrating VMs.
Cold migration shuts down the VMware VMs and migrates the VMs to OpenStack. Depending on the size of the data to migrate, cold migration can take minutes to hours to complete, and during the entire duration, the applications are unavailable. Cold migration is more disruptive than warm migration.
Warm migration leverages VMware snapshot and Change Block Tracking functionality to migrate VMs with minimal disruption. It first takes a VM snapshot and copies the full snapshot data to OpenStack. Depending on the size of the data, this upload operation may take a few minutes to a few hours. During this period, the VM may have written additional data blocks, which are then captured by taking a second snapshot of the VM.
At this point, the cutover occurs as per the option selected by the user during invoking the migration:
Manual cutover (default): After the second snapshot upload, the migration process pauses. The user must manually resume the migration once they are ready for the source VM to be shut down and the final data upload to be performed.
Automatic cutover: After the second snapshot upload, the source VM is automatically shut down, and the final data upload is performed without user intervention.
In both cases, after the final snapshot is uploaded, the corresponding VM is started on OpenStack. Warm migration therefore provides significantly less downtime compared to cold migration.
Trilio's support for migrating VMware VMs to OpenStack is recognized as an industry-leading solution. Its scalability and ability to be deployed at organization-wide scale make it a valuable choice for businesses. The self-paced approach provided by Trilio allows application owners to experiment with migration options and gain confidence before performing the final migration. In the context of businesses transitioning from vendor lock-in to open-source technologies, Trilio's migration feature can be a helpful tool in accelerating the transition process.


Rebase all backups and change the backing file path
Make the old mount point available again and point it to the new one using mount bind
Trilio provides a script that takes care of the rebase procedure; it is available from the following repository.
Log in to the triliovault_wlm_workloads container as the nova user and copy the script to the /tmp directory inside the container.
The nova user is required, as this user owns the backup files, and using any other user will change the ownership and lead to unrestorable backups.
The script requires the complete path to the workload directory on the backup target.
This needs to be repeated until all workloads have been rebased.
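A minimal sketch of how this repetition could be scripted, assuming the rebase script has been copied to /tmp and the base64 mount path is known (both are placeholders below):
# Run the rebase script once per workload directory on the backup target (illustrative paths)
cd /tmp
for wl in /var/triliovault-mounts/<base64>/workload_*; do
    ./backing_file_update.sh "$wl"
done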
This method generates a second mount point based on the old hash value calculation and then mounts the new mount point onto the old one. With this mount bind, both mount points are available and point to the same backup target.
To use this method the following information needs to be available:
old mount path
new mount path
Examples of how to calculate the base64 hash value are shown below.
Old mountpoint hash value:
New mountpoint hash value:
The mount bind needs to be done for all Trilio appliances and Datamover services.
The mount bind needs to be done on all the Compute nodes and all the wlm containers on the Controller node
To activate the mount bind on the controller node, access each of the WLM containers (triliovault_wlm_api, triliovault_wlm_scheduler, triliovault_wlm_workloads, and triliovault_wlm_cron), and execute the following sequence of actions. It's crucial to replicate these steps on all other overcloud controller nodes.
Create the old mountpoint directory
mkdir -p <old_mount_path>
run mount --bind command
mount --bind <new_mount_path> <old_mount_path>
set permissions for mountpoint
chmod 777 <old_mount_path>
An example is given below.
The following steps need to be done on all the overcloud compute nodes themselves and not inside of a container.
To enable the mount bind on the Compute node follow the below steps:
Create the old mountpoint directory: mkdir -p <old_mount_path>
Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>
Set permissions for the mountpoint: chmod 777 <old_mount_path>
An example is given below.
A Linux VM (CentOS or Ubuntu will do) with Docker installed on it.
2.1] Clone the public repository triliovault-cfg-scripts on the standalone VM (at any convenient location) created for deploying the VM Migration Tool.
2.2] On the VM, copy nginx, env, docker-compose.yml & vmosmapping.conf from triliovault-cfg-scripts to the /opt directory.
2.3] Set values for the params in the /opt/env file (a sample is shown after the parameter list).
TRILIO_IMAGE_TAG
Tag of the image as provided by Trilio. Refer to the resources page for the respective release.
trilio/trilio-migration-vm2os:5.2.4
NGINX_PORT
Port on which nginx should run. Please ensure that the port is free.
Default : 5085
REDIS_PORT
Port on which redis should run. Please ensure that the port is free.
Default : 6379
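A sample /opt/env file, assuming the defaults above and the example image tag (adjust to the tag supplied for your release):
# /opt/env - illustrative values only
TRILIO_IMAGE_TAG=trilio/trilio-migration-vm2os:5.2.4
NGINX_PORT=5085
REDIS_PORT=6379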
2.4] Set values against params in /opt/vmosmapping.conf file.
host
vCenter host address from where VMs have to be migrated into OpenStack.
admin
Admin user login name for vCenter
password
Admin user password for vCenter
ssl_verify
To be set to True or False as needed
keystone_url
Keystone URL of the OpenStack where VMs from vCenter have to be migrated
admin_user
Admin user login name for OpenStack
3.1] To deploy the VM Migration Tool on the standalone VM, execute the below command in background mode.
3.2] Deployment checks
Run docker ps -a; 4 containers should be running, viz. redis, nginx, trilio_vm2os, and opt-worker-1.
Sample output below (Forwarded port number can be different as per the value provided in /opt/env file)
The Migration Tool can then be accessed at the URL <VM_IP>:<NGINX_PORT>. Example:
For logging into dashboard, use the credentials provided under [user] section in /opt/vmosmapping.conf file
Learn about artifacts related to Trilio for OpenStack 5.2.6
Branch of the repository to be used for DevOps scripts.
git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
Learn about artifacts related to Trilio for OpenStack 5.2.0
git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
VMware vSphere ESXi 7.0.x
[1] SUSE Enterprise Linux guests are validated to run on RHEL hosts. However, SUSE Linux uses the btrfs file system by default, which is unsupported by RHEL. Therefore, converting a SUSE system that uses btrfs is not possible with virt-v2v.
In addition, VMs that use X graphics and a SUSE Linux Enterprise Server 11 operating system should be re-adjusted after the conversion for the graphics to work properly. To do so, use the sax2 distribution tool in the guest OS after the migration is finished.
[2] As a Technology Preview, converting Debian and Ubuntu VMs is not supported. In addition, this conversion currently has the following known issues: * virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the VM is not changed during the conversion, even if a more optimal version of the kernel is available on the VM. * After converting a Linux virtual machine to KVM, the name of the VM's network interface may change and thus requires manual configuration.
Note: virt-v2v conversions of any operating system not listed above may in some cases work, but are not supported by Red Hat.
Refer to the "Supporting Conversions" section of .
Migration refers to the process of transferring Virtual Machines and their associated resources from a VMware environment to an OpenStack environment. It involves multiple steps, including planning (creating a Migration Plan), discovery (fetching VM resource information), and the actual migration execution. During migration, users provide details such as migration name, and description, select VMs, map networks, map volume types, and configure specific parameters for each VM. The goal is to seamlessly move VM workloads while preserving their configurations and ensuring compatibility with the target OpenStack environment. Successful migration results in VMs running efficiently within the OpenStack infrastructure.
Once the VMs of the Migration Plan are discovered, the user can initiate the migration.
Trilio provides three types of migrations, and the user can choose as per their need; more details can be found below.
Dry-run Migration:
Create VMs on OpenStack with attached storage volumes, networking, and security groups for testing purposes while migrating the VM snapshot data from VMware.
Cold Migration:
Shut down VMware virtual machines and then migrate them to OpenStack. This process can take minutes to hours and may cause significant downtime, making it more disruptive.
While initiating the migration, the user needs to provide the additional information mentioned below to ensure a seamless migration of the VMs.
To perform the actual migration, users need to:
Provide a name and description for the migration.
Choose the type of migration from a drop-down menu.
During migration, users can:
Map enabled networks in Guest VMs to OpenStack Networks.
Trilio automatically detects networks, providing a seamless mapping experience.
Users need to:
Map Datastores detected from the migration plan VMs with OpenStack volume types.
This ensures proper mapping of storage resources during migration.
Users must:
Select VMs mapped to the Migration Plan for migration.
For each VM, choose the correct flavor and Availability Zone.
Specify configurations for disks to be created on OpenStack.
For the VMs that are UEFI booted, a glance image must be selected. Refer to for more details.
There are certain limitations that have to be taken care of by the user once the migration is successful, please refer to the page for more information.
This option applies only to warm migrations with manual cutover (default).
Once the migration process reaches the stage where the second snapshot of the source VM has been successfully uploaded, the migration enters a Paused state. At this point, the process waits for the user to perform the Resume Cutover action.
Resuming cutover allows Trilio to shut down the source VM and begin the final stage of data transfer.
There is no time limit for this wait; however, it is recommended to resume the operation as soon as possible. Delaying may result in additional data being written to the source VM, which can increase the overall cutover time.
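If the migration is driven from the CLI, the cutover can be resumed with the migration-resume sub-command listed earlier; this sketch assumes the sub-command accepts the migration ID, which is a placeholder here.
# Resume a warm migration that is paused and waiting for cutover
workloadmgr migration-resume <migration_id>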
At any stage, if users decide not to continue with the migration, they can cancel the operation. This action reverts any changes and cleans up the OpenStack resources created for that migration.
The Cancel option is available for all types of migrations and at any stage, including when the migration is in a Paused state.
The Migration Overview page provides a comprehensive summary of a migration job. It displays key details, progress indicators, and status information to help users monitor and analyze the migration process.
Name – The unique name assigned to the migration job.
Description – An optional user-provided description for easier identification.
Migration Type – Indicates the type of migration (e.g., warm, cold).
Status – Current state of the migration (e.g., Running, Paused, Completed, Failed).
Progress Messages – A detailed list of migration tasks, each showing:
Task name
Progress percentage
Status (e.g., In Progress, Completed, Failed)
Workload Manager Host – The hostname of the workload manager where the migration is scheduled.
Migration Options – The options selected while invoking the migration (e.g., automatic or manual cutover, additional configuration settings).
Deletion of the initiated Migrations can be done to clean up the unnecessary entries in the list. Please note that deleting a migration does not clean up the migrated VMs or their associated resources from OpenStack. The user will have to explicitly remove the migrated resources if necessary.
Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:
Openstack User - The Openstack User-ID is assigned to a Workload
Openstack Project - The Openstack Project-ID is assigned to a Workload
Openstack Cloud - The Trilio Serviceuser-ID is assigned to a Workload
This ownership ensures that only the owners of a Workload are able to work with it.
Openstack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.
Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
The Workload import is designed to import Workloads, which are owned by the Cloud.
It will not import or list any Workloads that are owned by a different cloud.
To get a list of importable Workloads use the following CLI command:
--project_id <project_id> ➡️ List only workloads belonging to the given project.
To import Workloads into the Trilio database use the following CLI command:
--workloadids <workloadid> ➡️ Specify workload ids to import only specified workloads. Repeat option for multiple workloads.
The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage but not known to the Trilio installation is considered orphaned.
A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that were migrated from a different cloud.
The following CLI command provides the list of orphaned workloads:
--migrate_cloud {True,False} ➡️ Set to True if you want to list workloads from other clouds as well. Default is False.
--generate_yaml {True,False} ➡️ Set to True to generate an output file in YAML format, which can be used further as input for the workload reassign API.
Openstack administrators are able to reassign a Workload to a new owner. This includes the possibility of migrating a Workload from one cloud to another or between projects.
Reassigning a workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.
Use the following CLI command to reassign a Workload:
--old_tenant_ids <old_tenant_id>➡️ Specify old tenant IDs from which workloads need to be reassigned to the new tenant. Specify multiple times to choose Workloads from multiple tenants.
--new_tenant_id <new_tenant_id> ➡️ Specify the new tenant ID to which workloads need to be reassigned from the old tenant. Only one target tenant can be specified.
--workload_ids <workload_id>
A sample mapping file with explanations is shown below:
/etc/cloud/cloud.cfg.d/99-migration-disable.cfg
vcenters-list List all the VCenters configured for migration.
migration-plans-list List all the migration_plans of current project.
migration-plan-create Creates a migration plan.
migration-plan-discover-vms discover VM's of a migration plan.
migration-plan-get-by-vmid List the migration_plan for given vm id
migration-plan-get-import-list Get list of migration_plans to be imported.
migration-plan-import Import all migration plan records from backup store.
migration-plan-modify Modify a migration plan.
migration-plan-show Show details about a migration plan.
migration-plan-delete Remove a migration plan.
migrations-list List all the migrations for the migration plan.
migration-create Execute a migration plan.
migration-resume Resume the migration.
migration-show Show details about migration of the migration plan
migration-cancel Cancel the migration.
migration-delete Delete the migration.
# echo -n 10.10.2.20:/Trilio_Backup | base64
MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
# echo -n /Trilio_Backup | base64
L1RyaWxpb19CYWNrdXA=
./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
# echo -n 10.10.2.20:/Trilio_Backup | base64
MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
# echo -n /Trilio_Backup | base64
L1RyaWxpb19CYWNrdXA=
mkdir -p /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
mount --bind /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw\=\=/ /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
chmod 777 /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
mkdir -p /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
mount --bind /var/lib/nova/triliovault-mounts/L21udC9yaG9zcHRhcmdldG5mcw\=\=/ /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
chmod 777 /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuOTovbW50L3Job3NwdGFyZ2V0bmZz
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cp -r triliovault-cfg-scripts/migration-vm2os/nginx /opt
cp triliovault-cfg-scripts/migration-vm2os/env /opt
cp triliovault-cfg-scripts/migration-vm2os/docker-compose.yml /opt
cp triliovault-cfg-scripts/migration-vm2os/vmosmapping.conf /opt
docker compose -f /opt/docker-compose.yml --env-file /opt/env up &
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ed651c084d4 nginx:latest "/docker-entrypoint.…" 5 seconds ago Up 3 seconds 80/tcp, 0.0.0.0:5085->5085/tcp, :::5085->5085/tcp nginx
8ff9f81b9913 trilio/trilio-migration-vm2os:5.2.3-dev-maint3-3 "gunicorn --config g…" 5 seconds ago Up 4 seconds trilio_vm2os
1f91f14ad011 trilio/trilio-migration-vm2os:5.2.3-dev-maint3-3 "celery -A run.celer…" 5 seconds ago Up 4 seconds opt-worker-1
0bb5574ad56b redis:latest "docker-entrypoint.s…" 5 seconds ago Up 4 seconds 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp redis
http://192.168.6.25:5085/
trilio_branch : 5.2.6
admin_password
Admin user password for OpenStack
admin_project
Project/Tenant of OpenStack
admin_domain
Domain of OpenStack
region_name
Region of OpenStack
ssl_verify
To be set to True or False as needed
hypervisor
To be set as "RHEL 9"
username
OpenStack user which is having same role as the user (having trustee role) mentioned in wlm.conf file. This will be used to log into VM Migration Tool Dashboard.
Email of the VM Migration Tool user
password
password for logging into VM Migration Tool Dashboard
Warm Migration:
Leverage VMware snapshots and the Change Block Tracking feature to migrate VMs with minimal disruption. Take snapshots, copy changed blocks, and then repeat the process after shutting down the VMs. The downtime can be significantly lower than that of a Cold Migration.
Users can choose between an automatic cutover for a hands-free migration experience or a manual cutover for a more controlled migration process.
Warnings – Any warning messages generated during the migration process.
Total Size – The total amount of data included in the migration.
Total Time Taken – The overall time consumed for the migration to complete.
Time taken per task
Overall Progress Percentage – An aggregated percentage representing the overall completion status of the migration.

5.2.8.2
python3-workloadmgrclient
5.2.8.1
workloadmgr
5.2.8.4
python3-dmapi
RHEL8/CentOS8*/Rocky9
5.2.8-5.2
python3-s3fuse-plugin-el9
RHEL8/CentOS8*/Rocky9
5.2.8.1-5.2
python3-s3fuse-plugin
RHEL8/CentOS8*/Rocky9
5.2.8.1-5.2
python3-trilio-fusepy-el9
RHEL8/CentOS8*/Rocky9
3.0.1-1
python3-trilio-fusepy
RHEL8/CentOS8*/Rocky9
3.0.1-1
python3-tvault-contego-el9
RHEL8/CentOS8*/Rocky9
5.2.8.3-5.2
python3-tvault-contego
RHEL8/CentOS8*/Rocky9
5.2.8.3-5.2
python3-tvault-horizon-plugin-el8
RHEL8/CentOS8*/Rocky9
5.2.8.2-5.2
python3-tvault-horizon-plugin-el9
RHEL8/CentOS8*/Rocky9
5.2.8.2-5.2
python3-workloadmgrclient-el8
RHEL8/CentOS8*/Rocky9
5.2.8.1-5.2
python3-workloadmgrclient-el9
RHEL8/CentOS8*/Rocky9
5.2.8.1-5.2
python3-workloadmgr-el9
RHEL8/CentOS8*/Rocky9
5.2.8.4-5.2
workloadmgr
RHEL8/CentOS8*/Rocky9
5.2.8.4-5.2
Kolla Rocky Antelope(2023.1)
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.1-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.1-2023.1
Kolla Ubuntu Jammy Antelope(2023.1)
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.1-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.1-2023.1
Kolla Rocky Zed
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.1-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.1-zed
Kolla Ubuntu Jammy Zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.1-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.1-zed
python3-contegoclient
5.2.8
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.1
python3-tvault-contego
5.2.8.3
python3-contegoclient-el8
RHEL8/CentOS8*/Rocky9
5.2.8-5.2
python3-contegoclient-el9
RHEL8/CentOS8*/Rocky9
5.2.8-5.2
python3-dmapi-el9
RHEL8/CentOS8*/Rocky9
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
https://yum.fury.io/trilio-5-2/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.1-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.1-rhosp16.1
python3-tvault-horizon-plugin
5.2.8-5.2
python3-tvault-horizon-plugin
5.0.16
python3-workloadmgrclient
5.0.16
workloadmgr
5.0.16
5.0.16-5.0
python3-trilio-fusepy
RHEL8/Centos8*
3.0.1-1
python3-tvault-contego
RHEL8/Centos8*
5.0.16-5.0
python3-tvault-horizon-plugin-el8
RHEL8/Centos8*
5.0.16-5.0
python3-workloadmgrclient-el8
RHEL8/Centos8*
5.0.16-5.0
5.0.16
python3-contegoclient
5.0.16
python3-dmapi
5.0.16
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.0.16
python3-tvault-contego
5.0.16
python3-contegoclient-el8
RHEL8/Centos8*
5.0.16-5.0
python3-dmapi
RHEL8/Centos8*
5.0.16-5.0
python3-s3fuse-plugin
deb [trusted=yes] https://apt.fury.io/trilio-5-0/ /
https://yum.fury.io/trilio-5-0/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-0/
enabled=1
gpgcheck=0
registry.connect.redhat.com/trilio/trilio-datamover:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.0.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.0.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.0.0-rhosp16.1
RHEL8/Centos8*
--user_id <user_id>➡️ Specify the user ID to which workloads need to be reassigned from the old tenant. Only one target user can be specified.
--migrate_cloud {True,False}➡️ Set to True if you want to reassign workloads from other clouds as well. Default is False.
--map_file➡️ Provide the file path (relative or absolute), including the file name, of the reassign map file. Provide a list of old workloads mapped to new tenants. The format of this file is YAML.
C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf
C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf.bak
workloadmgr workload-get-importworkloads-list [--project_id <project_id>]
workloadmgr workload-importworkloads [--workloadids <workloadid>]
workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
[--generate_yaml {True,False}]
workloadmgr workload-reassign-workloads
[--old_tenant_ids <old_tenant_id>]
[--new_tenant_id <new_tenant_id>]
[--workload_ids <workload_id>]
[--user_id <user_id>]
[--migrate_cloud {True,False}]
[--map_file <map_file>]
reassign_mappings:
- old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
new_tenant_id: new_tenant_id
user_id: user_id
workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
migrate_cloud: True/False #Set to True if want to reassign workloads from
# other clouds as well. Default is False
- old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
new_tenant_id: new_tenant_id
user_id: user_id
workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
migrate_cloud: True/False #Set to True if want to reassign workloads from
# other clouds as well. Default is False
Kolla Rocky Bobcat(2023.2)
Kolla Ubuntu Jammy Bobcat(2023.2)
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
triliovault-pkg-source
deb [trusted=yes] https://apt.fury.io/trilio-5-2 /
channel
latest/stable
Charm names
Supported releases
Revisions
Jammy (Ubuntu 22.04)
18
Repo URL:
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
python3-contegoclient
5.2.8.1
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.3
python3-tvault-contego
5.2.8.23
Repo URL:
https://yum.fury.io/trilio-5-2/
To enable, add the following file /etc/yum.repos.d/fury.repo:
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-contegoclient-el9
RHEL9
5.2.8.1-5.2
python3-dmapi-el9
RHEL9
5.2.8-5.2
python3-s3fuse-plugin-el9
RHEL9
RHOSP 17.1
Kolla Rocky Caracal(2024.1)
Kolla Ubuntu Caracal(2024.1)
contegoclient
5.2.8.1
s3fuse
5.2.8.3
tvault-horizon-plugin
5.2.8.14
workloadmgr
5.2.8.24
workloadmgrclient
5.2.8.8
git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.6 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
https://pypi.fury.io/trilio-5-2/
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
Repo URL:
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
python3-contegoclient
5.2.8
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8
python3-tvault-contego
5.2.8
Repo URL:
https://yum.fury.io/trilio-5-2/
To enable, add the following file /etc/yum.repos.d/fury.repo:
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-contegoclient-el8
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-dmapi
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-s3fuse-plugin
RHOSP 17.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp17.1
RHOSP 16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp16.2
RHOSP 16.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.0-rhosp16.1
contegoclient
5.2.8
s3fuse
5.2.8
tvault-horizon-plugin
5.2.8
workloadmgr
5.2.8
workloadmgrclient
https://pypi.fury.io/trilio-5-2/
5.2.8
2024.1 Caracal
✖️
2023.2 Bobcat
✖️
2023.1 Antelope
✖️
Zed
✖️
Bobcat Jammy
✖️
Antelope Jammy
✖️
SUSE Linux Enterprise Server 11, SP4 and later
Not supported [1]
SUSE Linux Enterprise Server 12
Not supported [1]
SUSE Linux Enterprise Server 15
Not supported [1]
Windows 8
Not supported
Windows 8.1
Not supported
Windows 10
Windows 11
Windows Server 2008
Not supported
Windows Server 2008 R2
Not supported
Windows Server 2012
Not supported
Windows Server 2012 R2
Not supported
Windows Server 2016
Windows Server 2019
Windows Server 2022
Windows Server 2025
Debian [2]
Technology Preview
Ubuntu [2]
Technology Preview
17.1 (RHEL9 host)
✔️
16.1
✖️
16.2
✖️
Red Hat Enterprise Linux 5
Not supported
Red Hat Enterprise Linux 6
Not supported
Red Hat Enterprise Linux 7
Red Hat Enterprise Linux 8
Red Hat Enterprise Linux 9
Red Hat Enterprise Linux 10
Not supported
git clone -b 5.2.7 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
RHOSP 17.1
Kolla Rocky Caracal(2024.1)
Kolla Ubuntu Caracal(2024.1)
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
https://pypi.fury.io/trilio-5-2/
contegoclient
5.2.8.1
s3fuse
5.2.8.3
tvault-horizon-plugin
5.2.8.15
workloadmgr
5.2.8.25
workloadmgrclient
5.2.8.8
Note : For RHOSP17.1, the workloadmgr package version is 5.2.8.26
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
git clone -b 5.2.7 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
RHOSP 17.1
RHOSP 16.2
RHOSP 16.1
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
https://pypi.fury.io/trilio-5-2/
contegoclient
5.2.8
s3fuse
5.2.8.1
tvault-horizon-plugin
5.2.8.5
workloadmgr
5.2.8.13
workloadmgrclient
5.2.8.3
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
vCenter access URL
vCenter access Username and Password
SSL certificate for secured connection to VCenter (if not available, the SSL verification has to be marked false during configuration)
The minimum set of permissions required for a vCenter user to successfully migrate VMs are as follows:
Datastore
Browse datastore
Low level file operations
Sessions
Validate session
Virtual machine / Interaction
Guest operating system management by VIX API
Power off
Power on
Virtual machine / Provisioning
Allow disk access
Allow read-only disk access
Allow virtual machine download
Virtual machine / Snapshot management
Create snapshot
Remove snapshot
Network Port Requirement
Trilio needs to have below port accesses for the VM migration feature to work:
On vCenter:
80, 443: bi-directional access to OpenStack controller and compute nodes
On ESXi hosts:
80,443: bi-directional access to vCenter, OpenStack controller, and compute nodes
902: outgoing access to vCenter, OpenStack compute nodes
VMware vSphere Virtual Disk Development Kit 7.0
Trilio requires this kit for the VM migration feature to work. You can download it from the VMware Customer Connect portal by accessing the link below
Make sure you download the file
For migrating UEFI-booted VMware VMs, OpenStack requires the instances to carry UEFI boot information, and so does Trilio.
So before doing the migration, the user needs to create a Glance image (any image) with the following metadata properties (see the example command after this list):
set hw_firmware_type property to UEFI
set hw_machine_type property to q35
This is not required for BIOS booted VMware VMs
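A sketch of creating such an image with the OpenStack CLI; the image name and source file are placeholders, and only the two properties above matter.
# Any image works; the UEFI-related properties are what is checked during migration
openstack image create uefi-migration-image \
  --disk-format qcow2 --container-format bare \
  --file <any_image_file>.qcow2 \
  --property hw_firmware_type=uefi \
  --property hw_machine_type=q35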
Trilio does not create any flavor for the migrating VMs instead it uses the available flavors.
To avoid any booting issues post-migration, the user needs to have the correct flavors created and available for selection while initiating the migration.
VMware tools need to be installed and should be running on all the Windows OS VMs that are to be migrated. Alternatively, the guest OS must be in a Shutdown state and not be in a Power-off state.
Secondary disks in Windows Server VMs appear offline after migration.
To avoid the disks going into an offline state, set the SAN policy OnlineAll in the guest VM before the migration.
Steps to change the SAN policy (a sample Diskpart session is shown after the steps):
Open Command Prompt as an administrator: Press Windows Key + R, type cmd, and press Ctrl + Shift + Enter to open an elevated command prompt.
Launch Diskpart: Type diskpart and press Enter.
Check the current SAN policy: Type san and press Enter.
Change the SAN policy: If the current policy is Offline Shared or Offline All, type san policy=OnlineAll and press Enter. You should see a message indicating the policy change was successful.
Exit Diskpart: Type exit and press Enter.
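A sample Diskpart session following the steps above; the exact output wording may vary by Windows version.
C:\> diskpart
DISKPART> san
SAN Policy  : Offline Shared
DISKPART> san policy=OnlineAll
DiskPart successfully changed the SAN policy for the current operating system.
DISKPART> exit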
This is not mandatory for migration, as the disks can be brought to the Online state manually post-migration as well.
To leverage the Dry-run and Warm migration features, the vCenter and ESXi should support the Change Block Tracking (CBT) feature. Make sure that all the data disks attached to the VM have CBT enabled.
To enable the CBT, follow the instructions at VMware Knowledge Base
Also understand the limitations Trilio has described in this page.
Virt-v2v checks if there is sufficient free space in the guest filesystem to perform the conversion. Currently, it checks:
Linux root filesystem
Minimum free space: 100 MB
Linux /boot
Minimum free space: 50 MB. This is because a new initramfs needs to be built for some Enterprise Linux conversions.
Windows C: drive
Minimum free space: 100 MB. Many virtio drivers and guest agents may have to be copied in.
Any other mountable filesystem
Minimum free space: 10 MB
In addition to the actual free space, each filesystem is required to have at least 100 available inodes.
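A simple way to check these thresholds inside a Linux guest before conversion; the filesystems shown are examples, run the commands in the guest.
# Free space on the root and /boot filesystems
df -h / /boot
# Available inodes on the same filesystems
df -i / /boot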
In a Linux guest, you see an error such as this:
virt-v2v: error: libguestfs error: command lines: rename: /sysroot/etc/resolv.conf to /sysroot/etc/lwdhuh36: Operation not permitted
This can be caused because the file /etc/resolv.conf in the guest has the immutable bit set. You can use the chattr(1) command before converting the guest:
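For example, assuming the affected file is /etc/resolv.conf as in the error above:
# Clear the immutable bit inside the guest before running virt-v2v
chattr -i /etc/resolv.conf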
and then restore it (+i) after conversion.
In addition, please refer to the virt-v2v upstream documentation for other requirements and notes.
Learn about artifacts related to Trilio for OpenStack 5.2.5
Branch of the repository to be used for DevOps scripts.
git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
Learn about artifacts related to Trilio for OpenStack 5.2.3
Branch of the repository to be used for DevOps scripts.
git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
Support for Kolla Epoxy on Ubuntu & Rocky9 added.
Bug Fixes:
VMware migration failing with error source guest could not be detected.
VMware to openstack migration removes default routes of VM.
Fetching VMs list is very slow when the vCenter has VMs in thousands.
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Enhancements:
Added options for automatic or manual cutover during warm migration.
Improved handling of parallel migration executions to ensure stability.
Enhanced migration progress tracking with detailed status updates for a better user experience.
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Multiple vCenters support added to the VMware VM migration feature.
VM Migration Bug Fixes.
Support for Kolla Caracal on Ubuntu added.
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Support for Kolla Caracal on Rocky9 added
Support for Canonical Antelope & Bobcat on Jammy added
Bug Fixes
Hide the VMware/Virtual Machine Migration related menus, if that feature is not active.
Issue where some volumes were not being attached after restore.
S3 logging is now fixed.
Slow performance while creating snapshot identified and fixed.
Upgrade of python3-oslo.messaging breaks Trilio in Juju
The Trilio charms should not allow installations of python3-oslo.messaging newer than 12.1.6-0ubuntu1, as they disrupt correct RabbitMQ connectivity from the trilio-wlm units.
Workaround: Run below commands
RHOSP16.1/16.2 : Unexpected behavior with OpenStack Horizon Dashboard
VMware Migration Mapping Dashboard
Bug Fixes
[VMware Migration] boot options shows BIOS when the guest VM has unsecured EFI boot.
wlm client does not work on mac having python 3.10.
Cloud admin trust creation was failing.
Modify workload with other user brings workload in error.
Encryption checkbox is disabled when creating 1st workload from UI
While creating 1st workload from the UI, the Enable Encryption checkbox on the Create workload page is disabled. Creating a non-encrypted workload first will resolve this issue and the user can start creating the encrypted workloads.
RHOSP16.1/16.2 : Unexpected behavior with OpenStack UI
Support for Kolla Bobcat OpenStack added.
Fixed the issues reported by customers.
[VMware Migration] Migration failure due to minor issues in handling vm and network names.
[VMware migration] boot options shows BIOS when the guest VM has unsecured efi boot.
[VMware Migration] Migration fails when the network name of the guest VM contains - and _ both.
Encryption checkbox is disabled when creating 1st workload from UI
While creating 1st workload from the UI, the Enable Encryption checkbox on the Create workload page is disabled. Creating a non-encrypted workload first will resolve this issue and the user can start creating the encrypted workloads.
RHOSP16.1/16.2 : Unexpected behavior with OpenStack UI
Software Driven Migration: VMware to OpenStack. With the T4O-5.2.1 release, Trilio now supports migration of VMs from VMware to OpenStack.
2. Fixed the issues reported by customers.
Support multi ceph config in same /etc/ceph location.
Encrypted incremental restore fails to diff AZ.
Trilio installation failing for RHOSP17.1 IPV6.
[RHOSP] memcache_servers parameter in triliovault wlm and datamover service conf files fixed for ipv6.
RHOSP16.1/16.2 : Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container and run the following commands:
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
New OpenStack additions in Trilio support matrix: With T4O-5.2.0, Trilio now supports backup and DR for RHOSP17.1 and Kolla-Ansible OpenStack 2023.1 Antelope on Rocky Linux & Ubuntu Jammy.
RHOSP16.1/16.2 : Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container and run the following commands:
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
True Native and Containerized Deployment: With the T4O-5.1.0 release, Trilio has moved away from shipping appliance images. T4O is now a fully containerized backup and recovery solution: all T4O services are packaged as containers and deploy through OpenStack's native deployment process.
New OpenStack additions in Trilio support matrix: With T4O-5.1.0, Trilio now supports backup and DR for Kolla-Ansible OpenStack Zed on Rocky Linux and Ubuntu Jammy.
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
True Native and Containerized Deployment: With the T4O-5.0.0 release, Trilio has moved away from shipping appliance images. T4O is now a fully containerized backup and recovery solution: all T4O services are packaged as containers and deploy through OpenStack's native deployment process.
Faster Snapshot operation (Removal of Workload types: Serial and Parallel): In a move to boost Snapshot operation efficiency, we've introduced a streamlined method for creating workloads, eliminating the need for users to select between different workload types. T4O will now intelligently manage workloads in the background, ensuring optimal performance for backup operations.
Improved Database Storage:
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
Learn about artifacts related to Trilio for OpenStack 5.1.0
git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
POST https://$(tvm_address):8780/v1/$(tenant_id)/search
Starts a File Search with the given parameters
GET https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>
Gets the status and results of the File Search with the given ID
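As an illustration, a minimal file search request could look like the following (a sketch assuming a valid Keystone token in $token and the project name in $project_name; the body follows the request schema shown further below, with all optional fields omitted):
curl -X POST "https://$tvm_address:8780/v1/$tenant_id/search" \
  -H "X-Auth-Token: $token" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"file_search": {"filepath": "/etc/h*", "vm_id": "<VM-ID>"}}'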
Trilio’s tenant-driven backup service gives tenants control over backup policies. However, this can give tenants too much control, and cloud admins may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and schedule full backups at a 1-hour interval. If every tenant pursued this backup policy, it would put a severe strain on the cloud infrastructure. Instead, if the cloud admin defines predefined backup policies and each tenant is limited to those policies, cloud administrators can exert better control over the backup service.
A workload policy is similar to a Nova flavor, in that a tenant cannot create arbitrary instances; each tenant is only allowed to use the Nova flavors published by the admin.
registry.connect.redhat.com/trilio/trilio-datamover:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.6-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.6-rhosp17.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2024.1
trilio/trilio-migration-vm2os:5.2.6
trilio_branch : 5.2.7
trilio/trilio-migration-vm2os:5.2.7
chattr -i /etc/resolv.conf
trilio_branch : 5.2.5
trilio_branch : 5.2.3
Version
7.0
Size
22.82 MB
MD5
a67ec7cc5c99fa6b0e50e9da1bc6aa25
python3-tvault-horizon-plugin
5.2.8
python3-workloadmgrclient
5.2.8
workloadmgr
5.2.8
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-trilio-fusepy
RHEL8/Centos8*/Rocky9
3.0.1-1
python3-tvault-contego
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-tvault-horizon-plugin-el8
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-workloadmgrclient-el8
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-dmapi-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-contegoclient-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-s3fuse-plugin-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-tvault-contego-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-trilio-fusepy-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-workloadmgr-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
python3-tvault-horizon-plugin-el9
RHEL8/Centos8*/Rocky9
5.2.8-5.2
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.0-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.0-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.0-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.0-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.0-zed
5.2.8.5
python3-workloadmgrclient
5.2.8.3
workloadmgr
5.2.8.13
python3-dmapi-el9
Rocky9
5.2.8-5.2
python3-s3fuse-plugin
RHEL8/CentOS8*
5.2.8.1-5.2
python3-s3fuse-plugin-el9
Rocky9
5.2.8.1-5.2
python3-trilio-fusepy
RHEL8/CentOS8*
3.0.1-1
python3-trilio-fusepy-el9
Rocky9
3.0.1-1
python3-tvault-contego
RHEL8/CentOS8*
5.2.8.9-5.2
python3-tvault-contego-el9
Rocky9
5.2.8.9-5.2
python3-tvault-horizon-plugin-el8
RHEL8/CentOS8*
5.2.8.5-5.2
python3-tvault-horizon-plugin-el9
Rocky9
5.2.8.5-5.2
python3-workloadmgrclient-el8
RHEL8/CentOS8*
5.2.8.3-5.2
python3-workloadmgrclient-el9
Rocky9
5.2.8.3-5.2
python3-workloadmgr-el9
Rocky9
5.2.8.13-5.2
workloadmgr
RHEL8/CentOS8*
5.2.8.13-5.2
Kolla Rocky Bobcat(2023.2)
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-2023.2
Kolla Ubuntu Jammy Bobcat(2023.2)
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-2023.2
Kolla Rocky Antelope(2023.1)
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-2023.1
Kolla Ubuntu Jammy Antelope(2023.1)
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-2023.1
Kolla Rocky Zed
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.2-zed
Kolla Ubuntu Jammy Zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.2-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.2-zed
python3-contegoclient
5.2.8
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.1
python3-tvault-contego
5.2.8.9
python3-contegoclient-el8
RHEL8/CentOS8*
5.2.8-5.2
python3-contegoclient-el9
Rocky9
5.2.8-5.2
python3-dmapi
RHEL8/CentOS8*
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
https://yum.fury.io/trilio-5-2/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.2-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.2-rhosp16.1
python3-tvault-horizon-plugin
5.2.8-5.2
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
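As an illustration, a minimal sketch of such a change, assuming the timeout is raised globally in the defaults section (the exact section and value depend on the deployment):
# /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
defaults
    # raised from the deployment default so long-running imports can complete
    timeout server 10m
Then reload or restart the HAProxy service for the change to take effect.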
Bug Fixes:
Resolved issue with stale progress tracking files remaining on the backup target post VM migration.
Fixed migration failures caused by whitespace in VM names.
Addressed issue where migrated VMs reset default login users, SSH keys, and hostnames.
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Identified and fixed periods of inactivity during full snapshots of large volumes.
Deployment failing on kolla-ansible Antelope.
Modifying a workload as another tenant user brought the workload into an error state.
Fixed issue where Trilio looked for instance in wrong region if OpenStack is multi region.
Restore failing with security group not found.
Backup failing with multipath device not found.
Backup failing due to virtual size of LUN zero.
Backup Failure in contego; error in _refresh_multipath_size call. Unexpected error while running command tee -a /sys/bus/scsi/drivers/sd/None:None:None:None/rescan
At times, after the overcloud deployment, the OpenStack Horizon Dashboard encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container and run the following commands:
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
The issue has been identified with Memcached. Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround:
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Trilio looks for an instance in the wrong region if OpenStack is multi-region
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container and run the following commands:
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround:
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Trilio looks for an instance in the wrong region if OpenStack is multi-region.
T4O deployment on Kolla Antelope failing; DMAPI and Horizon containers fail to start.
Quota update when the admin user has access to projects of multiple domains.
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container and run the following commands:
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround:
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Deployment (with non-root user) failing on Kolla with Barbican.
T4O 5.1 failing on Kolla Zed Rocky.
5.2 deployment failing on Antelope with the Ansible node on a separate server.
WLM service issue with IPv6; containers keep restarting due to "Data too long for column ip_addresses at row 1".
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround:
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue seems to occur.
Workaround:
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Better Troubleshooting: Enhanced troubleshooting capabilities come through the implementation of a unified logging format across all T4O services/containers. This improvement grants users the freedom to adjust the format or log levels for individual services/containers as needed.
Updated EULA: Trilio has updated the End User License Agreement (EULA) that users must accept before using the product. For user convenience, T4O now prompts users to review and accept the EULA during the licensing step of the T4O installation. The most recent version of the EULA can be found here: https://trilio.io/eula/
Workaround 1 :
Restart the memcached service on the controller using systemctl
systemctl restart tripleo_memcached.service
Workaround 2 :
Restart the memcached pod
podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into a particular OpenStack project.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws a gateway timeout error
The workload-importworkloads CLI command throws a gateway timeout error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective 'timeout server' value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to run the search in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to run the search in
search_id
string
ID of the File Search to get
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:23:25 GMT
Content-Type: application/json
Content-Length: 244
Connection: keep-alive
X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa
{
"file_search":{
"created_at":"2020-11-09T13:23:25.698534",
"updated_at":null,
"id":14,
"deleted_at":null,
"status":"executing",
"error_msg":null,
"filepath":"/etc/h*",
"json_resp":null,
"vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
}
}
Jammy (Ubuntu 22.04)
17
Jammy (Ubuntu 22.04)
22
Jammy (Ubuntu 22.04)
10
python3-tvault-horizon-plugin
5.2.8.14
python3-workloadmgrclient
5.2.8.8
workloadmgr
5.2.8.24
5.2.8.3-5.2
python3-trilio-fusepy-el9
RHEL9
3.0.1-1
python3-tvault-contego-el9
RHEL9
5.2.8.23-5.2
python3-tvault-horizon-plugin-el9
RHEL9
5.2.8.14-5.2
python3-workloadmgrclient-el9
RHEL9
5.2.8.8-5.2
python3-workloadmgr-el9
RHEL9
5.2.8.24-5.2
To see all available Workload policies in Horizon, follow these steps:
Log in to Horizon as an admin user.
Click on the Admin tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
The following information is shown in the Policy tab for each available policy:
Creation time
Name
Description
Status
Set interval
Set retention type
Set retention value
<policy_id> ➡️ ID of the policy to show
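The same details are available from the CLI (the consolidated command reference appears later on this page):
workloadmgr policy-list
workloadmgr policy-show <policy_id>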
To create a policy in Horizon, follow these steps:
Log in to Horizon as an admin user.
Click on the Admin tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Click "New Policy"
Provide a policy name on the Details tab
Provide a description on the Details tab
Provide the RPO in the Policy tab
Choose the Snapshot Retention Type
Provide the Retention value
Choose the Full Backup Interval
Click "Create"
--policy-fields <key=key-name> ➡️ Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys.
'interval' : '1 hr'
'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'
'retention_policy_value' : '30'
'fullbackup_interval' : '-1' (enter the number of incremental snapshots to take between full backups, 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS')
For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--display-description <display_description> ➡️ Optional policy description. (Default=No description)
--metadata <key=key-name> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.
<display_name> ➡️ The name the policy will get
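As a worked sketch combining the fields above (the policy name 'hourly-gold' and the description are illustrative):
workloadmgr policy-create --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2' --display-description 'Hourly backups, keep 30 snapshots' hourly-gold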
To edit a policy in Horizon, follow these steps:
Log in to Horizon as an admin user.
Click on the Admin tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Identify the policy to edit
Click on "Edit policy" at the end of the line of the chosen policy
Edit the policy as desired - all values can be changed
Click "Update"
--display-name <display-name> ➡️ Name of the policy
--display-description <display_description> ➡️ Optional policy description. (Default=No description)
--policy-fields <key=key-name> ➡️ Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys.
'interval' : '1 hr'
'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'
'retention_policy_value' : '30'
'fullbackup_interval' : '-1' (enter the number of incremental snapshots to take between full backups, 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS')
For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--metadata <key=key-name> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.
<policy_id> ➡️ ID of the policy to update
To assign or remove a policy in Horizon, follow these steps:
Log in to Horizon as an admin user.
Click on the Admin tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Identify the policy to assign/remove
Click on the small arrow at the end of the line of the chosen policy to open the submenu
Click "Add/Remove Projects"
Choose projects to add or remove by using the plus/minus buttons
Click "Apply"
--add_project <project_id> ➡️ ID of the project to assign policy to. Use multiple times to assign multiple projects.
--remove_project <project_id> ➡️ ID of the project to remove policy from. Use multiple times to remove multiple projects.
<policy_id> ➡️ ID of the policy to be assigned or removed
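For example (the IDs are placeholders):
workloadmgr policy-assign --add_project <project_id> <policy_id>
workloadmgr policy-assign --remove_project <project_id> <policy_id>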
To delete a policy in Horizon, follow these steps:
Log in to Horizon as an admin user.
Click on the Admin tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Identify the policy to delete
Click on the small arrow at the end of the line of the chosen policy to open the submenu
Click "Delete Policy"
Confirm by clicking "Delete"
<policy_id> ➡️ ID of the policy to be deleted
juju run --app trilio-wlm "sudo apt install python3-oslo.messaging=12.1.6-0ubuntu1 -y --allow-downgrades"
juju run --app trilio-wlm "sudo apt-mark hold python3-oslo.messaging"
{
"file_search":{
"start":<Integer>,
"end":<Integer>,
"filepath":"<Reg-Ex String>",
"date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
"date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
"snapshot_ids":[
"<Snapshot-ID>"
],
"vm_id":"<VM-ID>"
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 13:24:28 GMT
Content-Type: application/json
Content-Length: 819
Connection: keep-alive
X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063
{
"file_search":{
"created_at":"2020-11-09T13:23:25.000000",
"updated_at":"2020-11-09T13:23:48.000000",
"id":14,
"deleted_at":null,
"status":"completed",
"error_msg":null,
"filepath":"/etc/h*",
"json_resp":"[
{
"ed4f29e8-7544-4e1c-af8a-a76031211926":[
{
"/dev/vda1":[
"/etc/hostname",
"/etc/hosts"
],
"/etc/hostname":{
"dev":"2049",
"ino":"32",
"mode":"33204",
"nlink":"1",
"uid":"0",
"gid":"0",
"rdev":"0",
"size":"1",
"blksize":"1024",
"blocks":"2",
"atime":"1603455255",
"mtime":"1603455255",
"ctime":"1603455255"
},
"/etc/hosts":{
"dev":"2049",
"ino":"127",
"mode":"33204",
"nlink":"1",
"uid":"0",
"gid":"0",
"rdev":"0",
"size":"37",
"blksize":"1024",
"blocks":"2",
"atime":"1603455257",
"mtime":"1431011050",
"ctime":"1431017172"
}
}
]
}
]",
"vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
}
}
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.6-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.6-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.6-zed
workloadmgr policy-list
workloadmgr policy-show <policy_id>
workloadmgr policy-create --policy-fields <key=key-name>
[--display-description <display_description>]
[--metadata <key=key-name>]
<display_name>
workloadmgr policy-update [--display-name <display-name>]
[--display-description <display-description>]
[--policy-fields <key=key-name>]
[--metadata <key=key-name>]
<policy_id>
workloadmgr policy-assign [--add_project <project_id>]
[--remove_project <project_id>]
<policy_id>
workloadmgr policy-delete <policy_id>
Jammy (Ubuntu 22.04)
17
Jammy (Ubuntu 22.04)
22
Jammy (Ubuntu 22.04)
10
5.2.8.15
python3-workloadmgrclient
5.2.8.8
workloadmgr
5.2.8.25
Note: For RHOSP17.1, the workloadmgr package version is 5.2.8.26
python3-trilio-fusepy-el9
RHEL9
3.0.1-1
python3-tvault-contego-el9
RHEL9
5.2.8.24-5.2
python3-tvault-horizon-plugin-el9
RHEL9
5.2.8.15-5.2
python3-workloadmgrclient-el9
RHEL9
5.2.8.8-5.2
python3-workloadmgr-el9
RHEL9
5.2.8.25-5.2
Note: For RHOSP17.1, the python3-workloadmgr-el9 package version is 5.2.8.26-5.2
Kolla Rocky Epoxy(2025.1)
Kolla Ubuntu Epoxy(2025.1)
triliovault-pkg-source
deb [trusted=yes] https://apt.fury.io/trilio-5-2 /
channel
latest/stable
Charm names
Supported releases
Revisions
Jammy (Ubuntu 22.04)
18
python3-contegoclient
5.2.8.1
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.3
python3-tvault-contego
5.2.8.24
python3-contegoclient-el9
RHEL9
5.2.8.1-5.2
python3-dmapi-el9
RHEL9
5.2.8-5.2
python3-s3fuse-plugin-el9
RHEL9
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
https://yum.fury.io/trilio-5-2/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-tvault-horizon-plugin
5.2.8.3-5.2
Kolla Rocky Bobcat(2023.2)
Kolla Ubuntu Jammy Bobcat(2023.2)
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
triliovault-pkg-source
deb [trusted=yes] https://apt.fury.io/trilio-5-2 /
channel
latest/stable
Charm names
Supported releases
Revisions
Jammy (Ubuntu 22.04)
18
Repo URL:
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
python3-contegoclient
5.2.8.1
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.3
python3-tvault-contego
5.2.8.20
Repo URL:
https://yum.fury.io/trilio-5-2/
To enable, add the following file /etc/yum.repos.d/fury.repo:
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-contegoclient-el9
RHEL9
5.2.8.1-5.2
python3-dmapi-el9
RHEL9
5.2.8-5.2
python3-s3fuse-plugin-el9
RHEL9
RHOSP 17.1
Kolla Rocky Caracal(2024.1)
Kolla Ubuntu Caracal(2024.1)
contegoclient
5.2.8.1
s3fuse
5.2.8.3
tvault-horizon-plugin
5.2.8.12
workloadmgr
5.2.8.20
workloadmgrclient
5.2.8.7
git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.5 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
https://pypi.fury.io/trilio-5-2/
Kolla Rocky Bobcat(2023.2)
Kolla Ubuntu Jammy Bobcat(2023.2)
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
Repo URL:
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
python3-contegoclient
5.2.8
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.1
python3-tvault-contego
5.2.8.11
Repo URL:
https://yum.fury.io/trilio-5-2/
To enable, add the following file /etc/yum.repos.d/fury.repo:
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-contegoclient-el8
RHEL8/CentOS8*
5.2.8-5.2
python3-contegoclient-el9
Rocky9
5.2.8-5.2
python3-dmapi
RHEL8/CentOS8*
RHOSP 17.1
RHOSP 16.2
RHOSP 16.1
contegoclient
5.2.8
s3fuse
5.2.8.1
tvault-horizon-plugin
5.2.8.6
workloadmgr
5.2.8.14
workloadmgrclient
5.2.8.4
git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
https://pypi.fury.io/trilio-5-2/
Kolla Ubuntu Jammy Zed
Repo URL:
deb [trusted=yes] https://apt.fury.io/trilio-5-1/ /
python3-contegoclient
5.1.2
python3-dmapi
5.1.2
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.1.2
python3-tvault-contego
5.1.2
Repo URL:
https://yum.fury.io/trilio-5-1/
To enable, add the following file /etc/yum.repos.d/fury.repo:
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-1/
enabled=1
gpgcheck=0
python3-contegoclient-el8
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-dmapi
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-s3fuse-plugin
RHOSP 16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.1.0-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.1.0-rhosp16.2
RHOSP 16.1
registry.connect.redhat.com/trilio/trilio-datamover:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.1.0-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.1.0-rhosp16.1
contegoclient
5.1.2
s3fuse
5.1.2
tvault-horizon-plugin
5.1.2
workloadmgr
5.1.2
workloadmgrclient
https://pypi.fury.io/trilio-5-1/
Kolla Rocky Zed
5.1.2
git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/ansible/
RHOSP 17.1
RHOSP 16.2
RHOSP 16.1
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
https://pypi.fury.io/trilio-5-2/
contegoclient
5.2.8.1
s3fuse
5.2.8.3
tvault-horizon-plugin
5.2.8.8
workloadmgr
5.2.8.15
workloadmgrclient
5.2.8.5
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
git clone -b 5.2.4 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
Charm names
Channel
Supported releases
latest/stable
Jammy (Ubuntu 22.04)
latest/stable
Jammy (Ubuntu 22.04)
latest/stable
Jammy (Ubuntu 22.04)
latest/stable
Jammy (Ubuntu 22.04)
A Canonical OpenStack base setup deployed for a required release like Jammy Antelope/Bobcat. Refer to the Compatibility Matrix.
Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).
If the trilio-wlm service is assigned to a nova-compute node, the wlm mysql-router service fails to start. Hence, please ensure the trilio-wlm service is assigned to some other node.
Sample Trilio overlay bundles (T4O release wise) can be found at LINK
Alternatively, the triliovault-cfg-scripts repository can be cloned to get the sample overlay bundles.
The following table provides the details of the values to be updated in the overlay bundle.
Parameters
Summary
triliovault-pkg-source
Trilio debian package repo url; Refer release specific page
machines
List of machines available on the Canonical OpenStack setup
channel
Channel name as provided in release specific page
revision
Latest values as provided by Trilio. Refer release specific page
If Backup Target is NFS
nfs-shares
NFS server IP and share path
3.1] Do a dry run to check if the Trilio bundle is working
3.2] Trigger deployment
3.3] Wait until all the Trilio units are deployed successfully. Check the status via juju status command.
4.1] Once the deployment is complete, perform the below operations:
a. Create cloud admin trust & add license
Note: Reach out to the Trilio support team for the license file.
5.1] After the T4O deployment steps are over, it can take some time for all units to be deployed successfully. Deployment is considered successful when all the units show "Unit is ready" in the message column.
To verify the same, the following command (& sample output) can be used to fetch the Trilio units/applications.
6.1] To debug any specific unit: juju debug-log --include <UNIT_NAME_IN_ERROR>
E.g., if the trilio-wlm/6 unit is in the 'error' state, its logs can be fetched using the following command. Pick the correct unit number for the respective deployment from the juju status output.
juju debug-log --include trilio-wlm/6
For multipath-enabled environments, perform the following actions:
log into each nova compute node
add uxsock_timeout with value 60000 (i.e. 60 sec) in /etc/multipath.conf (see the sample snippet below)
restart the tvault-contego service
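A minimal sketch of the /etc/multipath.conf change (merge the setting into an existing defaults section if one is already present):
defaults {
    # 60000 ms = 60 sec; gives multipathd socket clients more time to respond
    uxsock_timeout 60000
}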
This document discusses OpenStack multi-region deployments and how Trilio (or T4O, which stands for Trilio For OpenStack) can be deployed in multi-region OpenStack clouds.
OpenStack is designed to be scalable. To manage scale, OpenStack supports various resource segregation constructs: regions, cells, and availability zones. Resource segregation is essential to define fault domains and localize network traffic.
From an end user's perspective, OpenStack regions are equivalent to regions in Amazon Web Services. Regions live in separate data centers, often named after their location. If your organization has a data center in Chicago and one in Boston, you'll have at least a CHG and a BOS region. Users who want to disperse their workloads geographically will place some in CHG and some in BOS. Regions have separate API endpoints for all services except for Keystone. Users, Tenants, and Domains are shared across regions through a single Keystone deployment.
Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. Availability zones can partition a cloud on arbitrary factors, such as location (country, data center, rack), network layout, and power source. Because of the flexibility, the names and purposes of availability zones can vary massively between clouds.
In addition, other services, such as the networking service and the block storage service, also provide an availability zone feature. However, the implementation of these features differs vastly between these different services. Please look at the documentation for these other services for more information on their implementation of this feature.
Cells functionality enables OpenStack to scale the compute in a more distributed fashion without using complicated technologies like database and message queue clustering. It supports vast deployments.
Cloud architects can partition OpenStack Compute Cloud into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service but no nova-compute services. Each child cell should run all the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a regular Compute deployment in that each cell has its database server and message queue broker.
OpenStack offers a lot of flexibility with multi-region deployments, and each organization architects OpenStack to meet its business needs. OpenStack only suggests that the Keystone service is shared between regions, while the rest of the service endpoints differ for different regions. RabbitMQ and MySQL databases can be shared or deployed independently.
The following code section is a snippet of OpenStack services endpoints in two regions.
Trilio's backup and recovery service is architecturally similar to OpenStack services. It has an API endpoint, a scheduler service, and workload services. Cloud architects must deploy Trilio similarly to the Nova or Cinder service, with an instance of Trilio in each region, as shown below. Trilio deployment supports any OpenStack multi-region deployment compatible with OpenInfra recommendations.
Trilio services endpoints in a multi-region OpenStack deployment are shown below.
Reference Document:
Deployment of Trilio on the Kolla multi-region cloud is straightforward. We need to deploy Trilio in every region of the Kolla OpenStack cloud using the Kolla-ansible deploy command.
Please take a look at the Trilio install document for Kolla.
For example, the Kolla OpenStack cloud has three regions.
RegionOne
RegionTwo
RegionThree
To deploy Multi-Region Trilio on this cloud, we need to install Trilio in each region.
Please follow the below steps:
Identify the kolla-ansible inventory file for each region.
Identify the kolla-ansible deploy command that was used for OpenStack deployment for each region (most probably, this is the same for all regions).
The customer might have used a separate "/etc/kolla/globals.yml" file for each region deployment. Please check those details.
Deploy Trilio for the first region (in our example, 'RegionOne'). Use its globals.yml and follow the Trilio install document for Kolla.
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.7-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.7-2025.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.7-2025.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.7-2025.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.7-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.7-rhosp17.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.7-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.7-2024.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.5-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.5-rhosp17.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2024.1
trilio/trilio-migration-vm2os:5.2.5
registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.3-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.3-rhosp16.1
trilio/trilio-migration-vm2os:5.2.3
trilio_branch : 5.2.4
trilio/trilio-migration-vm2os:5.2.4
juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run --wait trilio-wlm/leader create-licensejuju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
juju attach-resource trilio-wlm license=<Path to trilio license file>
juju run-action --wait trilio-wlm/leader create-license
juju export-bundle --filename openstack_base_file.yaml
git clone https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts
git checkout {{ trilio_branch }}
cd juju-charms/sample_overlay_bundles
juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>
juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>
juju status | grep -i trilio
trilio-data-mover 5.2.8.14 active 3 trilio-charmers-trilio-data-mover latest/candidate 22 no Unit is ready
trilio-data-mover-mysql-router 8.0.39 active 3 mysql-router 8.0/stable 200 no Unit is ready
trilio-dm-api 5.2.8 active 1 trilio-charmers-trilio-dm-api latest/candidate 17 no Unit is ready
trilio-dm-api-mysql-router 8.0.39 active 1 mysql-router 8.0/stable 200 no Unit is ready
trilio-horizon-plugin 5.2.8.8 active 1 trilio-charmers-trilio-horizon-plugin latest/candidate 10 no Unit is ready
trilio-wlm 5.2.8.15 active 1 trilio-charmers-trilio-wlm latest/candidate 18 no Unit is ready
trilio-wlm-mysql-router 8.0.39 active 1 mysql-router 8.0/stable 200 no Unit is ready
trilio-data-mover-mysql-router/2 active idle 172.20.1.5 Unit is ready
trilio-data-mover/1 active idle 172.20.1.5 Unit is ready
trilio-data-mover-mysql-router/0* active idle 172.20.1.7 Unit is ready
trilio-data-mover/2 active idle 172.20.1.7 Unit is ready
trilio-data-mover-mysql-router/1 active idle 172.20.1.8 Unit is ready
trilio-data-mover/0* active idle 172.20.1.8 Unit is ready
trilio-horizon-plugin/0* active idle 172.20.1.27 Unit is ready
trilio-dm-api/0* active idle 1/lxd/2 172.20.1.29 8784/tcp Unit is ready
trilio-dm-api-mysql-router/0* active idle 172.20.1.29 Unit is ready
trilio-wlm/0* active idle 1 172.20.1.4 8780/tcp Unit is ready
trilio-wlm-mysql-router/0* active idle 172.20.1.4 Unit is ready
python3-tvault-horizon-plugin
5.1.2
python3-workloadmgrclient
5.1.2
workloadmgr
5.1.2
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-trilio-fusepy
RHEL8/Centos8*/Rocky9
3.0.1-1
python3-tvault-contego
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-tvault-horizon-plugin-el8
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-workloadmgrclient-el8
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-dmapi-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-contegoclient-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-s3fuse-plugin-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-tvault-contego-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-trilio-fusepy-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-workloadmgr-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
python3-tvault-horizon-plugin-el9
RHEL8/Centos8*/Rocky9
5.1.2-5.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.1.0-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.1.0-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.1.0-zed
nfs-options
Options as per respective NFS server
If Backup Target is S3
tv-s3-endpoint-url
S3 Endpoint URL
tv-s3-secret-key
S3 Secret Key
tv-s3-access-key
S3 Access Key
tv-s3-region-name
S3 Region
tv-s3-bucket
S3 bucket
tv-s3-ssl
Set true for SSL enabled S3 endpoint URL
tv-s3-ssl-verify
Set true for SSL enabled S3 endpoint URL
tv-s3-ssl-cert
Required if the S3 SSL/TLS certificates are self-signed; otherwise, omit the parameter. Provide the path of the respective certificate file (.pem) from S3 in the format tv-s3-ssl-cert: include-base64://<path_to_ca_cert>
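A minimal sketch of how the S3 backup target values might appear in an overlay bundle (the application name and all values are illustrative; refer to the sample overlay bundles in triliovault-cfg-scripts for the authoritative structure):
applications:
  trilio-data-mover:
    options:
      tv-s3-endpoint-url: https://s3.example.com
      tv-s3-access-key: <S3 access key>
      tv-s3-secret-key: <S3 secret key>
      tv-s3-region-name: us-east-1
      tv-s3-bucket: trilio-backups
      tv-s3-ssl: true
      tv-s3-ssl-verify: true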
Now, for the next region deployment (RegionTwo), identify the ansible inventory file and the '/etc/kolla/globals.yml' file for that region, as well as the kolla-ansible deploy command used for that region.
Append the appropriate Trilio globals yml file to the /etc/kolla/globals.yml file; refer to section 3.1 of the Trilio install document.
Populate all Trilio config parameters in the '/etc/kolla/globals.yml' file of RegionTwo, or copy them from RegionOne's '/etc/kolla/globals.yml' file.
Append the Trilio inventory to the RegionTwo inventory file; refer to section 3.4 of the Trilio install document.
Repeat step 7 (Pull Trilio container images) from the Trilio install document for RegionTwo.
If you use separate Kolla Ansible servers for each region, you must perform all the steps mentioned in the Trilio install document for Kolla again for RegionTwo. If you use the same Kolla Ansible server for all region deployments, you can skip this for RegionTwo and all subsequent regions.
Review the Trilio install document, and if any other standard config file (like /etc/kolla/passwords.yml) is defined separately for each region, verify that Trilio uses that config file and perform any related steps from the Trilio install document.
Run the Kolla-ansible deploy command, which will deploy Trilio on RegionTwo.
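For example, assuming a per-region inventory file named multinode-regiontwo (the filename is illustrative):
kolla-ansible -i ./multinode-regiontwo deploy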


Jammy (Ubuntu 22.04)
17
Jammy (Ubuntu 22.04)
22
Jammy (Ubuntu 22.04)
10
python3-tvault-horizon-plugin
5.2.8.12
python3-workloadmgrclient
5.2.8.5
workloadmgr
5.2.8.20
5.2.8.3-5.2
python3-trilio-fusepy-el9
RHEL9
3.0.1-1
python3-tvault-contego-el9
RHEL9
5.2.8.20-5.2
python3-tvault-horizon-plugin-el9
RHEL9
5.2.8.12-5.2
python3-workloadmgrclient-el9
RHEL9
5.2.8.7-5.2
python3-workloadmgr-el9
RHEL9
5.2.8.20-5.2
python3-tvault-horizon-plugin
5.2.8.6
python3-workloadmgrclient
5.2.8.4
workloadmgr
5.2.8.14
5.2.8-5.2
python3-dmapi-el9
Rocky9
5.2.8-5.2
python3-s3fuse-plugin
RHEL8/CentOS8*
5.2.8.1-5.2
python3-s3fuse-plugin-el9
Rocky9
5.2.8.1-5.2
python3-trilio-fusepy
RHEL8/CentOS8*
3.0.1-1
python3-trilio-fusepy-el9
Rocky9
3.0.1-1
python3-tvault-contego
RHEL8/CentOS8*
5.2.8.11-5.2
python3-tvault-contego-el9
Rocky9
5.2.8.11-5.2
python3-tvault-horizon-plugin-el8
RHEL8/CentOS8*
5.2.8.6-5.2
python3-tvault-horizon-plugin-el9
Rocky9
5.2.8.6-5.2
python3-workloadmgrclient-el8
RHEL8/CentOS8*
5.2.8.4-5.2
python3-workloadmgrclient-el9
Rocky9
5.2.8.4-5.2
python3-workloadmgr-el9
Rocky9
5.2.8.14-5.2
workloadmgr
RHEL8/CentOS8*
5.2.8.14-5.2
region=RegionOne
Network Subnet: 172.21.6/23
| neutron | network | RegionOne |
| | | public: https://172.21.6.20:9696 |
| | | RegionOne |
| | | internal: https://172.21.6.20:9696 |
| | | RegionOne |
| | | admin: https://172.21.6.20:9696 |
| | | | |
| | | |
| nova | compute | RegionOne |
| | | public: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | admin: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | internal: https://172.21.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionTwo
Network Subnet: 172.21.31/23
| neutron | network | RegionTwo |
| | | public: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | internal: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | admin: https://172.31.6.20:9696 |
| | | | |
| | | |
| nova | compute | RegionTwo |
| | | public: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | admin: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | internal: https://172.31.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionOne
Network Subnet: 172.21.6/23
| neutron | network | RegionOne |
| | | public: https://172.21.6.20:9696 |
| | | RegionOne |
| | | internal: https://172.21.6.20:9696 |
| | | RegionOne |
| | | admin: https://172.21.6.20:9696 |
| | | |
| workloadmgr | workloads | RegionOne |
| | | internal: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionOne |
| | | public: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionOne |
| | | admin: https://172.21.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | |
| dmapi | datamover | RegionOne |
| | | internal: https://172.21.6.22:8784/v2 |
| | | RegionOne |
| | | public: https://172.21.6.22:8784/v2 |
| | | RegionOne |
| | | admin: https://172.21.6.22:8784/v2 |
| | | |
| nova | compute | RegionOne |
| | | public: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | admin: https://172.21.6.21:8774/v2.1 |
| | | RegionOne |
| | | internal: https://172.21.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
region=RegionTwo
Network Subnet: 172.21.31/23
| neutron | network | RegionTwo |
| | | public: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | internal: https://172.31.6.20:9696 |
| | | RegionTwo |
| | | admin: https://172.31.6.20:9696 |
| | | |
| workloadmgr | workloads | RegionTwo |
| | | internal: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionTwo |
| | | public: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | RegionTwo |
| | | admin: https://172.31.6.23:8780/v1/38bd7aa9b55944ebb3578c251a1b785b |
| | | |
| dmapi | datamover | RegionTwo |
| | | internal: https://172.31.6.22:8784/v2 |
| | | RegionTwo |
| | | public: https://172.31.6.22:8784/v2 |
| | | RegionTwo |
| | | admin: https://172.31.6.22:8784/v2 |
| | | |
| nova | compute | RegionTwo |
| | | public: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | admin: https://172.31.6.21:8774/v2.1 |
| | | RegionTwo |
| | | internal: https://172.31.6.21:8774/v2.1 |
| | | |
+-------------+--------------+--------------------------------------------------------------------------+
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2024.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.5-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.5-zed
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.3-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.3-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.3-zed
Jammy (Ubuntu 22.04)
17
Jammy (Ubuntu 22.04)
22
Jammy (Ubuntu 22.04)
10
5.2.8.8
python3-workloadmgrclient
5.2.8.5
workloadmgr
5.2.8.15
python3-dmapi-el9
Rocky9
5.2.8-5.2
python3-s3fuse-plugin
RHEL8/CentOS8*
5.2.8.3-5.2
python3-s3fuse-plugin-el9
Rocky9
5.2.8.3-5.2
python3-trilio-fusepy
RHEL8/CentOS8*
3.0.1-1
python3-trilio-fusepy-el9
Rocky9
3.0.1-1
python3-tvault-contego
RHEL8/CentOS8*
5.2.8.14-5.2
python3-tvault-contego-el9
Rocky9
5.2.8.14-5.2
python3-tvault-horizon-plugin-el8
RHEL8/CentOS8*
5.2.8.8-5.2
python3-tvault-horizon-plugin-el9
Rocky9
5.2.8.8-5.2
python3-workloadmgrclient-el8
RHEL8/CentOS8*
5.2.8.5-5.2
python3-workloadmgrclient-el9
Rocky9
5.2.8.5-5.2
python3-workloadmgr-el9
Rocky9
5.2.8.15-5.2
workloadmgr
RHEL8/CentOS8*
5.2.8.15-5.2
Kolla Rocky Caracal(2024.1)
Kolla Rocky Bobcat(2023.2)
Kolla Ubuntu Jammy Bobcat(2023.2)
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
triliovault-pkg-source
deb [trusted=yes] https://apt.fury.io/trilio-5-2 /
channel
latest/stable
Charm names
Supported releases
Revisions
Jammy (Ubuntu 22.04)
18
python3-contegoclient
5.2.8.1
python3-dmapi
5.2.8
python3-namedatomiclock
1.1.3
python3-s3-fuse-plugin
5.2.8.3
python3-tvault-contego
5.2.8.14
python3-contegoclient-el8
RHEL8/CentOS8*
5.2.8.1-5.2
python3-contegoclient-el9
Rocky9
5.2.8.1-5.2
python3-dmapi
RHEL8/CentOS8*
deb [trusted=yes] https://apt.fury.io/trilio-5-2/ /
https://yum.fury.io/trilio-5-2/
[trilio-fury]
name=Trilio Gemfury Private Repo
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0
python3-tvault-horizon-plugin
5.2.8-5.2
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project the Snapshot is located in
snapshot_id
string
ID of the Snapshot to mount
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:29:03 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e
GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list
Provides the list of all Snapshots mounted in a Tenant
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant to search for mounted Snapshots
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgr
GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list
Provides the list of all Snapshots mounted in a specified Workload
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant to search for mounted Snapshots
workload_id
string
ID of the Workload to search for mounted Snapshots
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgr
POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount
Unmounts a Snapshot of the provided File Recovery Manager
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project the Snapshot is located in
snapshot_id
string
ID of the Snapshot to dismount
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
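The dismount request described above can be issued directly with curl; a minimal sketch, assuming tvm_address, tenant_id, snapshot_id, project and token are set as shell variables, and using the request body format shown later in this reference:
curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/snapshots/$snapshot_id/dismount" \
     -H "X-Auth-Token: $token" \
     -H "X-Auth-Project-Id: $project" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"mount": {"options": null}}'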
A Snapshot is a single Trilio backup of a workload, including all data and metadata. It contains the information of all VMs that are protected by the workload.
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
The List of Snapshots for the chosen Workload contains the following additional information:
Creation Time
Name of the Snapshot
Description of the Snapshot
Total amount of Restores from this Snapshot
--workload_id <workload_id> ➡️ Filter results by workload_id
--tvault_node <host> ➡️ List all the snapshot operations scheduled on a tvault node(Default=None)
--date_from <date_from>
Snapshots are automatically created by the Trilio scheduler. If necessary, or in case the scheduler is deactivated, it is possible to create a Snapshot on demand.
There are 2 possibilities to create a Snapshot on demand.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
<workload_id>➡️ID of the workload to snapshot.
--full➡️ Specify if a full snapshot is required.
--display-name <display-name>➡️
Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.
To reach the Snapshot Overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
The Snapshot Details Tab shows the most important information about the Snapshot.
Snapshot Name / Description
Snapshot Type
Time Taken
Size
The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.
The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.
Creation Time
Last Update time
Snapshot ID
Workload ID of the Workload containing the Snapshot
<snapshot_id>➡️ID of the snapshot to be shown
--output <output>➡️Option to get additional snapshot details, Specify --output metadata for snapshot metadata, Specify --output networks for snapshot vms networks, Specify --output disks for snapshot vms disks
Once a Snapshot is no longer needed, it can be safely deleted from a Workload.
There are 2 possibilities to delete a Snapshot.
To delete a single Snapshot through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
To delete one or more Snapshots through the Snapshot overview do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
<snapshot_id>➡️ID of the snapshot to be deleted
Ongoing Snapshots can be canceled.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to cancel
<snapshot_id>➡️ID of the snapshot to be canceled
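The Snapshot operations described above map to the following CLI calls; a minimal sketch with the IDs as placeholders:
# List all snapshots of a workload
workloadmgr snapshot-list --workload_id <workload_id>
# Create a full snapshot on demand
workloadmgr workload-snapshot --full --display-name "manual-full" <workload_id>
# Show, delete or cancel a single snapshot
workloadmgr snapshot-show --output disks <snapshot_id>
workloadmgr snapshot-delete <snapshot_id>
workloadmgr snapshot-cancel <snapshot_id>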
The following steps need to be run on all nodes that have the Trilio Datamover API & Workloadmanager services running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entries listed below.
Once the role that runs the Trilio Datamover API & Workloadmanager services has been identified, the following commands clean the nodes from the service.
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2024.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-2023.2
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-2023.2
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-2023.1
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-2023.1
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-datamover-api:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-horizon-plugin:5.2.4-zed
docker.io/trilio/kolla-rocky-trilio-wlm:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:5.2.4-zed
docker.io/trilio/kolla-ubuntu-trilio-wlm:5.2.4-zed
registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp17.1
registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp16.2
registry.connect.redhat.com/trilio/trilio-datamover:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:5.2.4-rhosp16.1
registry.connect.redhat.com/trilio/trilio-wlm:5.2.4-rhosp16.1
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
{
"mounted_snapshots":[
{
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"snapshot_name":"snapshot",
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"mounturl":"[\"http://192.168.100.87\"]",
"status":"mounted"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 15:44:42 GMT
Content-Type: application/json
Content-Length: 228
Connection: keep-alive
X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
{
"mounted_snapshots":[
{
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"snapshot_name":"snapshot",
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"mounturl":"[\"http://192.168.100.87\"]",
"status":"mounted"
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 11 Nov 2020 16:03:49 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4
{
"mount":{
"mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
"options":{
}
}
}
{
"mount":
{
"options": null
}
}
User-Agent
string
python-workloadmgrclient
User-Agent
string
python-workloadmgrclient
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Total amount of succeeded Restores
Total amount of failed Restores
Snapshot Type
Snapshot Size
Snapshot Status
--date_to <date_to>➡️To date in format 'YYYY-MM-DDTHH:MM:SS'(defult is current day), Specify HH:MM:SS to get snapshots within same day inclusive/exclusive results for date_from and date_to
--all {True,False} ➡️ List all snapshots of all the projects(valid for admin user only)
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
--display-description <display-description>➡️Optional snapshot description. (Default=None)
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Which VMs are part of the Snapshot
for each VM in the Snapshot
Instance Info - Name & Status
Security Group(s) - Name & Type
Flavor - vCPUs, Disk & RAM
Networks - IP, Networkname & Mac Address
Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size
Misc - Original ID of the VM
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Delete Snapshot"
Confirm by clicking "Delete"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Check the checkbox for each Snapshot that shall be deleted
Click "Delete Snapshots"
Confirm by clicking "Delete"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click "Cancel" on the same line as the identified Snapshot
Confirm by clicking "Cancel"
Run all commands as root or user with sudo permissions.
Remove triliovault_datamover_api container.
Clean Trilio Datamover API service conf directory.
Clean Trilio Datamover API service log directory.
Remove triliovault_wlm_api container.
Clean Trilio Workloadmanager API service conf directory.
Clean Trilio Workloadmanager API service log directory.
Remove triliovault_wlm_workloads container.
Clean Trilio Workloadmanager Workloads service conf directory.
Clean Trilio Workloadmanager Workloads service log directory.
Remove triliovault_wlm_scheduler container.
Clean Trilio Workloadmanager Scheduler service conf directory.
Clean Trilio Workloadmanager Scheduler service log directory.
Remove triliovault-wlm-cron-podman-0 container from controller.
Clean Trilio Workloadmanager Cron service conf directory.
Clean Trilio Workloadmanager Cron service log directory.
The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.
Once the role that runs the Trilio Datamover service has been identified, the following commands clean the nodes from the service.
Run all commands as root or user with sudo permissions.
Remove triliovault_datamover container.
Unmount the Trilio Backup Target on the compute host.
Clean Trilio Datamover service conf directory.
Clean log directory of Trilio Datamover service.
Remove wlm cron resource from pcs cluster on the controller node.
The following steps need to be run on all nodes that have the HAproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.
Once the role that runs the HAproxy service has been identified, the following commands clean the nodes from all the Trilio resources.
Run all commands as root or user with sudo permissions.
Edit the following file inside the HAproxy container and remove all Trilio entries.
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
An example of these entries is given below.
Restart the HAproxy container once all edits have been done.
Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
Trilio creates databases for dmapi and workloadmgr services. These databases need to be cleaned.
Login into the database cluster
Run the following SQL statements to clean the database.
Remove the following entries from roles_data.yaml used in the overcloud deploy command.
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron
OS::TripleO::Services::TrilioDatamover
Follow these steps to clean the overcloud deploy command from all Trilio entries.
Remove the trilio_env.yaml entry.
Remove the trilio endpoint map file and replace it with the original map file if one exists.
Run the cleaned overcloud deploy command; a minimal sketch of such a command follows.
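The exact command depends on the original deployment; the following is only a minimal sketch, where the roles file path and the remaining environment file names are placeholders and all Trilio-specific environment files have already been removed:
openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /home/stack/templates/<your-remaining-environment-files>.yaml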
workloadmgr snapshot-list [--workload_id <workload_id>]
[--tvault_node <host>]
[--date_from <date_from>]
[--date_to <date_to>]
                           [--all {True,False}]
workloadmgr workload-snapshot [--full] [--display-name <display-name>]
                              [--display-description <display-description>]
                              <workload_id>
workloadmgr snapshot-show [--output <output>] <snapshot_id>
workloadmgr snapshot-delete <snapshot_id>
workloadmgr snapshot-cancel <snapshot_id>
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron
podman rm -f triliovault_datamover_api
podman rm -f triliovault_datamover_api_db_sync
podman rm -f triliovault_datamover_api_init_log
rm -rf /var/lib/config-data/puppet-generated/triliovaultdmapi
rm /var/lib/config-data/puppet-generated/triliovaultdmapi.md5sum
rm -rf /var/lib/config-data/triliovaultdmapi*
rm -f /var/lib/config-data/triliovault_datamover_api*
rm -rf /var/log/containers/triliovault-datamover-api/
podman rm -f triliovault_wlm_api
podman rm -f triliovault_wlm_api_cloud_trust_init
podman rm -f triliovault_wlm_api_db_sync
podman rm -f triliovault_wlm_api_config_dynamic
podman rm -f triliovault_wlm_api_init_log
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmapi
rm /var/lib/config-data/puppet-generated/triliovaultwlmapi.md5sum
rm -rf /var/lib/config-data/triliovaultwlmapi*
rm -f /var/lib/config-data/triliovault_wlm_api*
rm -rf /var/log/containers/triliovault-wlm-api/
podman rm -f triliovault_wlm_workloads
podman rm -f triliovault_wlm_workloads_config_dynamic
podman rm -f triliovault_wlm_workloads_init_log
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmworkloads
rm /var/lib/config-data/puppet-generated/triliovaultwlmworkloads.md5sum
rm -rf /var/lib/config-data/triliovaultwlmworkloads*
rm -rf /var/log/containers/triliovault-wlm-api/
podman rm -f triliovault_wlm_scheduler
podman rm -f triliovault_wlm_scheduler_config_dynamic
podman rm -f triliovault_wlm_scheduler_init_log
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmscheduler
rm /var/lib/config-data/puppet-generated/triliovaultwlmscheduler.md5sum
rm -rf /var/lib/config-data/triliovaultwlmscheduler*
rm -rf /var/log/containers/triliovault-wlm-scheduler/
podman rm -f triliovault-wlm-cron-podman-0
podman rm -f triliovault_wlm_cron_config_dynamic
podman rm -f triliovault_wlm_cron_init_log
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmcron
rm /var/lib/config-data/puppet-generated/triliovaultwlmcron.md5sum
rm -rf /var/lib/config-data/triliovaultwlmcron*
rm -rf /var/log/containers/triliovault-wlm-cron/
podman rm -f triliovault_datamover
## Following steps are applicable for all supported RHOSP releases.
# Check triliovault backup target mount point
mount | grep trilio
# Unmount it
-- If it's NFS (COPY UUID_DIR from your compute host using above command)
umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
-- If it's S3
umount /var/lib/nova/triliovault-mounts
# Verify that it's unmounted
mount | grep trilio
df -h | grep trilio
# Remove mount point directory after verifying that backup target unmounted successfully.
# Otherwise actual data from backup target may get cleaned.
rm -rf /var/lib/nova/triliovault-mounts
rm -rf /var/lib/config-data/puppet-generated/triliovaultdm/
rm /var/lib/config-data/puppet-generated/triliovaultdm.md5sum
rm -rf /var/lib/config-data/triliovaultdm*
rm -rf /var/log/containers/triliovault-datamover/
pcs resource delete triliovault-wlm-cron
listen triliovault_datamover_api
bind 172.30.5.23:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
listen triliovault_wlm_api
bind 172.30.5.23:13781 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8781 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8780 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
podman restart haproxy-bundle-podman-0
openstack service delete dmapi
openstack user delete dmapi
openstack service delete TrilioVaultWLM
openstack user delete triliovault
podman exec -it galera-bundle-podman-0 mysql -u root
## Clean database
DROP DATABASE dmapi;
## Clean dmapi user
=> List 'dmapi' user accounts
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
+-------+-----------------------------------------+
| user | host |
+-------+-----------------------------------------+
| dmapi | % |
| dmapi | 172.30.5.28 |
| dmapi | overcloudtrain1internalapi.trilio.local |
+-------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER dmapi@'%';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> DROP USER dmapi@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER dmapi@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)
=> Verify that dmapi user got cleaned
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
Empty set (0.00 sec)
## Clean database
DROP DATABASE workloadmgr;
## Clean workloadmgr user
=> List 'workloadmgr' user accounts
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
+-------------+-----------------------------------------+
| user | host |
+-------------+-----------------------------------------+
| workloadmgr | % |
| workloadmgr | 172.30.5.28 |
| workloadmgr | overcloudtrain1internalapi.trilio.local |
+-------------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER workloadmgr@'%';
Query OK, 0 rows affected (0.012 sec)
MariaDB [(none)]> DROP USER workloadmgr@'172.30.5.28';
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER workloadmgr@'overcloudtrain1internalapi.trilio.local';
Query OK, 0 rows affected (0.005 sec)
=> Verify that workloadmgr user got cleaned
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
Empty set (0.000 sec)
Trilio allows you to view or download a file from the snapshot. Any changes to files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore single or multiple files. To mount a snapshot, follow these steps.
It is recommended to perform these steps once on the chosen cloud image and then upload the modified cloud image to Glance.
Create an Openstack image using a Linux-based cloud image like Ubuntu, CentOS or RHEL with the following metadata parameters.
Spin up an instance from that image. It is recommended to have at least 8 GB RAM for the mount operation. Bigger Snapshots can require more RAM.
Install and activate qemu-guest-agent
Edit /etc/sysconfig/qemu-ga and remove the following from BLACKLIST_RPC section
Disable SELINUX in /etc/sysconfig/selinux
Install python3 and lvm2
Reboot the Instance
Install and activate qemu-guest-agent
Verify the loaded path of qemu-guest-agent
Follow this path when systemctl returns the following loaded path
Edit /etc/init.d/qemu-guest-agent and add Freeze-Hook file path in daemon args
Follow this path when systemctl returns the following loaded path
Edit qemu-guest-agent systemd file
Add the following lines
Restart qemu-guest-agent service
Install Python3
Reboot the VM
Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.
It is possible to run the mounting process against any Openstack instance. During this process the instance will be rebooted.
Always mount Snapshots to File Recovery Manager instances only.
Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.
There are 2 possibilities to mount a Snapshot in Horizon.
To mount a Snapshot through the Snapshot list follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
If all instances of the project are listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
To mount a Snapshot through the File Search results follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
If all instances of the project are listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
<snapshot_id> ➡️ ID of the Snapshot to be mounted
<mount_vm_id> ➡️ ID of the File Recovery Manager instance to mount the Snapshot to.
The File Recovery Manager is a normal Linux based Openstack instance.
It can be accessed via SSH or SSH based tools like FileZila or WinSCP.
The mounted Snapshot can be found at the following path:
/home/ubuntu/tvault-mounts/mounts/
Each VM in the Snapshot has its own directory using the VM_ID as the identifier.
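For example, files can be listed and copied off the File Recovery Manager over SSH; a minimal sketch, with the instance IP, VM ID and file path as placeholders:
# List the recovered files of one VM inside the mounted Snapshot
ssh ubuntu@<file-recovery-manager-ip> 'ls -R /home/ubuntu/tvault-mounts/mounts/<vm_id>/'
# Copy a single file back to the local machine
scp ubuntu@<file-recovery-manager-ip>:/home/ubuntu/tvault-mounts/mounts/<vm_id>/<path/to/file> .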
Sometimes a Snapshot stays mounted for a longer time and it becomes necessary to identify which Snapshots are currently mounted.
There are 2 possibilities to identify mounted Snapshots inside Horizon.
Login to Horizon
Navigate to Compute
Navigate to Instances
Identify the File Recovery Manager Instance
The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
--workloadid <workloadid> ➡️ Restrict the list to snapshots in the provided workload
Once a mounted Snapshot is no longer needed, it is possible and recommended to unmount the Snapshot.
Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
<snapshot_id> ➡️ ID of the snapshot to unmount.
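The same mount lifecycle can be driven entirely from the CLI; a minimal sketch, with the IDs as placeholders:
# Mount a snapshot to a File Recovery Manager instance
workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
# Check which snapshots are currently mounted
workloadmgr snapshot-mounted-list --workloadid <workload_id>
# Unmount the snapshot once it is no longer needed
workloadmgr snapshot-dismount <snapshot_id>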
A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.
Using an encrypted Workload will lead to longer backup times. The following timings have been seen in Trilio labs:
RHEL7
✔️
RHEL
RHEL8
✔️
RHEL
RHEL9
✔️
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Mount Snapshot"
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Click the workload name to enter the Workload overview
Navigate to the File Search tab
Identify the Snapshot to be mounted
Click "Mount Snapshot" for the chosen Snapshot
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Click on the Name of the File Recovery Manager Instance to bring up its details
On the Overview tab look for Metadata
Identify the value for mounted_snapshot_url
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Click "Unmount Snapshot"
Cloud Image Name
Version
Supported
Ubuntu
Bionic(18.04)
✔️
Ubuntu
Focal(20.04)
✔️
Centos
Centos8
✔️
Centos
Centos8 stream
✔️
RHEL
Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB
For unencrypted WL : 62 min
For encrypted WL : 82 min
Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB
For unencrypted WL : 10 min
For encrypted WL : 18 min
To view all available workloads of a project inside Horizon do:
Login to Horizon
Navigate to Backups
Navigate to Workloads
The overview in Horizon lists all workloads with the following additional information:
Creation time
Workload Name
Workload description
Total amount of Snapshots inside this workload
Total amount of succeeded Snapshots
Total amount of failed Snapshots
Status of the Workload
--all {True,False}➡️List all workloads of all projects (valid for admin user only)
--nfsshare <nfsshare>➡️List all workloads of nfsshare (valid for admin user only)
To create a workload inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Click "Create Workload"
Provide Workload Name and Workload Description on the first tab "Details"
Choose the Policy if available to use on the first tab "Details"
Choose if the Workload is encrypted on the first tab "Details"
Provide the secret UUID if Workload is encrypted on the first tab "Details"
Choose the VMs to protect on the second Tab "Workload Members"
Decide for the schedule of the workload on the Tab "Schedule"
Provide the Retention policy on the Tab "Policy"
Choose the Full Backup Interval on the Tab "Policy"
If required check "Pause VM" on the Tab "Options"
Click create
The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.
--display-name➡️Optional workload name. (Default=None)
--display-description➡️Optional workload description. (Default=None)
--source-platform➡️Workload source platform is required. Supported platform is 'openstack'
--jobschedule➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'
--metadata➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>➡️ID of the policy to assign to the workload
--encryption <True/False> ➡️Enable/Disable encryption for this workload
--secret-uuid <secret_uuid> ➡️UUID of the Barbican secret to be used for the workload
<instance-id=instance-uuid>➡️Required to set at least one instance. Specify an instance to include in the workload. Specify the option multiple times to include multiple instances (see the example below).
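Combining the options above into a single call, a minimal CLI sketch follows; all UUIDs, names and schedule values are placeholders, and the exact quoting of the jobschedule key/value pairs may differ in your shell:
workloadmgr workload-create --display-name "db-tier" \
    --display-description "Daily backup of the database VMs" \
    --source-platform openstack \
    --jobschedule start_time="2:30 PM" --jobschedule interval="24 hr" \
    --policy-id <policy_id> \
    --encryption True --secret-uuid <barbican_secret_uuid> \
    instance-id=<vm1_uuid> instance-id=<vm2_uuid>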
A workload contains a lot of information, which can be seen in the workload overview.
To enter the workload overview inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
The Workload Details tab provides you with the general most important information about the workload:
Name
Description
Availability Zone
List of protected VMs including the information of qemu guest agent availability
The status of the qemu-guest-agent only shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is actually installed and configured on the VM.
The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.
From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.
The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:
Scheduler Enabled / Disabled
Start Date / Time
End Date / Time
RPO
Time till next Snapshot run
Retention Policy and Value
Full Backup Interval policy and value
The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders on Snapshots without the need for a restore.
The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:
Creation time
last update time
Workload ID
<workload_id> ➡️ ID/name of the workload to show
--verbose➡️option to show additional information about the workload
Workloads can be modified in all components to match changing needs.
To edit a workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Modify the workload as desired - All parameters except workload type can be changed
Click "Update"
--display-name ➡️ Optional workload name. (Default=None)
--display-description➡️Optional workload description. (Default=None)
--instance <instance-id=instance-uuid>➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule <key=key-name>➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. If don't specify timezone, then by default it takes your local machine timezone 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'
--metadata <key=key-name>➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>➡️ID of the policy to assign
<workload_id> ➡️ID of the workload to edit
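As with workload creation, the modify options can be combined in a single call; a minimal sketch, with placeholder values and the same caveat about jobschedule key/value quoting:
# Rename the workload and change its retention policy to keep 30 snapshots
workloadmgr workload-modify --display-name "web-tier-daily" \
    --jobschedule retention_policy_type="Number of Snapshots to Keep" \
    --jobschedule retention_policy_value="30" \
    <workload_id>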
Once a workload is no longer needed it can be safely deleted.
To delete a workload do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to be deleted
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Delete Workload"
Confirm by clicking "Delete Workload" yet again
<workload_id> ➡️ ID/name of the workload to delete
--database_only <True/False>➡️Keep True if want to delete from database only.(Default=False)
Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to unlock
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Unlock Workload"
Confirm by clicking "Unlock Workload" yet again
<workload_id> ➡️ ID of the workload to unlock
In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases, it is possible to reset a Workload.
The Workload reset will:
Cancel all ongoing tasks
Delete all existing Openstack Trilio Snapshots from the protected VMs
recalculate the next Snapshot time
take a full backup at the next Snapshot
To reset a Workload do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to reset
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Reset Workload"
Confirm by clicking "Reset Workload" yet again
<workload_id> ➡️ ID/name of the workload to reset
openstack image create \
--file <File Manager Image Path> \
--container-format bare \
--disk-format qcow2 \
--public \
--property hw_qemu_guest_agent=yes \
--property tvault_recovery_manager=yes \
--property hw_disk_bus=virtio \
tvault-file-manager
guest-file-read
guest-file-write
guest-file-open
guest-file-close
SELINUX=disabled
yum install python3 lvm2
apt-get update
apt-get install qemu-guest-agent
systemctl enable qemu-guest-agent
Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)
DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"
Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)
systemctl edit qemu-guest-agent
[Service]
ExecStart=
ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook
systemctl restart qemu-guest-agent
apt-get install python3
workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
workloadmgr snapshot-mounted-list [--workloadid <workloadid>]
workloadmgr snapshot-dismount <snapshot_id>
workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
workloadmgr workload-create [--display-name <display-name>]
[--display-description <display-description>]
[--source-platform <source-platform>]
[--jobschedule <key=key-name>]
[--metadata <key=key-name>]
[--policy-id <policy_id>]
[--encryption <True/False>]
[--secret-uuid <secret_uuid>]
                            <instance-id=instance-uuid> [<instance-id=instance-uuid> ...]
workloadmgr workload-show <workload_id> [--verbose <verbose>]
usage: workloadmgr workload-modify [--display-name <display-name>]
[--display-description <display-description>]
[--instance <instance-id=instance-uuid>]
[--jobschedule <key=key-name>]
[--metadata <key=key-name>]
[--policy-id <policy_id>]
                                   <workload_id>
workloadmgr workload-delete [--database_only <True/False>] <workload_id>
workloadmgr workload-unlock <workload_id>
workloadmgr workload-reset <workload_id>
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 11:52:56 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume
Enables the scheduler of a given Workload
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to enable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Scheduler trust for a given Workload
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to validate the Scheduler trust for
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
All following API commands require an Authentication token against a user with admin-role in the authentication project.
GET https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler
Requests the status of the Global Job Scheduler
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable
Requests disabling the Global Job Scheduler
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable
Requests enabling the Global Job Scheduler
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
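The Global Job Scheduler endpoints described above can be exercised directly with curl; a minimal sketch, assuming tvm_address, tenant_id and an admin token are already available as shell variables:
# Query the current status of the Global Job Scheduler
curl -k -H "X-Auth-Token: $token" -H "Accept: application/json" \
     "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler"
# Disable and re-enable it
curl -k -X POST -H "X-Auth-Token: $token" -H "Accept: application/json" \
     "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler/disable"
curl -k -X POST -H "X-Auth-Token: $token" -H "Accept: application/json" \
     "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler/enable"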
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:06:01 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:31:49 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491
{
"scheduler_enabled":true,
"trust":{
"created_at":"2020-10-23T14:35:11.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"value":"871ca24f38454b14b867338cb0e9b46c",
"description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2020-10-23T14:35:11.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
"settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
"settings_project_id":"c76b3355a164498aa95ddbc960adc238",
"key":"role_name",
"value":"member"
}
]
},
"is_valid":true,
"scheduler_obj":{
"workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_domain_id":"default",
"user":"ccddc7e7a015487fa02920f4d4979779",
"tenant":"c76b3355a164498aa95ddbc960adc238"
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:45:27 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b
{"global_job_scheduler": true}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:49:29 GMT
Content-Type: application/json
Content-Length: 31
Connection: keep-alive
X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0
{"global_job_scheduler": false}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 12:50:11 GMT
Content-Type: application/json
Content-Length: 30
Connection: keep-alive
X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc
{"global_job_scheduler": true}
GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Provides the lists of trusts for the given Tenant.
tvm_name
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant / Project to fetch the trusts from
is_cloud_admin
boolean
true/false
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:21:57 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3
{
"trust":[
{
"created_at":"2020-11-26T13:10:53.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
POST https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Creates a trust in the provided Tenant/Project with the given details.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to create the Trust for
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
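The trust creation call described above requires a JSON body; a minimal curl sketch, assuming tvm_address, tenant_id and a token are set as shell variables and using the body format shown later in this reference:
curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/trusts" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"trusts": {"role_name": "member", "is_cloud_trust": false}}'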
GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Shows all details of a specified trust
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Trust
trust_id
string
ID of the Trust to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Deletes the specified trust.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Trust in
trust_id
string
ID of the Trust to delete
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Trust of a given Workload.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
workload_id
string
ID of the Workload to validate the Trust of
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
E-Mail Notification Settings are done through the settings API. Use the values from the following table to set Email Notifications up through API.
POST https://$(tvm_address):8780/v1/$(tenant_id)/settings
Creates a Trilio setting.
Setting create requires a Body in json format, to provide the requested information.
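A minimal curl sketch of this create call, assuming shell variables for the address, tenant and token; the body structure and the smtp_port example value follow the setting format and settings table shown in this reference:
curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/settings" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"settings": [{"category": null, "name": "smtp_port", "is_public": false, "is_hidden": false, "metadata": {}, "type": "email_settings", "value": "587", "description": null}]}'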
GET https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Shows all details of a specified setting
PUT https://$(tvm_address):8780/v1/$(tenant_id)/settings
Modifies the provided setting with the given details.
Workload modify requires a Body in json format, to provide the information about the values to modify.
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Deletes the specified setting.
Trilio provides Backup-as-a-Service, which allows Openstack users to manage and control their backups themselves. This doesn't eliminate the need for a Backup Administrator, who has an overview of the complete backup solution.
To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backups-Admin area in Horizon in addition to the API and CLI.
To access the Backups-Admin area follow these steps:
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:43:36 GMT
Content-Type: application/json
Content-Length: 868
Connection: keep-alive
X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0
{
"trust":[
{
"created_at":"2021-01-21T11:43:36.140407",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"1c981a15e7a54242ae54eee6f8d32e6a",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":1,
"status":"available",
"is_public":false,
"is_hidden":true,
"metadata":[
]
}
]
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:39:12 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4
{
"trust":{
"created_at":"2020-11-26T13:15:29.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"703dfabb4c5942f7a1960736dd84f4d4",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2020-11-26T13:15:29.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"86aceea1-9121-43f9-b55c-f862052374ab",
"settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
"settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
"key":"role_name",
"value":"member"
}
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 21 Jan 2021 11:41:51 GMT
Content-Type: application/json
Content-Length: 888
Connection: keep-alive
X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8
{
"scheduler_enabled":true,
"trust":{
"created_at":"2021-01-21T11:43:36.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"value":"1c981a15e7a54242ae54eee6f8d32e6a",
"description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
"category":"identity",
"type":"trust_id",
"public":false,
"hidden":true,
"status":"available",
"metadata":[
{
"created_at":"2021-01-21T11:43:36.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"d98d283a-b096-4a68-826a-36f99781787d",
"settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
"settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
"key":"role_name",
"value":"member"
}
]
},
"is_valid":true,
"scheduler_obj":{
"workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_domain_id":"default",
"user":"adfa32d7746a4341b27377d6f7c61adb",
"tenant":"4dfe98a43bfa404785a812020066b4d6"
}
}
{
"trusts":{
"role_name":"member",
"is_cloud_trust":false
}
}
User-Agent
string
python-workloadmgrclient
User-Agent
string
python-workloadmgrclient
User-Agent
string
python-workloadmgrclient
String
smtp_port
email_settings
Integer
587
smtp_server_name
email_settings
String
Mailserver_A
smtp_server_username
email_settings
String
admin
smtp_server_password
email_settings
String
password
smtp_timeout
email_settings
Integer
10
smtp_email_enable
email_settings
Boolean
True
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work with
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
setting_name
string
Name of the setting to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work with
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project containing the Workload
setting_name
string
Name of the setting to delete
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:55:43 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99
{
"settings":[
{
"created_at":"2021-02-04T11:55:43.890855",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":0,
"status":"available",
"is_public":false,
"is_hidden":false
}
]
}

{
"settings":[
{
"category":null,
"name":<String Setting_name>,
"is_public":false,
"is_hidden":false,
"metadata":{
},
"type":<String Setting type>,
"value":<String Setting Value>,
"description":null
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:01:27 GMT
Content-Type: application/json
Content-Length: 380
Connection: keep-alive
X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c
{
"setting":{
"created_at":"2021-02-04T11:55:43.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":false,
"status":"available",
"metadata":[
]
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 12:05:59 GMT
Content-Type: application/json
Content-Length: 403
Connection: keep-alive
X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f
{
"settings":[
{
"created_at":"2021-02-04T11:55:43.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"name":"smtp_port",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":null,
"value":"8080",
"description":null,
"category":null,
"type":"email_settings",
"public":false,
"hidden":0,
"status":"available",
"is_public":false,
"is_hidden":false
}
]
}

{
"settings":[
{
"category":null,
"name":<String Setting_name>,
"is_public":false,
"is_hidden":false,
"metadata":{
},
"type":<String Setting type>,
"value":<String Setting Value>,
"description":null
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 04 Feb 2021 11:49:17 GMT
Content-Type: application/json
Content-Length: 1223
Connection: keep-alive
X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473

Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
The Backups-Admin area provides the following features.
The shown information can be narrowed down to a single tenant, making the exact impact of the chosen Tenant easy to see.
The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:
Storage Usage (nfs only)
Number of protected VMs compared to number of existing VMs
Number of currently running Snapshots
Status of TVault Nodes
Status of Contego Nodes
This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.
The following information is shown:
User-ID that owns the Workload
Project that contains the Workload
Workload name
Availability Zone
Amount of protected VMs
Performance information about the last 30 backups
How much data was backed up (green bars)
How long the Backup took (red line)
Pie chart showing the share of Full (blue) versus Incremental (red) Backups
Number of successful Backups
Number of failed Backups
Storage used by that Workload
Which Backup target is used
When the next Snapshot will run
The general interval of the Workload
Scheduler Status including a Switch to deactivate/activate the Workload
Administrators often need to identify where resources are being consumed, or to quickly provide usage information to a billing system. This tab helps with these tasks by providing the following information:
Storage used by a Tenant
VMs protected by a Tenant
It is possible to drill down to see the same information per workload and finally per protected VM.
This tab displays information about Trilio cluster nodes. The following information is shown:
Node name
Node ID
Trilio Version of the node
IP Address
Node Status including a Switch to deactivate/activate the Node
Node status can be controlled through CLI as well.
To deactivate the Trilio Node use:
--reason➡️Optional reason for disabling workload service
<node_name>➡️name of the Trilio node
To activate the Trilio Node use:
<node_name>➡️name of the Trilio node
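A minimal example, with an illustrative node name, following the command syntax listed later in this section:
workloadmgr workload-service-disable --reason "planned maintenance" TVM-Node-1
workloadmgr workload-service-enable TVM-Node-1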
This tab displays information about the Trilio contego service. The following information is shown:
Service-Name
Compute Node the service is running on
Service Status from Openstack perspective (enabled/disabled)
Version of the Service
General Status
This tab displays information about the backup target storage. It contains the following information:
Storage Name
Capacity of the storage
Total utilization of the storage
Status of the storage
Statistical information:
Percentage of all storage used
Percentage of storage used for full backups
Number of Full backups versus Incremental backups
Audit logs provide the sequence of workload-related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:
Time of the entry
What task has been done
Project the task was performed in
User that performed the task
The Audit log can be searched for strings, for example to find only entries done by a specific user.
Additionally, the shown timeframe can be changed as necessary.
The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.
The following information about an active license are shown:
Organization (License name)
License ID
Purchase date - when the license was created
License Expiry Date
Maintenance Expiry Date
License value
License Edition
License Version
License Type
Description of the License
Evaluation (True/False)
EULA - when the license was agreed to
Trilio will stop all activities once a license is no longer valid or expired.
The policy tab allows Administrators to work with workload policies.
This tab manages all global settings for the whole cloud. Trilio has two types of settings:
Email settings
Job scheduler settings.
These settings will be used by Trilio to send email reports of snapshots and restores to users.
Configuring the email settings is required to provide email notifications to OpenStack users.
The following information is required to configure the email settings:
SMTP Server
SMTP username
SMTP password
SMTP port
SMTP timeout
Sender email address
To work with email settings through CLI use the following commands:
To set an email setting for the first time or after deletion use:
--description➡️Optional description (Default=None) ➡️ Not required for email settings
--category➡️Optional setting category (Default=None) ➡️ Not required for email settings
--type➡️settings type ➡️ set to email_settings
--is-public➡️sets if the setting can be seen publicly ➡️ set to False
--is-hidden➡️sets if the setting will always be hidden ➡️ set to False
--metadata➡️optional metadata for the setting ➡️ Not required for email settings
<name>➡️name of the setting ➡️ Take from the list below
<value>➡️value of the setting ➡️ Take value type from the list below
To update an already set email setting through CLI use:
--description➡️Optional description (Default=None) ➡️ Not required for email settings
--category➡️Optional setting category (Default=None) ➡️ Not required for email settings
--type➡️settings type ➡️ set to email_settings
--is-public➡️sets if the setting can be seen publicly ➡️ set to False
--is-hidden➡️sets if the setting will always be hidden ➡️ set to False
--metadata➡️optional metadata for the setting ➡️ Not required for email settings
<name>➡️name of the setting ➡️ Take from the list below
<value>➡️value of the setting ➡️ Take value type from the list below
To show an already set email setting use:
--get_hidden➡️show hidden settings (True) or not (False) ➡️ Not required for email settings, use False if set
<setting_name>➡️name of the setting to show➡️ Take from the list below
To delete a set email setting use:
<setting_name>➡️name of the setting to delete ➡️ Take from the list below
Setting name ➡️ Value type ➡️ Example
smtp_port ➡️ Integer ➡️ 587
smtp_server_name ➡️ String ➡️ Mailserver_A
smtp_server_username ➡️ String ➡️ admin
smtp_server_password ➡️ String ➡️ password
smtp_timeout ➡️ Integer ➡️ 10
smtp_email_enable ➡️ Boolean ➡️ True
The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.
To activate/deactivate the Global Job Scheduler through the Backups-Admin area:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
Navigate to the Settings tab
Click "Disable/Enable Job Scheduler"
Check or Uncheck the box for "Job Scheduler Enabled"
Confirm by clicking on "Change"
The Global Job Scheduler can be controlled through CLI as well.
To get the status of the Global Job Scheduler use:
To deactivate the Global Job Scheduler use:
To activate the Global Job Scheduler use:
workloadmgr workload-service-disable [--reason <reason>] <node_name>

workloadmgr workload-service-enable <node_name>

workloadmgr setting-create [--description <description>]
                           [--category <category>]
                           [--type <type>]
                           [--is-public {True,False}]
                           [--is-hidden {True,False}]
                           [--metadata <key=value>]
                           <name> <value>

workloadmgr setting-update [--description <description>]
                           [--category <category>]
                           [--type <type>]
                           [--is-public {True,False}]
                           [--is-hidden {True,False}]
                           [--metadata <key=value>]
                           <name> <value>

workloadmgr setting-show [--get_hidden {True,False}] <setting_name>

workloadmgr setting-delete <setting_name>

workloadmgr get-global-job-scheduler

workloadmgr disable-global-job-scheduler

workloadmgr enable-global-job-scheduler
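As an illustrative end-to-end example (the values are placeholders, the flags follow the syntax above), the SMTP server name and port could be set and then verified like this:
workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_server_name Mailserver_A
workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_port 587
workloadmgr setting-show smtp_port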
GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads
Provides the list of all importable workloads
GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads
Provides the list of all orphaned workloads
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads
Imports all or the provided workloads
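A hedged curl sketch of these calls, assuming $tvm_address, $tenant_id, and $token are already set; headers are as documented below, and the POST body follows the format shown later in this section:
# List all importable workloads
curl -s -X GET "https://$tvm_address:8780/v1/$tenant_id/workloads/get_list/import_workloads" \
  -H "X-Auth-Token: $token" -H "Accept: application/json"
# Import selected workloads, including the upgrade flag
curl -s -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/import_workloads" \
  -H "X-Auth-Token: $token" -H "Content-Type: application/json" \
  -d '{"workload_ids": ["<workload_id>"], "upgrade": true}'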
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
restricts the output to the given project
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work in
migrate_cloud
boolean
True also shows Workloads from different clouds
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot in
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:34:10 GMT
Content-Type: application/json
Content-Length: 7888
Connection: keep-alive
X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339
{
"workloads":[
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":"2020-11-09T09:53:30.000000",
"id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"Workload_1",
"description":"no-description",
"interval":null,
"storage_usage":null,
"instances":null,
"metadata":[
{
"created_at":"2020-11-09T09:57:23.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-05T10:27:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
},
{
"created_at":"2020-11-09T09:37:20.000000",
"updated_at":"2020-11-09T09:57:23.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"61615532-6165-45a2-91e2-fbad9eb0b284",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"b083bb70-e384-4107-b951-8e9e7bbac380",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-02T13:40:24.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"backup_media_target",
"value":"10.10.2.20:/upstream"
},
{
"created_at":"2020-11-05T10:27:14.000000",
"updated_at":"2020-11-09T09:57:23.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"e33c1eea-c533-4945-864d-0da1fc002070",
"value":"c8468146-8117-48a4-bfd7-49381938f636"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":"2020-11-02T14:10:30.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"hostnames",
"value":"[\"cirros-1\", \"cirros-2\"]"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"093a1221-edb6-4957-8923-cf271f7e43ce",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"pause_at_snapshot",
"value":"0"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"79baaba8-857e-410f-9d2a-8b14670c4722",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"policy_id",
"value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
},
{
"created_at":"2020-11-02T13:40:06.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"preferredgroup",
"value":"[]"
},
{
"created_at":"2020-11-02T14:10:30.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"topology",
"value":"\"\\\"\\\"\""
},
{
"created_at":"2020-11-02T13:40:23.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"key":"workload_approx_backup_size",
"value":"6"
}
],
"jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
"status":"locked",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
}
],
"scheduler_trust":null
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 10:42:01 GMT
Content-Type: application/json
Content-Length: 120143
Connection: keep-alive
X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c
{
"workloads":[
{
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:10:17.000000",
"id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"user_id":"6ef8135faedc4259baac5871e09f0044",
"project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"comdirect_test",
"description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
"interval":null,
"storage_usage":null,
"instances":null,
"metadata":[
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-05-16T09:13:54.000000",
"updated_at":null,
"value":"ca544215-1182-4a8f-bf81-910f5470887a",
"version":"3.2.46",
"key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
"deleted_at":null,
"id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:30.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"10.10.2.20:/upstream",
"version":"3.2.46",
"key":"backup_media_target",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"02dd0630-7118-485c-9e42-b01d23aa882c"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-05-16T09:13:51.000000",
"updated_at":null,
"value":"51693eca-8714-49be-b409-f1f1709db595",
"version":"3.2.46",
"key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
"deleted_at":null,
"id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
"version":"3.2.46",
"key":"hostnames",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":false,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"0",
"version":"3.2.46",
"key":"pause_at_snapshot",
"deleted_at":null,
"id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:20.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"[]",
"version":"3.2.46",
"key":"preferredgroup",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:11:49.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"\"\\\"\\\"\"",
"version":"3.2.46",
"key":"topology",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"77e436c0-0921-4919-97f4-feb58fb19e06"
},
{
"workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
"deleted":true,
"created_at":"2019-04-24T14:09:30.000000",
"updated_at":"2019-05-16T09:01:23.000000",
"value":"121",
"version":"3.2.46",
"key":"workload_approx_backup_size",
"deleted_at":"2019-05-16T09:01:23.000000",
"id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
}
],
"jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
"status":"locked",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
}
],
"scheduler_trust":null
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 11:03:55 GMT
Content-Type: application/json
Content-Length: 100
Connection: keep-alive
X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
{
"workloads":{
"imported_workloads":[
"faa03-f69a-45d5-a6fc-ae0119c77974"
],
"failed_workloads":[
]
}
}

{
"workload_ids":[
"<workload_id>"
],
"upgrade":true
}

tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:40:56 GMT
Content-Type: application/json
Content-Length: 1625
Connection: keep-alive
X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe
{
"quota_types":[
{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"display_name":"Workloads",
GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>
Requests the details of a Quota Type
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project to work in
quota_type_id
string
ID of the Quota Type to show
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
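For illustration only, a curl call against the documented endpoint could look like this ($token, $project_name, and the quota type ID are placeholders):
curl -s -X GET "https://$tvm_address:8780/v1/$tenant_id/projects_quota_types/<quota_type_id>" \
  -H "X-Auth-Token: $token" -H "X-Auth-Project-Id: $project_name" \
  -H "Accept: application/json" -H "User-Agent: python-workloadmgrclient"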
POST https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Creates an allowed Quota with the given parameters
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
ID of the Tenant/Project to create the allowed Quota in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
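A sketch of the create call, using the request body format shown at the end of this section (IDs and values are illustrative):
curl -s -X POST "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/<project_id>" \
  -H "X-Auth-Token: $token" -H "Content-Type: application/json" -H "Accept: application/json" \
  -d '{"allowed_quotas": [{"project_id": "<project_id>", "quota_type_id": "<quota_type_id>", "allowed_value": 5, "high_watermark": 4}]}'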
GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Lists all allowed Quotas for a given project.
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
ID of the Tenant/Project to list allowed Quotas from
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>
Shows details for a given allowed Quota
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to show
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
PUT https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>
Updates an allowed Quota with the given parameters
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to update
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>
Deletes a given allowed Quota
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to delete
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:44:43 GMT
Content-Type: application/json
Content-Length: 342
Connection: keep-alive
X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b
{
"quota_type":{
"created_at":"2020-10-19T10:05:52.000000",
"updated_at":"2020-10-19T10:07:32.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"display_name":"Workloads",
"display_description":"Total number of workload creation allowed per project",
"status":"available"
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 15:51:51 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927
{
"allowed_quotas":[
{
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:01:39 GMT
Content-Type: application/json
Content-Length: 766
Connection: keep-alive
X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0
{
"allowed_quotas":[
{
"id":"262b117d-e406-4209-8964-004b19a8d422",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"allowed_value":5,
"high_watermark":4,
"version":"4.0.115",
"quota_type_name":"Workloads"
},
{
"id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
"allowed_value":-1,
"high_watermark":-1,
"version":"4.0.115",
"quota_type_name":"Storage"
},
{
"id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
"allowed_value":50,
"high_watermark":25,
"version":"4.0.115",
"quota_type_name":"VMs"
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:15:07 GMT
Content-Type: application/json
Content-Length: 268
Connection: keep-alive
X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7
{
"allowed_quotas":{
"id":"262b117d-e406-4209-8964-004b19a8d422",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
"allowed_value":5,
"high_watermark":4,
"version":"4.0.115",
"quota_type_name":"Workloads"
}
}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:24:04 GMT
Content-Type: application/json
Content-Length: 24
Connection: keep-alive
X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2
{"allowed_quotas": [{}]}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Wed, 18 Nov 2020 16:33:09 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

{
"allowed_quotas":[
{
"project_id":"<project_id>",
"quota_type_id":"<quota_type_id>",
"allowed_value":"<integer>",
"high_watermark":"<Integer>"
}
]
}

{
"allowed_quotas":{
"project_id":"c76b3355a164498aa95ddbc960adc238",
"allowed_value":"20000",
"high_watermark":"18000"
}
}

User-Agent ➡️ string ➡️ python-workloadmgrclient
Refer to the acceptable values listed below for the placeholders triliovault_tag, trilio_branch, and kolla_base_distro in this document, as per the OpenStack environment:
Trilio requires OpenStack CLI to be installed and available for use on the Kolla Ansible Control node.
Backup target storage is used to store the backup images taken by Trilio; the details needed for configuration are listed below.
The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step.
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key - Secret Key - Region - Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name
Export and activate the kolla virtual environment. This step is applicable only for Kolla Epoxy.
Clone the triliovault-cfg-scripts GitHub repository on the Kolla Ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla Ansible roles directory.
Generate triliovault passwords and append triliovault_passwords.yml to /etc/kolla/passwords.yml.
On the kolla-ansible server node, change the directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
vi triliovault_nfs_map_input.yml
The format of triliovault_nfs_map_input.yml is explained separately.
Update PyYAML on the kolla-ansible server node only.
Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml'
vi triliovault_nfs_map_output.yml
in the current directory and validate that all compute nodes are covered with all necessary NFS shares.
Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
Ensure multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details.
You will find the Trilio-related parameters at the end of the globals.yml file.
Details like the Trilio version, backup target type, and backup target details need to be filled out.
The following is the list of parameters that the user needs to edit.
In the case of a registry other than Docker Hub, Trilio containers need to be pulled from docker.io and pushed to the preferred registry.
The following are the triliovault container image URLs for the 5.x release.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be either 'rocky' or 'ubuntu', depending on your base OpenStack distro.
Below are the OpenStack deployment images.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look as follows for a default Kolla installation:
In case Ironic compute nodes are used, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look like the following:
To enable the workloadmanager quota feature on the Horizon dashboard, it is necessary to create custom settings for Horizon.
Create the following directory on the Kolla Ansible server node if it does not exist.
Set its ownership to the user:group that you are using for deployment.
For example, if you are using the 'root' system user for deployment, the chown command will look like below.
Create the settings file in the above directory.
For Zed, Antelope & Bobcat:
For Caracal and Epoxy:
Activate the Docker Hub login for the Trilio tagged containers.
Please get the Dockerhub login credentials from the Trilio Sales/Support team.
Pull the Trilio container images from Dockerhub based on the existing inventory file. In the example, the inventory file is named multinode.
All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.
This is just an example command. You need to use your cloud deploy command.
Verify on the nodes that are supposed to run the Trilio containers that those containers are available and healthy.
The example shown is for the 5.2.2 release on a Kolla Rocky Zed setup.
Replace the inventory file path and run the below command to create the cloud admin trust.
Log in to any controller node and check the logs of the wlm_cloud_trust container. This should show that the cloud admin trust was created.
To see all TrilioVault containers running on a specific node, use the docker ps command.
To check the startup logs use the docker logs <container name> command.
Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.
Verify that the Trilio horizon container is installed and in a running state.
Trilio workloadmgr api service logs on workloadmgr api node
Trilio workloadmgr cron service logs on workloadmgr cron node
Trilio workloadmgr scheduler service logs on workloadmgr scheduler node
Trilio workloadmgr workloads service logs on workloadmgr workloads node
Trilio datamover api service logs on datamover api node
Trilio datamover service logs on datamover node
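The corresponding log files (full paths are listed in the command reference below) can be followed live on the respective node, for example:
tail -f /var/log/kolla/triliovault-wlm-api/triliovault-wlm-api.log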
Trilio release ➡️ triliovault_tag ➡️ trilio_branch ➡️ kolla_base_distro ➡️ OpenStack release
5.2.0 ➡️ 5.2.0-2023.1, 5.2.0-zed ➡️ 5.2.0 ➡️ ubuntu/rocky ➡️ Antelope, Zed
5.1.0 ➡️ 5.1.0-zed ➡️ 5.1.0 ➡️ ubuntu/rocky ➡️ Zed
Parameter ➡️ Value ➡️ Description
cloud_admin_domainname ➡️ <cloud_admin_domainname> ➡️ Use the domain name of the cloud admin user
cloud_admin_domainid ➡️ <cloud_admin_domainid> ➡️ Use the domain ID of the cloud admin user
trustee_role ➡️ <trustee_role> ➡️ Comma-separated list of required trustee roles. For Zed, trustee_role should be creator. For Antelope to Epoxy, trustee_role should be creator,member
os_endpoint_type ➡️ <internal/public> ➡️ Choose the endpoint type which the Trilio APIs will use for communication
triliovault_tag ➡️ <triliovault_tag> ➡️ Use the triliovault tag as per your Kolla OpenStack version. The exact tag is mentioned in the 1st step
horizon_image_full ➡️ Uncomment ➡️ By default, the Trilio Horizon container is not deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the OpenStack Horizon container
triliovault_docker_username ➡️ <dockerhub-login-username> ➡️ Default Docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team
triliovault_docker_password ➡️ <dockerhub-login-password> ➡️ Password for the default Docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team
triliovault_docker_registry ➡️ Default value: docker.io ➡️ Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to the chosen registry first
triliovault_backup_target ➡️ nfs / amazon_s3 / other_s3_compatible ➡️ nfs if the backup target is NFS; amazon_s3 if the backup target is Amazon S3; other_s3_compatible if the backup target type is S3 but not Amazon S3
multi_ip_nfs_enabled ➡️ yes / no (default: no) ➡️ This parameter is valid only if multiple IP/endpoint-based NFS shares are used as the backup target for TrilioVault
triliovault_nfs_shares ➡️ <NFS-IP/FQDN>:/<NFS path> ➡️ NFS share path, for example '192.168.122.101:/nfs/tvault'
triliovault_nfs_options ➡️ 'nolock,soft,timeo=180,intr,lookupcache=none' (for Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10') ➡️ These parameters set the NFS mount options. Keep the default values unless a special requirement exists
triliovault_s3_access_key ➡️ S3 Access Key ➡️ Valid for amazon_s3 and other_s3_compatible
triliovault_s3_secret_key ➡️ S3 Secret Key ➡️ Valid for amazon_s3 and other_s3_compatible
triliovault_s3_region_name ➡️ Default value: us-east-1 ➡️ S3 region name. Valid for amazon_s3 and other_s3_compatible. If the S3 storage has no region parameter, keep the default
triliovault_s3_bucket_name ➡️ S3 Bucket name ➡️ Valid for amazon_s3 and other_s3_compatible
triliovault_s3_endpoint_url ➡️ S3 Endpoint URL ➡️ Valid for other_s3_compatible only
triliovault_s3_ssl_enabled ➡️ True / False ➡️ Valid for other_s3_compatible only. Set True for an SSL-enabled S3 endpoint URL
triliovault_s3_ssl_cert_file_name ➡️ s3-cert.pem ➡️ Valid for other_s3_compatible only, with SSL enabled and certificates that are self-signed or issued by a private authority. In this case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the Ansible server. Create this directory if it does not exist already
triliovault_copy_ceph_s3_ssl_cert ➡️ True / False ➡️ Valid for other_s3_compatible only. Set to True when SSL is enabled with self-signed certificates or certificates issued by a private authority
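For orientation, a minimal sketch of the Trilio block in /etc/kolla/globals.yml for an NFS backup target; all values are illustrative and must be replaced with values matching your environment:
triliovault_tag: "5.2.4-2024.1"
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
triliovault_docker_registry: "docker.io"
triliovault_backup_target: "nfs"
multi_ip_nfs_enabled: "no"
triliovault_nfs_shares: "192.168.122.101:/nfs/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"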
Trilio release ➡️ triliovault_tag ➡️ trilio_branch ➡️ kolla_base_distro ➡️ OpenStack release
5.2.4 ➡️ 5.2.4-2024.1, 5.2.4-2023.2, 5.2.4-2023.1, 5.2.4-zed ➡️ 5.2.4 ➡️ ubuntu/rocky ➡️ Caracal, Bobcat, Antelope, Zed
5.2.3 ➡️ 5.2.3-2023.2, 5.2.3-2023.1, 5.2.3-zed ➡️ 5.2.3 ➡️ ubuntu/rocky ➡️ Bobcat, Antelope, Zed
5.2.2 ➡️ 5.2.2-2023.2, 5.2.2-2023.1, 5.2.2-zed ➡️ 5.2.2 ➡️ ubuntu/rocky ➡️ Bobcat, Antelope, Zed
5.2.1 ➡️ 5.2.1-2023.1, 5.2.1-zed ➡️ 5.2.1 ➡️ ubuntu/rocky ➡️ Antelope, Zed
cloud_admin_username ➡️ <cloud_admin_username> ➡️ Use the username of the cloud admin user. The user must have the 'creator' role assigned
cloud_admin_password ➡️ <cloud_admin_password> ➡️ Use the password of the cloud admin user
cloud_admin_projectname ➡️ <cloud_admin_projectname> ➡️ Use the project name of the cloud admin user
cloud_admin_projectid ➡️ <cloud_admin_projectid> ➡️ Use the project ID of the cloud admin user
# Export virtual environment path depending on your setup
export venv_path="/opt/kolla-venv"
source $venv_path/bin/activate

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/
# For Zed to Caracal
mkdir -p /usr/local/share/kolla-ansible/ansible/roles/triliovault
# For Epoxy
mkdir -p $venv_path/share/kolla-ansible/ansible/roles/triliovault
# For Rocky and Ubuntu Zed and Antelope
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
# For Rocky and Ubuntu Bobcat and Caracal
cp -R ansible/roles/triliovault-bobcat/* /usr/local/share/kolla-ansible/ansible/roles/triliovault/
# For Rocky and Ubuntu Epoxy
cp -R ansible/roles/triliovault-epoxy/* $venv_path/share/kolla-ansible/ansible/roles/triliovault/

## For Rocky and Ubuntu
- Take backup of globals.yml
cp /etc/kolla/globals.yml /opt/
- Append Trilio global variables to globals.yml for Zed
cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Antelope
cat ansible/triliovault_globals_2023.1.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Bobcat
cat ansible/triliovault_globals_2023.2.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Caracal
cat ansible/triliovault_globals_2024.1.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Epoxy
cat ansible/triliovault_globals_2025.1.yml >> /etc/kolla/globals.yml

cd ansible
./scripts/generate_password.sh
## For Rocky and Ubuntu
- Take backup of passwords.yml
cp /etc/kolla/passwords.yml /opt/
- Append Trilio global variables to passwords.yml
cat triliovault_passwords.yml >> /etc/kolla/passwords.yml

# For Rocky and Ubuntu
- Take backup of site.yml for Zed to Caracal
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
- Take backup of site.yml for Epoxy
cp $venv_path/share/kolla-ansible/ansible/site.yml /opt/
- Append Trilio site variables to site.yml for Zed
cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Antelope
cat ansible/triliovault_site_2023.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Bobcat
cat ansible/triliovault_site_2023.2.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Caracal
cat ansible/triliovault_site_2024.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Epoxy
cat ansible/triliovault_site_2025.1.yml >> $venv_path/share/kolla-ansible/ansible/site.yml

For example, if your inventory file path is '/root/multinode', use the following command:
cat ansible/triliovault_inventory.txt >> /root/multinode

cd triliovault-cfg-scripts/common/

pip3 install -U pyyaml

python ./generate_nfs_map.py

cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
4. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-wlm:{{ triliovault_tag }}
## EXAMPLE from Kolla Ubuntu OpenStack
docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-wlm:{{ triliovault_tag }}

nova_libvirt_default_volumes:
- "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "/lib/modules:/lib/modules:ro"
- "/run/:/run/:shared"
- "/dev:/dev"
- "/sys/fs/cgroup:/sys/fs/cgroup"
- "kolla_logs:/var/log/kolla/"
- "libvirtd:/var/lib/libvirt"
- "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
- "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
- "nova_libvirt_qemu:/etc/libvirt/qemu"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
- "/var/trilio:/var/trilio:shared"

nova_compute_default_volumes:
- "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "/lib/modules:/lib/modules:ro"
- "/run:/run:shared"
- "/dev:/dev"
- "kolla_logs:/var/log/kolla/"
- "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
- "libvirtd:/var/lib/libvirt"
- "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
- "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
- "/var/trilio:/var/trilio:shared"

nova_compute_ironic_default_volumes:
- "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
- "/etc/localtime:/etc/localtime:ro"
- "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
- "kolla_logs:/var/log/kolla/"
- "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
- "/var/trilio:/var/trilio:shared"

mkdir -p /etc/kolla/config/horizon

chown <DEPLOYMENT_USER>:<DEPLOYMENT_GROUP> /etc/kolla/config/horizon

chown root:root /etc/kolla/config/horizon

# For Zed, Antelope & Bobcat:
echo 'from openstack_dashboard.settings import HORIZON_CONFIG
HORIZON_CONFIG["customization_module"] = "trilio_dashboard.overrides"' >> /etc/kolla/config/horizon/custom_local_settings

# For Caracal and Epoxy:
echo 'from openstack_dashboard.settings import HORIZON_CONFIG
HORIZON_CONFIG["customization_module"] = "trilio_dashboard.overrides"' >> /etc/kolla/config/horizon/_9999-custom-settings.py

ansible -i <kolla inventory file path> control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io" --become
# For Zed to Caracal
kolla-ansible -i <kolla inventory file path> pull --tags triliovault
# For Epoxy
kolla-ansible pull -i <kolla inventory file path> --tags triliovault

# For Zed to Caracal
kolla-ansible -i <kolla inventory file path> deploy
# For Epoxy
kolla-ansible deploy -i <kolla inventory file path>

[root@controller ~]# docker ps | grep datamover-api
9bf847ec4374 trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover_api
[root@controller ~]# ssh compute "docker ps | grep datamover"
2b590ab33dfa trilio/kolla-rocky-trilio-datamover:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover
[root@controller ~]# docker ps | grep horizon
1333f1ccdcf1 trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours (healthy) horizon
[root@controller ~]# docker ps -a | grep wlm
fedc17b12eaf trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Exited (0) 23 hours ago wlm_cloud_trust
60bc1f0d0758 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_cron
499b8ca89bd6 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_scheduler
7e3749026e8e trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_workloads
932a41bf7024 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_api

# For Zed to Caracal
ansible-playbook -i {{ kolla inventory file }} /usr/local/share/kolla-ansible/ansible/roles/triliovault/tasks/wlm_cloud_trust.yml -e "@/etc/kolla/globals.yml"
# For Epoxy
ansible-playbook -i {{ kolla inventory file }} $venv_path/share/kolla-ansible/ansible/roles/triliovault/tasks/wlm_cloud_trust.yml -e "@/etc/kolla/globals.yml"

ssh controller
docker logs wlm_cloud_trust

docker ps -a | grep trilio

docker logs triliovault_datamover_api
docker logs triliovault_datamover
docker logs triliovault_wlm_api
docker logs triliovault_wlm_scheduler
docker logs triliovault_wlm_cron
docker logs triliovault_wlm_workloads
docker logs wlm_cloud_trust

docker ps | grep horizon

/var/log/kolla/triliovault-wlm-api/triliovault-wlm-api.log
/var/log/kolla/triliovault-wlm-cron/triliovault-wlm-cron.log
/var/log/kolla/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log
/var/log/kolla/triliovault-wlm-workloads/triliovault-wlm-workloads.log
/var/log/kolla/triliovault-datamover-api/triliovault-datamover-api.log
/var/log/kolla/triliovault-datamover/triliovault-datamover.log

GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Provides the list of all workloads for the given tenant/project id
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Creates a workload in the provided Tenant/Project with the given details.
Workload create requires a Body in json format, to provide the requested information.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Shows all details of a specified workload
PUT https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Modifies a workload in the provided Tenant/Project with the given details.
Workload modify requires a Body in json format, to provide the information about the values to modify.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Deletes the specified Workload.
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock
Unlocks the specified Workload
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset
Resets the defined workload
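As a hedged illustration of two common calls ($tvm_address, $tenant_id, and $token assumed to be set; headers as documented below):
# List all workloads of the tenant
curl -s -X GET "https://$tvm_address:8780/v1/$tenant_id/workloads" \
  -H "X-Auth-Token: $token" -H "Accept: application/json"
# Unlock a workload
curl -s -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/<workload_id>/unlock" \
  -H "X-Auth-Token: $token" -H "Accept: application/json"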
User-Agent ➡️ string ➡️ python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to fetch the workloads from
nfs_share
string
lists workloads located on a specific nfs-share
all_workloads
boolean
admin role required - True lists workloads of all tenants/projects
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to create the workload in
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
workload_id
string
ID of the Workload to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project containing the Workload
workload_id
string
ID of the Workload to modify
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project containing the Workload
workload_id
string
ID of the Workload to delete
database_only
boolean
True leaves the Workload data on the Backup Target
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project containing the Workload
workload_id
string
ID of the Workload to unlock
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project containing the Workload
workload_id
string
ID of the Workload to reset
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 14:55:40 GMT
Content-Type: application/json
Content-Length: 3480
Connection: keep-alive
X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948
{
"workloads":[
{
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
"name":"worklaod1",
"snapshots_info":"",
"description":"no-description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"available",
"created_at":"2020-10-26T12:07:01.000000",
"updated_at":"2020-10-29T12:22:26.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
}
]
},
{
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
"name":"workload2",
"snapshots_info":"",
"description":"no-description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"available",
"created_at":"2020-10-20T09:51:15.000000",
"updated_at":"2020-10-29T10:03:33.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
}
]
}
]
}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 29 Oct 2020 15:42:02 GMT
Content-Type: application/json
Content-Length: 703
Connection: keep-alive
X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede
{
"workload":{
"project_id":"c76b3355a164498aa95ddbc960adc238",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
"name":"API created",
"snapshots_info":"",
"description":"API description",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"status":"creating",
"created_at":"2020-10-29T15:42:01.000000",
"updated_at":"2020-10-29T15:42:01.000000",
"scheduler_trust":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
}
]
}
}

retention_policy_type
retention_policy_value
interval

{
"workload":{
"name":"<name of the Workload>",
"description":"<description of workload>",
"workload_type_id":"<ID of the chosen Workload Type>",
"source_platform":"openstack",
"instances":[
{
"instance-id":"<Instance ID>"
},
{
"instance-id":"<Instance ID>"
}
],
"jobschedule":{
"retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
"retention_policy_value":"<Integer>",
"timezone":"<timezone>",
"start_date":"<Date format: MM/DD/YYYY>",
"end_date":"<Date format: MM/DD/YYYY>",
"start_time":"<Time format: HH:MM AM/PM>",
"interval":"<Format: Integer hr>",
"enabled":"<True/False>"
},
"metadata":{
<key>:<value>,
"policy_id":"<policy_id>"
}
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 12:08:42 GMT
Content-Type: application/json
Content-Length: 1536
Connection: keep-alive
X-Compute-Request-Id: req-afb76abb-aa33-427e-8219-04fc2b91bce0
{
"workload":{
"created_at":"2020-10-29T15:42:01.000000",
"updated_at":"2020-10-29T15:42:18.000000",
"id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"availability_zone":"nova",
"workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
"name":"API created",
"description":"API description",
"interval":null,
"storage_usage":{
"usage":0,
"full":{
"snap_count":0,
"usage":0
},
"incremental":{
"snap_count":0,
"usage":0
}
},
"instances":[
{
"id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
"name":"cirros-4",
"metadata":{
}
},
{
"id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
"name":"cirros-3",
"metadata":{
}
}
],
"metadata":{
"hostnames":"[]",
"meta":"data",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"preferredgroup":"[]",
"workload_approx_backup_size":"6"
},
"jobschedule":{
"retention_policy_type":"Number of Snapshots to Keep",
"end_date":"15/27/2020",
"start_time":"3:00 PM",
"interval":"5",
"enabled":false,
"retention_policy_value":"10",
"timezone":"UTC+2",
"start_date":"10/27/2020",
"fullbackup_interval":"-1",
"appliance_timezone":"UTC",
"global_jobscheduler":true
},
"status":"available",
"error_msg":null,
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
}
],
"scheduler_trust":null
}
}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 12:31:42 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137

retention_policy_type
retention_policy_value
interval

{
"workload":{
"name":"<name>",
"description":"<description>",
"instances":[
{
"instance-id":"<instance_id>"
},
{
"instance-id":"<instance_id>"
}
],
"jobschedule":{
"retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
"retention_policy_value":"<Integer>",
"timezone":"<timezone>",
"start_time":"<HH:MM AM/PM>",
"end_date":"<MM/DD/YYYY>",
"interval":"<Integer hr>",
"enabled":"<True/False>"
},
"metadata":{
"meta":"data",
"policy_id":"<policy_id>"
}
}
}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:31:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:41:55 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 02 Nov 2020 13:52:30 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive

GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots
Lists all Snapshots.
POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
When creating a Snapshot it is possible to provide additional information via the request body shown at the end of this section
GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Shows the details of a specified Snapshot
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Deletes a specified Snapshot
GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel
Cancels the Snapshot process of a given Snapshot
all
boolean
admin role required - True lists all Snapshots of all Workloads
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Projects to fetch the Snapshots from
host
string
host name of the TVM that took the Snapshot
workload_id
string
ID of the Workload to list the Snapshots off
date_from
string
starting date of Snapshots to show
</p>
Format: YYYY-MM-DDTHH:MM:SS
string
ending date of Snapshots to show
</p>
Format: YYYY-MM-DDTHH:MM:SS
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot in
workload_id
string
ID of the Workload to take the Snapshot in
full
boolean
True creates a full Snapshot
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot from
snapshot_id
string
ID of the Snapshot to show
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to find the Snapshot in
snapshot_id
string
ID of the Snapshot to delete
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to find the Snapshot in
snapshot_id
string
ID of the Snapshot to cancel
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
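For illustration, a Snapshot creation call could look like the following curl sketch. All values are placeholders; it assumes the full flag is passed as a query parameter, and the request body follows the snapshot schema shown further below.

$ curl -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/$workload_id?full=False" \
      -H "X-Auth-Token: $token" \
      -H "X-Auth-Project-Id: $project_name" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d '{"snapshot": {"is_scheduled": false, "name": "api-snap", "description": "Snapshot created via API"}}'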
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 12:58:38 GMT
Content-Type: application/json
Content-Length: 266
Connection: keep-alive
X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9
{
"snapshots":[
{
"id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
"created_at":"2020-11-02T14:03:18.000000",
"status":"available",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"name":"snapshot",
"description":"-",
"host":"TVM1"
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 13:58:38 GMT
Content-Type: application/json
Content-Length: 283
Connection: keep-alive
X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c
{
"snapshot":{
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"created_at":"2020-11-04T13:58:37.694637",
"status":"creating",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"name":"API taken 2",
"description":"API taken description 2",
"host":""
}
}

{
"snapshot":{
"is_scheduled":<true/false>,
"name":"<name>",
"description":"<description>"
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:07:18 GMT
Content-Type: application/json
Content-Length: 6609
Connection: keep-alive
X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd
{
"snapshot":{
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":"2020-11-04T14:06:03.000000",
"finished_at":"2020-11-04T14:06:03.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
"snapshot_type":"full",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"instances":[
{
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"name":"cirros-2",
"status":"available",
"metadata":{
"availability_zone":"nova",
"config_drive":"",
"data_transfer_time":"0",
"object_store_transfer_time":"0",
"root_partition_type":"Linux",
"trilio_ordered_interfaces":"192.168.100.80",
"vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"workload_name":"Workload_1"
},
"flavor":{
"vcpus":"1",
"ram":"512",
"disk":"1",
"ephemeral":"0"
},
"security_group":[
{
"name":"default",
"security_group_type":"neutron"
}
],
"nics":[
{
"mac_address":"fa:16:3e:cf:10:91",
"ip_address":"192.168.100.80",
"network":{
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"robert_internal",
"cidr":null,
"network_type":"neutron",
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
"name":"robert_internal",
"cidr":"192.168.100.0/24",
"ip_version":4,
"gateway_ip":"192.168.100.1"
}
}
}
],
"vdisks":[
{
"label":null,
"resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
"restore_size":1073741824,
"vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
"volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
"volume_size":"1",
"volume_type":"iscsi",
"volume_mountpoint":"/dev/vda",
"availability_zone":"nova",
"metadata":{
"readonly":"False",
"attached_mode":"rw"
}
}
]
},
{
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"name":"cirros-1",
"status":"available",
"metadata":{
"availability_zone":"nova",
"config_drive":"",
"data_transfer_time":"0",
"object_store_transfer_time":"0",
"root_partition_type":"Linux",
"trilio_ordered_interfaces":"192.168.100.176",
"vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"workload_name":"Workload_1"
},
"flavor":{
"vcpus":"1",
"ram":"512",
"disk":"1",
"ephemeral":"0"
},
"security_group":[
{
"name":"default",
"security_group_type":"neutron"
}
],
"nics":[
{
"mac_address":"fa:16:3e:cf:4d:27",
"ip_address":"192.168.100.176",
"network":{
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"robert_internal",
"cidr":null,
"network_type":"neutron",
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
"name":"robert_internal",
"cidr":"192.168.100.0/24",
"ip_version":4,
"gateway_ip":"192.168.100.1"
}
}
}
],
"vdisks":[
{
"label":null,
"resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
"restore_size":1073741824,
"vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
"volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"volume_size":"1",
"volume_type":"iscsi",
"volume_mountpoint":"/dev/vda",
"availability_zone":"nova",
"metadata":{
"readonly":"False",
"attached_mode":"rw"
}
}
]
}
],
"name":"API taken 2",
"description":"API taken description 2",
"host":"TVM1",
"size":44171264,
"restore_size":2147483648,
"uploaded_size":44171264,
"progress_percent":100,
"progress_msg":"Snapshot of workload is complete",
"warning_msg":null,
"error_msg":null,
"time_taken":428,
"pinned":false,
"metadata":[
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"backup_media_target",
"value":"10.10.2.20:/upstream"
},
{
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"cancel_requested",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:29.000000",
"updated_at":"2020-11-04T14:05:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"data_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"hostnames",
"value":"[\"cirros-1\", \"cirros-2\"]"
},
{
"created_at":"2020-11-04T14:05:29.000000",
"updated_at":"2020-11-04T14:05:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"object_store_transfer_time",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"pause_at_snapshot",
"value":"0"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"572a0b21-a415-498f-b7fa-6144d850ef56",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"policy_id",
"value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"preferredgroup",
"value":"[]"
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"topology",
"value":"\"\\\"\\\"\""
},
{
"created_at":"2020-11-04T14:05:57.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"33762790-8743-4e20-9f50-3505a00dbe76",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"key":"workload_approx_backup_size",
"value":"6"
}
],
"restores_info":""
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:18:36 GMT
Content-Type: application/json
Content-Length: 56
Connection: keep-alive
X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc
{"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Wed, 04 Nov 2020 14:26:44 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d

This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.
The chosen scenario follows an actively used Trilio customer environment.
Two OpenStack clouds are available: "OpenStack Cloud at Production Site" and "OpenStack Cloud at DR Site." Both clouds have Trilio installed as an OpenStack service to provide backup functionality for OpenStack workloads. Each OpenStack cloud is configured to use its unique S3 bucket for storing backup images. The contents of the S3 buckets are synchronized using the aws sync command. The syncing process is set up independently from Trilio.
This runbook will assume that the following is true:
Both OpenStack clusters have Trilio installed with a valid license.
It's important to note that the mapping of OpenStack cloud domains and projects at the production site to domains and projects of OpenStack cloud at the DR (Disaster Recovery) site is not done automatically by Trilio. This means that domains and projects are not matched based on their names alone.
Additionally, the user carrying out the Disaster Recovery process must have admin access to the cloud at the DR site.
Admin must create the following artifacts at the DR site:
Domains and Projects
Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones
In this scenario, the admin can recover a single workload at the DR site. To do this, follow the high-level process outlined below:
Sync the workload directories to the DR site S3 bucket
Ensure the correct mount paths are available.
Reassign the workload.
Restore the workload.
Assuming that the workload id is ac9cae9b-5e1b-4899-930c-6aa0600a2105, the workload prefix on the S3 bucket will be
Using the AWS S3 sync command, sync the workload backup images to the DR site.
After successfully synchronizing the workload backup images to the DR site, you can verify the integrity of the backup images. Log in to any datamover container at the DR site and cd to the S3 fuse mount directory.
Use the qemu-img tool to explore the backup images.
The metadata for each workload includes a user ID and project ID. These IDs are not valid at the DR site, and the cloud admin must change them to valid user and project IDs.
Orphaned workloads are those in the S3 bucket that don't belong to any project in the current cloud. The orphaned workloads list should include the newly synced workload.
The cloud administrator must assign the identified orphaned workloads to their new projects.
The following provides the list of all available projects viewable by the admin user in the target domain.
Ensure that the new project has the correct trustee role assigned.
Reassign the workload to the target project. Please refer to the reassign workloads documentation for additional options.
After the workload has been assigned to the new project, please verify the workload is managed by the Target Trilio and is assigned to the right project and user.
The workload can be restored using Horizon following the procedure described here.
This runbook will use the CLI for demonstration. The CLI must be executed with the project credentials.
Get the list of workload snapshots and identify the snapshot you want to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore uses a restore.json file for the CLI command. The restore.json includes all the necessary mappings to the current project.
The user who has the backup trustee role can restore the snapshot to the DR cloud.
To verify the success of the restore from a Trilio perspective, check the restore status.
The high-level process for disaster recovery in the production cloud includes the following steps:
Ensure that Trilio is configured to use the S3 bucket at the DR site
Replicate the production S3 bucket to the DR site S3 bucket
Reassign workloads to domains/projects at the DR site
Trilio backups are qcow2 files and can be inspected using the qemu-img tool. On one of the datamover containers at the DR site, cd to the S3 fuse mount, navigate to one of the workload's snapshot directories, and perform the following operation on a VM disk.
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by OpenStack administrators.
An orphaned workload is one on the S3 bucket that is not assigned to any existing project in the cloud.
The orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects in a given domain.
Reassign the workload to the target project.
After the workload has been assigned to the new project verify the workload is assigned to the right project and user.
The reassigned workload can be restored using Horizon following the procedure described here.
We will use the CLI in this runbook.
List all Snapshots of the workload
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore uses a restore.json file for the CLI command. The restore.json file captures all the details of the restore operation, including the mapping of VMs to availability zones, the mapping of volume types to existing volume types, and network mappings.
To do the actual restore, use the following command:
To verify the success of the restore from a Trilio perspective, check the restore status.

workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
$ aws s3 sync s3://production-s3-bucket/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/ s3://dr-site-s3-bucket/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/

# qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+

# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+

{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+

$ aws s3 sync s3://production-s3-bucket/ s3://dr-site-s3-bucket/

# qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+

# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+

{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+

This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.
The chosen scenario follows an actively used Trilio customer environment.
There are two OpenStack clouds available, "OpenStack Cloud A" and "OpenStack Cloud B". "OpenStack Cloud B" is the Disaster Recovery restore point of "OpenStack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations write their Backups to NFS targets: "Trilio A" writes to "NFS A1" and "Trilio B" writes to "NFS B1". Each NFS Volume is synced against another NFS Volume on the other side: "NFS A1" syncs with "NFS B2" and "NFS B1" syncs with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.
This scenario will cover the Disaster Recovery of a single Workload and of a complete Cloud. All processes are done by the OpenStack administrator.
This runbook will assume that the following is true:
"Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license
"Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs
"Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side
For ease of writing, this runbook will assume from here on that "OpenStack Cloud A" is down and the Workloads are getting restored into "OpenStack Cloud B".
In the case of shared Tenant networks, beyond the floating IP the following additional requirement applies: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must be created.
In this scenario a single Workload can undergo Disaster Recovery while both Clouds are still active. To do so, the following high-level process needs to be followed:
Copy the Workload directories to the configured NFS Volume
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
As only a single Workload is to be recovered, it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".
It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.
Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:
If the Workload ID is not known, the Metadata available inside the Workload directories can be used to identify the correct Workload.
The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.
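A minimal sketch of this copy, assuming the sync copy of "NFS A1" and the "NFS B1" Volume are mounted on the Trilio VM at the hypothetical paths /mnt/nfs_b2 and /mnt/nfs_b1:

# hypothetical mount points; adjust to the actual mounts on the Trilio VM
$ rsync -a /mnt/nfs_b2/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /mnt/nfs_b1/
$ chown -R nova:nova /mnt/nfs_b1/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105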
Trilio backups use qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.
Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards, a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
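For example, the hash MTAuMTAuMi4yMDovdXBzdHJlYW0= seen above can be reproduced and decoded as follows (the -n flag matters, as a trailing newline would change the hash):

$ echo -n "10.10.2.20:/upstream" | base64
MTAuMTAuMi4yMDovdXBzdHJlYW0=
$ echo "MTAuMTAuMi4yMDovdXBzdHJlYW0=" | base64 -d
10.10.2.20:/upstream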
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook, the workload comes from the "NFS A1" NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount past a reboot, it is recommended to edit the fstab of all compute nodes accordingly.
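A minimal sketch of the bind on a compute node, using the example hashes above (which path is physically mounted and which is the bind depends on your environment):

# create the path the qcow2 backing files expect (source-cloud hash)
$ mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# bind the Target Cloud's existing NFS mount to that path
$ mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

The corresponding fstab entry could look like:

/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0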
Trilio workloads have clear ownership. When a workload is moved to a different cloud, it is necessary to change the ownership. The ownership can only be changed by OpenStack administrators.
To fulfill the required tasks, an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to that installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the admin user in the target domain.
To allow project owners to work with the workloads as well, they get assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project, it is recommended to verify that the workload is managed by the Target Trilio and is assigned to the right project and user.
The reassigned workload can be restored using Horizon following the procedure described here.
This runbook will continue on the CLI-only path.
To be able to do the necessary selective restore, a few pieces of information about the snapshot to be restored are required. The following process will provide all the necessary information.
List all Snapshots of the workload to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore, use the following command:
To verify the success of the restore from a Trilio perspective, check the restore status.
After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.
Delete the workload that got restored.
The Trilio database follows the OpenStack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot, or Restore that gets deleted is only marked as deleted.
To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.
Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here:
After all restores for the target project have been completed, it is recommended to remove the used admin user from the project again.
This scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery, the following high-level process needs to be followed:
Reconfigure the Target Trilio installation
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Before the Disaster Recovery Process can start, it is necessary to make the backups to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process, all backups of the Target Region are on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
To add NFS B2 to the Trilio Appliance cluster, Trilio can either be reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list. A space after the comma is not necessary, but can be set.
Write and close the workloadmgr.conf
Restart the wlm-workloads service
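A minimal sketch of these steps, assuming the NFS mounts are defined by the vault_storage_nfs_export option (verify the option name and file path against your own workloadmgr.conf):

# workloadmgr.conf - NFS mounts line, with NFS B2 appended (placeholder paths)
vault_storage_nfs_export = <NFS_B1_IP>:/<NFS_B1_path>,<NFS_B2_IP>:/<NFS_B2_path>

$ systemctl restart wlm-workloads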
Trilio integrates natively with the OpenStack deployment tools. When using the Red Hat director or JuJu charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To add NFS B2 to the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list. A space after the comma is not necessary, but can be set.
Write and close the tvault-contego.conf
Restart the tvault-contego service
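Again as a sketch, assuming the same vault_storage_nfs_export option name (verify against your tvault-contego.conf):

# tvault-contego.conf - NFS mounts line, with NFS B2 appended (placeholder paths)
vault_storage_nfs_export = <NFS_B1_IP>:/<NFS_B1_path>,<NFS_B2_IP>:/<NFS_B2_path>

$ systemctl restart tvault-contego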
Trilio backups use qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.
Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards, a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook, the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount across reboots, it is recommended to edit the fstab of all compute nodes accordingly.
Trilio workloads have clear ownership. When a workload is moved to a different cloud, it is necessary to change the ownership. The ownership can only be changed by OpenStack administrators.
To fulfill the required tasks, a user with the admin role is used. This user will be used until the workload has been restored. Therefore, it is necessary to give this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to it. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin user in the target_domain.
To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
The reassigned workload can be restored using Horizon, following the procedure described in the Restores chapter.
This runbook will continue on the CLI-only path.
To be able to do the necessary selective restore, a few pieces of information about the snapshot to be restored are required. The following process provides all necessary information.
List all Snapshots of the workload to identify the Snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process, all backups of the Target Region are on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery process has finished and the original Trilio configuration has been restored.
To remove NFS B2 from the Trilio Appliance cluster, the configuration file needs to be edited and all services restarted. This needs to be repeated on every Trilio Appliance.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the workloadmgr.conf
Restart the wlm-workloads service
Trilio integrates natively with the OpenStack deployment tools. When using Red Hat Director or Juju charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Remove NFS B2 from the comma-separated list.
Write and close the tvault-contego.conf
Restart the tvault-contego service
After the Disaster Recovery process has been successfully completed and the Trilio installation reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.
The Trilio database follows the OpenStack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot, or Restore that gets deleted is only marked as deleted.
To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.
Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here:
After all restores for the target project have completed, it is recommended to remove the used admin user from the project again.
GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Requests the list of available Workload Policies
GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Requests the details of a given policy
GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>
Requests the list of Policies assigned to a Project.
POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Creates a Policy with the given parameters
PUT https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Updates a Policy with the given information
POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Assigns the Policy to the given Projects
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Deletes a given Policy
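As a quick illustration, the list call can be issued with curl; $(tvm_address), $(tenant_id), and the header values below are placeholders that must be replaced with real values:

curl -X GET "https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy" \
     -H "X-Auth-Project-Id: <project>" \
     -H "X-Auth-Token: <token>" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"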
A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.
To reach the list of Restores for a Snapshot follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
--snapshot_id <snapshot_id> ➡️ ID of the Snapshot to show the restores of
To reach the detailed Restore overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
The Restore Details Tab shows the most important information about the Restore.
Name
Description
Restore Type
Status
List of VMs restored
restored VM Name
restored VM Status
restored VM ID
The Misc tab provides additional Metadata information.
Creation Time
Restore ID
Snapshot ID containing the Restore
Workload
<restore_id> ➡️ ID of the restore to be shown
--output <output> ➡️ Option to get additional restore details. Specify --output metadata for restore metadata; other possible values are --output networks, --output subnets, --output routers, and --output flavors.
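A typical CLI call built from the parameters above would look like this sketch:

# workloadmgr restore-show <restore_id> --output networks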
Once a Restore is no longer needed, it can be safely deleted from a Workload.
There are two ways to delete a Restore.
To delete a single Restore through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
To delete one or more Restores through the Restore list do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
<restore_id> ➡️ ID of the restore to be deleted
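The CLI call built from the parameter above would look like the following sketch (command name assumed from the workloadmgr client conventions):

# workloadmgr restore-delete <restore_id>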
Ongoing Restores can be canceled.
To cancel a Restore in Horizon follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
<restore_id> ➡️ ID of the restore to be deleted
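A sketch of the corresponding CLI call, assuming the workloadmgr client provides a restore-cancel command:

# workloadmgr restore-cancel <restore_id>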
The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:
be located in the same cluster in the same datacenter
use the same storage domain
connect to the same network
have the same flavor
The user can't change any Metadata.
There are two ways to start a One Click Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
<snapshot_id> ➡️ ID of the snapshot to restore.
--display-name <display-name> ➡️ Optional name for the restore.
--display-description <display-description> ➡️ Optional description for the restore
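Put together, a One Click Restore from the CLI would look like this sketch (command name assumed from the workloadmgr client conventions):

# workloadmgr snapshot-oneclick-restore --display-name <display-name> --display-description <display-description> <snapshot_id>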
The Selective Restore is the most complex restore Trilio has to offer. It allows the user to adapt the restored VMs to their exact needs.
With the selective restore the following things can be changed:
Which VMs are getting restored
Name of the restored VMs
Which networks to connect with
Which Storage domain to use
There are two ways to start a Selective Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
<snapshot_id> ➡️ ID of the snapshot to restore.
--display-name <display-name> ➡️ Optional name for the restore.
--display-description <display-description> ➡️ Optional description for the restore
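Combined with the restore.json file described later in this document, the Selective Restore CLI call looks like this:

# workloadmgr snapshot-selective-restore --filename restore.json <snapshot_id>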
The Inplace Restore covers use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.
It allows the user to restore only the data of a selected Volume, which is part of a backup.
There are two ways to start an Inplace Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
<snapshot_id> ➡️ ID of the snapshot to restore.
--display-name <display-name> ➡️ Optional name for the restore.
--display-description <display-description> ➡️ Optional description for the restore
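A sketch of the corresponding CLI call, assuming the workloadmgr client's snapshot-inplace-restore command together with the restore.json described later:

# workloadmgr snapshot-inplace-restore --filename restore.json <snapshot_id>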
The workloadmgr CLI client uses a restore.json file to define the restore parameters for the selective and the inplace restore.
An example of this restore.json for a selective restore is shown below. A detailed analysis and explanation is given afterwards.
Before the exact details of the restore are provided, it is necessary to provide the general metadata for the restore.
name ➡️ the name of the restore
description ➡️ the description of the restore
oneclickrestore <True/False> ➡️ has to be False for selective and inplace restores
The Selective Restore requires a lot of information to be able to execute the restore as desired.
This information is divided into 3 components:
instances
restore_topology
networks_mapping
This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.
Each instance requires the following information:
id ➡️ original id of the instance
include <True/False> ➡️ Set True when the instance shall be restored
name ➡️ new name of the instance
availability_zone ➡️ Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"
nics ➡️ list of Neutron ports that shall be attached to the restored instance; leave empty when using a network topology restore or network mapping
vdisks ➡️ List of all Volumes that are part of the instance. Each Volume requires the following information:
id ➡️ Original ID of the Volume
new_volume_type ➡️ name of the Volume Type the restored Volume shall use
The root disk needs to be at least as big as the root disk of the backed up instance was.
The following example describes a single instance with all values.
Do not mix a network topology restore with network mapping.
To activate a network topology restore, set restore_topology to True.
To activate network mapping, set restore_topology to False.
When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:
networks ➡️ list of snapshot_network and target_network pairs
snapshot_network ➡️ the network backed up in the snapshot, contains the following:
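A minimal sketch of both variants, following the selective-restore example in this document; the exact sub-fields of snapshot_network and target_network are assumptions (identification of network and subnet by their IDs):

Network topology restore:
u'restore_topology':True,
u'networks_mapping':{
   u'networks':[
   ]
}

Network mapping:
u'restore_topology':False,
u'networks_mapping':{
   u'networks':[
      {
         u'snapshot_network':{
            u'id':u'<id of the network in the snapshot>',
            u'subnet':{ u'id':u'<id of the subnet in the snapshot>' }
         },
         u'target_network':{
            u'id':u'<id of the network on the target cloud>',
            u'subnet':{ u'id':u'<id of the subnet on the target cloud>' }
         }
      }
   ]
}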
The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.
id ➡️ ID of the instance inside the Snapshot
restore_boot_disk ➡️ Set to True if the boot disk of that VM shall be restored.
include ➡️ Set to True if at least one Volume from this instance shall be restored
vdisks ➡️ List of disks, that are connected to the instance. Each disk contains:
id
No network information is required, but the networks_mapping field has to exist with an empty value for the restore to work.
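A minimal sketch of an inplace restore.json, built only from the fields described above; restore_type and the general metadata follow the selective example, and any further vdisk fields are omitted:

{
   u'description':u'<description of the restore>',
   u'name':u'<name of the restore>',
   u'oneclickrestore':False,
   u'restore_type':u'inplace',
   u'type':u'openstack',
   u'openstack':{
      u'instances':[
         {
            u'id':u'<ID of the instance inside the Snapshot>',
            u'restore_boot_disk':True,
            u'include':True,
            u'vdisks':[
               {
                  u'id':u'<ID of the disk to restore>'
               }
            ]
         }
      ],
      u'networks_mapping':{
         u'networks':[
         ]
      }
   }
}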

The Workload Policy API calls listed earlier use the following request parameters and headers.

List Workload Policies:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
Show a Workload Policy:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
policy_id (string) ➡️ ID of the Policy to show
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
List Policies assigned to a Project:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
project_id (string) ➡️ ID of the Project to fetch assigned Policies from
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
Create a Workload Policy:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Content-Type (string) ➡️ application/json
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
Update a Workload Policy:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
policy_id (string) ➡️ ID of the Policy to update
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Content-Type (string) ➡️ application/json
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
Assign a Workload Policy:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
policy_id (string) ➡️ ID of the Policy to assign
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Content-Type (string) ➡️ application/json
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
Delete a Workload Policy:
tvm_address (string) ➡️ IP or FQDN of Trilio service
tenant_id (string) ➡️ ID of Tenant/Project
policy_id (string) ➡️ ID of the Policy to delete
X-Auth-Project-Id (string) ➡️ Project to authenticate against
X-Auth-Token (string) ➡️ Authentication token to use
Accept (string) ➡️ application/json
User-Agent (string) ➡️ python-workloadmgrclient
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 13:56:08 GMT
Content-Type: application/json
Content-Length: 1399
Connection: keep-alive
X-Compute-Request-Id: req-4618161e-64e4-489a-b8fc-f3cb21d94096
{
"policy_list":[
{
"id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":"2020-10-26T12:52:22.000000",
"status":"available",
"name":"Gold",
"description":"",
"metadata":[
],
"field_values":[
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_value",
"value":"10"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"interval",
"value":"5"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"79070c67-9021-4220-8a79-648ffeebc144",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_type",
"value":"Number of Snapshots to Keep"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"fullbackup_interval",
"value":"-1"
}
]
}
]
}

# mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt

workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/…
/…/workload_<id>/workload_db <<< Contains User ID and Project ID of Workload owner
/…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected by the Workload

# cp /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105

# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0

# source {customer admin rc file}
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+

# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+

{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+

# workloadmgr workload-delete <workload_id>

# source {customer admin rc file}
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
# vi /etc/workloadmgr/workloadmgr.conf

Line before the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

Line after the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

# systemctl restart wlm-workloads

# vi /etc/tvault-contego/tvault-contego.conf

Line before the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

Line after the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

# systemctl restart tvault-contego

# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0

# source {customer admin rc file}
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>

# workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+------------+--------------------------------------+----------------------------------+----------------------------------+
| Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 | 329880dedb4cd357579a3279835f392 |
| Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 | 329880dedb4cd357579a3279835f392 |
+------------+--------------------------------------+----------------------------------+----------------------------------+

# openstack project list --domain <target_domain>
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 01fca51462a44bfa821130dce9baac1a | project1 |
| 33b4db1099ff4a65a4c1f69a14f932ee | project2 |
| 9139e694eb984a4a979b5ae8feb955af | project3 |
+----------------------------------+----------+

# openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| Role | User | Group | Project | Domain | Inherited |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
| 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
| 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 | | 8e16700ae3614da4ba80a4e57d60cdb9 | | False |
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+

# workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| Name | ID | Project ID | User ID |
+-----------+--------------------------------------+----------------------------------+----------------------------------+
| project1 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |
+-----------+--------------------------------------+----------------------------------+----------------------------------+

# workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
+-------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+-------------------+------------------------------------------------------------------------------------------------------+
| availability_zone | nova |
| created_at | 2019-04-18T02:19:39.000000 |
| description | Test Linux VMs |
| error_msg | None |
| id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
| instances | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id": |
| | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}] |
| interval | None |
| jobschedule | True |
| name | Test Linux |
| project_id | 2fc4e2180c2745629753305591aeb93b |
| scheduler_trust | None |
| status | available |
| storage_usage | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
| | "snap_count": 13}} |
| updated_at | 2019-11-15T02:32:43.000000 |
| user_id | 72e65c264a694272928f5d84b73fe9ce |
| workload_type_id | f82ce76f-17fe-438b-aa37-7a023058e50d |
+-------------------+------------------------------------------------------------------------------------------------------+

# workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| Created At | Name | ID | Workload ID | Snapshot Type | Status | Host |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
| 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | full | available | Upstream2 |
| 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
| 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 | incremental | available | Upstream2 |
+----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+

# workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| Networks | Value |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
| ip_address | 172.20.20.20 |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:74:58:bb |
| | |
| ip_address | 172.20.20.13 |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| network | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'} |
| mac_address | fa:16:3e:6b:46:ae |
+-------------+----------------------------------------------------------------------------------------------------------------------------------------------+

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
+-------------------+--------------------------------------+
| Snapshot property | Value |
+-------------------+--------------------------------------+
| description | None |
| host | Upstream2 |
| id | 7e39e544-537d-4417-853d-11463e7396f9 |
| name | jobscheduler |
| progress_percent | 100 |
| restore_size | 44040192 Bytes or Approx (42.0MB) |
| restores_info | |
| size | 1310720 Bytes or Approx (1.2MB) |
| snapshot_type | incremental |
| status | available |
| time_taken | 154 Seconds |
| uploaded_size | 1310720 |
| workload_id | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
+-------------------+--------------------------------------+
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Instances | Value |
+----------------+---------------------------------------------------------------------------------------------------------------------+
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-1 |
| ID | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| | |
| Status | available |
| Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
| Flavor | {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'} |
| Name | Test-Linux-2 |
| ID | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| | |
+----------------+---------------------------------------------------------------------------------------------------------------------+
+-------------------+--------------------------------------------------+
| Vdisks | Value |
+-------------------+--------------------------------------------------+
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a |
| volume_name | 0027b140-a427-46cb-9ccf-7895c7624493 |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 0027b140-a427-46cb-9ccf-7895c7624493 |
| availability_zone | nova |
| vm_id | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
| volume_mountpoint | /dev/vda |
| restore_size | 22020096 |
| resource_id | 8007ed89-6a86-447e-badb-e49f1e92f57a |
| volume_name | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| volume_type | None |
| label | None |
| volume_size | 1 |
| volume_id | 2a7f9e78-7778-4452-af5b-8e2fa43853bd |
| availability_zone | nova |
| vm_id | 3fd869b2-16bd-4423-b389-18d19d37c8e0 |
| metadata | {u'readonly': u'False', u'attached_mode': u'rw'} |
| | |
+-------------------+--------------------------------------------------+

{
u'description':u'<description of the restore>',
u'oneclickrestore':False,
u'restore_type':u'selective',
u'type':u'openstack',
u'name':u'<name of the restore>',
u'openstack':{
u'instances':[
{
u'name':u'<name instance 1>',
u'availability_zone':u'<AZ instance 1>',
u'nics':[ #####Leave empty for network topology restore
],
u'vdisks':[
{
u'id':u'<old disk id>',
u'new_volume_type':u'<new volume type name>',
u'availability_zone':u'<new cinder volume AZ>'
}
],
u'flavor':{
u'ram':<RAM in MB>,
u'ephemeral':<GB of ephemeral disk>,
u'vcpus':<# vCPUs>,
u'swap':u'<GB of Swap disk>',
u'disk':<GB of boot disk>,
u'id':u'<id of the flavor to use>'
},
u'include':<True/False>,
u'id':u'<old id of the instance>'
} #####Repeat for each instance in the snapshot
],
u'restore_topology':<True/False>,
u'networks_mapping':{
u'networks':[ #####Leave empty for network topology restore
]
}
}
}
# workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}

[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| Created At | Name | ID | Snapshot ID | Size | Status |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
| 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
+----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
[root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
+------------------+------------------------------------------------------------------------------------------------------+
| Property | Value |
+------------------+------------------------------------------------------------------------------------------------------+
| created_at | 2019-09-24T12:44:38.000000 |
| description | - |
| error_msg | None |
| finished_at | 2019-09-24T12:46:07.000000 |
| host | Upstream2 |
| id | 5b4216d0-4bed-460f-8501-1589e7b45e01 |
| instances | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata": |
| | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}] |
| name | OneClick Restore |
| progress_msg | Restore from snapshot is complete |
| progress_percent | 100 |
| project_id | 8e16700ae3614da4ba80a4e57d60cdb9 |
| restore_options | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
| | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]}, |
| | "type": "openstack", "name": "OneClick Restore"} |
| restore_type | restore |
| size | 41126400 |
| snapshot_id | 5928554d-a882-4881-9a5c-90e834c071af |
| status | available |
| time_taken | 89 |
| updated_at | 2019-09-24T12:44:38.000000 |
| uploaded_size | 41126400 |
| user_id | d5fbd79f4e834f51bfec08be6d3b2ff2 |
| warning_msg | None |
| workload_id | 02b1aca2-c51a-454b-8c0f-99966314165e |
+------------------+------------------------------------------------------------------------------------------------------+

# vi /etc/workloadmgr/workloadmgr.conf

Line before the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Line after the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

# systemctl restart wlm-workloads

# vi /etc/tvault-contego/tvault-contego.conf

Line before the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>

Line after the change:
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>

# systemctl restart tvault-contego

# source {customer admin rc file}
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
# openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
# openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Fri, 13 Nov 2020 14:18:42 GMT
Content-Type: application/json
Content-Length: 2160
Connection: keep-alive
X-Compute-Request-Id: req-0583fc35-0f80-4746-b280-c17b32cc4b25
{
"policy":{
"id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":"2020-10-26T12:52:22.000000",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"status":"available",
"name":"Gold",
"description":"",
"field_values":[
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_value",
"value":"10"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"interval",
"value":"5"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"79070c67-9021-4220-8a79-648ffeebc144",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"retention_policy_type",
"value":"Number of Snapshots to Keep"
},
{
"created_at":"2020-10-26T12:52:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"policy_field_name":"fullbackup_interval",
"value":"-1"
}
],
"metadata":[
],
"policy_assignments":[
{
"created_at":"2020-10-26T12:53:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"3e3f1b12-1b1f-452b-a9d2-b6e5fbf2ab18",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"policy_name":"Gold",
"project_name":"admin"
},
{
"created_at":"2020-10-29T15:39:13.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"Gold",
"project_name":"robert"
}
]
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:14:01 GMT
Content-Type: application/json
Content-Length: 338
Connection: keep-alive
X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2
{
"policies":[
{
"created_at":"2020-10-29T15:39:13.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
"policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"Gold",
"project_name":"robert"
}
]
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:24:03 GMT
Content-Type: application/json
Content-Length: 1413
Connection: keep-alive
X-Compute-Request-Id: req-05e05333-b967-4d4e-9c9b-561f1a7add5a
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"status":"available",
"name":"CLI created",
"description":"CLI created",
"metadata":[
],
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"4 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"10"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of Snapshots to Keep"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"-1"
}
]
}
}

{
"workload_policy":{
"field_values":{
"fullbackup_interval":"<-1 for never / 0 for always / Integer>",
"retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
"interval":"<Integer hr>",
"retention_policy_value":"<Integer>"
},
"display_name":"<String>",
"display_description":"<String>",
"metadata":{
<key>:<value>
}
}
}

HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:32:13 GMT
Content-Type: application/json
Content-Length: 1515
Connection: keep-alive
X-Compute-Request-Id: req-9104cf1c-4025-48f5-be92-1a6b7117bf95
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"status":"available",
"name":"API created",
"description":"API created",
"metadata":[
],
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"8 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"20"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of days to retain Snapshots"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"7"
}
]
}
}
{
"policy":{
"field_values":{
"fullbackup_interval":"<-1 for never / 0 for always / Integer>",
"retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
"interval":"<Integer hr>",
"retention_policy_value":"<Integer>"
},
"display_name":"String",
"display_description":"String",
"metadata":{
<key>:<value>
}
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:46:23 GMT
Content-Type: application/json
Content-Length: 2318
Connection: keep-alive
X-Compute-Request-Id: req-169a53e4-b1c9-4bd1-bf68-3416d177d868
{
"policy":{
"id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:24:01.000000",
"user_id":"adfa32d7746a4341b27377d6f7c61adb",
"project_id":"4dfe98a43bfa404785a812020066b4d6",
"status":"available",
"name":"API created",
"description":"API created",
"field_values":[
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"767ae42d-caf0-4d36-963c-9b0e50991711",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"interval",
"value":"8 hr"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_value",
"value":"20"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"retention_policy_type",
"value":"Number of days to retain Snapshots"
},
{
"created_at":"2020-11-17T09:24:01.000000",
"updated_at":"2020-11-17T09:31:45.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"policy_field_name":"fullbackup_interval",
"value":"7"
}
],
"metadata":[
],
"policy_assignments":[
{
"created_at":"2020-11-17T09:46:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"4794ed95-d8d1-4572-93e8-cebd6d4df48f",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"project_id":"cbad43105e404c86a1cd07c48a737f9c",
"policy_name":"API created",
"project_name":"services"
},
{
"created_at":"2020-11-17T09:46:22.000000",
"updated_at":null,
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"68f187a6-3526-4a35-8b2d-cb0e9f497dd8",
"policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"policy_name":"API created",
"project_name":"robert"
}
]
},
"failed_ids":[
]
}
{
"policy":{
"remove_projects":[
"<project_id>"
],
"add_projects":[
"<project_id>",
]
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Tue, 17 Nov 2020 09:56:03 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 0
Connection: keep-alive
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
Identify the restore to show
Click the restore name
Time taken
Size
Progress Message
Progress
Host
Restore Options
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Click "Delete Restore" in the line of the restore in question
Confirm by clicking "Delete Restore"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Enter the Snapshot by clicking the Snapshot name
Navigate to the Restore tab
Check the checkbox for each Restore that shall be deleted
Click "Delete Restore" in the menu above
Confirm by clicking "Delete Restore"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Identify the ongoing Restore
Click "Cancel Restore" in the line of the restore in question
Confirm by clicking "Cancel Restore"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click "One Click Restore" in the same line as the identified Snapshot
(Optional) Provide a name / description
Click "Create"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "One Click Restore"
(Optional) Provide a name / description
Click "Create"
Which DataCenter / Cluster to restore into
Which flavor the restored VMs will use
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
--filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file as a reference or replace its values.
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
--filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file as a reference or replace its values.
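For illustration, a hedged example of an in-place restore with a customized file follows; the copied path and display name are placeholders, and the full command syntax appears later in this reference.

# Copy the default file, edit it as needed, then point the CLI at it (paths are illustrative)
cp /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json /home/stack/my-restore.json
workloadmgr snapshot-inplace-restore --display-name "inplace-1" \
                                     --filename /home/stack/my-restore.json \
                                     <snapshot_id>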
restore_type <oneclick/selective/inplace> ➡️ defines the type of restore that is intended
type openstack ➡️ defines that the restore is into an OpenStack cloud
openstack ➡️ starts the exact definition of the restore
id ➡️ ID of the Neutron port to use
mac_address ➡️ Mac Address of the Neutron port
ip_address ➡️ IP Address of the Neutron port
network ➡️ network the port is assigned to. Contains the following information:
id ➡️ ID of the network the Neutron port is part of
subnet ➡️ the subnet the Neutron port is part of, containing the ID of the original subnet
availability_zone ➡️ The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova
flavor➡️Defines the Flavor to use for the restored instance. Contains the following information:
ram➡️How much RAM the restored instance will have (in MB)
ephemeral➡️How big the ephemeral disk of the instance will be (in GB)
vcpus➡️How many vcpus the restored instance will have available
swap➡️How big the Swap of the restored instance will be (in MB). Leave empty for none.
disk➡️Size of the root disk the instance will boot with
id➡️ID of the flavor that matches the provided information
id ➡️ Original ID of the network backed up
subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:
id ➡️ Original ID of the subnet backed up
target_network ➡️ the existing network to map to, contains the following
id ➡️ ID of the network to map to
subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:
id ➡️ ID of the subnet to map to
restore_cinder_volume ➡️ set to true if the Volume shall be restored
workloadmgr restore-list [--snapshot_id <snapshot_id>]
workloadmgr restore-show [--output <output>] <restore_id>
workloadmgr restore-delete <restore_id>
workloadmgr restore-cancel <restore_id>
workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
                                      [--display-description <display-description>]
                                      <snapshot_id>
workloadmgr snapshot-selective-restore [--display-name <display-name>]
                                       [--display-description <display-description>]
                                       [--filename <filename>]
                                       <snapshot_id>
workloadmgr snapshot-inplace-restore [--display-name <display-name>]
                                     [--display-description <display-description>]
                                     [--filename <filename>]
                                     <snapshot_id>
{
oneclickrestore: False,
restore_type: selective,
type: openstack,
openstack:
{
instances:
[
{
include: True,
id: 890888bc-a001-4b62-a25b-484b34ac6e7e,
name: cdcentOS-1,
availability_zone:,
nics: [],
vdisks:
[
{
id: 4cc2b474-1f1b-4054-a922-497ef5564624,
new_volume_type:,
availability_zone: nova
}
],
flavor:
{
ram: 512,
ephemeral: 0,
vcpus: 1,
swap:,
disk: 1,
id: 1
}
}
],
restore_topology: True,
networks_mapping:
{
networks: []
}
}
}
'instances':[
{
'name':'cdcentOS-1-selective',
'availability_zone':'US-East',
'nics':[
{
'mac_address':'fa:16:3e:00:bd:60',
'ip_address':'192.168.0.100',
'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
'network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
}
}
],
'vdisks':[
{
'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
'new_volume_type':'ceph',
'availability_zone':'nova'
}
],
'flavor':{
'ram':2048,
'ephemeral':0,
'vcpus':1,
'swap':'',
'disk':20,
'id':'2'
},
'include':True,
'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
}
]
restore_topology:True
restore_topology:False
{
'oneclickrestore':False,
'openstack':{
'instances':[
{
'name':'cdcentOS-1-selective',
'availability_zone':'US-East',
'nics':[
{
'mac_address':'fa:16:3e:00:bd:60',
'ip_address':'192.168.0.100',
'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
'network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
}
}
],
'vdisks':[
{
'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
'new_volume_type':'ceph',
'availability_zone':'nova'
}
],
'flavor':{
'ram':2048,
'ephemeral':0,
'vcpus':1,
'swap':'',
'disk':20,
'id':'2'
},
'include':True,
'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
}
],
'restore_topology':False,
'networks_mapping':{
'networks':[
{
'snapshot_network':{
'subnet':{
'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
},
'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
},
'target_network':{
'subnet':{
'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
},
'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
'name':'internal'
}
}
]
}
},
'restore_type':'selective',
'type':'openstack'
}{
'oneclickrestore':False,
'restore_type':'inplace',
'type':'openstack',
'openstack':{
'instances':[
{
'restore_boot_disk':True,
'include':True,
'id':'ba8c27ab-06ed-4451-9922-d919171078de',
'vdisks':[
{
'restore_cinder_volume':True,
'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6',
}
]
}
]
}
}
id ➡️ ID of the network the Neutron port is part of
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Refer to the below-mentioned acceptable values for the placeholders triliovault_tag, trilio_branch, RHOSP_version and CONTAINER-TAG-VERSION in this document as per the Openstack environment:
Backup target storage is used to store the backup images taken by Trilio. The details needed for configuration are listed below.
The following backup target types are supported by Trilio
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the undercloud node on an already installed RHOSP environment.
The overcloud-deploy command has to be run successfully already and the overcloud should be available.
All commands need to be run as a stack user on the undercloud node
The following command clones the triliovault-cfg-scripts github repository.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain certificate file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files
Please refer to the referenced page to collect the necessary artifacts before continuing further.
Rename the file to vddk.tar.gz and place it at:
Copy the vCenter SSL certificate file to:
Trilio contains multiple services. Add these services to your roles_data.yaml.
Add the following services to the roles_data.yaml
All commands need to be run as a 'stack' user
This service needs to share the same role as the keystone and database service.
In the case of the pre-defined roles, these services will run on the Controller role.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the Compute role.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
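The exact service lines for both role additions appear in the code section later in this document. For orientation only, here is a minimal hedged sketch of how the resulting roles_data.yaml entries might look, assuming the default pre-defined Controller and Compute roles:

- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    # ... other existing controller services ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioWlmApi
    - OS::TripleO::Services::TrilioWlmWorkloads
    - OS::TripleO::Services::TrilioWlmScheduler
    - OS::TripleO::Services::TrilioWlmCron
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    # ... other existing compute services ...
    - OS::TripleO::Services::TrilioDatamover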
All commands need to be run as a 'stack' user
Trilio containers are pushed to the RedHat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
There are three registry methods available in the RedHat OpenStack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to a redhat registry or any other private registry that they want to use.
The user needs to provide credentials for the registry in containers-prepare-parameter.yaml file.
Make sure the other OpenStack service images also use the same method to pull container images. If that is not the case, you cannot use this method.
Populate containers-prepare-parameter.yaml with content like the following. The important parameters are push_destination: false, ContainerImageRegistryLogin: true, and the registry credentials.
Trilio container images are published to registry registry.connect.redhat.com.
Credentials of registry 'registry.redhat.io' will work for registry.connect.redhat.com registry too.
Red Hat document for the remote registry method:
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the above Red Hat document.
3. Make sure you have network connectivity to the above registries from all overcloud nodes; otherwise, the image pull operation will fail.
4. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below:
At this step, you have configured Trilio image URLs in the necessary environment file.
Follow this section when 'local registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry.
Trilio provides shell scripts that will pull the containers from registry.connect.redhat.com and push them to the undercloud and update the trilio_env.yaml.
At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given
Populate the trilio_env.yaml with container URLs.
At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.
Edit /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry, keep it as it is.
The user needs to generate random passwords for Trilio resources using the following script.
This script generates random passwords for all Trilio resources that will be created in the OpenStack cloud.
For this section only, the user needs to source the cloudrc file of the overcloud node.
The output will be written to
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controllers and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
All commands need to be run as a 'stack' user on undercloud node
Include defaults.yaml in the overcloud deploy command with the `-e` option as shown below. This YAML file holds the default values, like the default Trustee Role (creator) and the Keystone endpoint interface (Internal). There are some other parameters as well that users can update as per their requirements.
trilio_env.yaml
roles_data.yaml
passwords.yaml
To include new environment files, use the -e option; for roles data files, use the -r option.
Below is an example of an overcloud deploy command with Trilio environment:
Please follow the referenced page to verify the deployment.
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights:
=> If any Trilio containers do not start well or are in a restarting state on the Controller/Compute node, use the following logs to debug.
Create triliovault_nfs_map_input.yml in the current directory and provide the compute host and NFS share/IP map. Get the overcloud Controller and Compute hostnames from the following command; check the Name column and use the exact host names in the triliovault_nfs_map_input.yml file.
Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to the referenced page for details about the structure.
Below is an example of how you can set the multi-IP NFS details:
You cannot configure different IPs for the Controller/WLM nodes; you need to use the same share on all the controller nodes. You can configure different IPs for the Compute/Datamover nodes.
If pip isn't available please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be stored in the triliovault_nfs_map_output.yml file
Open file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
The following is the HAProxy conf file location on the HAProxy nodes of the overcloud. The Trilio Datamover API service HAProxy configuration gets added to this file.
Trilio Datamover HAproxy default configuration from the above file looks as follows:
The user can change the following configuration parameter values.
To change these default values, you need to do the following steps. i) On the undercloud node, open the following file for editing.
ii) Search the following entries and edit as required
iii) Save the changes and do the overcloud deployment again to reflect these changes for overcloud nodes.
i) If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover Service container, they can use the variable 'TrilioDatamoverOptVolumes', which is available in the below file.
To add extra volume/directory mounts to the Trilio Datamover Service container, the volumes/directories must already be mounted on the Compute host.
ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. Users need to edit the file and add their volume mounts in the below format.
iii) Lastly, run the overcloud deploy/update again to apply these changes.
After a successful deployment, the volume/directory will be mounted inside the Trilio Datamover Service container.
We use Cinder's Ceph user for interacting with Ceph-backed Cinder storage. This user name is defined using the parameter 'ceph_cinder_user'.
| Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
| --- | --- | --- | --- | --- |
| 5.2.3 | 5.2.3-rhosp17.1, 5.2.3-rhosp16.2, 5.2.3-rhosp16.1 | 5.2.3 | RHOSP17.1, RHOSP16.2, RHOSP16.1 | 5.2.3 |
| 5.2.2 | 5.2.2-rhosp17.1, 5.2.2-rhosp16.2, 5.2.2-rhosp16.1 | 5.2.2 | RHOSP17.1, RHOSP16.2, RHOSP16.1 | 5.2.2 |
| 5.2.1 | 5.2.1-rhosp17.1, 5.2.1-rhosp16.2, 5.2.1-rhosp16.1 | 5.2.1 | RHOSP17.1, RHOSP16.2, RHOSP16.1 | 5.2.1 |
| 5.2.0 | 5.2.0-rhosp17.1, 5.2.0-rhosp16.2, 5.2.0-rhosp16.1 | 5.2.0 | RHOSP17.1, RHOSP16.2, RHOSP16.1 | 5.2.0 |
| 5.1.0 | 5.1.0-rhosp16.2, 5.1.0-rhosp16.1 | 5.1.0 | RHOSP16.2, RHOSP16.1 | 5.1.0 |
| 5.0.0 | 5.0.0-rhosp16.2, 5.0.0-rhosp16.1 | 5.0.0 | RHOSP16.2, RHOSP16.1 | 5.0.0 |
ContainerTriliovaultWlmImage
The Trilio WLM container image name has already been populated in the preparation of the container images.
Still, it is recommended to verify the container URL.
ContainerHorizonImage
The Horizon container image name has already been populated in the preparation of the container images.
Still, it is recommended to verify the container URL.
BackupTargetType
Default value is nfs.
Set either 'nfs' or 's3' as a target backend for snapshots taken by Triliovault
MultiIPNfsEnabled
Default value is False.
Set to True only if you want to use multiple IP/endpoint-based NFS shares as the backup target for TrilioVault.
NfsShares
Provide the NFS share you want to use as the backup target for snapshots taken by TrilioVault.
NfsOptions
This parameter sets the NFS mount options.
Keep the default values unless a special requirement exists.
S3Type
If your backup target is S3, provide either amazon_s3 or ceph_s3, depending on the S3 type.
S3AccessKey
Provide S3 Access key
S3SecretKey
Provide Secret key
S3RegionName
Provide the S3 region.
If your S3 type does not have a region parameter, just keep the parameter as it is.
S3Bucket
Provide S3 bucket name
S3EndpointUrl
Provide the S3 endpoint URL; if your S3 type does not require it, keep it as it is.
Not required for Amazon S3.
S3SignatureVersion
Provide the S3 signature version.
S3AuthVersion
Provide S3 auth version.
S3SslEnabled
Default value is False.
If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to 'True'; otherwise keep it as 'False'.
TrilioDatamoverOptVolumes
Users can specify a list of extra volumes that they want to mount on the 'triliovault_datamover' container.
Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.
VcenterCACertFileName
If VcenterNoSsl is set to False, provide the name of the SSL certificate file which is uploaded at step
Otherwise, keep it blank.
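Pulling the backup-target parameters above together, here is a minimal hedged sketch of the relevant trilio_env.yaml section for an NFS target; the share path and mount options shown are illustrative placeholders, not recommendations, and the shipped environments/trilio_env.yaml remains the authoritative template:

parameter_defaults:
  # 'nfs' or 's3'
  BackupTargetType: 'nfs'
  # True only when using multiple IP/endpoint-based NFS shares
  MultiIPNfsEnabled: False
  # Placeholder export; replace with your NFS share
  NfsShares: '192.168.1.34:/mnt/tvault'
  # Keep the shipped defaults unless a special requirement exists (placeholder shown)
  NfsOptions: 'nolock,soft,timeo=180'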
defaults.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration. You have to remove your OpenStack endpoint map file from the overcloud deploy command and use the Trilio endpoint map file instead.
Instead of tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_everywhere_dns.yaml
Instead of the no-tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_non_tls_endpoints_ip.yaml
| Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
| --- | --- | --- | --- | --- |
| 5.2.5 | 5.2.5-rhosp17.1 | 5.2.5 | RHOSP17.1 | 5.2.5 |
| 5.2.4 | 5.2.4-rhosp17.1, 5.2.4-rhosp16.2, 5.2.4-rhosp16.1 | 5.2.4 | RHOSP17.1, RHOSP16.2, RHOSP16.1 | 5.2.4 |
CloudAdminUserName
Default value is admin.
Provide the cloudadmin user name of your overcloud
CloudAdminProjectName
Default value is admin.
Provide the cloudadmin project name of your overcloud
CloudAdminDomainName
Default value is default.
Provide the cloudadmin domain name of your overcloud
CloudAdminPassword
Provide the cloudadmin user's password of your overcloud
ContainerTriliovaultDatamoverImage
The Trilio Datamover container image name has already been populated in the preparation of the container images.
Still, it is recommended to verify the container URL.
ContainerTriliovaultDatamoverApiImage
The Trilio Datamover API container image name has already been populated in the preparation of the container images.
Still, it is recommended to verify the container URL.
VmwareToOpenstackMigrationEnabled
Set it to True if this feature needs to be enabled; otherwise keep it False.
Populate all the below-mentioned parameters if it is set to True.
VcenterServers
This section is used to provide all the vCenter details. If there are multiple vCenters, repeat the parameters from VcenterUrl to VcenterCACertFileName as many times as the number of vCenters. Refer to the environments/trilio_env.yaml file.
VcenterUrl
vCenter access URL
example:
https://vcenter-1.infra.trilio.io/
VcenterUsername
Access username (Check out the privilege requirement here)
VcenterPassword
Access user's Password
VcenterNoSsl
If the connection is to be established securely, set it to False
Set it to True if SSL verification is to be ignored
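As a hedged illustration of the VMware migration parameters above, a VcenterServers entry in trilio_env.yaml might look like the following. The exact nesting is an assumption; consult the shipped environments/trilio_env.yaml for the authoritative structure. The username, password, and certificate file name are placeholders, and the URL reuses the example given above:

parameter_defaults:
  VmwareToOpenstackMigrationEnabled: True
  VcenterServers:
    - VcenterUrl: 'https://vcenter-1.infra.trilio.io/'
      VcenterUsername: '<vcenter-username>'
      VcenterPassword: '<vcenter-password>'
      # Set VcenterNoSsl to True only if SSL verification is to be ignored
      VcenterNoSsl: False
      # Name of the vCenter SSL certificate uploaded earlier; keep blank if VcenterNoSsl is True
      VcenterCACertFileName: '<vcenter-ca-cert-file>'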
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restores from |
| snapshot_id | string | ID of the Snapshot to fetch the Restores from |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run the authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
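For reference, here is a hedged curl sketch of this list call. The /restores list path is inferred from the restore-detail endpoint shown further below, the port matches the endpoints used throughout this reference, and all variable values are placeholders:

curl -k "https://$tvm_address:8780/v1/$tenant_id/restores" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "X-Auth-Token: $auth_token" \
  -H "Accept: application/json" \
  -H "User-Agent: python-workloadmgrclient"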
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 11:28:43 GMT
Content-Type: application/json
Content-Length: 4308
Connection: keep-alive
X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463
{
"restores":[
{
"id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"created_at":"2020-11-05T10:17:40.000000",
"updated_at":"2020-11-05T10:17:40.000000",
"finished_at":"2020-11-05T10:27:20.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Provides all details about the specified Restore
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the restore from |
| restore_id | string | ID of the restore to show |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
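A hedged curl sketch of this show call, using the endpoint above (all variable values are placeholders):

curl -k "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "X-Auth-Token: $auth_token" \
  -H "Accept: application/json"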
DELETE https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Deletes the specified Restore
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
| restore_id | string | ID of the Restore to be deleted |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to run authentication against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
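A hedged curl sketch of the delete call (all variable values are placeholders):

curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "X-Auth-Token: $auth_token" \
  -H "Accept: application/json"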
GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel
Cancels an ongoing Restore
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of the Trilio service |
| tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
| restore_id | string | ID of the Restore to cancel |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Accept | string | application/json |
| User-Agent | string | python-workloadmgrclient |
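A hedged curl sketch of the cancel call (all variable values are placeholders):

curl -k "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id/cancel" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "X-Auth-Token: $auth_token" \
  -H "Accept: application/json"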
POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
The One-Click restore requires a body to provide all necessary information in json format.
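A hedged curl sketch of the call, posting the One-Click body shown later in this document from a local file (restore_body.json is a placeholder name); the same pattern applies to the selective and in-place variants with their respective bodies:

curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/snapshots/$snapshot_id" \
  -H "X-Auth-Project-Id: $project_name" \
  -H "X-Auth-Token: $auth_token" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d @restore_body.json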
POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information.
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
The Selective restore requires a body to provide all necessary information in JSON format.
POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
| Parameter | Type | Description |
| --- | --- | --- |
| tvm_address | string | IP or FQDN of Trilio service |
| tenant_id | string | ID of the Tenant/Project to do the restore in |
| snapshot_id | string | ID of the snapshot to restore |

| Header | Type | Description |
| --- | --- | --- |
| X-Auth-Project-Id | string | Project to authenticate against |
| X-Auth-Token | string | Authentication token to use |
| Content-Type | string | application/json |
| Accept | string | application/json |
The Inplace restore requires a body to provide all necessary information in JSON format.
cd /home/stack
source stackrc
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
chmod +x *.sh
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/vddk.tar.gz
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
'OS::TripleO::Services::TrilioDatamoverApi'
'OS::TripleO::Services::TrilioWlmApi'
'OS::TripleO::Services::TrilioWlmWorkloads'
'OS::TripleO::Services::TrilioWlmScheduler'
'OS::TripleO::Services::TrilioWlmCron'
'OS::TripleO::Services::TrilioDatamover'
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
parameter_defaults:
ContainerImagePrepare:
- push_destination: false
set:
namespace: registry.redhat.io/...
...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
myuser: 'p@55w0rd!'
registry.connect.redhat.com:
myuser: 'p@55w0rd!'
ContainerImageRegistryLogin: true
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.1
## Run following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME', 'undercloud2-161.ctlplane.trilio.local' is the undercloud registry hostname in below example
$ openstack tripleo container image list | grep keystone
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.1 |
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.1 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloud2-161.ctlplane.trilio.local 5.0.14-rhosp16.1
## Verify changes
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.1
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.2
## Run following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME', 'undercloudqa162.ctlplane.trilio.local' is the undercloud registry hostname in below example
$ openstack tripleo container image list | grep keystone
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.2 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa162.ctlplane.trilio.local 5.0.14-rhosp16.2
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.2
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp17.1
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa17.ctlplane.trilio.local 5.2.2-rhosp17.1
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp17.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp17.1
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1 |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./generate_passwords.sh
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml
source <OVERCLOUD_RC_FILE>
vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin
# Example
openstack role add --user admin --domain default admin
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts
./create_wlm_ids_conf.sh
cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/triliovault_wlm_ids.conf
modprobe nbd nbds_max=128
lsmod | grep nbd
modprobe fuse
lsmod | grep fuse
source stackrc
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./upload_puppet_module.sh
## Output of the above command looks like following
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-B1bp1Bk/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object | container | etag |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 17ed9cb7a08f67e1853c610860b8ea99 |
+-----------------------+---------------------+----------------------------------+
Upload complete
## Above command creates the following file
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
./upload_puppet_module.sh
## Output of above command looks like following
Creating tarball...
Tarball created.
renamed '/tmp/puppet-modules-MUIyvXI/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
[stack@uc17-1 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
parameter_defaults:
DeployArtifactFILEs:
- /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz
## Above command creates following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/defaults.yaml
openstack overcloud deploy --stack overcloudtrain5 --templates \
--libvirt-type qemu \
--ntp-server 192.168.1.34 \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/templates/ceph-config.yaml \
-e /home/stack/templates/cinder_size.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
-e /home/stack/templates/configure-barbican.yaml \
-e /home/stack/templates/multidomain_horizon.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e /home/stack/templates/tls-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED
podman logs <trilio-container-name>
tailf /var/log/containers/<trilio-container-name>/<trilio-container-name>.log
cd triliovault-cfg-scripts/common/
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap represents datamover, WLM nodes (compute and controller nodes) and it's NFS share mapping.
parameter_defaults:
TriliovaultMultiIPNfsMap:
overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs
sudo pip3 install PyYAML==5.1
python3 ./generate_nfs_map.py
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
listen triliovault_datamover_api
bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
option forwardfor
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local
retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/services/triliovault-datamover-api.yaml
tripleo::haproxy::trilio_datamover_api::options:
'retries': '5'
'maxconn': '50000'
'balance': 'roundrobin'
'timeout http-request': '10m'
'timeout queue': '10m'
'timeout connect': '10m'
'timeout client': '10m'
'timeout server': '10m'
'timeout check': '10m'
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
TrilioDatamoverOptVolumes:
- <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>
## For example, `/mnt/mount-on-host` below is mounted on the Compute host and is to be mounted at `/mnt/mount-inside-container` inside the Datamover container
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-on-host
## Then provide that mount in the below format
TrilioDatamoverOptVolumes:
- /mnt/mount-on-host:/mnt/mount-inside-container
[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-inside-container
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:04:45 GMT
Content-Type: application/json
Content-Length: 2639
Connection: keep-alive
X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f
{
"restore":{
"id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
"created_at":"2020-11-05T10:17:40.000000",
"updated_at":"2020-11-05T10:17:40.000000",
"finished_at":"2020-11-05T10:27:20.000000",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"available",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"snapshot_details":{
"created_at":"2020-11-04T13:58:37.000000",
"updated_at":"2020-11-05T10:27:22.000000",
"deleted_at":null,
"deleted":false,
"version":"4.0.115",
"id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"snapshot_type":"full",
"display_name":"API taken 2",
"display_description":"API taken description 2",
"size":44171264,
"restore_size":2147483648,
"uploaded_size":44171264,
"progress_percent":100,
"progress_msg":"Creating Instance: cirros-2",
"warning_msg":null,
"error_msg":null,
"host":"TVM1",
"finished_at":"2020-11-04T14:06:03.000000",
"data_deleted":false,
"pinned":false,
"time_taken":428,
"vault_storage_id":null,
"status":"available"
},
"workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
"instances":[
{
"id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
"name":"cirros-2",
"status":"available",
"metadata":{
"config_drive":"",
"instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"production":"1"
}
},
{
"id":"b083bb70-e384-4107-b951-8e9e7bbac380",
"name":"cirros-1",
"status":"available",
"metadata":{
"config_drive":"",
"instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
"production":"1"
}
}
],
"networks":[
],
"subnets":[
],
"routers":[
],
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
}
],
"name":"OneClick Restore",
"description":"-",
"host":"TVM2",
"size":2147483648,
"uploaded_size":2147483648,
"progress_percent":100,
"progress_msg":"Restore from snapshot is complete",
"warning_msg":null,
"error_msg":null,
"time_taken":580,
"restore_options":{
"name":"OneClick Restore",
"oneclickrestore":true,
"restore_type":"oneclick",
"openstack":{
"instances":[
{
"name":"cirros-2",
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
"availability_zone":"nova"
},
{
"name":"cirros-1",
"id":"e33c1eea-c533-4945-864d-0da1fc002070",
"availability_zone":"nova"
}
]
},
"type":"openstack",
"description":"-"
},
"metadata":[
]
}
}
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:21:07 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 15:13:30 GMT
Content-Type: application/json
Content-Length: 0
Connection: keep-alive
X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Thu, 05 Nov 2020 14:30:56 GMT
Content-Type: application/json
Content-Length: 992
Connection: keep-alive
X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae
{
"restore":{
"id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
"created_at":"2020-11-05T14:30:56.048656",
"updated_at":"2020-11-05T14:30:56.048656",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
}
],
"name":"One Click Restore",
"description":"One Click Restore",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"openstack":{
},
"type":"openstack",
"oneclickrestore":true,
"vmware":{
},
"restore_type":"oneclick"
},
"metadata":[
]
}
}
HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 09:53:31 GMT
Content-Type: application/json
Content-Length: 1713
Connection: keep-alive
X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7
{
"restore":{
"id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
"created_at":"2020-11-09T09:53:31.037588",
"updated_at":"2020-11-09T09:53:31.037588",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
}
],
"name":"API",
"description":"API Created",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"openstack":{
"instances":[
{
"vdisks":[
{
"new_volume_type":"iscsi",
"id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
"availability_zone":"nova"
}
],
"name":"cirros-1-selective",
"availability_zone":"nova",
"nics":[
],
"flavor":{
"vcpus":1,
"disk":1,
"swap":"",
"ram":512,
"ephemeral":0,
"id":"1"
},
"include":true,
"id":"e33c1eea-c533-4945-864d-0da1fc002070"
},
{
"include":false,
"id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
}
],
"restore_topology":false,
"networks_mapping":{
"networks":[
{
"snapshot_network":{
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
},
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
},
"target_network":{
"subnet":{
"id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
},
"id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
"name":"internal"
}
}
]
}
},
"restore_type":"selective",
"type":"openstack",
"oneclickrestore":false
},
"metadata":[
]
}
}

HTTP/1.1 202 Accepted
Server: nginx/1.16.1
Date: Mon, 09 Nov 2020 12:53:03 GMT
Content-Type: application/json
Content-Length: 1341
Connection: keep-alive
X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743
{
"restore":{
"id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
"created_at":"2020-11-09T12:53:02.726757",
"updated_at":"2020-11-09T12:53:02.726757",
"finished_at":null,
"user_id":"ccddc7e7a015487fa02920f4d4979779",
"project_id":"c76b3355a164498aa95ddbc960adc238",
"status":"restoring",
"restore_type":"restore",
"snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
"links":[
{
"rel":"self",
"href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
},
{
"rel":"bookmark",
"href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
}
],
"name":"API",
"description":"API description",
"host":"",
"size":0,
"uploaded_size":0,
"progress_percent":0,
"progress_msg":null,
"warning_msg":null,
"error_msg":null,
"time_taken":0,
"restore_options":{
"restore_type":"inplace",
"type":"openstack",
"oneclickrestore":false,
"openstack":{
"instances":[
{
"restore_boot_disk":true,
"include":true,
"id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
"vdisks":[
{
"restore_cinder_volume":true,
"id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
}
]
},
{
"restore_boot_disk":true,
"include":true,
"id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
"vdisks":[
{
"restore_cinder_volume":true,
"id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
}
]
}
]
}
},
"metadata":[
]
}
}
"restore":{
"options":{
"openstack":{
},
"type":"openstack",
"oneclickrestore":true,
"vmware":{},
"restore_type":"oneclick"
},
"name":"One Click Restore",
"description":"One Click Restore"
}
}
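The one-click body above can be posted as-is. The sketch below is a hedged example: it assumes the restore is triggered by POSTing this body against the snapshot to be restored (using the snapshot_id seen in the 202 responses above); the endpoint, token, and IDs are placeholders taken from the examples in this section.

import requests

# Placeholder values taken from the examples above.
WLM_URL = "http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238"
SNAPSHOT_ID = "2e56d167-bad7-43c7-8ede-a613c3fe7844"

body = {"restore": {
    "name": "One Click Restore",
    "description": "One Click Restore",
    "options": {"openstack": {}, "type": "openstack",
                "oneclickrestore": True, "vmware": {},
                "restore_type": "oneclick"},
}}

# Assumed trigger: POST the restore body against the snapshot resource.
resp = requests.post(f"{WLM_URL}/snapshots/{SNAPSHOT_ID}", json=body,
                     headers={"X-Auth-Token": "<keystone token>",
                              "User-Agent": "python-workloadmgrclient"})
resp.raise_for_status()  # a successful trigger returns 202 Accepted
print(resp.json()["restore"]["id"], resp.json()["restore"]["status"])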
"restore":{
"name":"<restore name>",
"description":"<restore description>",
"options":{
"openstack":{
"instances":[
{
"name":"<new name of instance>",
"include":<true/false>,
"id":"<original id of instance to be restored>"
"availability_zone":"<availability zone>",
"vdisks":[
{
"id":"<original ID of Volume>",
"new_volume_type":"<new volume type>",
"availability_zone":"<Volume availability zone>"
}
],
"nics":[
{
'mac_address':'<mac address of the pre-created port>',
'ip_address':'<IP of the pre-created port>',
'id':'<ID of the pre-created port>',
'network':{
'subnet':{
'id':'<ID of the subnet of the pre-created port>'
},
'id':'<ID of the network of the pre-created port>'
}
],
"flavor":{
"vcpus":<Integer>,
"disk":<Integer>,
"swap":<Integer>,
"ram":<Integer>,
"ephemeral":<Integer>,
"id":<Integer>
}
}
],
"restore_topology":<true/false>,
"networks_mapping":{
"networks":[
{
"snapshot_network":{
"subnet":{
"id":"<ID of the original Subnet ID>"
},
"id":"<ID of the original Network ID>"
},
"target_network":{
"subnet":{
"id":"<ID of the target Subnet ID>"
},
"id":"<ID of the target Network ID>",
"name":"<name of the target network>"
}
}
]
}
},
"restore_type":"selective",
"type":"openstack",
"oneclickrestore":false
}
}
}
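Filling this template by hand is error-prone; building the body as a Python dict keeps the nesting honest. The sketch below reproduces the filled-in selective restore shown in the 202 response earlier in this section; every ID comes from that example and must be replaced with your own values.

# A minimal sketch of a selective restore body, mirroring the filled-in
# example above. POST it the same way as in the one-click sketch.
body = {"restore": {
    "name": "API",
    "description": "API Created",
    "options": {
        "restore_type": "selective",
        "type": "openstack",
        "oneclickrestore": False,
        "openstack": {
            "instances": [
                {   # restore this instance under a new name and flavor
                    "include": True,
                    "id": "e33c1eea-c533-4945-864d-0da1fc002070",
                    "name": "cirros-1-selective",
                    "availability_zone": "nova",
                    "nics": [],
                    "flavor": {"vcpus": 1, "ram": 512, "disk": 1,
                               "swap": "", "ephemeral": 0, "id": "1"},
                    "vdisks": [{"id": "365ad75b-ca76-46cb-8eea-435535fd2e22",
                                "new_volume_type": "iscsi",
                                "availability_zone": "nova"}],
                },
                {   # leave the second instance out of the restore
                    "include": False,
                    "id": "67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                },
            ],
            "restore_topology": False,
            "networks_mapping": {"networks": [{
                "snapshot_network": {
                    "id": "5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                    "subnet": {"id": "b7b54304-aa82-4d50-91e6-66445ab56db4"},
                },
                "target_network": {
                    "id": "5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                    "name": "internal",
                    "subnet": {"id": "b7b54304-aa82-4d50-91e6-66445ab56db4"},
                },
            }]},
        },
    },
}}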
"restore":{
"name":"<restore-name>",
"description":"<restore-description>",
"options":{
"restore_type":"inplace",
"type":"openstack",
"oneclickrestore":false,
"openstack":{
"instances":[
{
"restore_boot_disk":<Boolean>,
"include":<Boolean>,
"id":"<ID of the instance the volumes are attached to>",
"vdisks":[
{
"restore_cinder_volume":<boolean>,
"id":"<ID of the Volume to restore>"
}
]
}
]
}
}
}
}

Each of the above restore requests sends the following request header:

User-Agent (string): python-workloadmgrclient
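To close the loop, here is a hedged end-to-end sketch for the in-place case. It reuses the instance and volume IDs from the in-place 202 response above, sends the User-Agent header just listed, and relies on the same assumed trigger endpoint as the one-click sketch; all values are placeholders.

import requests

# Placeholder values from the in-place example above.
WLM_URL = "http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238"
SNAPSHOT_ID = "ed4f29e8-7544-4e1c-af8a-a76031211926"

body = {"restore": {
    "name": "API",
    "description": "API description",
    "options": {
        "restore_type": "inplace",
        "type": "openstack",
        "oneclickrestore": False,
        "openstack": {
            "instances": [{
                "id": "7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                "include": True,
                "restore_boot_disk": True,   # overwrite the boot disk in place
                "vdisks": [{"id": "f6b3fef6-4b0e-487e-84b5-47a14da716ca",
                            "restore_cinder_volume": True}],
            }],
        },
    },
}}

# Assumed trigger endpoint, as in the one-click sketch above.
resp = requests.post(f"{WLM_URL}/snapshots/{SNAPSHOT_ID}", json=body,
                     headers={"X-Auth-Token": "<keystone token>",
                              "User-Agent": "python-workloadmgrclient"})
resp.raise_for_status()  # expect 202 Accepted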