Support for Kolla Caracal on Rocky9 added
Support for Canonical Antelope & Bobcat on Jammy added
Bug Fixes
Hide the VMware/Virtual Machine Migration related menus if that feature is not active.
Fixed an issue where some volumes were not being attached after restore.
S3 logging is now fixed.
Slow performance while creating snapshots identified and fixed.
Periods of inactivity identified during full snapshots of large volumes.
Deployment failing on kolla-ansible Antelope.
Modifying a workload as another tenant user put the workload into an error state.
Fixed an issue where Trilio looked for the instance in the wrong region in multi-region OpenStack deployments.
Restore failing with security group not found.
Backup failing with multipath device not found.
Backup failing due to the virtual size of the LUN being zero.
Backup failure in contego; error in _refresh_multipath_size call. Unexpected error while running command tee -a /sys/bus/scsi/drivers/sd/None:None:None:None/rescan
Upgrade of python3-oslo.messaging breaks Trilio in Juju
The Trilio charms should not allow installation of python3-oslo.messaging newer than 12.1.6-0ubuntu1, as newer versions disrupt correct RabbitMQ connectivity from the trilio-wlm units.
Workaround: Pin python3-oslo.messaging to version 12.1.6-0ubuntu1 on the trilio-wlm units.
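A minimal sketch of one way to apply the pin, assuming a Juju 3.x client and that version 12.1.6-0ubuntu1 is still available from the configured APT sources (the application name trilio-wlm is taken from the issue description; these commands are an illustration, not the vendor-published workaround):
juju exec --application trilio-wlm -- sudo apt-get install -y --allow-downgrades python3-oslo.messaging=12.1.6-0ubuntu1
juju exec --application trilio-wlm -- sudo apt-mark hold python3-oslo.messaging
On Juju 2.9 the equivalent invocation is juju run --application trilio-wlm -- <command>.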
RHOSP 16.1/16.2: Unexpected behavior with OpenStack Horizon Dashboard
At times, after the overcloud deployment, the OpenStack Horizon Dashboard encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container: podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
The issue has been identified with Memcached. Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
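As an illustration (the value and the section to edit are assumptions and depend on how HAProxy is configured in your environment), raising the server-side timeout in haproxy.cfg could look like the following, after which the HAProxy service on the controllers must be reloaded or restarted:
timeout server 10m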
VMware Migration Mapping Dashboard
Bug Fixes
[VMware Migration] Boot option shows BIOS when the guest VM has unsecured EFI boot.
wlm client does not work on macOS with Python 3.10.
Cloud admin trust creation was failing.
Modifying a workload as another user puts the workload into an error state.
Trilio looks for the instance in the wrong region in multi-region OpenStack deployments.
Encryption checkbox is disabled when creating 1st workload from UI
While creating the 1st workload from the UI, the Enable Encryption checkbox on the Create Workload page is disabled. Creating a non-encrypted workload first resolves this issue, after which the user can create encrypted workloads.
RHOSP 16.1/16.2: Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container: podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Support for Kolla Bobcat OpenStack added.
Fixed the issues reported by customers.
[VMware Migration] Migration failure due to minor issues in handling VM and network names.
[VMware Migration] Boot option shows BIOS when the guest VM has unsecured EFI boot.
[VMware Migration] Migration fails when the network name of the guest VM contains both - and _.
Support multiple Ceph configs in the same /etc/ceph location.
Trilio looks for the instance in the wrong region in multi-region OpenStack deployments.
T4O deployment on Kolla Antelope failing. DMAPI and Horizon containers fail to start.
Quota update issue when the admin user has access to projects of multiple domains.
Encryption checkbox is disabled when creating 1st workload from UI
While creating the 1st workload from the UI, the Enable Encryption checkbox on the Create Workload page is disabled. Creating a non-encrypted workload first resolves this issue, after which the user can create encrypted workloads.
RHOSP 16.1/16.2: Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container: podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Software Driven Migration: VMware to OpenStack. With the T4O-5.2.1 release, Trilio now supports migration of VMs from VMware to OpenStack.
More about this feature can be found here.
Fixed the issues reported by customers.
Support multiple Ceph configs in the same /etc/ceph location.
Encrypted incremental restore fails when restoring to a different AZ.
Trilio installation failing for RHOSP 17.1 IPv6.
[RHOSP] memcache_servers parameter in triliovault wlm and datamover service conf files fixed for IPv6.
Deployment (with non-root user) failing on Kolla with Barbican.
T4O 5.1 failing on Kolla Zed Rocky.
5.2 deployment failing on Antelope with the Ansible node on a separate server.
WLM service issue with IPv6. Containers keep restarting due to "Data too long for column ip_addresses at row 1".
RHOSP 16.1/16.2: Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container: podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
New OpenStack additions in Trilio support matrix: With T4O-5.2.0, Trilio now supports backup and DR for RHOSP 17.1 and Kolla-Ansible OpenStack 2023.1 Antelope on Rocky Linux & Ubuntu Jammy.
RHOSP 16.1/16.2: Unexpected behavior with OpenStack UI
At times, after the overcloud deployment, the OpenStack UI encounters issues, leading to unexpected behavior in certain functionalities.
Workaround:
Log in to the Horizon container: podman exec -it -u root horizon /bin/bash
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput
/usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force
Restart the Horizon container: podman restart horizon
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
True Native and Containerized Deployment: In the T4O-5.1.0 release, Trilio has embraced a transformative shift. Departing from the traditional method of shipping appliance images, T4O has evolved into a comprehensive backup and recovery solution that is fully containerized. Within this iteration, all T4O services have been elegantly encapsulated within containers, seamlessly aligning with OpenStack through its innate deployment process.
New OpenStack additions in Trilio support matrix: With T4O-5.1.0, Trilio now supports backup and DR for Kolla-Ansible OpenStack Zed on Rocky Linux and Ubuntu Jammy.
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
True Native and Containerized Deployment: In the T4O-5.0.0 release, Trilio has embraced a transformative shift. Departing from the traditional method of shipping appliance images, T4O has evolved into a comprehensive backup and recovery solution that is fully containerized. Within this iteration, all T4O services have been elegantly encapsulated within containers, seamlessly aligning with OpenStack through its innate deployment process.
Faster Snapshot operation (Removal of Workload types: Serial and Parallel): In a move to boost Snapshot operation efficiency, we've introduced a streamlined method for creating workloads, eliminating the need for users to select between different workload types. T4O will now intelligently manage workloads in the background, ensuring optimal performance for backup operations.
Improved Database Storage: T4O has made significant enhancements to its database management, focusing on efficient data garbage collection. By discarding unnecessary data and optimizing the utilization of DB storage within the task flow, these improvements have led to a leaner and more streamlined storage approach for its DB tables.
Better Troubleshooting: Enhanced troubleshooting capabilities come through the implementation of a unified logging format across all T4O services/containers. This improvement grants users the freedom to adjust the format or log levels for individual services/containers as needed.
Updated EULA: Trilio has updated the End User License Agreement (EULA) that users must accept before using the product. For user convenience, T4O now prompts users to review and accept the EULA during the licensing step of the T4O installation. The most recent version of the EULA can be found here: https://trilio.io/eula/
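As a sketch of what adjusting a log level might look like (this assumes the services follow standard oslo.log configuration; the conf file name and location are assumptions and vary by deployment and service), verbosity for a single service can be raised by editing that service's conf file and restarting its container:
[DEFAULT]
debug = True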
Horizon pod keeps restarting on RHOSP 16.2.4
It has been observed that on a specific RHOSP minor version, i.e. 16.2.4, the Horizon pod keeps restarting.
Either of the below workarounds can be performed on all the controller nodes where the issue occurs.
Workaround:
Workaround 1: Restart the memcached service on the controller using systemctl: systemctl restart tripleo_memcached.service
Workaround 2: Restart the memcached pod: podman restart memcached
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project_id option with the workload-get-importworkloads-list CLI command to get the list of workloads that can be imported into a particular OpenStack project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workload-importworkloads CLI command throws gateway timed out error
The workload-importworkloads CLI command throws a gateway timed out error; however, the import actually succeeds.
In the background, the import process continues to run despite the timeout until the operation is complete.
Workaround:
To avoid the timeout error, the respective timeout server value can be increased inside /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg for RHOSP.
Trilio Data, Inc. is the leader in providing backup and recovery solutions for cloud-native applications. Established in 2013, its flagship products, Trilio for OpenStack (T4O) and Trilio for Kubernetes (T4K), are used by many large corporations around the world.
T4O, by Trilio Data, is a native OpenStack service-based solution that provides policy-based, comprehensive backup and recovery for OpenStack workloads. It captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS, AWS S3, and other S3-compatible storage. With Trilio and its one-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection, and integrity.
With the use of Trilio's VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload, consisting of compute resources, network configurations, and storage data, as one unit. It also takes incremental backups that capture only the changes made since the last backup. Incremental snapshots save a considerable amount of storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as follows:
This documentation serves as the end-user technical resource to accompany Trilio for OpenStack. You will learn about the architecture, installation, and vast number of operations of this product. The intended audience is anyone who wants to understand the value, operations, and nuances of protecting their cloud-native applications with Trilio for OpenStack.
Trilio seamlessly integrates with OpenStack, functioning exclusively through APIs utilizing the OpenStack Endpoints. Furthermore, Trilio establishes its own set of OpenStack endpoints. Additionally, both the Trilio appliance and compute nodes interact with the backup target, impacting the network strategy for a Trilio installation.
OpenStack comprises three endpoint groupings:
Public Endpoints
Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack.
Internal Endpoints
Internal endpoints are intended to be used by the OpenStack services to communicate with each other.
Admin Endpoints
Admin endpoints are meant to be used by OpenStack administrators.
Among these three endpoint categories, it's important to note that the admin endpoint occasionally hosts APIs not accessible through any other type of endpoint.
To learn more about OpenStack endpoints please visit the official OpenStack documentation.
Trilio communicates with all OpenStack services through a designated endpoint type, determined and configured during the deployment of Trilio's services.
It is recommended to configure connectivity through the admin endpoints if available.
The following network requirements can be identified this way:
Trilio services need access to the Keystone admin endpoint on the admin endpoint network if it is available.
Trilio services need access to all endpoints of the set endpoint type during deployment.
Trilio recommends granting comprehensive access to all OpenStack endpoints for all Trilio services, aligning with OpenStack's established standards and best practices.
Additionally, Trilio generates its own endpoints, which are integrated within the same network as other OpenStack API services.
To adhere to OpenStack's prescribed standards and best practices, it's advisable that Trilio containers operate on the same network as other OpenStack containers.
The public endpoint to be used by OpenStack users when using Trilio CLI or API
The internal endpoint to communicate with the OpenStack services
The admin endpoint to use the required admin-only APIs of Keystone
The Trilio solution uses backup target storage to place the backup data securely. Trilio divides its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio Workloadmgr services through communication with the OpenStack Endpoints. All metadata that is stored together with a backup is written by the Trilio Workloadmgr services to the backup target in the JSON format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the Volume Data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.
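Because each disk backup is a qcow2 image, standard QEMU tooling can be used to inspect backup data directly on the backup target. A sketch (the path is a placeholder for illustration only, not the actual on-target layout):
qemu-img info --backing-chain /<backup-target-mount>/<workload-id>/<snapshot-id>/<disk>.qcow2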
The network requirements are therefore:
Every Trilio Workloadmgr service container needs access to the backup target.
Every Trilio Datamover service container needs access to the backup target.
Learn about artifacts related to Trilio for OpenStack 5.2.4
Branch of the repository to be used for DevOps scripts.
RHOSP 17.1
RHOSP 16.2
RHOSP 16.1
Kolla Rocky Caracal(2024.1)
Kolla Rocky Bobcat(2023.2)
Kolla Ubuntu Jammy Bobcat(2023.2)
Kolla Rocky Antelope(2023.1)
Kolla Ubuntu Jammy Antelope(2023.1)
Kolla Rocky Zed
Kolla Ubuntu Jammy Zed
triliovault-pkg-source: deb [trusted=yes] https://apt.fury.io/trilio-5-2 /
channel: latest/stable
Charm names / Supported releases / Revisions
Jammy (Ubuntu 22.04): revision 18
Jammy (Ubuntu 22.04): revision 17
Jammy (Ubuntu 22.04): revision 22
Jammy (Ubuntu 22.04): revision 10
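As an illustration of how these values are typically consumed (the application name trilio-wlm is an assumption; substitute each Trilio charm application deployed in your model):
juju config trilio-wlm triliovault-pkg-source='deb [trusted=yes] https://apt.fury.io/trilio-5-2 /'
juju refresh trilio-wlm --channel latest/stable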
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
contegoclient 5.2.8.1
s3fuse 5.2.8.3
tvault-horizon-plugin 5.2.8.8
workloadmgr 5.2.8.15
workloadmgrclient 5.2.8.5
Repo URL:
python3-contegoclient 5.2.8.1
python3-dmapi 5.2.8
python3-namedatomiclock 1.1.3
python3-s3-fuse-plugin 5.2.8.3
python3-tvault-contego 5.2.8.14
python3-tvault-horizon-plugin 5.2.8.8
python3-workloadmgrclient 5.2.8.5
workloadmgr 5.2.8.15
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
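A typical Gemfury YUM repository definition looks like the following sketch (the repo id, name, and baseurl are assumptions; the baseurl is inferred from the APT source listed above and should be verified against the official Trilio artifacts page):
[trilio-fury]
name=Trilio 5.2 packages
baseurl=https://yum.fury.io/trilio-5-2/
enabled=1
gpgcheck=0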
python3-contegoclient-el8 (RHEL8/CentOS8*): 5.2.8.1-5.2
python3-contegoclient-el9 (Rocky9): 5.2.8.1-5.2
python3-dmapi (RHEL8/CentOS8*): 5.2.8-5.2
python3-dmapi-el9 (Rocky9): 5.2.8-5.2
python3-s3fuse-plugin (RHEL8/CentOS8*): 5.2.8.3-5.2
python3-s3fuse-plugin-el9 (Rocky9): 5.2.8.3-5.2
python3-trilio-fusepy (RHEL8/CentOS8*): 3.0.1-1
python3-trilio-fusepy-el9 (Rocky9): 3.0.1-1
python3-tvault-contego (RHEL8/CentOS8*): 5.2.8.14-5.2
python3-tvault-contego-el9 (Rocky9): 5.2.8.14-5.2
python3-tvault-horizon-plugin-el8 (RHEL8/CentOS8*): 5.2.8.8-5.2
python3-tvault-horizon-plugin-el9 (Rocky9): 5.2.8.8-5.2
python3-workloadmgrclient-el8 (RHEL8/CentOS8*): 5.2.8.5-5.2
python3-workloadmgrclient-el9 (Rocky9): 5.2.8.5-5.2
python3-workloadmgr-el9 (Rocky9): 5.2.8.15-5.2
workloadmgr (RHEL8/CentOS8*): 5.2.8.15-5.2
Each T4O release includes a set of artifacts such as version-tagged containers, package repositories, and distribution packages.
To help users quickly identify the resources associated with each release, we have added dedicated sub-pages corresponding to each release version.
A quick overview on the Architecture of T4O
Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery solutions for tenant workloads. Trilio is very similar to other OpenStack services including Nova, Cinder, Glance, etc., and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.
Trilio has three main software components, which are further segregated into multiple services.
This component is registered as a keystone service of type workloads and manages all the workloads created for utilizing the snapshot and restore functionalities. It has 4 services responsible for managing these workloads, their snapshots, and restores:
workloadmgr-api
workloadmgr-scheduler
workloadmgr-workloads
workloadmgr-cron
Similar to WorkloadManager, this component registers a keystone service of type datamover, which manages the transfer of extensive data to/from the backup targets. It has 2 services responsible for the data transfer and for communication with WorkloadManager (see the verification example after this list):
datamover-api
datamover
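A quick sanity check after deployment (a sketch, assuming admin credentials are sourced; the service type names are taken from the descriptions above) is to confirm that both services and their endpoints are registered in Keystone:
openstack service list | grep -E 'workloads|datamover'
openstack endpoint list --service workloads
openstack endpoint list --service datamover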
For ease of access and a better user experience, T4O provides an integrated UI with the OpenStack dashboard service Horizon.
Trilio API is a Python module that is installed on all OpenStack controller nodes where the nova-api service is running.
Trilio Datamover is a Python module that is installed on every OpenStack compute node.
Trilio Horizon plugin is installed as an add-on to Horizon servers. This module is installed on every server that runs the Horizon service.
Trilio is both a provider and consumer in the OpenStack ecosystem. It uses other OpenStack services such as Nova, Cinder, Glance, Neutron, and Keystone and provides its own services to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal, and admin URLs for two of its services WorkloadManager API and Datamover API.
Unlike previous versions of Trilio for OpenStack, it now utilizes the existing networks of the deployed OpenStack environment. The networks for the Trilio services can be configured as desired, in the same way any other OpenStack service is configured. Additionally, a dedicated network can be provided to Trilio services on both the control and compute planes for storing and retrieving backup data from the backup target store.
RHOSP:
5.2.X: RHOSP 17.1 (RHEL 9), RHOSP 16.2 (RHEL 8), RHOSP 16.1 (RHEL 8)
5.1.0: RHOSP 16.2 (RHEL 8), RHOSP 16.1 (RHEL 8)
5.0.0: RHOSP 16.2 (RHEL 8), RHOSP 16.1 (RHEL 8)
Kolla-Ansible:
5.2.4+: 2024.1 Caracal (1), 2023.2 Bobcat, 2023.1 Antelope, Zed on Ubuntu 22.04 / Rocky 9
5.2.2+: 2023.2 Bobcat, 2023.1 Antelope, Zed on Ubuntu 22.04 / Rocky 9
5.2.1: 2023.1 Antelope, Zed on Ubuntu 22.04 / Rocky 9
5.1.0: Zed on Ubuntu 22.04 / Rocky 9
Canonical OpenStack:
5.2.4: Bobcat (Jammy), Antelope (Jammy) on Ubuntu 22.04
Caracal support is currently for Rocky 9 only.
All versions of T4O-5.x releases support NFSv3 and S3 as backup targets on all the compatible distributions.
All versions of T4O-5.x releases support encryption using Barbican service on all the compatible distributions.
It is highly recommended that the QEMU guest agent is running inside the VM being backed up, to avoid data corruption during the backup process. The QEMU Guest Agent is a daemon that runs inside a virtual machine (VM) and communicates with the host system (the hypervisor) to provide enhanced management and control of the VM. It is an essential component in virtualized environments, especially OpenStack.
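A minimal sketch of enabling it (the commands assume a Linux guest; names in angle brackets are placeholders): set the Glance image property so Nova attaches the guest-agent virtio channel, then install and start the agent inside the guest.
openstack image set --property hw_qemu_guest_agent=yes <image-id>
Inside the guest:
sudo apt-get install -y qemu-guest-agent   (or: sudo dnf install -y qemu-guest-agent)
sudo systemctl enable --now qemu-guest-agent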
Learn about artifacts related to Trilio for OpenStack 5.2.0
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
Learn about artifacts related to Trilio for OpenStack 5.1.0
All packages are Python 3 (py3.6 - py3.10) compatible only.
Repo URL:
Repo URL:
Repo URL:
To enable, add the following file /etc/yum.repos.d/fury.repo:
Learn about artifacts related to Trilio for OpenStack 5.0.0