Trilio integrates natively with OpenStack and communicates exclusively through APIs using the OpenStack endpoints. Furthermore, Trilio registers its own set of OpenStack endpoints. Additionally, both the Trilio services and the compute nodes interact with the backup target, which impacts the network strategy for a Trilio installation.
OpenStack distinguishes three types of endpoints:
Public Endpoints
Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack.
Internal Endpoints
Internal endpoints are intended to be used by the OpenStack services to communicate with each other.
Admin Endpoints
Admin endpoints are meant to be used by OpenStack administrators.
Of these three endpoint types, the admin endpoint occasionally hosts APIs that are not accessible through any other endpoint type.
To learn more about OpenStack endpoints, please visit the official OpenStack documentation.
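For illustration, the three endpoint types can be inspected directly in the Keystone catalog. Below is a minimal sketch using the openstacksdk Python library, assuming a clouds.yaml profile named mycloud with admin privileges (listing the endpoint catalog requires admin access); the cloud name is a placeholder.

```python
import openstack

# Connect using a clouds.yaml profile; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Map service IDs to service types so each endpoint can be labeled.
services = {s.id: s.type for s in conn.identity.services()}

# Every endpoint carries an interface: public, internal, or admin.
for ep in conn.identity.endpoints():
    print(f"{ep.interface:10} {services.get(ep.service_id, '?'):15} {ep.url}")
```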
Trilio communicates with all OpenStack services through a designated endpoint type, determined and configured during the deployment of Trilio's services.
It is recommended to configure connectivity through the admin endpoints if available.
This results in the following network requirements:
Trilio services need access to the Keystone admin endpoint on the admin endpoint network if it is available.
Trilio services need access to all endpoints of the set endpoint type during deployment.
Trilio recommends granting comprehensive access to all OpenStack endpoints for all Trilio services, aligning with OpenStack's established standards and best practices.
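To verify such access ahead of deployment, a quick reachability probe against the endpoints of the chosen type can help. Below is a minimal sketch, assuming the requests library; the URLs are illustrative placeholders and must be replaced with the values from your catalog.

```python
import requests

# Illustrative endpoint URLs of one endpoint type; substitute your own.
ENDPOINTS = {
    "identity": "http://192.0.2.10:5000/v3",
    "compute": "http://192.0.2.10:8774/v2.1",
    "volumev3": "http://192.0.2.10:8776/v3",
}

for service, url in ENDPOINTS.items():
    try:
        # Any HTTP response, even 401 Unauthorized, proves reachability.
        resp = requests.get(url, timeout=5)
        print(f"{service}: reachable (HTTP {resp.status_code})")
    except requests.RequestException as exc:
        print(f"{service}: NOT reachable ({exc})")
```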
Additionally, Trilio creates its own endpoints, which reside on the same network as the other OpenStack API services.
To adhere to OpenStack's prescribed standards and best practices, the Trilio containers should run on the same network as the other OpenStack containers. Trilio provides three endpoints:
The public endpoint, used by OpenStack users when working with the Trilio CLI or API
The internal endpoint, used to communicate with the OpenStack services
The admin endpoint, used to reach the required admin-only APIs of Keystone
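The Trilio endpoints listed above can be resolved from the Keystone catalog like any other OpenStack service. Below is a minimal sketch using openstacksdk, assuming the Workloadmgr service is registered under the service type workloads; verify the actual service type in your own catalog, as it may differ by release.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name

# "workloads" is an assumed service type for Trilio's Workloadmgr.
for interface in ("public", "internal", "admin"):
    try:
        url = conn.session.get_endpoint(service_type="workloads",
                                        interface=interface)
        print(f"{interface}: {url}")
    except Exception as exc:  # the interface may not be registered
        print(f"{interface}: not found ({exc})")
```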
The Trilio solution uses backup target storage to store the backup data securely. Trilio divides its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio Workloadmgr services through communication with the OpenStack endpoints. All metadata stored together with a backup is written by the Workloadmgr services to the backup target in JSON format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the volume data from the Cinder or Nova storage and transfers it as a qcow2 image to the backup target. Each Datamover service is responsible for the VMs running on its own compute node.
The network requirements are therefore (a basic connectivity check is sketched below):
Every Trilio Workloadmgr service container needs access to the backup target
Every Trilio Datamover service container needs access to the backup target
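As an illustration, the following sketch checks that an NFS backup target is mounted and writable from the local node. The mount path is an assumption; Trilio deployments commonly mount NFS targets under a path such as /var/triliovault-mounts, so check your own configuration.

```python
import os
import tempfile

MOUNT_PATH = "/var/triliovault-mounts"  # assumed mount point; adjust as needed

# The path must be an actual mount, not just a local directory.
if not os.path.ismount(MOUNT_PATH):
    raise SystemExit(f"{MOUNT_PATH} is not a mounted filesystem")

# Writing and removing a small temp file proves read/write access.
with tempfile.NamedTemporaryFile(dir=MOUNT_PATH) as probe:
    probe.write(b"trilio-connectivity-probe")
    probe.flush()

print(f"{MOUNT_PATH} is mounted and writable")
```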
Before embarking on the installation process for Trilio in your OpenStack environment, it is highly advisable to carefully consider several key elements. These considerations will not only streamline the installation procedure but also ensure the optimal setup and functionality of Trilio's solutions within your OpenStack infrastructure.
Trilio leverages Cinder snapshots to facilitate the computation of both full and incremental backups.
When executing full backups, Trilio orchestrates the generation of Cinder snapshots for all volumes included in the backup job. These Cinder snapshots remain intact for subsequent incremental backup image calculations.
During incremental backup operations, Trilio generates fresh Cinder snapshots and computes the altered blocks between these new snapshots and the earlier retained snapshots from full or previous backups. The old snapshots are subsequently deleted, while the newly generated snapshots are preserved.
Consequently, each tenant using Trilio's backup functionality must have sufficient Cinder snapshot quota to accommodate these supplementary snapshots. As a rule of thumb, add 2 snapshots per backed-up volume to the tenant's snapshot quota. A corresponding increase in the tenant's volume quota is also advisable, as Trilio briefly creates a volume from the snapshot to access the data for backup purposes.
During the restoration process, Trilio generates supplementary instances and Cinder volumes. To facilitate seamless restore operations, tenants should maintain adequate quota levels for Nova instances and Cinder volumes. Failure to meet these quota requirements may lead to disruptions in restoration procedures.
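The quota guidance above can be applied programmatically. Below is a minimal sketch using openstacksdk with admin credentials; the tenant name and all numbers are illustrative placeholders.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name
project = conn.identity.find_project("demo-tenant")  # hypothetical tenant

backed_up_volumes = 10  # number of volumes included in backup jobs

# Rule of thumb: +2 snapshots per backed-up volume, plus volume headroom
# for the temporary volumes Trilio creates during backup.
vol_quota = conn.get_volume_quotas(project.id)
conn.set_volume_quotas(
    project.id,
    snapshots=vol_quota.snapshots + 2 * backed_up_volumes,
    volumes=vol_quota.volumes + backed_up_volumes,
)

# Headroom for the additional instances created during restores.
compute_quota = conn.get_compute_quotas(project.id)
conn.set_compute_quotas(project.id, instances=compute_quota.instances + 5)
```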
The AWS S3 object consistency model includes:
Read-after-write
Read-after-update
Read-after-delete
Each of these models describes when an object becomes consistent after being created, updated, or deleted. None of them guarantees strong consistency, so there can be a delay before an object becomes fully consistent.
Although Trilio has introduced measures to address AWS S3's eventual consistency limitations, the exact time an object achieves consistency cannot be predicted through deterministic means.
There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write has a shorter time to reach consistency compared to other IO patterns. Therefore, our solution is designed to maximize the read-after-write IO pattern.
The time in which an object reaches eventual consistency also depends on the AWS region.
For instance, the AWS standard region does not offer the same level of consistency as regions like us-east or us-west, so opting for those regions when setting up S3 buckets for Trilio is advisable. While fully avoiding the read-after-update IO pattern is complex, Trilio introduces significant access delays to give objects time to reach consistency. On the rare occasions when an object is still inconsistent, the backup fails and must be retried.
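The retry behaviour described above can be pictured with a small read-after-write loop. Below is a minimal sketch using boto3; the bucket and key names are hypothetical, and the backoff values are illustrative only.

```python
import time

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
BUCKET, KEY = "trilio-backup-target", "workload/metadata.json"  # hypothetical

# Write an object, then poll until the read succeeds, backing off between
# attempts: the read-after-write pattern favoured in the text above.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=b'{"status": "available"}')

for attempt in range(6):
    try:
        body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
        print("consistent read:", body)
        break
    except s3.exceptions.NoSuchKey:
        time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
else:
    raise RuntimeError("object never became readable; retry the backup")
```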
Trilio can be deployed as a single node or as a three-node cluster. A three-node cluster is highly recommended for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP or FQDN for the cluster; this applies to both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used to manage the cluster and to register the Trilio service endpoint in the Keystone service catalog.
T4O represents a comprehensively containerized deployment model, eliminating the necessity for any KVM-based appliances for service deployment. This marks a departure from earlier T4O releases, where such appliances were required.
Trilio requires its containers to be deployed on the same plane as OpenStack, utilizing existing cluster resources.
As described in the architecture overview, Trilio requires sufficient cluster resources to deploy its components on both the Controller and Compute planes. The prerequisites for installation are:
Valid Trilio License & Acceptance of the EULA
Sufficient resources available on the target OpenShift cluster nodes
Sufficient storage capacity and connectivity on Cinder for snapshotting operations
Sufficient network capabilities for efficient data transfer of workloads
User and Role permissions for access to required cluster objects
Optional features, such as encryption, file search, snapshot mount, or the FRM instance, may have specific additional requirements
Set the hw_qemu_guest_agent=True property on the image and install qemu-guest-agent on the VM in order to avoid any file system inconsistencies post restore.
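Below is a minimal sketch for setting this property via openstacksdk; the image name is a placeholder, and the same result can be achieved with the CLI command openstack image set --property hw_qemu_guest_agent=True <image>.

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud name
image = conn.image.find_image("ubuntu-22.04")  # hypothetical image name

# Glance stores custom image properties as strings.
conn.image.update_image(image, hw_qemu_guest_agent="True")
```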
For the VMware to OpenStack migration feature, please refer to the prerequisite and limitations pages.