Deployment of Trilio on RHOCP (with RHOSP17.1).
Support for multiple target backends.
Support for advanced scheduling of workloads' snapshots.
Workload import from a non-default BTT gets stuck
If a non-default BTT is provided as the source BTT in the import command, the import gets stuck and no workload is imported. The same import works correctly if the source BTT is the default one.
workload-create CLI expects jobschedule as a mandatory parameter
Workaround:
While creating a workload from the CLI, the jobschedule parameter must be provided with True/False values, e.g. --jobschedule enabled=False
get-importworkload-list and get-orphaned-workloads-list show incorrect workloads
The commands get-importworkload-list and get-orphaned-workloads-list show incorrect workloads in the output.
Workaround:
Use the --project option with the get-importworkload-list CLI command to get the list of workloads that can be imported into any particular project of the OpenStack cloud.
workloadmgr workload-get-importworkloads-list --project_id <project_id>
Trilio seamlessly integrates with OpenStack, functioning exclusively through APIs utilizing the OpenStack Endpoints. Furthermore, Trilio establishes its own set of OpenStack endpoints. Additionally, both the Trilio appliance and compute nodes interact with the backup target, impacting the network strategy for a Trilio installation.
OpenStack comprises three endpoint groupings:
Public Endpoints
Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack.
Internal Endpoints
Internal endpoints are intended to be used by the OpenStack services to communicate with each other.
Admin Endpoints
Admin endpoints are meant to be used by OpenStack administrators.
Among these three endpoint categories, it's important to note that the admin endpoint occasionally hosts APIs not accessible through any other type of endpoint.
To learn more about OpenStack endpoints please visit the official OpenStack documentation.
Trilio communicates with all OpenStack services through a designated endpoint type, determined and configured during the deployment of Trilio's services.
It is recommended to configure connectivity through the admin endpoints if available.
The following network requirements can be identified this way:
Trilio services need access to the Keystone admin endpoint on the admin endpoint network if it is available.
Trilio services need access to all endpoints of the set endpoint type during deployment.
Trilio recommends granting comprehensive access to all OpenStack endpoints for all Trilio services, aligning with OpenStack's established standards and best practices.
Additionally, Trilio generates its own endpoints, which are integrated within the same network as other OpenStack API services.
To adhere to OpenStack's prescribed standards and best practices, it's advisable that Trilio containers operate on the same network as other OpenStack containers.
The public endpoint to be used by OpenStack users when using Trilio CLI or API
The internal endpoint to communicate with the OpenStack services
The admin endpoint to use the required admin-only APIs of Keystone
The Trilio solution uses backup target storage to place the backup data securely. Trilio divides its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio Workloadmgr services through communication with the OpenStack Endpoints. All metadata that is stored together with a backup is written by the Trilio Workloadmgr services to the backup target in the JSON format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the Volume Data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.
The network requirements are therefore:
Every Trilio Workloadmgr service container needs access to the backup target
Every Trilio Datamover service container needs access to the backup target
Before embarking on the installation process for Trilio in your OpenStack environment, it is highly advisable to carefully consider several key elements. These considerations will not only streamline the installation procedure but also ensure the optimal setup and functionality of Trilio's solutions within your OpenStack infrastructure.
Trilio leverages Cinder snapshots to facilitate the computation of both full and incremental backups.
When executing full backups, Trilio orchestrates the generation of Cinder snapshots for all volumes included in the backup job. These Cinder snapshots remain intact for subsequent incremental backup image calculations.
During incremental backup operations, Trilio generates fresh Cinder snapshots and computes the altered blocks between these new snapshots and the earlier retained snapshots from full or previous backups. The old snapshots are subsequently deleted, while the newly generated snapshots are preserved.
Consequently, it becomes imperative for each tenant benefiting from Trilio's backup functionality to possess adequate Cinder snapshot quotas capable of accommodating these supplementary snapshots. As a guiding principle, it is recommended to append 2 snapshots for each volume incorporated into the backup quotas for the respective tenant. Additionally, a commensurate increase in volume quotas for the tenant is advisable, as Trilio briefly materializes a volume from the snapshot to access data for backup purposes.
During the restoration process, Trilio generates supplementary instances and Cinder volumes. To facilitate seamless restore operations, tenants should maintain adequate quota levels for Nova instances and Cinder volumes. Failure to meet these quota requirements may lead to disruptions in restoration procedures.
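As a rough illustration, the tenant quotas can be raised with the standard OpenStack client; the numbers below are placeholders and should be sized to the number of protected volumes and instances:

```bash
# Illustrative only: allow roughly 2 extra snapshots and 1 temporary volume per
# protected volume, plus the instances/volumes created during restores.
openstack quota set --snapshots 20 --volumes 20 --instances 10 <project_id>
```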
The AWS S3 object consistency model includes:
Read-after-write
Read-after-update
Read-after-delete
Each of these models explains how an object becomes consistent after being created, updated, or deleted. None of these methods ensures strong consistency, leading to a delay before an object becomes fully consistent.
Although Trilio has introduced measures to address AWS S3's eventual consistency limitations, the exact time an object achieves consistency cannot be predicted through deterministic means.
There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write has a shorter time to reach consistency compared to other IO patterns. Therefore, our solution is designed to maximize the read-after-write IO pattern.
The time in which an object reaches eventual consistency also depends on the AWS region.
For instance, the AWS-standard region doesn't offer the same level of strong consistency as regions like us-east or us-west. Opting for these regions when setting up S3 buckets for Trilio is advisable. While fully avoiding the read-after-update IO pattern is complex, we've introduced significant access delays for objects to achieve consistency over longer periods. On rare occasions when this does happen, it will cause a backup failure and require a retry.
Trilio can be deployed as a single node or a three-node cluster. It is highly recommended that Trilio is deployed as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP or FQDN for the cluster; this is required for both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and to register the Trilio service endpoint in the Keystone service catalog.
Trilio Data, Inc. is the leader in providing backup and recovery solutions for cloud-native applications. Established in 2013, its flagship products, Trilio for OpenStack(T4O) and Trilio for Kubernetes(T4K) are used by a wide number of large corporations around the world.
T4O, by Trilio Data, is a native OpenStack service-based solution that provides policy-based comprehensive backup and recovery for OpenStack workloads. It captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS, AWS S3, and other S3-compatible storages. With Trilio and its one-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection, and integrity.
With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes point-in-time backup of the entire workload consisting of compute resources, network configurations, and storage data as one unit. It also takes incremental backups that capture only the changes that were made since the last backup. Incremental snapshots save a considerable amount of storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore could be summarized below:
This documentation serves as the end-user technical resource to accompany Trilio for OpenStack. You will learn about the architecture, installation, and vast number of operations of this product. The intended audience is anyone who wants to understand the value, operations, and nuances of protecting their cloud-native applications with Trilio for OpenStack.
Learn about artifacts related to Trilio for OpenStack 6.0.0-beta
T4O represents a comprehensively containerized deployment model, eliminating the necessity for any KVM-based appliances for service deployment. This marks a departure from earlier T4O releases, where such appliances were required.
Trilio requires its containers to be deployed on the same plane as OpenStack, utilizing existing cluster resources.
As described in the , Trilio requires sufficient cluster resources to deploy its components on both the Controller Plane and Compute Planes.
Valid Trilio License & Acceptance of the
Sufficient resources available on the target OpenShift cluster nodes
Sufficient storage capacity and connectivity on Cinder for snapshotting operations
Sufficient network capabilities for efficient data transfer of workloads
User and Role permissions for access to required cluster objects
Optional features may have specific requirements such as encryption, file search, snapshot mount, FRM instance, etc
Set the hw_qemu_guest_agent=True property on the image and install qemu-guest-agent on the VM in order to avoid any file system inconsistencies post-restore.
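A minimal example of setting this property with the standard OpenStack client (the image name is a placeholder):

```bash
# Enable the QEMU guest agent property on the image used by the protected VMs
openstack image set --property hw_qemu_guest_agent=True <image-name-or-id>
```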
For the VMware to OpenStack migration feature, please refer to the and pages.
Trilio Release | RHOSP version | Linux Distribution | Supported ? |
---|---|---|---|
All versions of T4O-6.x releases support NFSv3 and S3 as backup targets on all the compatible distributions.
All versions of T4O-6.x releases support encryption using Barbican service on all the compatible distributions.
Running the QEMU guest agent inside the VM being backed up is highly recommended to avoid data corruption during the backup process. The QEMU Guest Agent is a daemon that runs inside a virtual machine (VM) and communicates with the host system (the hypervisor) to provide enhanced management and control of the VM. It is an essential component in virtualized environments, especially OpenStack.
The RHOSP Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Refer to the below-mentioned acceptable values for the placeholders triliovault_tag, trilio_branch, RHOSP_version, and CONTAINER-TAG-VERSION in this document as per the OpenStack environment:
Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
---|---|---|---|---|
Backup target storage is used to store backup images taken by Trilio. The details needed for configuration are listed below.
The following backup target types are supported by Trilio:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the undercloud node of an already installed RHOSP environment.
The overcloud-deploy command has to have been run successfully already and the overcloud should be available.
All commands need to be run as the stack user on the undercloud node.
The placeholder <RHOSP_RELEASE_DIRECTORY> refers to rhosp17 in the sections below.
The following command clones the triliovault-cfg-scripts github repository.
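The command itself is not reproduced in this extract; a typical invocation, assuming the public trilioData GitHub repository and the trilio_branch placeholder described earlier, would look like:

```bash
cd /home/stack
git clone -b <trilio_branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
```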
If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For all S3 backup targets with self-signed TLS certificates, the user needs to copy the CA chain files to the following location, using the given file name format, in the Trilio puppet module. Edit the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> parameters in the following command.
For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE = 's3-ca.pem', then the command to copy this CA chain file to the Trilio puppet module would be
Rename the file as vddk.tar.gz and place it at
Copy the vCenter SSL cert file to
Trilio contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as a 'stack' user
This service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services will run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
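The exact lines are not reproduced in this extract; a sketch of a Controller role excerpt, using the Trilio service names listed in the uninstallation section of this document, could look like:

```yaml
- name: Controller
  # ... existing role definition ...
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::MySQL
    # ... other existing services ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioWlmApi
    - OS::TripleO::Services::TrilioWlmWorkloads
    - OS::TripleO::Services::TrilioWlmScheduler
    - OS::TripleO::Services::TrilioWlmCron
```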
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
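Similarly, a sketch of a Compute role excerpt with the Datamover service added (service name taken from the uninstallation section of this document):

```yaml
- name: Compute
  # ... existing role definition ...
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    # ... other existing services ...
    - OS::TripleO::Services::TrilioDatamover
```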
All commands need to be run as a 'stack' user
Trilio containers are pushed to the RedHat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
There are three registry methods available in the RedHat OpenStack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to a redhat registry or any other private registry that they want to use.
The user needs to provide credentials for the registry in the containers-prepare-parameter.yaml file.
Make sure other OpenStack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.
Populate containers-prepare-parameter.yaml with content like the following. The important parameters are push_destination: false, ContainerImageRegistryLogin: true, and the registry credentials.
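The full file content is not reproduced in this extract; a sketch of the relevant parameters, assuming the standard TripleO layout of containers-prepare-parameter.yaml (namespace and credentials are placeholders), could look like:

```yaml
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: false   # pull images directly from the remote registry
      set:
        # ... existing image prepare settings of your deployment ...
        namespace: <existing-namespace>
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      '<registry_username>': '<registry_password>'
    registry.connect.redhat.com:
      '<registry_username>': '<registry_password>'
```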
Trilio container images are published to the registry registry.connect.redhat.com. Credentials of the registry 'registry.redhat.io' will work for the registry.connect.redhat.com registry too.
Note: The file containers-prepare-parameter.yaml gets created as output of the command 'openstack tripleo container image prepare'. Refer to the above document by RedHat.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
4. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below:
trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
At this step, you have configured Trilio image URLs in the necessary environment file.
Follow this section when 'local registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry.
Trilio provides shell scripts that will pull the containers from registry.connect.redhat.com, push them to the undercloud registry, and update the trilio_env.yaml.
At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with the container URLs.
At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.
Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images; still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry; keep it as it is.
T4O supports setting up multiple target backends for storing snapshots. Users can define any number of storage backends as required. At a high level, NFS and S3 are supported.
The following table provides the details of the parameters to be set in the trilio_env.yaml file for all the S3 target backends.
The following table provides the details of the parameters to be set in the trilio_env.yaml file for all the NFS target backends.
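The tables themselves are not reproduced in this extract; a sketch of how these parameters typically fit together under TrilioBackupTargets in trilio_env.yaml, using the parameter names described in this document (all values are placeholders), could look like:

```yaml
parameter_defaults:
  TrilioBackupTargets:
    - backup_target_name: nfs-target-1
      backup_target_type: nfs
      is_default: true
      is_multi_ip_nfs: false
      nfs_shares: 192.168.2.3:/var/nfsshare
      nfs_options: '<nfs mount options>'
    - backup_target_name: s3-target-1
      backup_target_type: s3
      is_default: false
      s3_type: <s3_type>               # Amazon S3 or other S3-compatible storage
      s3_access_key: <access_key>
      s3_secret_key: <secret_key>
      s3_region_name: <region>
      s3_bucket: <bucket_name>
      s3_endpoint_url: <endpoint_url>  # only for S3 other than Amazon S3
      s3_ssl_enabled: true
```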
After you fill in the details of the backup targets in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script updates the 'services/triliovault-object-store.yaml' file; you do not need to verify it.
The user needs to generate random passwords for Trilio resources using the following script.
This script will generate random passwords for all Trilio resources that are going to get created in the OpenStack cloud.
Include this file in your overcloud deploy command as an environment file with the option "-e"
For this section only, the user needs to source the cloudrc file of the overcloud node.
The output will be written to
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
All commands need to be run as a 'stack' user on undercloud node
Include defaults.yaml in the overcloud deploy command with the `-e` option as shown below. This YAML file holds the default values, like the default Trustee Role being creator and the Keystone endpoint interface being Internal. There are some other parameters as well that users can update as per their requirements.
trilio_env.yaml
roles_data.yaml
passwords.yaml
defaults.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration. You have to remove your OpenStack endpoint map file from the overcloud deploy command and use the Trilio endpoint map file instead.
Instead of the tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_everywhere_dns.yaml
Instead of the no-tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_non_tls_endpoints_ip.yaml
To include new environment files, use the -e option; for roles data files, use the -r option.
Below is an example of an overcloud deploy command with Trilio environment:
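The exact command depends on the existing deployment; a hedged sketch combining the files discussed above (all paths and the chosen endpoint map file are illustrative) could look like:

```bash
# Illustrative only; keep your existing environment files and add the Trilio ones.
openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/defaults.yaml
```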
Post-deployment, for a multipath-enabled environment, log into the respective Datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, then restart the Datamover container.
Trilio components will be deployed using puppet scripts.
=> If any Trilio containers do not start well or are in a restarting state on the Controller/Compute node, use the following logs to debug.
This section is only required when the Multi-IP feature for NFS is required.
This feature allows us to set the IP to access the NFS Volume per datamover instead of globally.
Edit the file triliovault_nfs_map_input.yml in the current directory and provide the compute host and NFS share/IP map. Get the overcloud Controller and Compute hostnames from the following command; check the Name column and use the exact host names in the triliovault_nfs_map_input.yml file.
Run this command on the undercloud after sourcing stackrc.
Below is an example of how you can set the multi-IP NFS details:
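The example itself is not reproduced in this extract; as a reference, the multi_ip_nfs_map form shown in the backup-target parameter table later in this document looks like the sketch below (hostnames and share paths are placeholders; the structure of triliovault_nfs_map_input.yml itself should be taken from the examples referenced in the appendix):

```yaml
multi_ip_nfs_map:
  controller1: 192.168.2.3:/var/nfsshare
  controller2: 192.168.2.3:/var/nfsshare
  compute0: 192.168.3.2:/var/nfsshare
  compute1: 192.168.3.4:/var/nfsshare
```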
You cannot configure different IPs for the Controller/WLM nodes; you need to use the same share on all the controller nodes. You can configure different IPs for the Compute/Datamover nodes.
If pip isn't available please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be stored in the triliovault_nfs_map_output.yml file.
Open the file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below. Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as a backup target.
The existing default HAproxy configuration works fine with most environments. Only change the configuration as described here when timeout issues with the Trilio Datamover API are observed or there are other known reasons.
The following is the HAproxy conf file location on the HAproxy nodes of the overcloud. The Trilio Datamover API service HAproxy configuration gets added to this file.
Trilio Datamover HAproxy default configuration from the above file looks as follows:
The user can change the following configuration parameter values.
To change these default values, do the following steps:
i) On the undercloud node, open the following file for editing.
ii) Search for the following entries and edit them as required.
iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.
i) If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover Service container, they can use the variable 'TrilioDatamoverOptVolumes', which is available in the below file.
To add extra volume/directory mounts to the Trilio Datamover Service container, the volumes/directories must already be mounted on the Compute host.
ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. The user needs to edit the file and add their volume mounts in the below format.
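The format itself is not reproduced in this extract; a sketch, assuming the usual '<host_path>:<container_path>' bind-mount notation (paths are placeholders), could look like:

```yaml
parameter_defaults:
  TrilioDatamoverOptVolumes:
    - /mnt/extra-backup-dir:/mnt/extra-backup-dir
    - /opt/custom-certs:/opt/custom-certs:ro
```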
iii) Lastly, run the overcloud deploy/update.
After a successful deployment, you will see that the volume/directory mount is mounted inside the Trilio Datamover Service container.
Trilio uses Cinder's Ceph user for interacting with Ceph-backed Cinder storage. This user name is defined using the parameter 'ceph_cinder_user'.
Each T4O release includes a set of artifacts such as version-tagged containers, package repositories, and distribution packages.
To help users quickly identify the resources associated with each release, we have added dedicated sub-pages corresponding to a specific release version.
After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be used to verify that the Trilio installation is healthy.
Make sure the below containers are in a running state; triliovault-wlm-cron would be running on only one of the controllers in case of a multi-controller setup.
triliovault_datamover_api
triliovault_wlm_api
triliovault_wlm_scheduler
triliovault_wlm_workloads
triliovault-wlm-cron
If the containers are in a restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
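The verification command is not included in this extract; a common way to check the container states on RHOSP nodes, assuming podman is the container runtime, is:

```bash
# List all Trilio containers and their current state on the node
podman ps -a | grep -i trilio
```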
After successful deployment, the triliovault-wlm-cron service gets added to the pcs cluster as a cluster resource; you can verify this through the pcs status command.
Verify the HAproxy configuration under:
Make sure the Trilio Datamover container is in running state and no other Trilio container is deployed on compute nodes.
Check that the provided backup target is mounted correctly on the Compute host.
Make sure the Horizon container is in a running state. Please note that the Horizon container is replaced with Trilio's Horizon container. This container contains the latest OpenStack Horizon plus the Trilio Horizon plugin.
More about this feature can be found .
Please refer to to collect the necessary artifacts before continuing further.
Redhat document for remote registry method:
Pull the Trilio containers on the Red Hat Satellite using the given
Parameter | Description |
---|---|
Parameters | Description |
---|---|
Parameters | Description |
---|---|
More about this feature can be found .
Parameters | Description |
---|---|
Please follow to verify the deployment.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights:
Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to for details about the structure.
Details about multiple ceph configuration can be found .
RHOSP 17.1
Package | Version
---|---
contegoclient | 6.0.0
s3fuse | 6.0.1
tvault-horizon-plugin | 6.0.0
workloadmgr | 6.0.6
workloadmgrclient | 6.0.3
Package | Version
---|---
python3-contegoclient | 6.0.0
python3-dmapi | 6.0.0
python3-namedatomiclock | 1.1.3
python3-s3-fuse-plugin | 6.0.2
python3-tvault-contego | 6.0.0
python3-tvault-horizon-plugin | 6.0.0
python3-workloadmgrclient | 6.0.3
workloadmgr | 6.0.6
Package | Distribution | Version
---|---|---
python3-contegoclient-el8 | RHEL8/CentOS8* | 6.0.0-6.0
python3-contegoclient-el9 | Rocky9 | 6.0.0-6.0
python3-dmapi | RHEL8/CentOS8* | 6.0.0-6.0
python3-dmapi-el9 | Rocky9 | 6.0.0-6.0
python3-s3fuse-plugin | RHEL8/CentOS8* | 6.0.2-6.0
python3-s3fuse-plugin-el9 | Rocky9 | 6.0.1-6.0
python3-trilio-fusepy | RHEL8/CentOS8* | 3.0.1-1
python3-trilio-fusepy-el9 | Rocky9 | 3.0.1-1
python3-tvault-contego | RHEL8/CentOS8* | 6.0.0-6.0
python3-tvault-contego-el9 | Rocky9 | 6.0.0-6.0
python3-tvault-horizon-plugin-el8 | RHEL8/CentOS8* | 6.0.0-6.0
python3-tvault-horizon-plugin-el9 | Rocky9 | 6.0.0-6.0
python3-workloadmgrclient-el8 | RHEL8/CentOS8* | 6.0.3-6.0
python3-workloadmgrclient-el9 | Rocky9 | 6.0.2-6.0
python3-workloadmgr-el9 | Rocky9 | 6.0.5-6.0
workloadmgr | RHEL8/CentOS8* | 6.0.6-6.0
CloudAdminUserName | Default value is Provide the cloudadmin user name of your overcloud |
CloudAdminProjectName | Default value is Provide the cloudadmin project name of your overcloud |
CloudAdminDomainName | Default value is Provide the cloudadmin domain name of your overcloud |
CloudAdminPassword | Provide the cloudadmin user's password of your overcloud |
ContainerTriliovaultDatamoverImage | Trilio Datamover Container image name have already been populated in the preparation of the container images. Still it is recommended to verify the container URL. |
ContainerTriliovaultDatamoverApiImage | Trilio DatamoverApi Container image name have already been populated in the preparation of the container images. Still it is recommended to verify the container URL. |
ContainerTriliovaultWlmImage | Trilio WLM Container image name have already been populated in the preparation of the container images. Still it is recommended to verify the container URL. |
ContainerHorizonImage | Horizon Container image name have already been populated in the preparation of the container images. Still it is recommended to verify the container URL. |
TrilioBackupTargets | List of Backup Targets for TrilioVault. These backup targets will be used to store backups taken by TrilioVault. Backup target examples and the format of the NFS and S3 types are already provided in the trilio_env.yaml file. Details of the respective parameters under TrilioBackupTargets are given in the next section |
TrilioDatamoverOptVolumes | User can specify list of extra volumes that they want to mount on 'triliovault_datamover' container. Refer the |
backup_target_name | User Defined Name of the target backend. Can be any name which can help in quick identifying respective target |
backup_target_type | s3 |
is_default | Can be |
s3_type | Could be either of |
s3_access_key | S3 Access Key |
s3_secret_key | S3 Secret Key |
s3_region_name | S3 Region name |
s3_bucket | S3 Bucket |
s3_endpoint_url | S3 endpoint url |
s3_signature_version | Provide S3 signature version |
s3_auth_version | Provide S3 auth version |
s3_ssl_enabled | true |
s3_ssl_verify | true |
s3_self_signed_cert | true |
s3_bucket_object_lock_enabled | If S3 bucket is having object lock enabled, then this should be set as |
backup_target_name | User Defined Name of the target backend. Can be any name which can help in quick identifying respective target |
backup_target_type | nfs |
is_default | Can be |
nfs_options |
|
is_multi_ip_nfs |
|
nfs_shares | NFS IP and share path. To be kept in case of single IP NFS. Eg. |
multi_ip_nfs_map | NFS IPs and share paths. To be kept in case of multiple NFS IPs. Sample below multi_ip_nfs_map: controller1: 192.168.2.3:/var/nfsshare controller2: 192.168.2.4:/var/nfsshare compute0: 192.168.3.2:/var/nfsshare compute1: 192.168.3.4:/var/nfsshare |
Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
---|---|---|---|---|
6.0.0 | 6.0.0-beta-2-rhosp17.1 | 6.0.0-beta-2 | RHOSP17.1 | 6.0.0-beta-2 |
The workloadmgr CLI client is provided as rpm and deb packages.
The following operating systems have been verified with the installation:
CentOS8
Ubuntu 18.04, Ubuntu 20.04
Installing the workloadmgr client will automatically install all required OpenStack clients as well.
Further, the installation of the workloadmgr client will integrate the client into the global OpenStack Python client, if available.
The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.
The following steps need to be done to prepare the installation of the workloadmgr client:
Add required repositories
epel-release
centos-release-openstack-train
install base packages
yum -y install epel-release
yum -y install centos-release-openstack-train
These repositories are required to fulfill the following dependencies:
python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo
Enter the following details into the repository file:
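The repository details are not reproduced in this extract; a sketch of the yum repository file, with the base URL as a placeholder to be obtained from Trilio, could look like:

```ini
[trilio]
name=Trilio Repository
baseurl=<Trilio package repository URL for this release>
enabled=1
gpgcheck=0
```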
Install the workloadmgr client by issuing the following command:
yum install python3-workloadmgrclient-el8
The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.
There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list
Enter the following details into the repository file:
Run apt update to make the new repository available.
The apt package manager is used to install the workloadmgr client package:
apt-get install python3-workloadmgrclient
The following steps need to be run on all nodes which have the Trilio Datamover API & Workloadmanager services running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the below entries.
Once the role that runs the Trilio Datamover API & Workloadmanager services has been identified, the following commands will clean the nodes from the services.
Run all commands as root or user with sudo permissions.
Remove the triliovault_datamover_api container.
Clean Trilio Datamover API service conf directory.
Clean Trilio Datamover API service log directory.
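The cleanup commands themselves are not included in this extract; a sketch for these three steps, assuming podman as the container runtime and using a placeholder for the puppet-generated conf directory, could look like:

```bash
# Remove the Datamover API container, then clean its conf and log directories
podman rm -f triliovault_datamover_api
rm -rf /var/lib/config-data/puppet-generated/<triliovault_datamover_api_conf_dir>
rm -rf /var/log/containers/triliovault-datamover-api
```

The remaining containers listed below can be cleaned up following the same pattern.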
Remove the triliovault_wlm_api container.
Clean Trilio Workloadmanager API service conf directory.
Clean Trilio Workloadmanager API service log directory.
Remove the triliovault_wlm_workloads container.
Clean Trilio Workloadmanager Workloads service conf directory.
Clean Trilio Workloadmanager Workloads service log directory.
Remove the triliovault_wlm_scheduler container.
Clean Trilio Workloadmanager Scheduler service conf directory.
Clean Trilio Workloadmanager Scheduler service log directory.
Remove the triliovault-wlm-cron-podman-0 container from the controller.
Clean Trilio Workloadmanager Cron service conf directory.
Clean Trilio Workloadmanager Cron service log directory.
The following steps need to be run on all nodes which have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.
Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes from the service.
Run all commands as root or user with sudo permissions.
Remove the triliovault_datamover container.
Unmount the Trilio Backup Target on the compute host.
Clean Trilio Datamover service conf directory.
Clean log directory of Trilio Datamover service.
Remove wlm cron resource from pcs cluster on the controller node.
The following steps need to be run on all nodes which have the HAproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.
Once the role that runs the HAproxy service has been identified, the following commands will clean the nodes from all the Trilio resources.
Run all commands as root or user with sudo permissions.
Edit the following file inside the HAproxy container and remove all Trilio entries.
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
An example of these entries is given below.
Restart the HAproxy container once all edits have been done.
Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
Trilio creates databases for dmapi and workloadmgr services. These databases need to be cleaned.
Login into the database cluster
Run the following SQL statements to clean the database.
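The SQL statements themselves are not included in this extract; a sketch, assuming the databases are named after the dmapi and workloadmgr services, could look like:

```sql
-- Database names are assumptions derived from the service names
DROP DATABASE IF EXISTS dmapi;
DROP DATABASE IF EXISTS workloadmgr;
```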
Remove the following entries from the roles_data.yaml used in the overcloud deploy command.
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron
OS::TripleO::Services::TrilioDatamover
In the case that the overcloud deploy command used prior to the deployment of Trilio is still available, it can be used directly.
Follow these steps to clean the overcloud deploy command from all Trilio entries.
Remove trilio_env.yaml entry
Remove the Trilio endpoint map file and replace it with the original map file if it exists.
Run the cleaned overcloud deploy command.
Trilio employs a base64 hash to establish the mount point for NFS Backup targets, ensuring compatibility across multiple NFS Shares within a single Trilio installation. This hash is an integral component of Trilio's incremental backups, functioning as an absolute path for backing files.
Consequently, during a disaster recovery or rapid migration situation, the utilization of a mount bind becomes necessary.
In scenarios that allow for a comprehensive migration period, an alternative approach comes into play. This involves modifying the backing file, thereby enabling the accessibility of Trilio backups from a different NFS Share. The backing file is updated to correspond with the mount point of the new NFS Share.
Trilio provides a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.
Please request the shell script from your Trilio Customer Success Manager or Customer Success Engineer by opening a case from our Customer Portal. It is not publicly available for download at this time.
The following requirements need to be met before the change of the backing file can be attempted.
The Trilio Appliance has been reconfigured with the new NFS Share
The Openstack environment has been reconfigured with the new NFS Share
Please check here for Red Hat Openstack Platform
The workloads are available on the new NFS Share
The workloads are owned by nova:nova user
The shell script changes one workload at a time.
The shell script has to be run as the nova user; otherwise, the owner will get changed and the backup cannot be used by Trilio.
Run the following command, with /var/triliovault-mounts/<base64>/ being the new NFS mount path and workload_<workload_id> being the workload to rebase:
The shell script generates a log file at the following location:
The log file will not get overwritten when the script is run multiple times; each run of the script appends to the existing log file.
It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types, or to have the Nova boot volumes and Cinder block volumes controlled by different users.
If multiple Ceph storages are configured/integrated with the OpenStack, please ensure that respective conf and keyring files are present in /etc/ceph directory.
In the case of multiple Ceph users, it is required to delete the keyring extension from the triliovault-datamover.conf inside the [ceph] block by following the below-mentioned steps:
Deploy Trilio as per the documented steps.
Post successful deployment, please modify the triliovault-datamover.conf file present at the following locations on all compute nodes.
For RHOSP : /var/lib/config-data/puppet-generated/triliovaultdm/etc/triliovault-datamover/
For Kolla : /etc/kolla/triliovault-datamover/
Modify the keyring_ext value with a valid keyring extension (e.g. .keyring). This extension is expected to be the same for all the keyring files. It is present under the [ceph] block in the triliovault-datamover.conf file.
Sample conf entry below. This will try all files with the extension keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task.
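A sketch of the resulting entry in triliovault-datamover.conf:

```ini
[ceph]
# Use all /etc/ceph/*.keyring files when accessing the Ceph cluster
keyring_ext = .keyring
```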
Restart the triliovault_datamover container on all compute nodes.
Please ensure the following requirements are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Please follow this documentation to install the latest Trilio release.
This step would be needed only when your backup target is NFS and if you are upgrading from T4O 4.1 or older releases.
T4O has changed the calculation of the mount point from 4.2 release onwards. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2
Please follow this documentation to set up the mount bind for RHOSP.
After the upgrade, the global job scheduler will be disabled. You can enable it either through the UI or the CLI.
Login to the Dashboard and go to Admin -> Backups-Admin -> Settings tab and check the 'Job Scheduler Enabled' checkbox. By clicking the 'Change' button you can enable the Global Job Scheduler.
Log in to any WLM container, create and source the admin rc file, and execute the CLI command to enable the global job scheduler.
Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
Please follow this documentation to import the Workloads.
The high-level process is the same for all Distributions.
Uninstall the Horizon Plugin or the Trilio Horizon container
Uninstall the datamover-api container
Uninstall the datamover container
Uninstall the workloadmgr container
Ceph is the most common OpenSource solution to provide block storage through OpenStack Cinder.
Ceph is a very flexible solution. This flexibility requires additional configuration steps for the Trilio solution.
Filename and location:
This file exclusively comes into play when users aim to configure Trilio with an NFS backup target that employs multiple network endpoints. For all other scenarios, such as single-IP NFS or S3, this file remains inactive; in such instances, please consult the standard installation documentation.
When using an NFS backup target with multiple network endpoints, T4O will mount a single IP/endpoint on a designated compute node for a specific NFS share. This approach enables users to distribute NFS share IPs/endpoints across various compute nodes.
The 'triliovault_nfs_map_input.yml' file allows users to distribute/load balance NFS share endpoints across compute nodes in a given cloud.
Note: Two IPs/endpoints of the same NFS share on a single compute node is an invalid scenario and not required, because the backend stores data at the same place.
Here, the user has one NFS share exposed with three IP addresses: 192.168.1.34, 192.168.1.35, 192.168.1.33.
Share directory path is: /var/share1
So, this NFS share supports the following full paths that clients can mount:
There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern
The remaining 2 node hostnames do not follow any format/pattern.
Now the mapping file will look like this
Compute node IP range used here: 172.30.3.11-40 and 172.30.4.40, 172.30.4.50 Total of 32 compute nodes
Other complex examples are available on github at triliovault-cfg-scripts/common/examples-multi-ip-nfs-map at master · trilioData/triliovault-cfg-scripts
Use the following command to get the compute hostnames. Check the 'Name' column and use these exact hostnames in the 'triliovault_nfs_map_input.yml' file.
In the following command output, 'overcloudtrain1-novacompute-0' and 'overcloudtrain1-novacompute-1' are the correct hostnames.
Run this command on undercloud by sourcing 'stackrc'.
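The command itself is not reproduced in this extract; on releases where the undercloud still runs Nova, `openstack server list` typically shows the overcloud nodes and their names, while newer director releases provide the same information via `metalsmith list`:

```bash
source ~/stackrc
# Either command lists the overcloud nodes; use the values from the 'Name' column
openstack server list -c Name -c Networks
metalsmith list
```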
The below Trilio containers get deployed on the Controller node.
triliovault_datamover_api
triliovault_wlm_api
triliovault_wlm_scheduler
triliovault_wlm_workloads
triliovault-wlm-cron
The log files for the above Trilio services can be found here:
/var/log/containers/triliovault-datamover-api/triliovault-datamover-api.log
/var/log/containers/triliovault-wlm-api/triliovault-wlm-api.log
/var/log/containers/triliovault-wlm-cron/triliovault-wlm-cron.log
/var/log/containers/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log
/var/log/containers/triliovault-wlm-workloads/triliovault-wlm-workloads.log
In the case of using S3 as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.
/var/log/containers/triliovault-wlm-workloads/triliovault-object-store.log
For the file search operation, logs can be found on the Controller node at the below location:
/var/log/containers/triliovault-wlm-workloads/workloadmgr-filesearch.log
Trilio Datamover container gets deployed on the Compute node.
The log file for the Trilio Datamover service can be found at:
/var/log/containers/triliovault-datamover/triliovault-datamover.log
Troubleshooting inside a complex environment like OpenStack can be very time-consuming.
The following tips will help to speed up the troubleshooting process to identify root causes.
OpenStack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.
The Trilio Workloadmgr is the Controller of Trilio. It receives all Workload related requests from the users.
Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.
During a backup process, the Trilio Workloadmgr is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls to the OpenStack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Workloadmgr writes it as JSON files on the Backup Target.
The Trilio Workloadmgr also sends the Cinder snapshot command.
During the restore process, the Trilio Workloadmgr reads the VM metadata from its Database and uses the metadata to create the Shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.
The dmapi service is the connector between the Trilio cluster and the Datamover running on the compute nodes.
The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova API, requesting the compute host of a provided VM.
Once the compute host has been identified, the dmapi forwards the command from the Trilio Workloadmgr to the datamover running on the identified compute host.
The datamover is the Trilio service running on the compute nodes.
Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.
The datamover controls the freeze and thaw of VMs as well as the actual movement of the data.
Trilio reads and writes on the Backup Target as nova:nova.
The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise, backups or restores may fail with permission or file-not-found issues.
Alternative ways to achieve this goal are possible, as long as all required nodes can fully read and write as nova:nova on the Backup Target.
It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
On Cohesity NFS, if an Input/Output error is observed, increase the timeo and retrans parameter values in your NFS options.
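As an illustration, the nfs_options value of the backup target could be extended with larger timeout and retry values (the numbers are placeholders to be tuned for the environment):

```yaml
nfs_options: 'nolock,soft,timeo=600,retrans=5'
```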
Log in to all Datamover containers and add uxsock_timeout with a value of 60000 (equal to 60 seconds) inside /etc/multipath.conf, then restart the Datamover container.
Trilio uses RBAC to allow the usage of Trilio features by users.
This trustee role is absolutely required and can not be overwritten using the admin role.
It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during the creation of Workloads, backups, or restores.
Trilio creates Cinder snapshots and temporary Cinder volumes. The OpenStack quotas need to allow that.
Every disk that is getting backed up requires one temporary Cinder volume.
Every Cinder volume that is getting backed up requires two Cinder snapshots. The second Cinder snapshot is temporary and is used to calculate the incremental.
VmwareToOpenstackMigrationEnabled | Set it to |
VcenterUrl | vCenter access URL, example: |
VcenterUsername |
VcenterPassword | Access user's Password |
VcenterNoSsl | If the connection is to be established securely, set it to |
VcenterCACertFileName |
After all Trilio components are installed, the license can be applied.
The license can be applied either through the Admin tab in Horizon or the CLI
To apply the license through Horizon, follow these steps:
Login to Horizon using admin user.
Click on the Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to License
Click "Update License"
Read the license agreement
Click on "I accept the terms in the License Agreement"
Click on "Next"
Click "Choose File"
Choose license file on the client system
Click "Apply"
Read and accept the End User License Agreement to complete the license application.
Users can preview the latest EULA at our main site: https://trilio.io/eula/
The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.
The file search tab is part of every workload overview. To reach it follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload a file search shall be done in
Click the workload name to enter the Workload overview
Click File Search to enter the file search tab
A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.
To run a file search the following elements need to be decided and configured
Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.
VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.
The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.
The File Path has to start with a '/'
Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'
The file search does not go into deeper directories and always searches on the directory provided in the File Path
Example File Path for all files inside /etc : /etc/*
"Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.
There are 3 possibilities for a pre-filtering:
All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots
Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.
Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.
After the pre-filtering is done, all matching Snapshots are automatically pre-chosen. Uncheck any Snapshot that shall not be searched.
When no Snapshot is chosen, the file search will not start.
To start a File Search the following elements need to be set:
A VM to search in has to be chosen
A valid File Path provided
At least one Snapshot to search in selected
Once those have been set click "Search" to start the file search.
Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.
After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.
For each found file or folder, the following information is provided:
POSIX permissions
Amount of links pointing to the file or folder
User ID who owns the file or folder
Group ID assigned to the file or folder
Actual size in Bytes of the file or folder
Time of creation
Time of last modification
Time of last access
Full path to the found file or folder
Once the Snapshot of interest has been identified, it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.
A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.
To reach the list of Restores for a Snapshot follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
To reach the detailed Restore overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
Identify the restore to show
Click the restore name
The Restore Details Tab shows the most important information about the Restore.
Name
Description
Restore Type
Status
Time taken
Size
Progress Message
Progress
Host
Restore Options
The Restore Options are the restore.json provided to Trilio.
List of VMs restored
restored VM Name
restored VM Status
restored VM ID
The Misc tab provides additional Metadata information.
Creation Time
Restore ID
Snapshot ID containing the Restore
Workload
Once a Restore is no longer needed, it can be safely deleted from a Workload.
Deleting a Restore will only delete the Trilio information about this Restore. No OpenStack resources are deleted.
There are 2 possibilities to delete a Restore.
To delete a single Restore through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Click "Delete Restore" in the line of the restore in question
Confirm by clicking "Delete Restore"
To delete one or more Restores through the Restore list do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Enter the Snapshot by clicking the Snapshot name
Navigate to the Restore tab
Check the checkbox for each Restore that shall be deleted
Click "Delete Restore" in the menu above
Confirm by clicking "Delete Restore"
Ongoing Restores can be canceled.
To cancel a Restore in Horizon follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Identify the ongoing Restore
Click "Cancel Restore" in the line of the restore in question
Confirm by clicking "Cancel Restore"
The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:
be located in the same cluster in the same datacenter
use the same storage domain
connect to the same network
have the same flavor
The user can't change any Metadata.
The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one VM still exists, the One Click Restore will fail.
The One Click Restore will automatically update the Workload to protect the restored VMs.
There are 2 possibilities to start a One Click Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click "One Click Restore" in the same line as the identified Snapshot
(Optional) Provide a name / description
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "One Click Restore"
(Optional) Provide a name / description
Click "Create"
The Selective Restore is the most complex restore Trilio has to offer. It allows the user to adapt the restored VMs to their exact needs.
With the selective restore the following things can be changed:
Which VMs are getting restored
Name of the restored VMs
Which networks to connect with
Which Storage domain to use
Which DataCenter / Cluster to restore into
Which flavor the restored VMs will use
The Selective Restore is always available and has no prerequisites.
The Selective Restore will automatically update the Workload to protect the created instance in case the original instance no longer exists.
There are 2 possibilities to start a Selective Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs to be rolled back for other reasons.
It allows the user to restore only the data of a selected Volume that is part of a backup.
The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio verifies this using the saved Object ID.
The Inplace Restore will not create any new Openstack resources. Please use one of the other restore options if new Volumes or VMs are required.
There are 2 possibilities to start an Inplace Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
The workloadmgr client CLI uses a restore.json file to define the restore parameters for the selective and the inplace restore.
An example for a selective restore of this restore.json is shown below. A detailed analysis and explanation is given afterwards.
The restore.json requires detailed information about the backed-up resources. All required information can be gathered from the Snapshot overview.
Before the exact details of the restore are provided, it is necessary to provide the general metadata for the restore.
The openstack section starts the exact definition of the restore.
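For orientation, a minimal sketch of this general metadata is shown below. The field names follow the common layout of a Trilio selective-restore restore.json, but they are assumptions here and should be verified against a restore.json generated by your own Trilio installation.

```bash
# Hedged sketch: write the general metadata skeleton of a selective restore.json.
# All field names and values are illustrative assumptions; verify against your installation.
cat > restore.json <<'EOF'
{
  "name": "Selective restore of workload XYZ",
  "description": "Restore triggered via workloadmgr CLI",
  "oneclickrestore": false,
  "restore_type": "selective",
  "type": "openstack",
  "openstack": {
    "instances": [],
    "restore_topology": false,
    "networks_mapping": { "networks": [] }
  }
}
EOF
```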
The Selective Restore requires a lot of information to be able to execute the restore as desired.
This information is divided into 3 components:
instances
restore_topology
networks_mapping
This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.
Even VMs that are not to be restored must be included in the restore.json to allow a clean execution of the restore.
Each instance requires the following information
All further information is only required when the instance is part of the restore.
To use the next free IP available, set Nics to an empty list [ ].
Using an empty list for Nics combined with a Network Topology Restore will automatically restore the original IP address of the instance.
The root disk needs to be at least as big as the root disk of the backed up instance was.
The following example describes a single instance with all values.
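A hedged sketch of a single instance entry is shown below. The UUIDs are placeholders and the nested key names (for example flavor, nics, vdisks) are assumptions that should be checked against a restore.json produced by your Trilio version.

```bash
# Hedged sketch: one entry of the "instances" list in restore.json.
# All key names and IDs are illustrative assumptions.
cat <<'EOF'
{
  "id": "<ID of the VM inside the Snapshot>",
  "include": true,
  "name": "restored-vm-01",
  "availability_zone": "nova",
  "flavor": { "ram": 2048, "vcpus": 2, "disk": 20, "ephemeral": 0, "swap": 0 },
  "nics": [],
  "vdisks": [
    {
      "id": "<ID of the backed-up volume>",
      "new_volume_type": "<target volume type>",
      "availability_zone": "nova"
    }
  ]
}
EOF
```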
Do not mix network topology restore together with network mapping.
To activate a network topology restore set:
To activate network mapping set:
When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:
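A hedged sketch of both options follows. The keys restore_topology and networks_mapping come from the component list above; the nested field names are assumptions to be verified against a generated restore.json.

```bash
# Hedged sketch: activating a network topology restore.
cat <<'EOF'
"restore_topology": true
EOF

# Hedged sketch: activating network mapping instead, with the mapping details
# provided in the networks_mapping block (nested field names are assumptions).
cat <<'EOF'
"restore_topology": false,
"networks_mapping": {
  "networks": [
    {
      "snapshot_network": { "id": "<network ID in the Snapshot>", "subnet": { "id": "<subnet ID>" } },
      "target_network": { "id": "<existing target network ID>", "subnet": { "id": "<existing subnet ID>" } }
    }
  ]
}
EOF
```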
The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.
When the boot disk is at the same time a Cinder Disk, both values need to be set to true.
No network information is required, but the fields have to exist with empty values for the restore to work.
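A minimal hedged sketch of such an in-place restore.json is shown below. The key names are assumptions (the boot-disk and Cinder-volume flags follow the note above), and the network fields are kept as empty values.

```bash
# Hedged sketch: a minimal in-place restore.json. Key names are illustrative assumptions;
# note the empty network fields, which must exist for the restore to work.
cat > restore.json <<'EOF'
{
  "name": "Inplace restore of workload XYZ",
  "description": "Roll back corrupted volume data",
  "oneclickrestore": false,
  "restore_type": "inplace",
  "type": "openstack",
  "openstack": {
    "instances": [
      {
        "id": "<ID of the VM inside the Snapshot>",
        "include": true,
        "restore_boot_disk": true,
        "vdisks": [
          { "id": "<ID of the backed-up volume>", "restore_cinder_volume": true }
        ]
      }
    ],
    "restore_topology": false,
    "networks_mapping": { "networks": [] }
  }
}
EOF
```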
A Snapshot is a single Trilio backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
The List of Snapshots for the chosen Workload contains the following additional information:
Creation Time
Name of the Snapshot
Description of the Snapshot
Total amount of Restores from this Snapshot
Total amount of succeeded Restores
Total amount of failed Restores
Snapshot Type
Snapshot Size
Snapshot Status
Snapshots are automatically created by the Trilio scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.
There are 2 possibilities to create a snapshot on demand.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.
To reach the Snapshot Overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
The Snapshot Details Tab shows the most important information about the Snapshot.
Snapshot Name / Description
Snapshot Type
Time Taken
Size
Which VMs are part of the Snapshot
for each VM in the Snapshot
Instance Info - Name & Status
Security Group(s) - Name & Type
Flavor - vCPUs, Disk & RAM
Networks - IP, Networkname & Mac Address
Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size
Misc - Original ID of the VM
The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.
Please refer to the Restores User Guide to learn more about Restores.
The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.
Creation Time
Last Update time
Snapshot ID
Workload ID of the Workload containing the Snapshot
Once a Snapshot is no longer needed, it can be safely deleted from a Workload.
The retention policy will automatically delete the oldest Snapshots according to the configured policy.
You have to delete all Snapshots to be able to delete a Workload.
Deleting a Trilio Snapshot will not delete any Openstack Cinder Snapshots. Those need to be deleted separately if desired.
There are 2 possibilities to delete a Snapshot.
To delete a single Snapshot through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Delete Snapshot"
Confirm by clicking "Delete"
To delete one or more Snapshots through the Snapshot overview do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Check the checkbox for each Snapshot that shall be deleted
Click "Delete Snapshots"
Confirm by clicking "Delete"
Ongoing Snapshots can be canceled.
Canceled Snapshots will be treated like errored Snapshots
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to cancel
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click "Cancel" on the same line as the identified Snapshot
Confirm by clicking "Cancel"
A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed. But each VM can only be part of one Workload.
Using encrypted Workloads will lead to longer backup times. The following timings have been seen in Trilio labs:
Snapshot time for an LVM Volume booted CentOS VM. Disk size 200 GB; total data including OS: ~108 GB
For unencrypted WL: 62 min
For encrypted WL: 82 min
Snapshot time for a Windows Image booted VM. No additional data except OS: ~12 GB
For unencrypted WL: 10 min
For encrypted WL: 18 min
To view all available workloads of a project inside Horizon do:
Login to Horizon
Navigate to Backups
Navigate to Workloads
The overview in Horizon lists all workloads with the following additional information:
Creation time
Workload Name
Workload description
Total amount of Snapshots inside this workload
Total amount of succeeded Snapshots
Total amount of failed Snapshots
Status of the Workload
The encryption options of the workload creation process are only available when the Barbican service is installed and available.
To create a workload inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Click "Create Workload"
Provide Workload Name and Workload Description on the first tab "Details"
Choose the Policy if available to use on the first tab "Details"
Choose if the Workload is encrypted on the first tab "Details"
Provide the secret UUID if Workload is encrypted on the first tab "Details"
Choose the VMs to protect on the second Tab "Workload Members"
Decide for the schedule of the workload on the Tab "Schedule"
Provide the Retention policy on the Tab "Policy"
Choose the Full Backup Interval on the Tab "Policy"
If required check "Pause VM" on the Tab "Options"
Click create
The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.
A workload contains a lot of information, which can be seen in the workload overview.
To enter the workload overview inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
The Workload Details tab provides the most important general information about the workload:
Name
Description
Availability Zone
List of protected VMs including the information of qemu guest agent availability
The status of the qemu-guest-agent only shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.
It is possible to navigate to the protected VM directly from the list of protected VMs.
The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.
From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.
The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:
Scheduler Enabled / Disabled
Start Date / Time
End Date / Time
RPO
Time till next Snapshot run
Retention Policy and Value
Full Backup Interval policy and value
The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need for a restore.
Please refer to the File Search User Guide to learn more about this feature.
The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:
Creation time
last update time
Workload ID
Workloads can be modified in all components to match changing needs.
Editing a Workload will set the User, who edits the Workload, as the new owner.
To edit a workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Modify the workload as desired - All parameters except workload type can be changed
Click "Update"
Once a workload is no longer needed it can be safely deleted.
All Snapshots need to be deleted before the workload gets deleted. Please refer to the Snapshots User Guide to learn how to delete Snapshots.
To delete a workload do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to be deleted
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Delete Workload"
Confirm by clicking "Delete Workload" yet again
Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.
It is highly recommended to use this feature only as a last resort, in case backups or restores are stuck without failing, or a restore is required while a backup is running.
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to unlock
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Unlock Workload"
Confirm by clicking "Unlock Workload" yet again
In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating a Workload in such cases, it is possible to reset a Workload.
The Workload reset will:
Cancel all ongoing tasks
Delete all existing Openstack Trilio Snapshots from the protected VMs
recalculate the next Snapshot time
take a full backup at the next Snapshot
To reset a Workload do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to reset
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Reset Workload"
Confirm by clicking "Reset Workload" yet again
Trilio can notify users via E-Mail upon the completion of backup and restore jobs.
The E-Mail will be sent to the owner of the Workload.
To use the E-mail notifications, two requirements need to be met.
Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.
As the E-Mail is sent to the owner of the Workload, the Openstack User who created the workload requires an associated E-Mail address.
Trilio needs to know which E-Mail server to use, to send the E-mail notifications. Backup Administrators can do this in the "Backup Admin" area.
E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:
Login to Horizon
Navigate to the Backups
Navigate to Settings
Check/Uncheck the box for "Enable Email Alerts"
The following screenshots show example E-Mails sent by Trilio.
Trilio's tenant driven backup service gives tenants control over backup policies. However, this can be too much control, and cloud administrators may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and take full backups at a one-hour interval. If every tenant were to pursue this backup policy, it would put a severe strain on the cloud infrastructure. Instead, if the cloud administrator defines predefined backup policies and each tenant is limited to those policies, cloud administrators can exert better control over the backup service.
A workload policy is similar to a Nova flavor: a tenant cannot create arbitrary instances, but is only allowed to use the Nova flavors published by the admin.
To see all available Workload policies in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
The following information is shown in the policy tab for each available policy:
Creation time
name
description
status
set interval
set retention type
set retention value
To create a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Click new policy
provide a policy name on the Details tab
provide a description on the Details tab
provide the RPO in the Policy tab
Choose the Snapshot Retention Type
provide the Retention value
Choose the Full Backup Interval
Click create
To edit a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to edit
click on "Edit policy" at the end of the line of the chosen policy
edit the policy as desired - all values can be changed
Click "Update"
To assign or remove a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Add/Remove Projects"
Choose projects to add or remove by using the plus/minus buttons
Click "Apply"
To delete a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Delete Policy"
Confirm by clicking "Delete"
Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.
The following Quotas can be set:
Number of Workloads a Project is allowed to have
Number of Snapshots a Project is allowed to have
Number of VMs a Project is allowed to protect
Amount of Storage a Project is allowed to use on the Backup Target
The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.
Workload Quotas are managed like any other Project Quotas.
Login into Horizon as user with admin role
Navigate to Identity
Navigate to Projects
Identify the Project to modify or show the quotas on
Use the small arrow next to "Manage Members" to open the submenu
Choose "Modify Quotas"
Navigate to "Workload Manager"
Edit Quotas as desired
Click "Save"
Trilio provides several different Quota Types. The following command lists them.
Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quotas a Tenant has to comply with.
The following command will show the details of a provided Quota Type.
The following command will create a Quota for a given project and set the provided value.
The high watermark is automatically set to 80% of the allowed value when set via Horizon.
A created Quota will generate an allowed_quota object with its own ID. This ID is needed when continuing to work with the created Quota.
The following command lists all Trilio Quotas set for a given project.
The following command shows the details about a provided allowed Quota.
The following command shows how to update the value of an already existing allowed Quota.
The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.
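The quota operations referenced throughout this section are driven through the workloadmgr CLI. A consolidated, hedged sketch of the workflow is shown below; the exact subcommand and option names vary between Trilio releases and are assumptions here, so verify them with `workloadmgr help | grep -i quota` before use.

```bash
# Hedged sketch of the Trilio quota workflow; subcommand and option names are assumptions,
# verify them against "workloadmgr help" for your release.
workloadmgr project-quota-type-list                                   # list available Quota Types
workloadmgr project-quota-type-show <quota_type_id>                   # show details of one Quota Type
workloadmgr project-allowed-quota-create --project_id <project_id> \
    --quota_type_id <quota_type_id> --allowed_value 10 --high_watermark 8   # create an allowed Quota
workloadmgr project-allowed-quota-list <project_id>                   # list allowed Quotas of a project
workloadmgr project-allowed-quota-show <allowed_quota_id>             # show one allowed Quota
workloadmgr project-allowed-quota-update --allowed_value 20 <allowed_quota_id>
workloadmgr project-allowed-quota-delete <allowed_quota_id>           # value goes back to unlimited
```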
Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.
A schedule is defined by:
Status (Enabled/Disabled)
Start Day/Time
End Day
Hrs between 2 snapshots
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
Uncheck "Enabled"
Click "Update"
To enable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
check "Enabled"
Click "Update"
To modify a schedule the workload itself needs to be modified.
Please follow this procedure to modify the workload.
Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.
This system is used during all backup and restore features.
As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the Scheduler on the Workload list page.
Trilio provides Backup-as-a-Service, which allows Openstack Users to manage and control their backups themselves. This doesn't eradicate the need for a Backup Administrator, who has an overview of the complete backup solution.
To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backup-Admin area in Horizon in addition to the API and CLI.
To access the Backups-Admin area follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
The Backups-Admin area provides the following features.
It is possible to reduce the shown information down to a single tenant, to see the exact impact the chosen Tenant has.
The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:
Storage Usage (nfs only)
Number of protected VMs compared to number of existing VMs
Number of currently running Snapshots
Status of TVault Nodes
Status of Contego Nodes
The status of nodes is filled when the services are running and in good status.
This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.
The following information is shown:
User-ID that owns the Workload
Project that contains the Workload
Workload name
Availability Zone
Amount of protected VMs
Performance information about the last 30 backups
How much data was backed up (green bars)
How long did the Backup take (red line)
Piechart showing amount of Full (Blue) Backups compared to Incremental (Red) Backups
Number of successful Backups
Number of failed Backups
Storage used by that Workload
Which Backup target is used
When is the next Snapshot run
What is the general interval of the Workload
Scheduler Status including a Switch to deactivate/activate the Workload
Administrators often need to figure out where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps with these tasks by providing the following information:
Storage used by a Tenant
VMs protected by a Tenant
It is possible to drill down to see the same information per workload and finally per protected VM.
The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.
This tab displays information about the Trilio cluster nodes. The following information is shown:
Node name
Node ID
Trilio Version of the node
IP Address
Node Status including a Switch to deactivate/activate the Node
Node status can be controlled through CLI as well.
To deactivate the Trilio Node use:
To activate the Trilio Node use:
This tab displays information about the Trilio Contego service. The following information is shown:
Service-Name
Compute Node the service is running on
Service Status from Openstack perspective (enabled/disabled)
Version of the Service
General Status
This tab displays information about the backup target storage. It contains the following information:
Storage Name
Clicking on the Storage name provides an overview of all workloads stored on that storage.
Capacity of the storage
Total utilization of the storage
Status of the storage
Statistic information
Percentage of all storage used
Percentage of storage used for full backups
Amount of Full backups versus Incremental backups
Audit logs provide the sequence of workload related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:
Time of the entry
What task has been done
Project the task has performed in
User that performed the task
The Audit log can be searched for strings, for example to find only entries done by a specific user.
Additionally, the shown timeframe can be changed as necessary.
The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.
A license validation is automatically done when opening the tab.
The following information about an active license is shown:
Organization (License name)
License ID
Purchase date - when was the license created
License Expiry Date
Maintenance Expiry Date
License value
License Edition
License Version
License Type
Description of the License
Evaluation (True/False)
EULA - when was the license agreed
Trilio will stop all activities once a license is no longer valid or expired.
The policy tab gives Administrators the possibility to work with workload policies.
This tab manages all global settings for the whole cloud. Trilio has two types of settings:
Email settings
Job scheduler settings.
These settings will be used by Trilio to send email reports of snapshots and restores to users.
Configuring the Email settings is a must-have to provide Email notification to Openstack users.
The following information is required to configure the email settings:
SMTP Server
SMTP username
SMTP password
SMTP port
SMTP timeout
Sender email address
A test email can be sent directly from the configuration page.
To work with email settings through CLI use the following commands:
To set an email setting for the first time or after deletion use:
To update an already set email setting through CLI use:
To show an already set email setting use:
To delete a set email setting use:
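A consolidated, hedged sketch of these four operations is shown below. The setting subcommands and the setting name are assumptions and should be confirmed with `workloadmgr help` for your release.

```bash
# Hedged sketch: managing an email setting through the workloadmgr CLI.
# Subcommand names and setting names are assumptions; verify with "workloadmgr help".
workloadmgr setting-create <setting_name> <value>   # set an email setting for the first time or after deletion
workloadmgr setting-update <setting_name> <value>   # update an already set email setting
workloadmgr setting-show <setting_name>             # show an already set email setting
workloadmgr setting-delete <setting_name>           # delete a set email setting
```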
The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.
To activate/deactivate the Global Job Scheduler through the Backups-Admin area:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
Navigate to the Settings tab
Click "Disable/Enable Job Scheduler"
Check or Uncheck the box for "Job Scheduler Enabled"
Confirm by clicking on "Change"
The Global Job Scheduler can be controlled through CLI as well.
To get the status of the Global Job Scheduler use:
To deactivate the Global Job Scheduler use:
To activate the Global Job Scheduler use:
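A hedged sketch of these three CLI calls follows; the subcommand names are assumptions based on the workloadmgr CLI and should be verified with `workloadmgr help`.

```bash
# Hedged sketch: controlling the Global Job Scheduler via the workloadmgr CLI.
# Subcommand names are assumptions; verify with "workloadmgr help".
workloadmgr get-global-job-scheduler       # show whether the global scheduler is enabled
workloadmgr disable-global-job-scheduler   # deactivate all scheduled workloads globally
workloadmgr enable-global-job-scheduler    # reactivate the global scheduler
```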
Migration within the same cloud to a different owner:
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create encrypted workload in Project A in Domain A via User A. Take snapshot.
Reassign workload to new owner
Load the RC file of User A and provide read-only rights through ACL to the new owner
openstack acl user add --user <userB_id> <secret_href> --insecure
Migration between clouds Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.
Reassign workload to Cloud B - Domain B — Project B — User B
Load RC file of User B.
Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.
Create token via “openstack token issue --insecure”
Add the migrated workload's metadata to the new secret (provide the issued token as Auth-Token and the workload ID in the metadata, as shown below)
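A hedged sketch of that call against the Barbican secret metadata API is shown below. The metadata key name expected by Trilio and the exact endpoint are assumptions and should be confirmed against your Trilio and Barbican versions.

```bash
# Hedged sketch: attach the migrated workload ID as metadata to the new secret in Cloud B.
# The metadata key name and the endpoint are assumptions; the X-Auth-Token is the token
# issued via "openstack token issue --insecure".
curl -k -X PUT "https://<barbican_endpoint_cloud_b>:9311/v1/secrets/<secret_uuid>/metadata" \
     -H "X-Auth-Token: <issued_token>" \
     -H "Content-Type: application/json" \
     -d '{"metadata": {"workload_id": "<migrated_workload_id>"}}'
```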
Trilio Release | RHOSP Version | Operating System |
---|---|---|
6.0.X | 17.1 | RHEL-9 |
The following versions can be upgraded to each other:
From | To |
---|---|
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Refer to the below-mentioned acceptable values for the placeholders triliovault_tag, trilio_branch, RHOSP_version and CONTAINER-TAG-VERSION in this document as per the Openstack environment:
Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
---|---|---|---|---|
Backup target storage is used to store the backup images taken by Trilio. The following details are needed for configuration:
The following backup target types are supported by Trilio
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the workstation node on an already installed RHOSP environment.
The overcloud-deploy command has to be run successfully already and the overcloud should be available.
The following command clones the triliovault-cfg-scripts github repository.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For all S3 backup targets with self-signed TLS certificates, the user needs to copy the CA chain files into the Trilio puppet module, at the following location and in the given file name format. Edit the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> parameters in the following command.
For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and your S3_SELF_SIGNED_CERT_CA_CHAIN_FILE = 's3-ca.pem', then the command to copy this CA chain file to the Trilio puppet module would be:
More about this feature can be found here.
Please refer to this page to collect the necessary artifacts before continuing further.
Rename the file as vddk.tar.gz and place it at
Copy the vCenter SSL cert file to
Trilio contains multiple services. Add these services to your roles_data.yaml.
You need to find the roles_data.yaml file that is used for the OpenStack deployment. You will find it in the 'custom-templates' directory on the workstation node, where the cloud administrator keeps all custom heat templates. This directory name can be anything.
Please add all Trilio services to this roles_data.yaml file.
This service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services will run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
This change will trigger step X: "Recreate config map tripleo-tarball-config due to change in custom templates"
Trilio containers are pushed to the RedHat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
There are two registry methods available in the RedHat OpenStack Platform 17.1 on RHOCP.
Remote Registry
Image Registry on Red Hat Satellite Server
1] Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution.
Add the 'registry.connect.redhat.com' Red Hat Connect registry credentials to the containers-prepare-parameter.yaml env file. Please refer to the example below.
If you want to use TrilioVault images from Docker Hub, use the following approach: add the 'docker.io' registry to the containers-prepare-parameter.yaml env file. Please refer to the example below.
Redhat document for remote registry method: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat document above.
2. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
3. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below:
trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
RedHat Connect Registry URL: registry.connect.redhat.com Dockerhub Registry URL: docker.io
You can use either registry from above, but make sure to use the same registry as you configured in step 1 of this section.
At this step, you have configured Trilio image URLs in the necessary environment file. You can skip step 3.2
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env.yaml with container URLs.
At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.
Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry, keep it as it is.
After you fill in the backup target details in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script will update the 'services/triliovault-object-store.yaml' file. The user does not need to verify that file.
More about this feature can be found here.
The user needs to generate random passwords for Trilio resources using the following script.
This script will generate random passwords for all Trilio resources that are going to get created in OpenStack cloud.
Include this file in your overcloud deploy command as an environment file with the option "-e"
Run all steps of this section from 'openstackclient' pod.
Login to openstackclient pod.
For this section only, the user needs to source the cloudrc file of the overcloud node.
The output will be written to
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
Use ansible ad hoc commands or playbooks to copy the Trilio puppet module from openstack client pod to all overcloud nodes.
Login to openstackclient pod.
This step copies Trilio puppet module from path '/home/cloud-admin/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio' to all controller and compute nodes at path '/etc/puppet/modules/trilio'
In this step we will copy all TrilioVault heat service templates to the custom templates folder and re-create custom-config.tar.gz. This section is valid for the following TrilioVault heat templates:
All heat template paths should be relative to the following directory:
First we need to delete the existing config map.
RHOSP Reference document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_adding-custom-environment-files-to-the-overcloud-configuration_OSPdO-with-HCI
In this step the user needs to copy the Trilio environment files to the custom environment files directory. This directory name can be anything.
Let's say this directory path is "PATH_TO/dir_custom_environment_files"
In following commands, replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_creating-ansible-playbooks-for-overcloud-configuration-with-the-openstackplaybookgenerator-CRD_OSPdO-overcloud-deploy
Before running this step, please take permission from Redhat deployment team.
Check the status of the resource and wait till its status changes to "Finished".
RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_applying-overcloud-configuration-with-director-operator_OSPdO-overcloud-deploy
In this step we will apply the new Ansible playbooks generated in the previous step to the overcloud. Use the config version of the playbooks generated in the previous step, and update the config version in the openstack-deployment.yaml file.
Apply the updated definition.
Check deployment progress and logs
Make sure you see a successful deployment message at the bottom of the following logs. You may need to adjust the deployment name 'deploy-openstack-default' as per your environment.
Please follow this documentation to verify the deployment.
This document discusses OpenStack multi-region deployments and how Trilio (or T4O, which stands for Trilio For OpenStack) can be deployed in multi-region OpenStack clouds.
OpenStack is designed to be scalable. To manage scale, OpenStack supports various resource segregation constructs: Regions, cells, and availability zones to manage OpenStack resources. Resource segregation is essential to define fault domains and localize network traffic.
From an end user's perspective, OpenStack regions are equivalent to regions in Amazon Web Services. Regions live in separate data centers, often named after their location. If your organization has a data center in Chicago and one in Boston, you'll have at least a CHG and a BOS region. Users who want to disperse their workloads geographically will place some in CHG and some in BOS. Regions have separate API endpoints for all services except for Keystone. Users, Tenants, and Domains are shared across regions through a single Keystone deployment.
Availability Zones are an end-user visible logical abstraction for partitioning a cloud without knowing the physical infrastructure. Availability zones can partition a cloud on arbitrary factors, such as location (country, data center, rack), network layout, and power source. Because of the flexibility, the names and purposes of availability zones can vary massively between clouds.
In addition, other services, such as the networking service and the block storage service, also provide an availability zone feature. However, the implementation of these features differs vastly between these different services. Please look at the documentation for these other services for more information on their implementation of this feature.
The Cells functionality enables OpenStack to scale compute in a more distributed fashion without using complicated technologies like database and message queue clustering. It supports vast deployments.
Cloud architects can partition an OpenStack Compute Cloud into groups called cells. Cells are configured as a tree. The top-level cell should have a host that runs a nova-api service but no nova-compute services. Each child cell should run all the typical nova-* services of a regular Compute cloud except for nova-api. You can think of cells as a regular Compute deployment in that each cell has its own database server and message queue broker.
OpenStack offers lots of flexibility with multi-region deployments, and each organization architects the OpenStack that meets their business needs. OpenStack only suggests that the Keystone service is shared between regions, and the rest of the service endpoints differ for different regions. RabbitMQ and MySQL databases can be shared or deployed independently.
The following code section is a snippet of OpenStack services endpoints in two regions.
Trilio backup and recovery service is architecturally similar to OpenStack services. It has an API endpoint, a scheduler service, and workload services. Cloud architects must deploy Trilio similarly to Nova or Cinder service with an instance of Trilio in each region, as shown below. Trilio deployment must support any OpenStack multi-region deployments compatible with OpenInfra recommendations.
Trilio services endpoints in a multi-region OpenStack deployment are shown below.
Reference Document: OpenStack Docs: Multiple Regions Deployment with Kolla
Deployment of Trilio on the Kolla multi-region cloud is straightforward. We need to deploy Trilio in every region of the Kolla OpenStack cloud using the Kolla-ansible deploy command.
Please take a look at the Trilio install document for Kolla.
Getting started with Trilio on Kolla-Ansible OpenStack
For example, the Kolla OpenStack cloud has three regions.
RegionOne
RegionTwo
RegionThree
To deploy Multi-Region Trilio on this cloud, we need to install Trilio in each region.
Please follow the below steps:
Identify the kolla ansible inventory file for each region.
Identify the kolla-ansible deploy command that was used for the OpenStack deployment of each region (most probably, this is the same for all regions).
The customer might have used a separate “/etc/kolla/globals.yml“ file for each region deployment. Please check those details.
Deploy Trilio for the first region (in our example, 'RegionOne'). Use its globals.yml. Follow the Trilio install guide for Kolla-ansible. No other configuration changes are needed for this deployment.
For the next region deployment (RegionTwo), identify the ansible inventory file and the '/etc/kolla/globals.yml' file for that region. Also identify the kolla-ansible deploy command used for that region.
Append the appropriate Trilio globals yml file to the /etc/kolla/globals.yml file; refer to section 3.1 of the Trilio install document.
Populate all Trilio config parameters in the ‘/etc/kolla/globals.yml’ file of RegionTwo. Or you can copy them from RegionOne’s '/etc/kolla/globals.yml' file.
Append Trilio inventory to the RegionTwo inventory file. Please take a look at section 3.4 of the Trilio install document.
Repeat step 7 (Pull Trilio container images) from the Trilio install document for RegionTwo.
If you use separate Kolla Ansible servers for each region, you must perform all the steps mentioned in the Trilio install document for Kolla again for RegionTwo. If you use the same Kolla Ansible server for all region deployments, you can skip this for RegionTwo and all subsequent regions.
Review the Trilio install document, and if any other standard config file (like /etc/kolla/passwords.yml) is defined separately for each region, check that Trilio uses that config file and perform any related steps from the Trilio install document.
Run the Kolla-ansible deploy command, which will deploy Trilio on RegionTwo.
Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.
It is recommended to do these steps once to the chosen cloud-Image and then upload the modified cloud image to Glance.
Create an Openstack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.
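A hedged example of creating such an image is shown below. The property tvault_recovery_manager=yes is referenced later in this guide; the hw_qemu_guest_agent=yes property and the image file name are assumptions based on the qemu-guest-agent steps that follow.

```bash
# Hedged sketch: create the File Recovery Manager image with the required properties.
# Property names other than tvault_recovery_manager (confirmed later in this guide) are assumptions.
openstack image create "fileRecoveryManager" \
    --file ubuntu-cloudimage.qcow2 \
    --disk-format qcow2 --container-format bare \
    --property hw_qemu_guest_agent=yes \
    --property tvault_recovery_manager=yes
```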
Spin up an instance from that image. It is recommended to have at least 8GB RAM for the mount operation. Bigger Snapshots can require more RAM.
install and activate qemu-guest-agent
Edit /etc/sysconfig/qemu-ga and remove the following from the BLACKLIST_RPC section
Disable SELINUX in /etc/sysconfig/selinux
Install python3 and lvm2
Reboot the Instance
install and activate qemu-guest-agent
Verify the loaded path of qemu-guest-agent
Follow this path when systemctl returns the following loaded path
Edit /etc/init.d/qemu-guest-agent and add the Freeze-Hook file path in the daemon args
Follow this path when systemctl returns the following loaded path
Edit the qemu-guest-agent systemd file
Add the following lines
Restart qemu-guest-agent service
Install Python3
Reboot the VM
Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.
It is possible to run the mounting process against any Openstack instance. During this process, the instance will be rebooted.
Always mount Snapshots to File Recovery Manager instances only.
To successfully mount Windows (NTFS) Snapshots, NTFS filesystem support is required on the File Recovery Manager instance.
Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.
There are 2 possibilities to mount a Snapshot in Horizon.
To mount a Snapshot through the Snapshot list follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Mount Snapshot"
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
To mount a Snapshot through the File Search results follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the File Search tab
Identify the Snapshot to be mounted
Click "Mount Snapshot" for the chosen Snapshot
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
The File Recovery Manager is a normal Linux based Openstack instance.
It can be accessed via SSH or SSH based tools like FileZilla or WinSCP.
SSH login is often disabled by default in cloud-images. Enable SSH login if necessary.
The mounted Snapshot can be found at the following path:
/home/ubuntu/tvault-mounts/mounts/
Each VM in the Snapshot has its own directory using the VM_ID as the identifier.
Sometimes a Snapshot stays mounted for a longer time, and it needs to be identified which Snapshots are mounted.
There are 2 possibilities to identify mounted Snapshots inside Horizon.
Login to Horizon
Navigate to Compute
Navigate to Instances
Identify the File Recovery Manager Instance
Click on the Name of the File Recovery Manager Instance to bring up its details
On the Overview tab look for Metadata
Identify the value for mounted_snapshot_url
The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.
This value only gets updated when a new Snapshot is mounted.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.
Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows the Trilio retention policy to purge the formerly mounted Snapshot.
Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Click "Unmount Snapshot"
Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:
Openstack User - The Openstack User-ID is assigned to a Workload
Openstack Project - The Openstack Project-ID is assigned to a Workload
Openstack Cloud - The Trilio Serviceuser-ID is assigned to a Workload
Openstack Users can update the User ownership of a Workload by modifying the Workload.
This ownership secures, that only the owners of a Workload are able to work with it.
Openstack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.
Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
The Workload import is designed to import Workloads, which are owned by the Cloud.
It will not import or list any Workloads that are owned by a different cloud.
To get a list of importable Workloads use the following CLI command:
--project_id <project_id>
Lists only workloads belonging to the given project.
To import Workloads into the Trilio database use the following CLI command:
--workloadids <workloadid>
Specify workload IDs to import only the specified workloads. Repeat the option for multiple workloads.
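A hedged sketch of both calls is given below; the subcommand names follow the workloadmgr CLI naming used elsewhere in this document, but they should be verified with `workloadmgr help`.

```bash
# Hedged sketch: list importable workloads, then import selected ones.
# Subcommand names are assumptions; verify with "workloadmgr help".
workloadmgr workload-get-importworkloads-list --project_id <project_id>
workloadmgr workload-importworkloads --workloadids <workload_id_1> --workloadids <workload_id_2>
```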
The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage but not known to the Trilio installation is considered orphaned.
A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that were migrated from a different cloud.
The following CLI command provides the list of orphaned workloads:
Running this command against a Backup Target with many Workloads can take a bit of time. Trilio reads the complete storage and verifies every found Workload against the Workloads known in the database.
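A hedged sketch of that call is shown below; the subcommand name and the migrate_cloud option are assumptions to be verified with `workloadmgr help`.

```bash
# Hedged sketch: list orphaned workloads found on the backup target.
# Subcommand name and option are assumptions; verify with "workloadmgr help".
workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True
```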
Openstack administrators are able to reassign a Workload to a new owner. This includes the possibility of migrating a Workload from one cloud to another or between projects.
Reassigning a workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.
Use the following CLI command to reassign a Workload:
A sample mapping file with explanations is shown below:
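A hedged sketch of the reassignment call and of a possible mapping file layout follows. The subcommand name and every key in the YAML are assumptions and must be checked against the mapping file template shipped with your Trilio release.

```bash
# Hedged sketch: reassign workloads using a mapping file.
# The subcommand name and all keys in the mapping file are assumptions.
cat > reassign_map.yaml <<'EOF'
reassign_mappings:
  - old_tenant_ids: []                   # projects the workloads currently belong to (empty for orphaned workloads)
    new_tenant_id: <target_project_id>   # project that should own the workloads
    user_id: <target_user_id>            # user with the backup trustee role in the target project
    workload_ids:                        # explicit workload IDs to reassign
      - <workload_id>
    migrate_cloud: True                  # set True when the workloads come from a different cloud
EOF
workloadmgr workload-reassign-workloads --map_file reassign_map.yaml
```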
This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.
The chosen scenario is following an actively used Trilio customer environment.
There are two Openstack clouds available "Openstack Cloud A" and Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations are writing their Backups to NFS targets. "Trilio A" is writing to "NFS A1" and "Trilio B" is writing to "NFS B1". The NFS Volumes used are getting synced against another NFS Volume on the other side. "NFS A1" is syncing with "NFS B2" and "NFS B1" is syncing with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.
This scenario will cover the Disaster Recovery of a single Workload and of a complete Cloud. All processes are done by the Openstack administrator.
This runbook will assume that the following is true:
"Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license
"Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs
"Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side
Access to a user with the admin role permissions on domain level
One of the Openstack clouds is down/lost
For ease of writing, this runbook assumes from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".
In the case of the usage of shared Tenant networks, beyond the floating IP, the following additional requirement is needed: All Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones are created
A single Workload can undergo Disaster Recovery in this scenario while both Clouds are still active. To do so, the following high-level process needs to be followed:
Copy the Workload directories to the configured NFS Volume
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Clean up
This process only shows how to get a Workload from "Openstack Cloud A" to "Openstack Cloud B". The vice versa process is similar.
As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".
It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.
Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:
If the Workload ID is not known, the Metadata available inside the Workload directories can be used to identify the correct Workload.
The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.
Trilio backups use qcow2 backing files, which make every incremental backup a synthetic full backup. These backing files can be made visible using the qemu-img tool.
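For example, running qemu-img info against a disk image inside a snapshot directory shows the backing file chain; the directory layout below is illustrative only.

```bash
# Display the backing file chain of an incremental backup image.
# The path below is illustrative; point it at a disk file inside a snapshot directory on the NFS mount.
qemu-img info /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_<workload_id>/snapshot_<snapshot_id>/<disk_image>
```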
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated during the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character in the NFS-Share path is different between the provided NFS-Share paths, a completely different hash value is generated.
Workloads, that have been moved between NFS-Shares, require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards a mount bind is used to make the workloads data accessible over the old and the new mount path. The following example shows the process of how to successfully identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
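For the NFS-Shares used as examples in this runbook, the values can be reproduced as follows (the encoded string is the NFS share exactly as configured):

```bash
echo -n "10.10.2.20:/upstream_source" | base64
# MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
echo -n "10.20.3.22:/upstream_target" | base64
# MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
```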
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload is coming from the NFS A1 NFS-Share, which means the mount path of that NFS-Share needs to be created on the Target Cloud and bound to the existing mount.
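A sketch of the required commands on each compute node of the Target Cloud, assuming 10.10.2.20:/upstream_source is NFS A1 (the Source Cloud share) and 10.20.3.22:/upstream_target is NFS B1 (the Target Cloud share):

```bash
# Create the mount path that the backing files of the moved workload expect
mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# Bind the existing Target Cloud mount to that path
mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 \
             /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
```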
To keep the desired mount available past a reboot, it is recommended to edit the fstab of all compute nodes accordingly.
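A possible fstab entry for the bind mount, using the same example paths:

```bash
echo "/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0" >> /etc/fstab
```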
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
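For example, with the standard OpenStack CLI (the domain name is a placeholder):

```bash
openstack project list --domain <target_domain>
```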
To allow project owners to work with the workloads as well, they are assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
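A hedged sketch of the reassign call; the exact subcommand name can vary between Trilio releases, while the options used here are the ones documented in the CLI reference of this guide:

```bash
workloadmgr workload-reassign-workloads \
    --new_tenant_id <target_project_id> \
    --user_id <backup_trustee_user_id> \
    --workload_ids <workload_id> \
    --migrate_cloud True
```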
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
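For example:

```bash
workloadmgr workload-show <workload_id> --verbose
```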
The reassigned workload can be restored using Horizon following the procedure described here.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore, a few pieces of information about the snapshot to be restored are required. The following process provides all the necessary information; example commands are shown after the list.
List all Snapshots of the workload to restore to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
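Example commands for these three steps (IDs are placeholders):

```bash
# List all Snapshots of the workload (admin user, all projects)
workloadmgr snapshot-list --workload_id <workload_id> --all True
# Show network details of the chosen Snapshot
workloadmgr snapshot-show --output networks <snapshot_id>
# Show disk details of the chosen Snapshot
workloadmgr snapshot-show --output disks <snapshot_id>
```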
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
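The following is a minimal sketch only; the field names follow the parameter list documented later in this guide, but the exact nesting should be taken from the sample restore.json shipped with the workloadmgr client, and the subcommand name may vary slightly between releases:

```bash
# Illustrative skeleton of restore.json; adapt IDs, names and mappings to the target project
cat > restore.json <<'EOF'
{
  "name": "DR restore",
  "description": "Selective restore into Openstack Cloud B",
  "oneclickrestore": false,
  "restore_type": "selective",
  "type": "openstack",
  "openstack": {
    "instances": [
      {
        "id": "<original_instance_id>",
        "include": true,
        "name": "<new_instance_name>",
        "availability_zone": "",
        "nics": [],
        "vdisks": [
          { "id": "<original_volume_id>", "new_volume_type": "", "availability_zone": "" }
        ]
      }
    ],
    "networks_mapping": {
      "networks": [
        {
          "snapshot_network": { "id": "<source_network_id>", "subnet": { "id": "<source_subnet_id>" } },
          "target_network": { "id": "<target_network_id>", "subnet": { "id": "<target_subnet_id>" } }
        }
      ]
    }
  }
}
EOF

# Start the selective restore using the prepared file
workloadmgr snapshot-selective-restore --filename restore.json \
    --display-name "DR restore" --display-description "Restore on Cloud B" <snapshot_id>
```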
To verify the success of the restore from a Trilio perspective the restore status is checked.
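For example:

```bash
workloadmgr restore-list --snapshot_id <snapshot_id>
workloadmgr restore-show <restore_id>
```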
After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.
Delete the workload that got restored.
The Trilio database follows the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.
To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
This Scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery the following high-level process needs to be followed:
Reconfigure the Target Trilio installation
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Reconfigure the Target Trilio installation back to the original one
Clean up
Before the Disaster Recovery Process can start, it is necessary to make the backups that are to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region are on hold, and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
To add NFS B2 to the Trilio Appliance cluster, Trilio can either be fully reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance; a consolidated sketch of these steps follows the list below.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Add NFS B2 to that line as a comma-separated list. Spaces are not necessary, but can be used.
Write and close the workloadmgr.conf
Restart the wlm-workloads service
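A consolidated sketch of these steps; the configuration file location and the option name (vault_storage_nfs_export) are the usual defaults and should be verified on the appliance:

```bash
vi /etc/workloadmgr/workloadmgr.conf
# vault_storage_nfs_export = 10.20.3.22:/upstream_target,10.10.2.20:/upstream_source
systemctl restart wlm-workloads
```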
Trilio integrates natively with the Openstack deployment tools. When using Red Hat director or Juju charms, it is recommended to adapt the environment files of these orchestrators and update the Datamovers through them.
To add NFS B2 to the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted; a consolidated sketch of these steps follows the list below.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Add NFS B2 to that line as a comma-separated list. Spaces are not necessary, but can be used.
Write and close the tvault-contego.conf
Restart the tvault-contego service
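A consolidated sketch of these steps on a compute node; the file location and option name are the usual defaults and should be verified for the specific deployment:

```bash
vi /etc/tvault-contego/tvault-contego.conf
# vault_storage_nfs_export = 10.20.3.22:/upstream_target,10.10.2.20:/upstream_source
systemctl restart tvault-contego
```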
Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated during the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated from the provided NFS-Share path: <NFS_IP>/<path>. If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.
Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards, a bind mount is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the bind mount.
The used hash values can be calculated using the base64 tool in any Linux distribution.
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload is coming from the NFS A1 NFS-Share, which means the mount path of that NFS-Share needs to be created on the Target Cloud and bound to the existing mount.
To keep the desired mount available past a reboot, it is recommended to edit the fstab of all compute nodes accordingly.
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
To allow project owners to work with the workloads as well, they are assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
The reassigned workload can be restored using Horizon following the procedure described here.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.
List all Snapshots of the workload to restore to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery Process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region are on hold, and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
To remove NFS B2 from the Trilio Appliance cluster, Trilio can either be fully reconfigured to use only the original NFS Volume, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the workloadmgr.conf
Restart the wlm-workloads service
Trilio integrates natively with the Openstack deployment tools. When using Red Hat director or Juju charms, it is recommended to adapt the environment files of these orchestrators and update the Datamovers through them.
To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the tvault-contego.conf
Restart the tvault-contego service
After the Disaster Recovery Process has been successfully completed and the Trilio installation has been reconfigured to its original state, it is recommended to perform the following additional steps to be ready for the next Disaster Recovery process.
The Trilio database follows the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.
To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.
This system is used during all backup and restore features.
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
Trusts can only be worked with via CLI
<trust_id> ID of the trust to show
<role_name>
Name of the role that trust is created for
--is_cloud_trust {True,False}
Set to true if creating cloud admin trust. While creating cloud trust use same user and tenant which used to configure Trilio and keep the role admin.
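A hedged example of creating the cloud admin trust from the CLI (the subcommand name may vary between releases):

```bash
workloadmgr trust-create --is_cloud_trust True admin
```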
Frequently Asked Questions about Trilio for OpenStack
Question: Does Trilio for OpenStack restore the original Instance UUIDs?
Answer: NO
Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an Inplace Restore, where we only recover the data without creating new instances.
When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.
By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.
Question: Can Trilio restore a VM's MAC address?
Answer: YES
Trilio can restore the VM's MAC address; however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.
In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.
When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.
By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.
Example of Selective Restore with original MAC (and IP address):
In this example, we have taken a Trilio backup of a VM called prod-1.
The VM is deleted and we perform a Selective Restore of a VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu:
Trilio then restores the VM with the original MAC address:
If you left the option as "Choose next available IP address", it will assign a new MAC to the VM instead as Neutron maps all MAC addresses to IP addresses on the Subnet - so logically a new IP will result in a new MAC address.
The Trilio solution uses qcow2 backing files to provide full synthetic backups.
Especially when an NFS backup target is used, there are scenarios in which these backing files need to be updated to a new mount path.
To make this process easier and more streamlined, Trilio provides the following rebase tool.
Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.
As long as the Trilio Workloads exist on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.
and Trilio for the target cloud
Notify users of Workloads being available
This procedure is designed to be applicable to all Openstack installations using Trilio. It is to be used as a starting point to develop the exact Disaster Recovery process of a specific environment.
In case the workloads shall be restored instead of only notifying the users, it is necessary to have a User in each Project that has the necessary privileges to restore.
Trilio incremental Snapshots involve a backing file to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.
Trilio is using qcow2 backing files for this feature:
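A hedged example (file names and directory layout are placeholders):

```bash
qemu-img info <incremental_backup_file>
# backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_<id>/snapshot_<previous_id>/<disk_file>
```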
As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.
Trilio uses the base64 hashing algorithm for the NFS mount-paths to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.
When the path of the backing file is not available on the Trilio appliance and Compute nodes, restores of incremental backups will fail.
The tested and recommended method to make the backing files available is to create the required directory path and use mount --bind to make the path available for the backups.
Running the mount --bind command makes the necessary path available only until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit the fstab.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/search
Starts a File Search with the given parameters
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>
Returns the status and results of the File Search with the given search_id
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots
Lists all Snapshots.
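A hedged curl sketch; the headers are the ones documented below, and passing the filters as query parameters is an assumption:

```bash
curl -s "https://<tvm_address>:8780/v1/<tenant_id>/snapshots?workload_id=<workload_id>&all=False" \
  -H "X-Auth-Project-Id: <project_name>" \
  -H "X-Auth-Token: <keystone_token>" \
  -H "Accept: application/json" \
  -H "User-Agent: python-workloadmgrclient"
```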
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Creates a Snapshot of the specified Workload. When creating a Snapshot it is possible to provide additional information.
This Body is completely optional.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Shows the details of a specified Snapshot
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Deletes a specified Snapshot
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel
Cancels the Snapshot process of a given Snapshot
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount
Mounts a Snapshot to the provided File Recovery Manager
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list
Provides the list of all Snapshots mounted in a Tenant
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list
Provides the list of all Snapshots mounted in a specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount
Unmounts a Snapshot of the provided File Recovery Manager
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Provides the list of all workloads for the given tenant/project id
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Creates a workload in the provided Tenant/Project with the given details.
Workload create requires a Body in json format, to provide the requested information.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Shows all details of a specified workload
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Modifies a workload in the provided Tenant/Project with the given details.
Workload modify requires a Body in json format, to provide the information about the values to modify.
All values in the body are optional.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Deletes the specified Workload.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock
Unlocks the specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset
Resets the defined workload
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail
Lists Restores with details
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Provides all details about the specified Restore
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Deletes the specified Restore
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel
Cancels an ongoing Restore
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information.
The Selective restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information.
The In-place restore requires a body to provide all necessary information in json format.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Requests the list of available Workload Policies
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Requests the details of a given policy
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>
Requests the list of Policies assigned to a Project.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Creates a Policy with the given parameters
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Updates a Policy with the given information
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Assigns the Policy to or removes it from the given Projects
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Deletes a given Policy
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types
Lists all available Quota Types
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>
Requests the details of a Quota Type
POST
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Creates an allowed Quota with the given parameters
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Lists all allowed Quotas for a given project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>
Shows details for a given allowed Quota
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>
Updates an allowed Quota with the given parameters
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>
Deletes a given allowed Quota
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads
Provides the list of all importable workloads
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads
Provides the list of all orphaned workloads
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads
Imports all or the provided workloads
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Provides the lists of trusts for the given Tenant.
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Creates a trust in the provided Tenant/Project with the given details.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Shows all details of a specified trust
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Deletes the specified trust.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Trust of a given Workload.
Access username (Check out the privilege requirement )
If VcenterNoSsl
is set to False
, provide the name of the SSL certificate file which is uploaded at step
Otherwise, keep it blank.
<license_file>
path to the license file
<vm_id>
ID of the VM to be searched
<file_path>
Path of the file to search for
--snapshotids <snapshotid>
Search only in the specified snapshot IDs. snapshot-id: include the snapshot with this UUID
--end_filter <end_filter>
Displays last snapshots, example , last 10 snapshots, default 0 means displays all snapshots
--start_filter <start_filter>
Displays snapshots starting from , example , snapshot starting from 5, default 0 means starts from first snapshot
--date_from <date_from>
From date in format 'YYYY-MM-DDTHH:MM:SS' eg 2016-10-10T00:00:00, If time isn't specified then it takes 00:00 by default
--date_to <date_to>
To date in format 'YYYY-MM-DDTHH:MM:SS'(defult is current day),Specify HH:MM:SS to get snapshots within same day inclusive/exclusive results for date_from and date_to
--snapshot_id <snapshot_id>
ID of the Snapshot to show the restores of
<restore_id>
ID of the restore to be shown
--output <output>
Option to get additional restore details, Specify --output metadata for restore metadata,--output networks --output subnets --output routers --output flavors
<restore_id>
ID of the restore to be deleted
<restore_id>
ID of the restore to be deleted
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide file path(relative or absolute) including file name , by default it will read file: /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json .You can use this for reference or replace values into this file.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide file path(relative or absolute) including file name , by default it will read file: /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json .You can use this for reference or replace values into this file.
name
the name of the restore
description
the description of the restore
oneclickrestore <True/False>
If the restore is a oneclick restore. Setting this to True will override all other settings and a One Click Restore is started.
restore_type <oneclick/selective/inplace>
defines the restore that is intended
type openstack
defines that the restore is into an openstack cloud.
id
original id of the instance
include <True/False>
Set True when the instance shall be restored
name
new name of the instance
availability_zone
Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"
Nics
list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:
id
ID of the Neutron port to use
mac_address
Mac Address of the Neutron port
ip_address
IP Address of the Neutron port
network
network the port is assigned to. Contains the following information:
id
ID of the network the Neutron port is part of
subnet
subnet the port is assigned to. Contains the following information:
id
ID of the network the Neutron port is part of
vdisks
List of all Volumes that are part of the instance. Each Volume requires the following information:
id
Original ID of the Volume
new_volume_type
The Volume Type to use for the restored Volume. Leave empty for Volume Type None
availability_zone
The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova
flavor
Defines the Flavor to use for the restored instance. Contains the following information:
ram
How much RAM the restored instance will have (in MB)
ephemeral
How big the ephemeral disk of the instance will be (in GB)
vcpus
How many vcpus the restored instance will have available
swap
How big the Swap of the restored instance will be (in MB). Leave empty for none.
disk
Size of the root disk the instance will boot with
id
ID of the flavor that matches the provided information
networks
list of snapshot_network and target_network pairs
snapshot_network
the network backed up in the snapshot, contains the following:
id
Original ID of the network backed up
subnet
the subnet of the network backed up in the snapshot, contains the following:
id
Original ID of the subnet backed up
target_network
the existing network to map to, contains the following
id
ID of the network to map to
subnet
the subnet of the network backed up in the snapshot, contains the following:
id
ID of the subnet to map to
id
ID of the instance inside the Snapshot
restore_boot_disk
Set to True if the boot disk of that VM shall be restored.
include
Set to True if at least one Volume from this instance shall be restored
vdisks
List of disks, that are connected to the instance. Each disk contains:
id
Original ID of the Volume
restore_cinder_volume
set to true if the Volume shall be restored
--workload_id <workload_id>
Filter results by workload_id
--tvault_node <host>
List all the snapshot operations scheduled on a tvault node(Default=None)
--date_from <date_from>
From date in format 'YYYY-MM-DDTHH:MM:SS' eg 2016-10-10T00:00:00, If don't specify time then it takes 00:00 by default
--date_to <date_to>
To date in format 'YYYY-MM-DDTHH:MM:SS'(defult is current day), Specify HH:MM:SS to get snapshots within same day inclusive/exclusive results for date_from and date_to
--all {True,False}
List all snapshots of all the projects(valid for admin user only)
<workload_id>
ID of the workload to snapshot.
--full
Specify if a full snapshot is required.
--display-name <display-name>
Optional snapshot name. (Default=None)
--display-description <display-description>
Optional snapshot description. (Default=None)
<snapshot_id>
ID of the snapshot to be shown
--output <output>
Option to get additional snapshot details, Specify --output metadata for snapshot metadata, Specify --output networks for snapshot vms networks, Specify --output disks for snapshot vms disks
<snapshot_id>
ID of the snapshot to be deleted
<snapshot_id>
ID of the snapshot to be canceled
--all {True,False}
List all workloads of all projects (valid for admin user only)
--nfsshare <nfsshare>
List all workloads of nfsshare (valid for admin user only)
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--source-platform
Workload source platform is required. The supported platform is 'openstack'
--jobschedule
Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'
--metadata
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign to the workload
--encryption <True/False>
Enable/Disable encryption for this workload
--secret-uuid <secret_uuid>
UUID of the Barbican secret to be used for the workload
<instance-id=instance-uuid>
Required to set at least one instance. Specify an instance to include in the workload. Specify the option multiple times to include multiple instances
<workload_id>
ID/name of the workload to show
--verbose
option to show additional information about the workload
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--instance <instance-id=instance-uuid>
Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule <key=key-name>
Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. If don't specify timezone, then by default it takes your local machine timezone 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'
--metadata <key=key-name>
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign
<workload_id>
ID of the workload to edit
<workload_id>
ID/name of the workload to delete
--database_only <True/False>
Keep True if want to delete from database only.(Default=False)
<workload_id>
ID of the workload to unlock
<workload_id>
ID/name of the workload to reset
<policy_id>
Id of the policy to show
--policy-fields <key=key-name>
Specify following key value pairs for policy fields Specify option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter Number of incremental snapshots to take Full Backup between 1 to 999, '-1' for 'NEVER' and '0' for 'ALWAYS')For example --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep '--policy-fields retention_policy_value='30' --policy- fields fullbackup_interval='2'
--display-description <display_description>
Optional policy description. (Default=No description)
--metadata <key=keyname>
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
<display_name>
the name the policy will get
--display-name <display-name>
Name of the policy
--display-description <display_description>
Optional policy description. (Default=No description)
--policy-fields <key=key-name>
Specify following key value pairs for policy fields Specify option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter Number of incremental snapshots to take Full Backup between 1 to 999, '-1' for 'NEVER' and '0' for 'ALWAYS')For example --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep '--policy-fields retention_policy_value='30' --policy- fields fullbackup_interval='2'
--metadata <key=keyname>
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
<policy_id>
the name the policy will get
--add_project <project_id>
ID of the project to assign policy to. Use multiple times to assign multiple projects.
--remove_project <project_id>
ID of the project to remove policy from. Use multiple times to remove multiple projects.
<policy_id>
policy to be assigned or removed
<policy_id>
ID of the policy to be deleted
<quota_type_id>
ID of the Quota Type to show
<quota_type_id>
ID of the Quota Type to be created
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<project_id>
Project to list the Quotas from
<allowed_quota_id>
ID of the allowed Quota to show.
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<allowed_quota_id>
ID of the allowed Quota to update
<allowed_quota_id>
ID of the allowed Quota to delete
--workloadid <workloadid>
Requires at least one workloadid, Specify an ID of the workload whose scheduler disables. Specify option multiple times to include multiple workloads. --workloadids <workloadid> --workloadids <workloadid>
--workloadid <workloadid>
Requires at least one workloadid, Specify an ID of the workload whose scheduler disables. Specify option multiple times to include multiple workloads. --workloadids <workloadid> --workloadids <workloadid>
<workload_id>
ID of the workload to validate
--reason
Optional reason for disabling workload service
<node_name> name of the Trilio node
<node_name> name of the Trilio node
Please refer to the Admin guide to learn more about how to create and use Workload Policies.
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
settings type set to email_settings
--is-public
sets if the setting can be seen publicly set to False
--is-hidden
sets if the setting will always be hidden set to False
--metadata
sets if the setting can be seen publicly Not required for email settings
<name>
name of the setting. Take from the list below
<value>
value of the setting. Take the value type from the list below
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
settings type set to email_settings
--is-public
sets if the setting can be seen publicly set to False
--is-hidden
sets if the setting will always be hidden set to False
--metadata
sets if the setting can be seen publicly Not required for email settings
<name>
name of the setting Take from the list below
<value>
value of the setting Take value type from the list below
--get_hidden
show hidden settings (True) or not (False). Not required for email settings; use False if set.
<setting_name>
name of the setting to show Take from the list below
<setting_name>
name of the setting to delete Take from the list below
Setting name | Value type | example |
---|---|---|
Parameter | Description |
---|---|
Parameters | Description |
---|---|
<snapshot_id>
ID of the Snapshot to be mounted
<mount_vm_id>
ID of the File Recovery Manager instance to mount the Snapshot to.
--workloadid <workloadid>
Restrict the list to snapshots in the provided workload
<snapshot_id>
ID of the snapshot to unmount.
--migrate_cloud {True,False}
Set to True if you want to list workloads from other clouds as well. Default is False.
--generate_yaml {True,False}
Set to True if want to generate output file in yaml format, which would be further used as input for workload reassign API.
--old_tenant_ids <old_tenant_id>
Specify old tenant ids from which workloads need to reassign to new tenant. Specify multiple times to choose Workloads from multiple tenants.
--new_tenant_id <new_tenant_id>
Specify new tenant id to which workloads need to reassign from old tenant. Only one target tenant can be specified.
--workload_ids <workload_id>
Specify workload_ids which need to reassign to new tenant. If not provided then all the workloads from old tenant will get reassigned to new tenant. Specifiy multiple times for multiple workloads.
--user_id <user_id>
Specify user id to which workloads need to reassign from old tenant. only one target user can be specified.
--migrate_cloud {True,False}
Set to True if you want to reassign workloads from other clouds as well. Default is False.
--map_file
Provide file path(relative or absolute) including file name of reassign map file. Provide list of old workloads mapped to new tenants. Format for this file is YAML.
<trust_id>
ID of the trust to be deleted
5.0 GA or 5.x.x
6.x.x
6.0.0
6.0.0-beta-1-rhosp17.1
6.0.0-beta-1
RHOSP17.1
6.0.0-beta-1
CloudAdminUserName
Default value is admin. Provide the cloudadmin user name of your overcloud.
CloudAdminProjectName
Default value is admin. Provide the cloudadmin project name of your overcloud.
CloudAdminDomainName
Default value is default. Provide the cloudadmin domain name of your overcloud.
CloudAdminPassword
Provide the cloudadmin user's password of your overcloud.
ContainerTriliovaultDatamoverImage
The Trilio Datamover container image name has already been populated in the preparation of the container images. It is still recommended to verify the container URL.
ContainerTriliovaultDatamoverApiImage
The Trilio Datamover API container image name has already been populated in the preparation of the container images. It is still recommended to verify the container URL.
ContainerTriliovaultWlmImage
The Trilio WLM container image name has already been populated in the preparation of the container images. It is still recommended to verify the container URL.
ContainerHorizonImage
The Horizon container image name has already been populated in the preparation of the container images. It is still recommended to verify the container URL.
TrilioBackupTargets
List of Backup Targets for TrilioVault. These backup targets will be used to store backups taken by TrilioVault. Backup target examples and the format of NFS and S3 types are already provided in the trilio_env.yaml file.
TrilioDatamoverOptVolumes
Users can specify a list of extra volumes that they want to mount on the 'triliovault_datamover' container.
Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.
The following are the valid keys to define any backup target:
NfsShares
Provide the nfs share you want to use as backup target for snapshots taken by Triliovault
NfsOptions
This parameter sets the NFS mount options.
Keep the default values, unless a special requirement exists.
S3Type
If your Backup target is S3, provide either amazon_s3 or ceph_s3, depending on the S3 type.
S3AccessKey
Provide S3 Access key
S3SecretKey
Provide Secret key
S3RegionName
Provide the S3 region.
If your S3 type does not have a region parameter, keep the parameter as it is.
S3Bucket
Provide S3 bucket name
S3EndpointUrl
Provide the S3 endpoint URL; if your S3 type does not require it, keep it as it is.
Not required for Amazon S3.
S3SignatureVersion
Provide the S3 signature version.
S3AuthVersion
Provide S3 auth version.
S3SslEnabled
Default value is False. If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to 'True'; otherwise keep it as 'False'.
VmwareToOpenstackMigrationEnabled
Set it to True if this feature is required to be enabled, otherwise keep it False.
Populate all the below-mentioned parameters if it is set to True.
VcenterUrl
vCenter access URL
example:
https://vcenter-1.infra.trilio.io/
VcenterUsername
Access username (Check out the privilege requirement here)
VcenterPassword
Access user's Password
VcenterNoSsl
If the connection is to be established securely, set it to False. Set it to True if SSL verification is to be ignored.
VcenterCACertFileName
If VcenterNoSsl is set to False, provide the name of the SSL certificate file which is uploaded at step 1.5.2. Otherwise, keep it blank.
smtp_default_recipient | String | admin@example.net |
smtp_default_sender | String | admin@example.net |
smtp_port | Integer | 587 |
smtp_server_name | String | Mailserver_A |
smtp_server_username | String | admin |
smtp_server_password | String | password |
smtp_timeout | Integer | 10 |
smtp_email_enable | Boolean | True |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to run the search in |
search_id | string | ID of the File Search to get |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Projects to fetch the Snapshots from |
host | string | host name of the TVM that took the Snapshot |
workload_id | string | ID of the Workload to list the Snapshots off |
date_from | string | starting date of Snapshots to show \ Format: YYYY-MM-DDTHH:MM:SS |
string | ending date of Snapshots to show \ Format: YYYY-MM-DDTHH:MM:SS |
all | boolean | admin role required - True lists all Snapshots of all Workloads |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot in |
workload_id | string | ID of the Workload to take the Snapshot in |
full | boolean | True creates a full Snapshot |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot from |
snapshot_id | string | ID of the Snapshot to show |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to find the Snapshot in |
snapshot_id | string | ID of the Snapshot to delete |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to find the Snapshot in |
snapshot_id | string | ID of the Snapshot to cancel |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant to search for mounted Snapshots |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgr |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant to search for mounted Snapshots |
workload_id | string | ID of the Workload to search for mounted Snapshots |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgr |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project the Snapshot is located in |
snapshot_id | string | ID of the Snapshot to dismount |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to create the workload in |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project where to find the workload in |
workload_id | string | ID of the Workload to modify |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to delete |
database_only | boolean | True leaves the Workload data on the Backup Target |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to unlock |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to reset |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the restore from |
restore_id | string | ID of the restore to show |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
restore_id | string | ID of the Restore to be deleted |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of the Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
restore_id | string | ID of the Restore to cancel |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
policy_id | string | ID of the Policy to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
project_id | string | ID of the Project to fetch assigned Policies from |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
policy_id | string | ID of the Policy to update |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
policy_id | string | ID of the Policy to assign |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
policy_id | string | ID of the Policy to delete |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project to work in |
quota_type_id | string | ID of the Quota Type to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | ID of the Tenant/Project to create the allowed Quota in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | ID of the Tenant/Project to list allowed Quotas from |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to update |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to delete |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work in |
migrate_cloud | boolean | True also shows Workloads from different clouds |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot in |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to create the Trust for |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Trust in |
trust_id | string | ID of the Trust to delete |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to validate the Trust of |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to run the search in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project the Snapshot is located in |
snapshot_id | string | ID of the Snapshot to mount |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to fetch the workloads from |
nfs_share | string | lists workloads located on a specific nfs-share |
all_workloads | boolean | admin role required - True lists workloads of all tenants/projects |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restores from |
snapshot_id | string | ID of the Snapshot to fetch the Restores from |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | restricts the output to the given project |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_name | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant / Project to fetch the trusts from |
is_cloud_admin | boolean | true/false |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Cloud Image Name | Version | Supported |
---|---|---|
Ubuntu | Bionic (18.04) | Yes |
Ubuntu | Focal (20.04) | Yes |
CentOS | CentOS 8 | Yes |
CentOS | CentOS 8 Stream | Yes |
RHEL | RHEL 7 | Yes |
RHEL | RHEL 8 | Yes |
RHEL | RHEL 9 | Yes |
E-mail notification settings are configured through the settings API. Use the values from the following table to set up e-mail notifications via the API.
Setting name | Settings Type | Value type | example |
---|---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Creates a Trilio setting.
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
Setting create requires a Body in json format, to provide the requested information.
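For illustration, below is a minimal sketch of such a request using Python's requests library. The URL and header names follow the parameter tables in this document; the body layout (a setting object with name, category, type, and value keys) is an assumption for illustration only and may differ from the exact schema expected by your T4O release.

```python
# Minimal sketch of a settings-create call; the body layout below is an
# assumption for illustration and may differ from the actual API schema.
import requests

tvm_address = "tvm.example.net"      # IP or FQDN of the Trilio service (placeholder)
tenant_id = "<tenant_id>"            # Tenant/Project to work with
token = "<keystone_token>"           # valid Keystone authentication token

url = f"https://{tvm_address}:8780/v1/{tenant_id}/settings"
headers = {
    "X-Auth-Project-Id": "<project_name>",
    "X-Auth-Token": token,
    "Content-Type": "application/json",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}
# Hypothetical body: enables e-mail notifications via the smtp_email_enable
# setting listed in the e-mail settings table. Adjust TLS verification to
# match your deployment.
body = {
    "setting": {
        "name": "smtp_email_enable",
        "category": "email_settings",
        "type": "Boolean",
        "value": "True",
    }
}
response = requests.post(url, json=body, headers=headers)
response.raise_for_status()
print(response.json())
```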
GET
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Shows all details of a specified setting
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Modifies the provided setting with the given details.
Setting modify requires a Body in json format, to provide the information about the values to modify.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Deletes the specified setting.
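As a hedged sketch of the show and delete calls above: only the URL pattern and the header names come from this document; all placeholder values are assumptions.

```python
# Sketch: show a setting's details by name, then delete it (placeholders throughout).
import requests

tvm_address = "tvm.example.net"
tenant_id = "<tenant_id>"
setting_name = "smtp_email_enable"   # name of the setting to show / delete
headers = {
    "X-Auth-Project-Id": "<project_name>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}
url = f"https://{tvm_address}:8780/v1/{tenant_id}/settings/{setting_name}"

# GET .../settings/<setting_name> shows all details of the specified setting.
print(requests.get(url, headers=headers).json())

# DELETE .../settings/<setting_name> removes the setting again.
requests.delete(url, headers=headers).raise_for_status()
```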
A quick overview of the T4O architecture
Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery solutions for tenant workloads. Trilio is very similar to other OpenStack services including Nova, Cinder, Glance, etc., and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.
Trilio has three main software components, each of which is further divided into multiple services.
The WorkloadManager component is registered as a Keystone service of type workloads. It manages all workloads created to use the snapshot and restore functionality, and comprises four services responsible for managing these workloads, their snapshots, and their restores.
workloadmgr-api
workloadmgr-scheduler
workloadmgr-workloads
workloadmgr-cron
Similar to the WorkloadManager, the DataMover component registers a Keystone service of type datamover, which manages the transfer of large amounts of backup data to and from the backup targets. It comprises two services responsible for the data transfer and for communication with the WorkloadManager.
datamover-api
datamover
For ease of access and a better user experience, T4O provides a UI integrated with the OpenStack dashboard service Horizon.
The Trilio API is a Python module installed on all OpenStack controller nodes where the nova-api service runs.
The Trilio Datamover is a Python module installed on every OpenStack compute node.
The Trilio Horizon plugin is installed as an add-on to Horizon servers, i.e. on every server that runs the Horizon service.
Trilio is both a provider and a consumer in the OpenStack ecosystem. It uses other OpenStack services such as Nova, Cinder, Glance, Neutron, and Keystone, and provides its own services to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either the public or the internal URLs of these services. Likewise, Trilio provides its own public, internal, and admin URLs for two of its services: the WorkloadManager API and the Datamover API.
Unlike previous versions of Trilio for OpenStack, it now utilizes the existing networks of the deployed OpenStack environment. The networks for the Trilio services can be configured as desired, in the same way as any other OpenStack service. Additionally, a dedicated network can be provided to the Trilio services on both the control and compute planes for storing and retrieving backup data from the backup target store.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause
Disables the scheduler of a given Workload
Name | Type | Description |
---|---|---|
Name | Type | Description |
---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume
Enables the scheduler of a given Workload
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Scheduler trust for a given Workload
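The following sketch strings the three calls above together for a single workload, assuming a valid Keystone token; the URLs and header names mirror the parameter tables in this section.

```python
# Sketch: disable, re-enable, and trust-validate the scheduler of one workload.
import requests

tvm_address = "tvm.example.net"
tenant_id = "<tenant_id>"
workload_id = "<workload_id>"
headers = {
    "X-Auth-Project-Id": "<project_name>",
    "X-Auth-Token": "<keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}
base = f"https://{tvm_address}:8780/v1/{tenant_id}"

# POST .../workloads/<workload_id>/pause disables the workload's scheduler.
requests.post(f"{base}/workloads/{workload_id}/pause", headers=headers).raise_for_status()

# POST .../workloads/<workload_id>/resume enables it again.
requests.post(f"{base}/workloads/{workload_id}/resume", headers=headers).raise_for_status()

# GET .../trusts/validate/<workload_id> validates the scheduler trust.
print(requests.get(f"{base}/trusts/validate/{workload_id}", headers=headers).json())
```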
All of the following API commands require an authentication token from a user with the admin role in the authentication project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler
Requests the status of the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable
Requests disabling the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable
Requests enabling the Global Job Scheduler
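A short sketch of querying and toggling the Global Job Scheduler; as noted above, the token must belong to a user with the admin role.

```python
# Sketch: query, disable, then re-enable the Global Job Scheduler (admin token required).
import requests

tvm_address = "tvm.example.net"
tenant_id = "<admin_tenant_id>"
headers = {
    "X-Auth-Project-Id": "<admin_project>",
    "X-Auth-Token": "<admin_keystone_token>",
    "Accept": "application/json",
    "User-Agent": "python-workloadmgrclient",
}
base = f"https://{tvm_address}:8780/v1/{tenant_id}/global_job_scheduler"

print(requests.get(base, headers=headers).json())                     # current status
requests.post(f"{base}/disable", headers=headers).raise_for_status()  # disable
requests.post(f"{base}/enable", headers=headers).raise_for_status()   # enable again
```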
Setting name | Settings Type | Value type | example |
---|---|---|---|
smtp_default___recipient | email_settings | String | admin@example.net |
smtp_default___sender | email_settings | String | admin@example.net |
smtp_port | email_settings | Integer | 587 |
smtp_server_name | email_settings | String | Mailserver_A |
smtp_server_username | email_settings | String | admin |
smtp_server_password | email_settings | String | password |
smtp_timeout | email_settings | Integer | 10 |
smtp_email_enable | email_settings | Boolean | True |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work with |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
setting_name | string | Name of the setting to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work with |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
setting_name | string | Name of the setting to delete |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |