Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery functions for tenant workloads. Trilio is very similar to other OpenStack services, including nova, cinder, and glance, and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.
Trilio has four main software components:
Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.
Trilio API is a python module that is installed on all OpenStack controller nodes where the nova-api service is running.
Trilio Datamover is a python module that is installed on every OpenStack compute node.
Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.
Trilio is both a provider and a consumer in the OpenStack ecosystem. It uses other OpenStack services such as nova, cinder, glance, neutron, and keystone and provides its own service to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal, and admin URLs.
This figure represents a typical network topology. Trilio exposes its public URL endpoint on public network and Trilio virtual appliances and data movers typically use either internal network or dedicated backup network for storing and retrieving backup images from backup store.
It is recommended to think about the following elements prior to the installation of Trilio for OpenStack.
Trilio uses Cinder snapshots for calculating full and incremental backups. For full backups, Trilio creates Cinder snapshots for all the volumes in the backup job. It then leaves these Cinder snapshots behind for calculating the incremental backup image during the next backup. During an incremental backup operation it creates new Cinder snapshots, calculates the changed blocks between the new snapshots and the old snapshots that were left behind during the full/previous backups, then deletes the old snapshots but leaves the newly created snapshots behind. It is therefore important that each tenant using the Trilio backup functionality has sufficient Cinder snapshot quota to accommodate these additional snapshots. The guideline is to add 2 snapshots for every volume included in backups to the volume snapshot quota for that tenant. You may also increase the volume quota for the tenant by the same amount, because Trilio briefly creates a volume from a snapshot to read data from the snapshot for backup purposes. During a restore process, Trilio creates additional instances and Cinder volumes. To accommodate restore operations, a tenant should have sufficient quota for Nova instances and Cinder volumes; otherwise restore operations will fail. An example quota adjustment is sketched below.
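As a minimal sketch of such a quota adjustment, assuming the standard OpenStackClient quota options and a hypothetical project name; the numeric values are placeholders and depend on the existing quotas and the number of protected volumes:

# Hypothetical example: add 2 snapshots per backed-up volume to the snapshot quota
# and raise the volume quota by the same amount for project "demo-project".
openstack quota set --snapshots 30 --volumes 30 demo-project

# Verify the new limits.
openstack quota show demo-project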
AWS S3 object consistency model includes:
Read-after-write
Read-after-update
Read-after-delete
Each of these describes how an object reaches its consistent state after it is created, updated, or deleted. None of them provides strong consistency, and there is a lag before an object reaches its consistent state. Though Trilio employs mechanisms to work around the limitations of the eventual consistency of AWS S3, when an object reaches its consistent state is not deterministic. There is no official statement from AWS on how long it takes for an object to reach a consistent state; however, read-after-write reaches consistency faster than the other IO patterns, and our solution is designed to maximize the read-after-write IO pattern. The time in which an object reaches eventual consistency also depends on the AWS region. For example, the aws-standard region does not have as strong a consistency model as us-east or us-west, so we suggest using the us-east or us-west regions when creating S3 buckets for Trilio. Though the read-after-update IO pattern is hard to avoid completely, we employ ample delays in accessing objects to accommodate longer durations for objects to reach a consistent state. However, on rare occasions backups may still fail and need to be restarted.
Trilio can be deployed as a single node or a three-node cluster. It is highly recommended to deploy Trilio as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP for the cluster, for both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and for registering the Trilio service endpoint in the keystone service catalog.
Trilio has four main software components:
Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.
Trilio API is a python module that is an extension to the nova-api service. This module is installed on all OpenStack controller nodes.
Trilio Datamover is a python module that is installed on every OpenStack compute node.
Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.
The Trilio Appliance is not supported as instance inside Openstack.
The Trilio Appliance is delivered as a qcow2 image, which is used to boot a virtual machine.
Trilio supports KVM-based hypervisors on x86 architectures, with the following properties:
libvirt: 2.0.0 and above
QEMU: 2.0.0 and above
qemu-img: 2.6.0 and above
The recommended size of the VM for the Trilio Appliance is:
vCPU: 8
RAM: 24 GB
The qcow2 image itself defines the 40GB disk size of the VM.
In addition to the Trilio Appliance, Trilio contains components which are installed directly into OpenStack itself.
Additionally, it is necessary to have the nfs-common package installed on the compute nodes when using the NFS protocol for the backup target.
Since Trilio for OpenStack 4.2, T4O is capable of providing encrypted backups.
The Trilio for OpenStack solution is leveraging the OpenStack Barbican service to provide encryption capabilities for its backups.
To be precise, T4O is using secrets provided by Barbican to encrypt and decrypt backups. Barbican is therefore required to utilize this feature.
Each OpenStack distribution comes with a set of supported operating systems. Please check the support matrix to see which OpenStack distribution is supported with which operating system.
Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:
Install the Trilio dm-api service on the control plane
Install the Trilio datamover service on the compute plane
Install the Trilio Horizon plugin into the Horizon service
How these steps look in detail depends on the OpenStack distribution Trilio is installed in. Each supported OpenStack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from the beginning to the end.
Ceph is the most common OpenSource solution to provide block storage through OpenStack Cinder.
Ceph is a very flexible solution. The flexibility of Ceph requires additional steps in the Trilio solution.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Backup target storage is used to store the backup images taken by Trilio. The following backup target types are supported by Trilio, along with the details needed for configuration:
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on 'undercloud' node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.
All commands need to be run as user 'stack' on undercloud node
The following command clones the triliovault-cfg-scripts github repository.
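A hedged sketch of this step, assuming the repository lives at github.com/trilioData/triliovault-cfg-scripts and using a placeholder branch name:

# Clone the Trilio configuration scripts (the branch is release-specific).
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts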
Next access the Red Hat Director scripts according to the used RHOSP version.
The remaining documentation will use the following path for examples:
If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'.
The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.
The Trilio puppet module is uploaded to the overcloud as a swift deploy artifact with the heat resource name 'DeployArtifactURLs'. If you check Trilio's puppet module artifact file, it looks like the following.
Note: If your overcloud deploy command is using any other deploy artifact through an environment file, then you need to merge the Trilio deploy artifact URL and your URL into a single file.
How to check if your overcloud deploy environment files use deploy artifacts? Search for the string 'DeployArtifactURLs' in your environment files (only those mentioned in the overcloud deploy command with the '-e' option). If you find it in any such environment file, then your deploy command is using a deploy artifact.
In that case you need to merge all deploy artifacts into a single file. Refer to the following steps.
Let's say your artifact file path is "/home/stack/templates/user-artifacts.yaml"; then refer to the following steps to merge both URLs into a single file and pass that new file to the overcloud deploy command with the '-e' option.
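As a minimal sketch of such a merged file, with placeholder URLs standing in for the URL taken from your existing artifact file and the URL from Trilio's generated artifact file:

# /home/stack/templates/merged-artifacts.yaml (hypothetical merged file)
parameter_defaults:
  DeployArtifactURLs:
    - "<your-existing-artifact-url>"        # taken from /home/stack/templates/user-artifacts.yaml
    - "<trilio-puppet-module-artifact-url>"

The merged file is then passed to the overcloud deploy command with the '-e' option in place of the individual artifact environment files.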
Trilio contains multiple services. Add these services to your roles_data.yaml.
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services.
In case of the pre-defined roles, these services run on the role Controller.
In case of custom defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
This service needs to share the same role as the OpenStack Horizon server. In case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:
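As an illustrative sketch only, the roles_data.yaml additions typically look like the excerpt below. The Trilio service names shown (OS::TripleO::Services::TrilioDatamoverApi, OS::TripleO::Services::TrilioDatamover, OS::TripleO::Services::TrilioHorizon) are assumptions; take the exact lines from the Trilio documentation for your release.

# roles_data.yaml (excerpt; Trilio service names are assumed, verify against your release)
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::TrilioDatamoverApi   # same role as keystone and database
    - OS::TripleO::Services::TrilioHorizon        # same role as Horizon
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::TrilioDatamover      # same role as nova-compute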
All commands need to be run as user 'stack'
Trilio containers are pushed to the 'Red Hat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
Please note that using the hotfix containers requires that the Trilio Appliance is upgraded to the desired hotfix level as well.
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on the overcloud nodes during overcloud deploy/update command execution. The user can set the remote registry to the Red Hat registry or any other private registry. The user needs to provide credentials for the registry in the 'containers-prepare-parameter.yaml' file.
Make sure the other OpenStack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.
Populate 'containers-prepare-parameter.yaml' with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true', and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials for the registry 'registry.redhat.io' will work for the 'registry.connect.redhat.com' registry too.
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the above Red Hat document.
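A minimal sketch of the relevant parts of containers-prepare-parameter.yaml; the credentials and the ContainerImagePrepare entry are placeholders, since the full entry is produced by 'openstack tripleo container image prepare':

# containers-prepare-parameter.yaml (excerpt; values are placeholders)
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: false              # pull images directly from the remote registry
      set:
        namespace: registry.redhat.io/rhosp-rhel8
        name_prefix: openstack-
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <registry_username>: <registry_password>
    registry.connect.redhat.com:
      <registry_username>: <registry_password>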
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise the image pull operation will fail.
4. The user needs to manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:
At this step, you have configured Trilio image urls in the necessary environment file.
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud, and update the trilio_env.yaml.
At this step, you have downloaded triliovault container images and configured triliovault image urls in necessary environment file.
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container urls.
At this step, you have downloaded the triliovault container images into the Red Hat Satellite server and configured the triliovault image URLs in the necessary environment file.
On the undercloud node, change the directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only
If pip isn't available please install pip on the undercloud.
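A minimal sketch of the PyYAML update, assuming pip3 is available on the undercloud:

# Upgrade PyYAML on the undercloud node only.
sudo pip3 install --upgrade PyYAML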
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file:
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated in the preparation of the container images. Still, it is recommended to verify the container URLs.
The following information is additionally required (an illustrative snippet follows the list below):
Network for the datamover api
datamover password
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
MultiIPNfsEnabled
In case of S3
S3 type {amazon_s3/ceph_s3}
S3 Access key
S3 Secret key
S3 Region name
S3 Bucket
S3 Endpoint URL
S3 Signature Version
S3 Auth Version
S3 SSL Enabled {true/false}
S3 SSL Cert
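As an illustrative sketch only: the parameter names below are assumptions and must be checked against the trilio_env.yaml shipped with your triliovault-cfg-scripts release; the values are placeholders.

# trilio_env.yaml (hypothetical excerpt; parameter names assumed)
parameter_defaults:
  BackupTargetType: 'nfs'                       # nfs or s3
  MultiIPNfsEnabled: false
  NfsShares: '192.168.1.34:/mnt/tvault'         # placeholder NFS share
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
  # S3 alternative (when BackupTargetType is 's3'):
  # S3Type: 'amazon_s3'                         # amazon_s3 or ceph_s3
  # S3AccessKey: '<access_key>'
  # S3SecretKey: '<secret_key>'
  # S3RegionName: '<region>'
  # S3Bucket: '<bucket>'
  # S3EndpointUrl: ''                           # only for non-Amazon S3
  # S3SslEnabled: true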
Following is the haproxy conf file location on haproxy nodes of the overcloud. Trilio datamover api service haproxy configuration gets added to this file.
Trilio datamover haproxy default configuration from the above file looks as follows:
The user can change the following configuration parameter values.
To change these default values, you need to do the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release; valid values are rhosp13, rhosp16.1, rhosp16.2, rhosp17.0):
For RHOSP13
For RHOSP16.1
For RHOSP16.2
For RHOSP17.0
ii) Search the following entries and edit as required
iii) Save the changes.
If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover service container, they can use the following heat environment file. A variable named 'TrilioDatamoverOptVolumes' is available in this file.
The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts.
The user needs to edit this file and add their volume mounts in the format below.
For example:
In the volume mount "/mnt/dir2:/var/dir2", "/mnt/dir2" is a directory available on the compute host and "/var/dir2" is the mount point inside the datamover container.
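A minimal sketch of such an environment file, reusing the example mount above; the file name is a placeholder:

# trilio_datamover_opt_volumes.yaml (placeholder file name)
parameter_defaults:
  TrilioDatamoverOptVolumes:
    - /mnt/dir2:/var/dir2    # <host directory>:<mount point inside the datamover container>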
Next, the user needs to pass this file to the overcloud deploy command with the '-e' option, like below.
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration:
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
If activated, use the correct trilio_nfs_map.yaml file
To include the new environment files use the '-e' option, and for the roles data file use the '-r' option. An example overcloud deploy command is shown below:
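Since the original example is not reproduced here, the following is a hedged sketch only; the template paths, additional environment files, and role file locations are placeholders that must match your existing deployment command:

openstack overcloud deploy --templates \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml \
  -r /home/stack/templates/roles_data.yaml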
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
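As the command listing is not reproduced here, a hedged example of such a check on RHOSP 16.x/17.x (which use podman; on RHOSP 13 substitute docker) could be:

# List Trilio-related containers and their state on a controller or compute node.
sudo podman ps -a | grep -iE 'trilio|dmapi'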
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.
If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround.
Trilio components will be deployed using puppet scripts.
Backup target storage is used to store the backup images taken by Trilio. The following details are needed for configuration:
The following backup target types are supported by Trilio
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on 'undercloud' node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.
All commands need to be run as user 'stack' on undercloud node
TripleO CentOS8 is not supported anymore, as CentOS Linux 8 reached End of Life on December 31st, 2021.
The following command clones the triliovault-cfg-scripts github repository.
Please note that the Trilio Appliance needs to get updated to the latest HF as well.
If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files
Trilio contains multiple services. Add these services to your roles_data.yaml.
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:
All commands need to be run as user 'stack'
Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.
There are two registry methods available in TripleO Openstack Platform.
Remote Registry
Local Registry
Follow this section when 'Remote Registry' is used.
For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
Follow this section when 'local registry' is used on the undercloud.
Run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the URLs.
The changes can be verified using the following commands.
On the undercloud node, change the directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
vi triliovault_nfs_map_input.yml
Update PyYAML on the undercloud node only.
If pip isn't available, please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file:
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Fill in the Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml; the triliovault environment file is self-explanatory. Fill in the details of the backup target, verify the image URLs, and fill in the other details.
Use the following heat environment file and roles data file in overcloud deploy command
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains overcloud roles data with Trilio roles added.
Use the correct trilio endpoint map file as per your keystone endpoint configuration:
- Instead of the tls-endpoints-public-dns.yaml file, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
- Instead of the tls-endpoints-public-ip.yaml file, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
- Instead of the tls-everywhere-endpoints-dns.yaml file, use 'environments/trilio_env_tls_everywhere_dns.yaml'
Deploy command with triliovault environment file looks like the following.
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
Trilio components will be deployed using puppet scripts.
Trilio, by TrilioData, is a native OpenStack service that provides policy-based comprehensive backup and recovery for OpenStack workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS and AWS S3 compatible storage. With Trilio and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection and integrity.
With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload consisting of compute resources, network configurations and storage data as one unit. It also takes incremental backups that capture only the changes that were made since the last backup. Incremental snapshots save time and storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as below:
Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume and the incremental backups only include changed blocks of data since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.
Faster and more reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just a click of a button.
Easy migration of workloads between clouds. Trilio captures all the details of your application, and hence our migration includes your entire application stack without leaving anything to guesswork.
Lower Total Cost of Ownership through policy and automation. Our tenant-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.
Learn about Trilio Support for OpenStack Distributions
NFS & S3 Support:
All versions of Trilio for OpenStack support NFSv3 and S3 as backup targets.
Barbican Support:
Supported in RHOSP 16.1, 16.2, 17.0, Canonical Ussuri, Victoria, Wallaby, Yoga, Zed, Kolla Victoria, Wallaby, Yoga.
Not Supported in RHOSP 13, Canonical Queens, Stein, Train, TripleO Train.
Supported OS:
RHEL7, RHEL8, RHEL9, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 20.04 source image, Ubuntu 22.04, CentOS7, CentOS Stream 8, CentOS Linux 8, Ubuntu 20.04 binary image, CentOS Stream 8 binary image, CentOS Stream 8 source image.
Deployment:
Deploy using the distribution's corresponding deployment toolsets, including Red Hat Director, JuJu Charms, Ansible, and Director.
Improvement of workload import performance.
Issues reported by customers.
Trilio backup deleted after completion without manual intervention and backup not visible under workload.
S3 mountpoint gets stalled when the datamover container is stopped.
TVM uses outdated TLS protocols when HTTPS is activated for the Trilio API.
1. Import of multiple workloads doesn't get segregated across all nodes evenly
Observation: When the import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in a round-robin fashion. However, currently it runs the import on only one of the available nodes.
2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive for the first time only
Observation: For workloads with a large number of VMs and with a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout) and hence this issue might be observed.
Comments:
If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.
The users are advised to wait for a couple of minutes before they recheck the snapshot details.
Once the details of the snapshot are visible, the restore operation can be carried out.
3. Import of ALL workloads without specifying any workload ID is NOT recommended
Observation: If the user runs the workload import command without any workload ID, expecting that all eligible workloads will get imported, then the command execution takes longer than expected.
Comments:
As long as the import command has not returned, it is expected to be running; if successful, it will return the job ID, if not, it will throw an error message.
The execution time may vary based on the number of workloads present in the backend target.
It is recommended to run this command with specific workload IDs.
To import ALL workloads at once, all workload IDs have to be provided as parameters to the import CLI command. The procedure for this is mentioned below.
4. Snapshots being deleted show status as available in the Horizon UI
Observation: A snapshot may show as available instead of deleting while the deletion is in progress.
Workaround:
Wait until all the delete operations have completed. Eventually, all the snapshots will be deleted successfully.
Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.
Those JuJu Charms are publicly available as Open Source Charms.
Trilio is providing the JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack from Yoga release onwards only. JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack up to wallaby release are developed and maintained by Canonical.
Canonical OpenStack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.
The documentation of the charms can be found here:
Prerequisite
Have a Canonical OpenStack base setup deployed for a required release like Jammy Zed/Yoga, Focal Yoga/Wallaby/Victoria/Ussuri, or Bionic Ussuri/Queens.
Steps to install the Trilio charms
Export the OpenStack base bundle
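A hedged sketch of the export step, assuming the juju export-bundle command and a placeholder file name:

# Export the currently deployed OpenStack model as a bundle file.
juju export-bundle --filename openstack-base-bundle.yaml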
2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.
Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).
3. If file search functionality is required, provision any additional node(s) that will be required for deploying the Trilio Workload manager (trilio-wlm) as a VM instead of lxd container(s).
4. Commission the additional node from MAAS UI.
5. Do a dry run to check if the Trilio bundle is working
6. Do the deployment
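A hedged example of steps 5 and 6, assuming the bundle and overlay file names used above and that your juju version supports bundle dry runs via --dry-run:

# Step 5: dry run to validate the base bundle together with the Trilio overlay.
juju deploy ./openstack-base-bundle.yaml --overlay ./trilio-overlay.yaml --dry-run

# Step 6: actual deployment once the dry run looks correct.
juju deploy ./openstack-base-bundle.yaml --overlay ./trilio-overlay.yaml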
7. Wait till all the Trilio units are deployed successfully. Check the status via the juju status command.
8. Once the deployment is complete, perform the below operations:
Create cloud admin trust
Add license
Note: Reach out to the Trilio support team for the license file.
For multipath-enabled environments, perform the following actions:
log into each nova compute node
add uxsock_timeout with value 60000 (i.e. 60 sec) in /etc/multipath.conf (see the example snippet after this list)
restart tvault-contego service
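A minimal sketch of the /etc/multipath.conf change; placing the setting in the defaults section is an assumption, so adapt it to your existing multipath configuration:

# /etc/multipath.conf (excerpt)
defaults {
    uxsock_timeout 60000    # 60 seconds, as required on multipath-enabled compute nodes
}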
Sample Trilio overlay bundles
Issues reported by customers.
Snapshot listing (workload open) on UI doesn't happen if snapshot count is > 1000.
FileManager caches old snapshot details.
Custom horizon integration - Canonical.
Restore failing with error Failed restoring snapshot: RetryError[<Future at 0x7fc607693c18 state=finished raised StorageFailure>.
Duplicate security groups get created post restore of a VM.
kolla ansible yoga 4.3.0 deployment fails with error "Unsupported parameters for (kolla_container_facts) module: container_engine. Supported parameters include: name, api_version".
1. Import of multiple workloads doesn't get segregated across all nodes evenly
Observation: When the import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in a round-robin fashion. However, currently it runs the import on only one of the available nodes.
2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive for the first time only
Observation: For workloads with a large number of VMs and with a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout) and hence this issue might be observed.
Comments:
If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.
The users are advised to wait for a couple of minutes before they recheck the snapshot details.
Once the details of the snapshot are visible, the restore operation can be carried out.
3. Import of ALL workloads without specifying any workload ID is NOT recommended
Observation: If the user runs the workload import command without any workload ID, expecting that all eligible workloads will get imported, then the command execution takes longer than expected.
Comments:
As long as the import command has not returned, it is expected to be running; if successful, it will return the job ID, if not, it will throw an error message.
The execution time may vary based on the number of workloads present in the backend target.
It is recommended to run this command with specific workload IDs.
To import ALL workloads at once, all workload IDs have to be provided as parameters to the import CLI command. The procedure for this is mentioned below.
4. Snapshots being deleted show status as available in the Horizon UI
Observation: A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.
Workaround:
Wait until all the delete operations have completed. Eventually, all the snapshots will be deleted successfully.
The 'nova' user ID and group ID on the Trilio nodes need to be set the same as on the compute node(s). Trilio by default uses the nova user ID (UID) and group ID (GID) 162:162. Ansible OpenStack does not always use the 'nova' user ID 162 on the compute nodes. Perform the following steps on all Trilio nodes in case the nova UID & GID are not in sync with the compute node(s):
Download the shell script that will change the user-id
Assign executable permissions
Edit script to use the correct nova id
Execute the script
Verify that 'nova' user and group id has changed to the desired value
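A simple way to verify is shown below; UID/GID 162 is the desired value assumed in this example:

# Check the nova UID/GID on a Trilio node and on a compute node; both should report the same IDs.
id nova
# Example expected output: uid=162(nova) gid=162(nova) groups=162(nova)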
Clone triliovault-cfg-scripts from github repository on Ansible Host.
Available values for <branch>: hotfix-4-TVO/4.2
Copy Ansible roles and vars to required places.
Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.
Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml
Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
Add the following content to the created file.
Edit the file /etc/openstack_deploy/openstack_user_config.yml
according to the example below to set host entries for Trilio components.
Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml
Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.
New parameter added to the /etc/openstack_deploy/user_tvault_vars.yml file for Multi-IP NFS:
Change the directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Update pyyaml on the Openstack Ansible server node only
Execute the generate_nfs_map.py file to create a one-to-one mapping of compute nodes and NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml' in the current directory.
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are mapped with all the necessary NFS shares.
Append the content of the triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml
Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.
If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:
Verify that the triliovault datamover api service is deployed and started properly. Run the below commands on the controller node(s).
Verify that the triliovault datamover service is deployed and started properly on the compute node(s). Run the following command on the compute node(s).
Verify that the triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.
Run the following command on the Horizon container.
Verify the haproxy settings on the controller node using the commands below.
After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.
Issues reported by customers.
Backup and Selective restore failure with quobyte cinder volume.
Trilio takes user time and not the dashboard time.
Documentation Change : Steps to be followed from Trilio end while minor upgrade from RHOSP 16.x to 16.y.
Canonical/Juju - python-os-brick_5.2.2-0ubuntu1.3 will disrupt Trilio snapshots completely, if using multipath.
Restores failing with Error : "security group rule does not exist".
When VMs of the workloads are deleted, snapshots fail (with a very misleading message).
"Timeout uploading data" when backing up a VM with a 23TB volume.
1. Import of multiple workloads doesn't get segregated across all nodes evenly
Observation: When the import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in a round-robin fashion. However, currently it runs the import on only one of the available nodes.
2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive for the first time only
Observation: For workloads with a large number of VMs and with a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout) and hence this issue might be observed.
Comments:
If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.
The users are advised to wait for a couple of minutes before they recheck the snapshot details.
Once the details of the snapshot are visible, the restore operation can be carried out.
3. Import of ALL workloads without specifying any workload ID is NOT recommended
Observation: If the user runs the workload import command without any workload ID, expecting that all eligible workloads will get imported, then the command execution takes longer than expected.
Comments:
As long as the import command has not returned, it is expected to be running; if successful, it will return the job ID, if not, it will throw an error message.
The execution time may vary based on the number of workloads present in the backend target.
It is recommended to run this command with specific workload IDs.
To import ALL workloads at once, all workload IDs have to be provided as parameters to the import CLI command. The procedure for this is mentioned below.
4. Snapshots being deleted show status as available in the Horizon UI
Observation: A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.
Workaround:
Wait until all the delete operations have completed. Eventually, all the snapshots will be deleted successfully.
Trilio integrates natively with OpenStack. This means that Trilio communicates completely through APIs using the OpenStack endpoints. Trilio also generates its own OpenStack endpoints. In addition, the Trilio appliance and the compute nodes write to and read from the backup target. These points affect the network planning for the Trilio installation.
Openstack knows 3 types of endpoints:
Public Endpoints
Internal Endpoints
Admin Endpoints
Each of these endpoint types is designed for a specific purpose. Public endpoints are meant to be used by the Openstack end-users to work with Openstack. Internal endpoints are meant to be used by the Openstack services to communicate with each other. Admin endpoints are meant to be used by Openstack administrators.
Of those 3 endpoint types, only the admin endpoint sometimes contains APIs which are not available on any other endpoint type.
To learn more about Openstack endpoints please visit the official Openstack documentation.
Trilio is communicating with all services of Openstack on a defined endpoint type. Which endpoint type Trilio is using to communicate with Openstack is decided during the configuration of the Trilio appliance.
There is one exception: The Trilio Appliance always requires access to the Keystone admin endpoint.
The following network requirement can be identified this way:
Trilio appliance needs access to the Keystone admin endpoint on the admin endpoint network
Trilio appliance needs access to all endpoints of one type
Trilio recommends providing full access to all Openstack endpoints to the Trilio appliance, following the Openstack standards and best practices.
Trilio is generating its own endpoints as well. These endpoints are pointing towards the Trilio Appliance directly. This means that using those endpoints will not send the API calls towards the Openstack Controller nodes first, but directly to the Trilio VM.
Following the Openstack standards and best practices, it is therefore recommended to put the Trilio endpoints on the same networks as the already existing Openstack endpoints. This allows extending the purpose of each endpoint type to the Trilio service:
The public endpoint to be used by Openstack users when using Trilio CLI or API
The internal endpoint to communicate with the Openstack services
The admin endpoint to use the required admin only APIs of Keystone
The Trilio solution is using backup target storage to securely place the backup data. Trilio is dividing its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio appliance through communication with the Openstack endpoints. All metadata that is stored together with a backup is written by the Trilio Appliance to the backup target in JSON format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the volume data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is responsible for the VMs running on its compute node.
The network requirements are therefore:
The Trilio appliance needs access to the backup target
Every compute node needs access to the backup target
Most Trilio customers are following the Openstack standards and best practices to have the public, internal, and admin endpoints on separate networks. They also typically don't have any network yet, which can access the desired backup target.
The starting network configuration typically looks like this:
Following the Openstack standards and Trilio's recommendation, the Trilio Appliance is placed on all those 3 networks. Further, access to the backup target is required by the Trilio Appliance and the compute nodes; here this is done by adding a 4th network.
The resulting network configuration would look like this:
It is of course possible to combine networks as necessary. As long as the required network access is available, Trilio will work.
Each Openstack installation is different and so is the network configuration. There are endless possibilities of how to configure the Openstack network and how to implement the Trilio appliance into this network. The following three examples have been seen in production:
The first example is from a manufacturing company, which wanted to split the networks by function and decided to put the Trilio backup target on the internal network as the backup and recovery function was identified as an Openstack internal solution. This example looks complex but integrates Trilio just as recommended.
The second example is from a financial institute that wanted to be sure that the Openstack users have no direct, uncontrolled network access to the Openstack infrastructure. Following this example requires additional work, as the internal HA-Proxy needs to be configured to correctly translate the API calls towards the Trilio appliance.
The third example is from a service company that was forced to treat Trilio as an external 3rd party solution, as we require a virtual machine running outside of Openstack. This kind of network configuration requires good planning on the Trilio endpoints and firewall rules.
After the Trilio VM has been configured and all components are installed, the license can be applied.
The license can be applied either through the admin-tab in Horizon or the CLI
To apply the license through Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to License
Click "Update License"
Click "Choose File"
Choose the license file on the client system
Click "Apply"
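For the CLI path, a hedged sketch using the workloadmgr client is shown below; the subcommand name is an assumption, so verify it with 'workloadmgr help' on your appliance:

# Apply the license from the CLI (subcommand name assumed).
workloadmgr license-create /path/to/license_file.txt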
This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.
Refer to the below-mentioned acceptable values for the placeholders triliovault_tag and kolla_base_distro in this document, as per the OpenStack environment:
Backup target storage is used to store the backup images taken by Trilio. The following backup target types are supported by Trilio, along with the details needed for configuration. Select one of them and get it ready before proceeding to the next step.
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
Clone the triliovault-cfg-scripts GitHub repository on the Kolla-ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.
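A hedged sketch of these two steps; the branch name and the source/destination role paths are assumptions that must be checked against your Kolla-ansible installation:

# Clone the configuration scripts (branch name is release-specific).
cd /root
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git

# Copy the Trilio Ansible role into the Kolla-ansible roles directory (paths assumed).
cp -R triliovault-cfg-scripts/kolla-ansible/ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/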
Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The passwords are empty; set these passwords manually in /etc/kolla/passwords.yml.
On the kolla-ansible server node, change the directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
vi triliovault_nfs_map_input.yml
Update PyYAML on the kolla-ansible server node only.
Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file:
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault_globals.yml' File Path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml’
Ensure that multi_ip_nfs_enabled in the triliovault_globals.yml file is set to yes.
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. You will find the Trilio related parameters at the end of the globals.yml file. Details like the Trilio build version, backup target type, backup target details, etc. need to be filled out. Following is the list of parameters that the user needs to edit; a hedged example is sketched below.
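As an illustrative sketch only: apart from triliovault_tag, the variable names below are assumptions and must be matched against the Trilio section appended to your globals.yml; the values are placeholders for an NFS-backed setup.

# /etc/kolla/globals.yml (hypothetical Trilio excerpt)
triliovault_tag: "<triliovault_tag>"                 # the 4.3 tag matching your OpenStack release
triliovault_backup_target: "nfs"                     # assumed variable name; nfs / amazon_s3 / ceph_s3
triliovault_nfs_shares: "192.168.1.34:/mnt/tvault"   # placeholder NFS share
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"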
In the case of a different registry than docker hub, Trilio containers need to be pulled from docker.io and pushed to preferred registries.
Following are the triliovault container image URLs for the 4.3 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.
Below are the Source-based OpenStack deployment images
Below are the Binary-based OpenStack deployment images
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look as follows for a default Kolla installation:
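Since the original listing is not reproduced here, the following is a hedged sketch only; the pre-existing entries are placeholders, and the only Trilio-specific addition is the final bind mount:

# roles/nova-cell/defaults/main.yml (excerpt; existing entries are placeholders)
nova_compute_default_volumes:
  - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "/lib/modules:/lib/modules:ro"
  - "/var/trilio:/var/trilio:shared"    # Trilio mount bind appended to the list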
In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look like the following:
Activate the login into dockerhub for Trilio tagged containers.
Please get the Dockerhub login credentials from Trilio Sales/Support team
Pull the Trilio container images from the dockerhub based on the existing inventory file. In the example the inventory file is named multinode.
All that is left is to run the deploy command using the existing inventory file. In the example the inventory file is named 'multinode'.
This is just an example command. You need to use your cloud deploy command.
Verify on the controller and compute nodes that the Trilio containers are in UP state.
Following is a sample output of commands from controller and compute nodes. triliovault_tag will have a value as per the OpenStack release where the deployment is being done.
To see all TrilioVault containers running on a specific node use the docker ps command.
To check the startup logs use the docker logs <container name> command.
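A hedged example of both checks; the container name passed to docker logs is an assumption and should be copied from the docker ps output on your node:

# List the Trilio containers running on this node.
docker ps | grep -i trilio

# Inspect the startup logs of one container (name assumed).
docker logs triliovault_datamover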
Verify that the Trilio Appliance is configured. The Horizon tabs are only shown, when a configured Trilio appliance is available.
Verify that the Trilio horizon container is installed and in a running state.
Trilio datamover api service logs on datamover api node
Trilio datamover service logs on datamover node
Note: This step needs to be done on Trilio Appliance node. Not on OpenStack node.
Pre-requisite: You should have already launched Trilio appliance VM
In the Kolla OpenStack distribution, the 'nova' user ID on the nova-compute docker container is set to '42436'. The 'nova' user ID on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
After this step, you can proceed to 'Configuring Trilio' section.
11.1] We are using cinder's ceph user for interacting with the Ceph cinder storage. This user name is defined using the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.
If the user wants to edit this parameter value, they can do so. The impact will be that cinder's ceph user and the triliovault datamover's ceph user will be updated upon the next kolla-ansible deploy command.
Learn about spinning up the Trilio VM
For Canonical Openstack it is not necessary to spin up the Trilio VM. However, Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).
The Trilio Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor.
The Trilio appliance is utilizing cloud-init to provide the initial network and user configuration.
Cloud-init reads its information either from a metadata server or from a provided CD image. Trilio is utilizing the CD image.
To create the cloud-init image it is required to have genisoimage available.
Cloud-init uses two files for its metadata.
The first file is called meta-data and contains the information about the network configuration.
Below is an example of this file.
Keep the hostname localhost. The hostname gets changed through the configuration step. Changing the hostname will lead to the tvault-config service not properly starting, blocking further configuration.
The instance-id has to match the VM name in virsh
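Since the original example is not reproduced here, the following is a hedged sketch of a cloud-init NoCloud meta-data file; the instance-id, interface name, and addresses are placeholders:

# meta-data (hypothetical example)
instance-id: TVM1                  # must match the VM name used in virsh
local-hostname: localhost          # keep localhost; the hostname is set later during configuration
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 192.168.1.50
  netmask 255.255.255.0
  gateway 192.168.1.1
  dns-nameservers 192.168.1.1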
The second file is called user-data and contains small scripts and information to set up, for example, the user passwords.
Below is an example of this file.
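Again as a hedged sketch only, a minimal #cloud-config user-data file that sets a password could look like the following; the user name and password are placeholders:

#cloud-config
# user-data (hypothetical example)
chpasswd:
  list: |
    stack:Password123    # placeholder user:password pair
  expire: false
ssh_pwauth: true         # allow password-based SSH login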
The image is created using genisoimage following this general command:
genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data>
An example of this command is shown below.
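Based on the general command above, a concrete invocation with placeholder paths might look like this:

# Create the cloud-init ISO from the two metadata files (paths and ISO name are placeholders).
genisoimage -output tvm-cloud-init.iso -volid cidata -joliet -rock ./user-data ./meta-data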
After the cloud-init image has been created, the Trilio appliance can be spun up on the desired KVM server.
Extract the Trilio QCOW2 tar file using the following command:
See below an example command showing how to spin up the Trilio appliance using virsh and the created iso image.
Once the Trilio appliance is up and running with its initial configuration, it is recommended to uninstall cloud-init.
To uninstall cloud-init, follow the example below.
Red Hat OpenStack, TripleO, and Kolla Ansible Openstack are using the nova UID/GID of 42436 inside their containers instead of 162:162 which is the standard in other Openstack environments.
Please verify that the nova UID/GID on the Trilio Appliance is still 42436.
In case the UID/GID has been changed back to 162:162, follow these steps to set it back to 42436:42436.
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that the nova user and group id have changed to '42436'
It is recommended to directly update the Trilio appliance to the latest version.
To do so follow the minor update guide provided here:
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Red Hat document for the remote registry method:
Pull the Trilio containers on the Red Hat Satellite using the given container pull URLs.
Edit the input map file and fill in all the details. Refer to the corresponding documentation for details about the structure.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights:
Edit the input map file and fill in all the details. Refer to the corresponding documentation for details about the structure.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights:
The support for CentOS8 ended on December 31st, 2021. The official announcement can be found here.
Some sample Trilio overlay bundles can be found here.
For the AWS S3 storage backend, we need to use `` as S3 end-point URL.
A few sample overlay bundles for different OpenStack versions can be found here.
Please take a look at the linked page to learn about the format of the file.
To update the environment follow .
<license_file>
path to the license file
The triliovault_nfs_map_input.yml is explained .
The Trilio Appliance qcow2 image can be downloaded from the . Please contact your Trilio sales or technical lead to get access to the portal.
4.3.X
17.0
16.2
16.1
13
Zed
Yoga
Wallaby
Victoria
Ussuri
Train
Stein
Queens
Zed
Yoga
Wallaby
Victoria
Train
RHOSP 13
Yes
RHEL7
Not supported
NFSv3
AWS S3 compatible
Red Hat Director
RHOSP 16.1
Yes
RHEL8
Supported
NFSv3
AWS S3 compatible
Red Hat Director
RHOSP 16.2
Yes
RHEL8
Supported
NFSv3
AWS S3 compatible
Red Hat Director
RHOSP 17.0
Yes
RHEL9
Supported
NFSv3
AWS S3 compatible
Red Hat Director
Canonical Queens
Yes
Ubuntu 18.04
Not supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Stein
Yes
Ubuntu 18.04
Not supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Train
Yes
Ubuntu 18.04
Not supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Ussuri
Yes
Ubuntu 18.04/20.04
Supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Victoria
Yes
Ubuntu 20.04
Supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Wallaby
Yes
Ubuntu 20.04
Supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Yoga
Yes
Ubuntu 20.04/22.04
Supported
NFSv3
AWS S3 compatible
JuJu Charms
Canonical Zed
Yes
Ubuntu 22.04
Supported
NFSv3
AWS S3 compatible
JuJu Charms
Kolla Victoria
Yes
Ubuntu 20.04, CentOS Linux 8
Supported
NFSv3
AWS S3 compatible
Ansible
Kolla Wallaby
Yes
Ubuntu 20.04, CentOS Stream 8
Supported
NFSv3
AWS S3 compatible
Ansible
Kolla Yoga
Yes
Ubuntu 20.04, CentOS Stream 8
Supported
NFSv3
AWS S3 compatible
Ansible
Kolla Zed
Yes
Ubuntu 22.04, Rocky9
Supported
NFSv3
AWS S3 compatible
Ansible
TripleO Train
Yes
CentOS7
Not Supported
NFSv3
AWS S3 compatible
Director
Package/Container Names
Package Kind
Package Version/Container Tags
contegoclient
deb
4.3.1
dmapi
deb
4.3.1
python3-contegoclient
deb
4.3.1
python3-dmapi
deb
4.3.1
python3-s3-fuse-plugin
deb
4.3.1
python3-tvault-contego
deb
4.3.1
s3-fuse-plugin
deb
4.3.1
tvault-contego
deb
4.3.1
python3-tvault-horizon-plugin
deb
4.3.2
tvault-horizon-plugin
deb
4.3.2
python3-workloadmgrclient
deb
4.3.6
python-workloadmgrclient
deb
4.3.6
workloadmgr
deb
4.3.8
python3-namedatomiclock
deb
1.1.3
tvault-contego
python
4.3.1
contegoclient
python
4.3.2
dmapi
python
4.3.2
s3fuse
python
4.3.2
tvault-horizon-plugin
python
4.3.3
tvault_configurator
python
4.3.7
workloadmgrclient
python
4.3.7
workloadmgr
python
4.3.9
python3-trilio-fusepy-el9
rpm
3.0.1-1
python3-trilio-fusepy
rpm
3.0.1-1
trilio-fusepy
rpm
3.0.1-1
puppet-triliovault
rpm
4.2.64-4.2
contegoclient
rpm
4.3.1-4.3
dmapi
rpm
4.3.1-4.3
python3-contegoclient-el8
rpm
4.3.1-4.3
python3-contegoclient-el9
rpm
4.3.1-4.3
python3-dmapi-el9
rpm
4.3.1-4.3
python3-dmapi
rpm
4.3.1-4.3
python3-s3fuse-plugin-el9
rpm
4.3.1-4.3
python3-s3fuse-plugin
rpm
4.3.1-4.3
python3-tvault-contego-el9
rpm
4.3.1-4.3
python3-tvault-contego
rpm
4.3.1-4.3
python-s3fuse-plugin-cent7
rpm
4.3.1-4.3
tvault-contego
rpm
4.3.1-4.3
python3-tvault-horizon-plugin-el8
rpm
4.3.2-4.3
python3-tvault-horizon-plugin-el9
rpm
4.3.2-4.3
tvault-horizon-plugin
rpm
4.3.2-4.3
python3-workloadmgrclient-el8
rpm
4.3.6-4.3
python3-workloadmgrclient-el9
rpm
4.3.6-4.3
workloadmgrclient
rpm
4.3.6-4.3
Name
Tag
Gitbranch
4.3.0
RHOSP13 containers
4.3.0-rhosp13
RHOSP16.1 containers
4.3.0-rhosp16.1
RHOSP16.2 containers
4.3.0-rhosp16.2
RHOSP17.0 containers
4.3.0-rhosp17.0
Kolla Ansible Victoria containers
4.3.0-victoria
Kolla Ansible Wallaby containers
4.3.0-wallaby
Kolla Yoga Containers
4.3.0-yoga
Kolla Zed Containers
4.3.0-zed
TripleO Containers
4.3.0-tripleo
Package/Container Names
Package Kind
Package Versions
contego
deb
4.2.64
s3-fuse-plugin
deb
4.3.1.2
python3-s3-fuse-plugin
deb
4.3.1.2
tvault-contego
deb
4.3.1.3
python3-tvault-contego
deb
4.3.1.3
dmapi
deb
4.3.1
contegoclient
deb
4.3.1
python3-dmapi
deb
4.3.1
python3-contegoclient
deb
4.3.1
tvault-horizon-plugin
deb
4.3.2.2
python3-tvault-horizon-plugin
deb
4.3.2.2
python-workloadmgrclient
deb
4.3.6
python3-workloadmgrclient
deb
4.3.6
workloadmgr
deb
4.3.8.4
python3-namedatomiclock
deb
1.1.3
tvault-contego
python
4.3.1.3
s3fuse
python
4.3.2.2
dmapi
python
4.3.2
contegoclient
python
4.3.2
tvault-horizon-plugin
python
4.3.3.2
tvault_configurator
python
4.3.7
workloadmgrclient
python
4.3.7
workloadmgr
python
4.3.9.4
trilio-fusepy
rpm
3.0.1-1
python3-trilio-fusepy
rpm
3.0.1-1
python3-trilio-fusepy-el9
rpm
3.0.1-1
puppet-triliovault
rpm
4.2.64-4.2
python3-s3fuse-plugin
rpm
4.3.1.2-4.3
python3-s3fuse-plugin-el9
rpm
4.3.1.2-4.3
python-s3fuse-plugin-cent7
rpm
4.3.1.2-4.3
tvault-contego
rpm
4.3.1.3-4.3
python3-tvault-contego
rpm
4.3.1.3-4.3
python3-tvault-contego-el9
rpm
4.3.1.3-4.3
dmapi
rpm
4.3.1-4.3
contegoclient
rpm
4.3.1-4.3
python3-dmapi
rpm
4.3.1-4.3
python3-dmapi-el9
rpm
4.3.1-4.3
python3-contegoclient-el8
rpm
4.3.1-4.3
python3-contegoclient-el9
rpm
4.3.1-4.3
tvault-horizon-plugin
rpm
4.3.2.2-4.3
python3-tvault-horizon-plugin-el8
rpm
4.3.2.2-4.3
python3-tvault-horizon-plugin-el9
rpm
4.3.2.2-4.3
workloadmgrclient
rpm
4.3.6-4.3
python3-workloadmgrclient-el8
rpm
4.3.6-4.3
python3-workloadmgrclient-el9
rpm
4.3.6-4.3
Name
Tag
Gitbranch
4.3.2
RHOSP13 containers
4.3.2-rhosp13
RHOSP16.1 containers
4.3.2-rhosp16.1
RHOSP16.2 containers
4.3.2-rhosp16.2
RHOSP17.0 containers
4.3.2-rhosp17.0
Kolla Ansible Victoria containers
4.3.2-victoria
Kolla Ansible Wallaby containers
4.3.2-wallaby
Kolla Yoga Containers
4.3.2-yoga
Kolla Zed Containers
4.3.2-zed
TripleO Containers
4.3.2-tripleo
Package/Container Names
Package Kind
Package Versions
s3-fuse-plugin
deb
4.3.1.2
tvault-contego
deb
4.3.1.2
python3-s3-fuse-plugin
deb
4.3.1.2
python3-tvault-contego
deb
4.3.1.2
dmapi
deb
4.3.1
contegoclient
deb
4.3.1
python3-dmapi
deb
4.3.1
python3-contegoclient
deb
4.3.1
tvault-horizon-plugin
deb
4.3.2.1
python3-tvault-horizon-plugin
deb
4.3.2.1
python-workloadmgrclient
deb
4.3.6
python3-workloadmgrclient
deb
4.3.6
workloadmgr
deb
4.3.8.2
tvault-contego
python
4.3.1.2
s3fuse
python
4.3.2.2
dmapi
python
4.3.2
contegoclient
python
4.3.2
tvault-horizon-plugin
python
4.3.3.1
workloadmgrclient
python
4.3.7
tvault_configurator
python
4.3.7
workloadmgr
python
4.3.9.2
trilio-fusepy
rpm
3.0.1-1
python3-trilio-fusepy
rpm
3.0.1-1
python3-trilio-fusepy-el9
rpm
3.0.1-1
puppet-triliovault
rpm
4.2.64-4.2
tvault-contego
rpm
4.3.1.2-4.3
python3-s3fuse-plugin
rpm
4.3.1.2-4.3
python3-tvault-contego
rpm
4.3.1.2-4.3
python3-s3fuse-plugin-el9
rpm
4.3.1.2-4.3
python3-tvault-contego-el9
rpm
4.3.1.2-4.3
python-s3fuse-plugin-cent7
rpm
4.3.1.2-4.3
dmapi
rpm
4.3.1-4.3
contegoclient
rpm
4.3.1-4.3
python3-dmapi
rpm
4.3.1-4.3
python3-dmapi-el9
rpm
4.3.1-4.3
python3-contegoclient-el8
rpm
4.3.1-4.3
python3-contegoclient-el9
rpm
4.3.1-4.3
tvault-horizon-plugin
rpm
4.3.2.1-4.3
python3-tvault-horizon-plugin-el8
rpm
4.3.2.1-4.3
python3-tvault-horizon-plugin-el9
rpm
4.3.2.1-4.3
workloadmgrclient
rpm
4.3.6-4.3
python3-workloadmgrclient-el8
rpm
4.3.6-4.3
python3-workloadmgrclient-el9
rpm
4.3.6-4.3
Name
Tag
Gitbranch
4.3.1
RHOSP13 containers
4.3.1-rhosp13
RHOSP16.1 containers
4.3.1-rhosp16.1
RHOSP16.2 containers
4.3.1-rhosp16.2
RHOSP17.0 containers
4.3.1-rhosp17.0
Kolla Ansible Victoria containers
4.3.1-victoria
Kolla Ansible Wallaby containers
4.3.1-wallaby
Kolla Yoga Containers
4.3.1-yoga
Kolla Zed Containers
4.3.1-zed
TripleO Containers
4.3.1-tripleo
Victoria: triliovault_tag = 4.3.2-victoria, kolla_base_distro = ubuntu or centos
Wallaby: triliovault_tag = 4.3.2-wallaby, kolla_base_distro = ubuntu or centos
Yoga: triliovault_tag = 4.3.2-yoga, kolla_base_distro = ubuntu or centos
Zed: triliovault_tag = 4.3.2-zed, kolla_base_distro = ubuntu or rocky
triliovault_tag
<triliovault_tag >
Use the triliovault tag as per your Kolla openstack version. Exact tag is mentioned in the 1st step
horizon_image_full
Uncomment
By default, Trilio Horizon container would not get deployed.
Uncomment this parameter to deploy Trilio Horizon container instead of Openstack Horizon container.
triliovault_docker_username
<dockerhub-login-username>
Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from Trilio Sales/Support team
triliovault_docker_password
<dockerhub-login-password>
Password for default docker user of Trilio Get the Dockerhub login credentials from Trilio Sales/Support team
triliovault_docker_registry
Default value: docker.io
Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to chosen registry first.
triliovault_backup_target
nfs
amazon_s3
other_s3_compatible
nfs
if the backup target is NFS
amazon_s3
if the backup target is Amazon S3
other_s3_compatible
if the backup target type is S3 but not amazon S3.
multi_ip_nfs_enabled
yes no default: no
This parameter is valid only if you want to use multiple IP/endpoint based NFS shares as the backup target for Trilio.
triliovault_nfs_shares
<NFS-IP/FQDN>:/<NFS path>
NFS share path example: ‘192.168.122.101:/nfs/tvault’
triliovault_nfs_options
'nolock,soft,timeo=180,intr,lookupcache=none'
For Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'
These parameters set the NFS mount options. Keep the default values, unless a special requirement exists.
triliovault_s3_access_key
S3 Access Key
Valid for amazon_s3
and
triliovault_s3_secret_key
S3 Secret Key
Valid for amazon_s3
and other_s3_compatible
triliovault_s3_region_name
Default value: us-east-1
S3 Region name
Valid for amazon_s3
and other_s3_compatible
If s3 storage doesn't have region parameter keep default
triliovault_s3_bucket_name
S3 Bucket name
Valid for amazon_s3
and other_s3_compatible
triliovault_s3_endpoint_url
S3 Endpoint URL
Valid for other_s3_compatible
only
triliovault_s3_ssl_enabled
True
False
Valid for other_s3_compatible
only
Set true for SSL enabled S3 endpoint URL
triliovault_s3_ssl_cert_file_name
s3-cert.pem
Valid for other_s3_compatible
only, with SSL enabled and self-signed certificates OR certificates issued by a private authority.
In this case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the ansible server. Create this directory if it does not exist already.
triliovault_copy_ceph_s3_ssl_cert
True
False
Valid for other_s3_compatible
only
Set to True when: SSL enabled with self-signed certificates or issued by a private authority.
It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types, or to have the nova boot volumes and cinder block volumes controlled by different users.
In the case of multiple Ceph users, it is required to adopt the keyring extension in the tvault-contego.conf inside the Ceph block.
The following example will try all files with the extension keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task.
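A sketch of such a block in tvault-contego.conf; the option name keyring_ext is an assumption and should be verified against your installed release:

[ceph]
keyring_ext = .*keyring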
After the installation and configuration of Trilio for Openstack has succeeded, the following steps can be used to verify that the Trilio installation is healthy.
Trilio is using 4 main services on the Trilio Appliance:
wlm-api
wlm-scheduler
wlm-workloads
wlm-cron
Those can be verified to be up and running using the systemctl status
command.
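For example, on the Trilio appliance:

systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron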
The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.
Checking the availability of the Trilio API on the chosen endpoints is recommended.
The following example curl command lists the available workload-types and verifies that the connection is available and working:
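A sketch of such a check, assuming the default workloadmgr API port 8780 and an already obtained Keystone token and project ID:

curl -k -H "X-Auth-Token: <token>" https://<trilio-vip>:8780/v1/<project-id>/workload_types/detail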
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
In order to check the dmapi service, go to the dmapi container residing on the controller nodes and run the command below.
The datamover service is running on each compute node. Log in to the compute node and run the command below.
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.
Run the following command on "nova-compute" nodes and make sure the container is in a started state.
Run the following command on horizon nodes and make sure the container is in a started state.
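A combined sketch of the three checks above; the exact container names may differ slightly between releases:

docker ps | grep triliovault_datamover_api    # on nova-api nodes
docker ps | grep triliovault_datamover        # on nova-compute nodes
docker ps | grep horizon                      # on horizon nodes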
Run the following command on MAAS nodes and make sure all trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, and trilio-wlm are in active state.
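For example:

juju status | grep trilio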
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.
Please check dmapi endpoints on overcloud node.
It is possible to configure Cinder to have multiple configurations and keyrings for CEPH.
In this case, the Trilio Datamover file needs to be extended with the CEPH information.
For Trilio to be able to work in such an environment it is required to put copies of each of these configurations and keyrings into a separate directory, which is then made known to the Trilio Datamover inside a [ceph]
block in the tvault-contego.conf.
A tvault-contego.conf file with the extended [ceph] block would look like this.
The uninstallation of Trilio depends on the Openstack distribution it is installed in. The high-level process is the same for all distributions.
Uninstall the Horizon Plugin or the Trilio Horizon container
Uninstall the datamover-api container
Uninstall the datamover
Delete the Trilio Appliance Cluster
The following steps need to be run on all nodes, which have the Trilio Datamover API service running. Those nodes can be identified by checking the roles_data.yaml
for the role that contains the entry OS::TripleO::Services::TrilioDatamoverApi
.
Once the role that runs the Trilio Datamover API service has been identified, the following commands will clean the nodes from the service.
Run all commands as root or user with sudo permissions.
Stop trilio_dmapi
container.
Remove trilio_dmapi
container.
Clean Trilio Datamover API service conf directory.
Clean Trilio Datamover API service log directory.
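A sketch of these steps on a podman-based RHOSP node; the conf and log directory paths are assumptions and should be verified on your deployment:

podman stop trilio_dmapi
podman rm trilio_dmapi
rm -rf /var/lib/config-data/puppet-generated/triliodmapi    # assumed conf directory
rm -rf /var/log/containers/trilio-datamover-api             # assumed log directory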
The following steps need to be run on all nodes, which have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml
for the role that contains the entry OS::TripleO::Services::TrilioDatamover
.
Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes from the service.
Run all commands as root or user with sudo permissions.
Stop trilio_datamover
container.
Remove trilio_datamover
container.
Unmount Trilio Backup Target on compute host.
Clean Trilio Datamover service conf directory.
Clean log directory of Trilio Datamover service.
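A sketch of these steps on a podman-based RHOSP node; the mount path and the conf and log directories are assumptions and should be verified on your deployment:

podman stop trilio_datamover
podman rm trilio_datamover
umount /var/trilio/triliovault-mounts/<mount-directory>     # assumed backup target mount path
rm -rf /var/lib/config-data/puppet-generated/triliodm       # assumed conf directory
rm -rf /var/log/containers/trilio-datamover                 # assumed log directory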
The following steps need to be run on all nodes, which have the haproxy service running. Those nodes can be identified by checking the roles_data.yaml
for the role that contains the entry OS::TripleO::Services::HAproxy
.
Once the role that runs the haproxy service has been identified, the following commands will clean the nodes from all Trilio resources.
Run all commands as root or user with sudo permissions.
Edit the following file inside the haproxy container and remove all Trilio entries.
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
An example of these entries is given below.
Restart the haproxy container once all edits have been done.
Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
Trilio creates a database for the dmapi service. This database needs to be cleaned.
Login into the database cluster
Run the following SQL statements to clean the database.
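A sketch of the cleanup, assuming the database and the database user are both named dmapi:

DROP DATABASE dmapi;
DROP USER 'dmapi'@'%';
FLUSH PRIVILEGES;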
Remove the following entries from roles_data.yaml used in the overcloud deploy command.
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioDatamover
Follow these steps to clean the overcloud deploy command from all Trilio entries.
Remove trilio_env.yaml entry
Remove trilio endpoint map file Replace with original map file if existing
Run the cleaned overcloud deploy command.
List all VMs running on the KVM node
Destroy the Trilio VMs
Undefine the Trilio VMs
Delete the Trilio VM disk from the KVM host storage
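The steps above could look like this, assuming the appliance VM is named triliovault-appliance and uses the default libvirt image path:

virsh list --all
virsh destroy triliovault-appliance
virsh undefine triliovault-appliance
rm -f /var/lib/libvirt/images/triliovault-appliance.qcow2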
Trilio does not provide the JuJu Charms to deploy Trilio 4.1 in Canonical Openstack. At the time of release, the JuJu Charms had not yet been updated to Trilio 4.1. We will update this page once the Charms are available.
Charm names
Channel
Supported releases
latest/edge
Jammy (Ubuntu 22.04)
latest/edge
Jammy (Ubuntu 22.04)
latest/edge
Jammy (Ubuntu 22.04)
latest/edge
Jammy (Ubuntu 22.04)
latest/edge
Focal (Ubuntu 20.04)
latest/edge
Focal (Ubuntu 20.04)
latest/edge
Focal (Ubuntu 20.04)
latest/edge
Focal (Ubuntu 20.04)
Charm names
Channel
Supported releases
4.2/stable
Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
4.2/stable
Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
4.2/stable
Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
4.2/stable
Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
Learn about configuring Trilio for OpenStack
The configuration process used by Trilio for OpenStack heavily utilizes Ansible scripts. In recent years, Ansible has emerged as a leading tool for configuration management, due to which Trilio makes extensive use of Ansible playbooks to effectively configure the Trilio cluster. To address any potential Trilio configuration issues, it's crucial for users to have a fundamental understanding of Ansible playbook output.
Given the inherent repeatability of Ansible modules, the Trilio configuration can be run as many times as needed to alter or reconfigure the Trilio cluster.
Upon booting the VM, direct your browser (preferably Chrome or Firefox) to the Trilio node's IP address. This will take you to the Trilio Dashboard, which houses the Trilio configurator.
The user is: admin. The default password is: password.
Unlike previous versions of Trilio, the current version only requires you to configure the cluster once and the Trilio dashboard provides cluster-wide management capability.
OpenStack endpoints can be configured to use TLS. In such a configuration the Trilio appliance needs to trust the certificates provided by the OpenStack endpoints.
To achieve this trust it is required to upload the OpenStack certificate bundle through the OS API certificate tab of the Trilio appliance Dashboard.
The certificate bundle is located on the controller nodes of the OpenStack installation.
The default paths for each distribution are as follows:
The uploaded certificates can be verified on the Trilio appliance at the following location.
Once you log in to an unconfigured Trilio Appliance, the first page you encounter is the configurator. This tool needs specific details about the Trilio Appliance, OpenStack, and Backup Storage to proceed.
The Trilio Cluster must be integrated into an existing OpenStack environment. The following fields ask for the details of your Trilio Cluster.
Controller Nodes
This is the list of Trilio virtual appliance IP addresses along with their hostnames.
Format: comma-separated list with pairs combined through '='
Example: 172.20.4.151=tvault-104-1,172.20.4.152=tvault-104-2,172.20.4.153=tvault-104-3
Virtual IP Address
This is the Trilio cluster IP address which is mandatory
Format: IP/Subnet
Example: 172.20.4.150/24
The Virtual IP is mandatory even for single-node clusters and has to be different from any IP assigned to a Trilio Controller Node.
Name Server
List of nameservers, primarily used to resolve OpenStack service endpoints.
Format: comma-separated list
example: 10.10.10.1,172.20.4.1
If defining OpenStack endpoint hostnames in the /etc/hosts file on the Trilio Appliance VM is preferred over a DNS solution, you may set the nameserver to 0.0.0.0, the default gateway.
Domain Search Order
The domain the Trilio Cluster will use.
Format: comma-separated list
example: trilio.io,trilio.demo
NTP Servers
NTP servers the Trilio Cluster will use
format: comma-separated list
example: 0.pool.ntp.org,10.10.10.10
Timezone
Timezone the Trilio Cluster will use internally
format: pre-populated list
example: UTC
The Trilio Appliance integrates with one OpenStack environment. The following fields ask for the information required to access and connect with the OpenStack Cluster.
Keystone URL
The Keystone endpoint used to fetch authentication for configuration
format: URL
example: https://keystone.trilio.io:5000/v3
Endpoint Type
Defines which endpoint type will be used to communicate with the Openstack endpoints
format: predefined list of radio buttons
example: Public
Domain ID
domain the provided user and tenant are located in
format: ID
example: default
Administrator
Username of an account with the domain admin role
format: String
example: admin
Password
password for the prior provided user
format: String
example: password
The Trilio configurator verifies after every entry if it is possible to login into Openstack using the provided credentials.
This verification will fail until all entries are set and correct.
When the verification is successful it is possible to choose the Admin tenant, the Region, and the Trustee role without error.
Admin Tenant
The tenant to be used together with the provided user
format: a pre-populated list
example: admin
Region
Openstack Region the user and tenant are located in
format: a pre-populated list
example: RegionOne
Trustee Role
The Openstack role required to be able to use Trilio functionalities
format: a pre-populated list
example: _member_
When leveraging OpenStack Barbican for protecting encrypted volumes and offering encrypted backups, it's essential that the Trustee Role is assigned as 'Creator' or a role that possesses equivalent permissions to the Creator role.
This is crucial because only the Creator role has the authority to create, read, and delete secrets within Barbican. The generation of encryption-enabled workloads would be unsuccessful if the Trustee Role does not possess the permissions associated with the 'Creator' role.
These fields request information about the backup target that the Trilio installation will use to store your backups.
OpenStack Distribution
Select the Distribution of OpenStack for Trilio integration
format: predefined list
example: RHOSP
Some distributions of OpenStack require a special mount point to be used, so make the OpenStack Distribution selection carefully.
Backup Storage
Defines the Backup Storage protocol to use
format: predefined list of radio buttons
example: NFS
NFS Export
The path under which the NFS Volumes to be used can be found
format: comma-separated list of NFS Volumes paths
example: 10.10.2.20:/upstream,10.10.5.100:/nfs2
NFS Options
NFS options used by the Trilio Cluster when mounting the NFS Exports
format: NFS options
example: nolock,soft,timeo=180,intr,lookupcache=none
NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
S3 Compatible
Switch between Amazon and other S3 compatible storage solutions
format: predefined list
example: Amazon S3
(S3 compatible) Endpoint URL
URL to be used to reach and access the provided S3 compatible storage
format: URL
example: objects.trilio.io
Access Key
Access Key necessary to login into the S3 storage
format: access key
example: SFHSAFHPFFSVVBSVBSZRF
Secret Key
Secret Key necessary to login into the S3 storage
format: secret key
example: bfAEURFGHsnvd3435BdfeF
Region
Configured Region for the S3 Bucket (keep the default for S3 compatible without Region)
format: String
example: us-east-1
Signature Version
S3 signature version to use for signing into the S3 storage
format: string
example: default
Bucket Name
Name of the bucket to be used as Backup target
format: string
example: Trilio-backup
At the end of the configurator is the option to activate advanced settings.
Activating this option provides the ability to configure the Keystone endpoints used for the Datamover API and Trilio.
Trilio generates Keystone endpoints for 2 services. The Trilio Datamover API and the Trilio Workloadmanager.
OpenStack installations typically distribute endpoint types across various networks.
The advanced settings for both the Datamover API endpoints and TrilioWorkloadManager endpoints enable Trilio configuration options which allow the user to accommodate for such an environment.
IP addresses supplied in these fields are added as additional VIPs to the Trilio cluster.
Should a Fully Qualified Domain Name (FQDN) be used for those endpoints, the Trilio configurator will resolve the FQDN, subsequently identifying the associated IP addresses, which are then added as additional Virtual IP addresses (VIPs).
It is recommended to verify the Datamover API settings against the ones configured during the installation of the Trilio components.
Trilio allows the use of an external MySQL or MariaDB database.
This database needs to be prepared by creating the empty workloadmgr database, creating the workloadmgr user and setting the right permissions.
An example command to create this database would be:
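A sketch, assuming a MariaDB/MySQL server and placeholder credentials:

CREATE DATABASE workloadmgr CHARACTER SET utf8;
CREATE USER 'workloadmgr'@'%' IDENTIFIED BY '<password>';
GRANT ALL PRIVILEGES ON workloadmgr.* TO 'workloadmgr'@'%';
FLUSH PRIVILEGES;

The resulting connection string would then typically have the form mysql+pymysql://workloadmgr:<password>@<database-host>/workloadmgr.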
Provide the connection string to the Trilio configurator.
The database value can only be set upon an initial configuration of the Trilio solution.
When the Cluster has been configured to use the internal database, then the connection string will not be shown in the next configuration attempt.
In the case of an external database, the connection string will be shown but is immutable.
Trilio is using a service user that is located in the OpenStack service project.
The password for this service user will be generated randomly or can be defined in the advanced settings.
Once all entries have been set and all validations are error-free the configurator can be started.
Click Finish
Reconfirm in the pop-up that you want to start the configuration
Wait for the configurator to finish
The configurator utilizes Ansible and Trilio internal API calls during the configuration process
Following each configuration block or upon completion of the entire configurator process, you have the opportunity to examine the output generated by Ansible.
At the end of a successful configuration, the page will be forwarded to the configured VIP for the Trilio Appliance.
Learn about upgrading Trilio on Canonical OpenStack
The following charms exist:
The documentation of the charms can be found here:
The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.
It is recommended to run each of these steps in dry-run first.
If any packages other than Trilio packages are being updated, stop the upgrade procedure and contact your Trilio customer success manager.
Follow the steps mentioned below to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.
General Prerequisites
No snapshot OR restore are to be running during this process.
Global job scheduler should be disabled
juju [-m <model>] upgrade-charm trilio-wlm --switch trilio-charmers-trilio-wlm-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-horizon-plugin --switch trilio-charmers-trilio-horizon-plugin-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-dm-api --switch trilio-charmers-trilio-dm-api-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-data-mover --switch trilio-charmers-trilio-data-mover-focal --channel latest/edge
juju [-m <model>] exec --application trilio-dm-api "sudo systemctl restart tvault-datamover-api"
juju [-m <model>] exec --application trilio-data-mover "sudo systemctl restart tvault-contego"
juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl restart apache2"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron"
juju [-m <model>] exec --application trilio-wlm "systemctl restart wlm-api wlm-scheduler wlm-workloads"
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource restart res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource stop res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl stop wlm-cron"
juju [-m <model>] exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"
'sudo systemctl stop wlm-cron'
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource start res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"
wlm-cron service should only be running in a single Juju Unit and not running in the other two.
Check the trilio units' status in the juju status [-m <model>] | grep trilio output. All the trilio units will be reporting the new juju charm version.
trilio-data-mover 4.2.64.26 active 3 trilio-charmers-trilio-data-mover-jammy latest/edge 4 no Unit is ready
trilio-dm-api 4.2.64.2 active 1 trilio-charmers-trilio-dm-api-jammy latest/edge 2 no Unit is ready
trilio-horizon-plugin 4.2.64.5 active 1 trilio-charmers-trilio-horizon-plugin-jammy latest/edge 4 no Unit is ready
trilio-wlm 4.2.64.20 active 1 trilio-charmers-trilio-wlm-jammy latest/edge 3 no Unit is ready
juju [-m <model>] exec --application trilio-dm-api "sudo systemctl status tvault-datamover-api"
juju [-m <model>] exec --application trilio-data-mover "sudo systemctl status tvault-contego"
juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl status apache2"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron"
Trilio releases hotfixes, which require updating the packages inside the containers. These hotfixes cannot be installed through the Juju charms, as they do not require an update to the charms themselves.
Trilio packages can be upgraded after deployment. Trilio supports upgrade to the latest 4.2 releases from as old as the Trilio 4.0 release.
No snapshot OR restore are to be running during this process.
Global job scheduler should be disabled.
wlm-cron should be disabled ( Following sets of commands are to be run on MAAS node).
If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode ( this command will fail for single node deployment).
Stop the wlm-cron service
Ensure that no stale wlm-cron processes exist
If any stale processes are found, they should be stopped manually.
The deployed Trilio version is controlled by the triliovault-pkg-source
charm configuration option.
The gemfury repository should be accessible from all Trilio units. For each trilio charm, it should be pointing to the following Gemfury repository:
This can be checked via juju [-m ] config trilio-wlm triliovault-pkg-source
command output.
The preferred, recommended, and tested method to update the packages is through the Juju command line.
Run the below commands from MAAS node
Check the trilio units' status in the juju status [-m <model>] | grep trilio output. All the trilio units will be running the new packages.
Run the below command to update the schema.
Check the schema head with the below command. It should point to the latest schema head.
T4O 4.2 changes how the mount point is calculated. It is required to set up a mount bind to make T4O 4.1 or older backups available.
If the trilio-wlm nodes are HA enabled:
Make sure the wlm-cron services are down after the pkg upgrade. Run the following command for the same:
Unset the cluster maintenance mode
Make sure the wlm-cron service is up and running on any one node.
Set the Global Job Scheduler to the original state.
If any trilio unit gets into an error state with the message :
hook failed: "update-status"
One of the reasons could be that the package installation did not finish correctly. One way to verify that is by logging into that unit and following the steps below:
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
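On the primary Trilio appliance node this typically amounts to:

pcs resource disable wlm-cron
systemctl status wlm-cron                         # should report the service as stopped
ps -ef | grep workloadmgr-cron | grep -v grep     # should return no lingering processes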
All commands need to be run as user 'stack' on undercloud node
Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.
If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the puppet directory of the right release.
Trilio has two services as explained below.
You need to add these two services to your roles_data.yaml.
If you do not have customized roles_data file, you can find your default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
on undercloud.
You need to find that role_data file and edit it to add the following Trilio services.
i) Trilio Datamover Api Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi
Typically this service should be deployed on controller nodes where keystone and database runs.
If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.
ii) Trilio Datamover Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover
This service should be deployed on role where nova-compute
service is running.
If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamover service to the Compute role.
If you have defined your custom roles, then you need to identify the role name where in 'nova-compute' service is running and then you need to add 'OS::TripleO::Services::TrilioDatamover' service to that role.
iii) Trilio Horizon Service:
This service needs to share the same role as the OpenStack Horizon server.
In the case of the pre-defined roles, the Horizon service runs on the Controller role.
Add the following to the identified role OS::TripleO::Services::TrilioHorizon
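A condensed sketch of the resulting roles_data.yaml entries (only the Trilio additions are shown; keep all existing services in place):

- name: Controller
  ServicesDefault:
    # ... existing services ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioHorizon
- name: Compute
  ServicesDefault:
    # ... existing services ...
    - OS::TripleO::Services::TrilioDatamover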
All commands need to be run as user 'stack'
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which will pull the containers from 'registry.connect.redhat.com', push them to the undercloud, and update the trilio_env.yaml.
The changes can be verified using the following commands.
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container urls.
It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.
Locations of the trilio_env.yaml:
On Undercloud node, change directory
Edit file 'triliovault_nfs_map_input.yml
' in the current directory and provide compute host and NFS share/ip map.
Get the compute hostnames from the following command. Check ‘Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml
' file.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only
If pip isn't available please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
The result will be in file - 'triliovault_nfs_map_output.yml
'
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
Append this output map file to 'triliovault-cfg-scripts
/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
'
Validate the changes in file 'triliovault-cfg-scripts
/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
'
Include the environment file (trilio_nfs_map.yaml
) in overcloud deploy command with '-e' option as shown below.
Ensure that MultiIPNfsEnabled
is set to true in
trilio_env.yaml file and that nfs is used as backup target.
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use correct Trilio endpoint map file as per available Keystone endpoint configuration
Instead of tls-endpoints-public-dns.yaml
file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of tls-endpoints-public-ip.yaml
file, useenvironments/trilio_env_tls_endpoints_public_ip.yaml
Instead of tls-everywhere-endpoints-dns.yaml
file, useenvironments/trilio_env_tls_everywhere_dns.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
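A sketch of such a command; the listed paths and additional environment files are illustrative and must be merged into your existing deploy command:

openstack overcloud deploy --templates \
  <existing environment files of your deployment> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml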
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.
If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround.
In T4O 4.2 and later releases, deriving of the mount point has been changed. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2 and onwards releases.
Starting with Trilio for Openstack 4.0, Trilio for Openstack allows in-place upgrades.
The following versions can be upgraded to each other:
The upgrade process consists of upgrading the Trilio appliance and the Openstack components, and it depends on the underlying operating system.
The Upgrade of Trilio for Canonical Openstack is managed through the charms.
Trilio supports the upgrade of Trilio-Openstack components from any of the older releases (4.1 onwards) to the latest 4.2 hotfix releases without ripping up the older deployments.
Refer to the below-mentioned acceptable values for the placeholders kolla_base_distro and triliovault_tag in this document, as per the Openstack environment:
Please ensure the following points are met before starting the upgrade process:
Either 4.1 or 4.2 GA OR any hotfix patch against 4.1/4.2 should be already deployed
No Snapshot OR Restore is running
Global job scheduler should be disabled
wlm-cron is disabled on the primary Trilio Appliance
Access to the gemfury repository to fetch new packages
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Before the latest configuration script is loaded it is recommended to take a backup of the existing config scripts' folder & Trilio ansible roles. The following command can be used for this purpose:
Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack.
Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.
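A sketch of these two steps; the repository URL, branch name, and role path are assumptions and should be taken from the Trilio release notes:

git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git   # assumed repository URL
cd triliovault-cfg-scripts/kolla-ansible/
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/     # assumed role location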
This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.
In case of no changes, this step can be skipped.
This is required if some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; in that case they need to be updated in the /etc/kolla/globals.yml file.
This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.
In case of no changes, this step can be skipped.
This step is required, when some password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml needs to be updated.
This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.
In case of no changes, this step can be skipped.
This is required because, if some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml, they need to be updated in the /usr/local/share/kolla-ansible/ansible/site.yml file.
This step is not always required. It is recommended to compare triliovault_inventory.yml with the Trilio entries in the /root/multinode file.
In case of no changes, this step can be skipped.
By default, the triliovault-datamover-api service gets installed on ‘control' hosts and the trilio-datamover service gets installed on 'compute’ hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.
T4O group names are ‘triliovault-datamover-api’ and ‘triliovault-datamover’
On kolla-ansible server node, change directory
Edit file 'triliovault_nfs_map_input.yml
' in the current directory and provide compute host and NFS share/ip map.
vi triliovault_nfs_map_input.yml
Update PyYAML
on the kolla-ansible server node only
Expand the map file to create a one-to-one mapping of compute and NFS share.
The result will be in file - 'triliovault_nfs_map_output.yml
'
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) available in the current directory and validate that all compute nodes are covered with all necessary NFS shares.
Append this output map file to triliovault_globals.yml
File Path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
Ensure to set multi_ip_nfs_enabled in the triliovault_globals.yml file to 'yes'.
A new parameter is added to triliovault_globals.yml
, set this parameter value to 'yes' if backup target NFS supports multiple endpoints/IPs.
File Path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml’
multi_ip_nfs_enabled: 'yes'
Later append triliovault_globals.yaml
file to /etc/kolla/globals.yml
Edit /etc/kolla/globals.yml
file to fill triliovault backup target and build details. You will find the triliovault related parameters at the end of globals.yml
. The user needs to fill in details like triliovault build version, backup target type, backup target details, etc.
Following is the list of parameters that the user needs to edit.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml
and find nova_libvirt_default_volumes
variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared
to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes
in the same file and append the mount bind /var/trilio:/var/trilio:shared
to the list.
After the change, the variable for a default Kolla installation will look as follows:
In the case of using Ironic compute nodes one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes
and append trilio mount /var/trilio:/var/trilio:shared
to the list.
After the changes the variable will look like the following:
In case the user doesn't want to use the Docker Hub registry for triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.
Following are the triliovault container image URLs for 4.3.2 releases.
Replace kolla_base_distro
and triliovault_tag
variables with their values.
The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro. The {{ triliovault_tag }} value is mentioned at the start of this document.
Below are the Source-based OpenStack deployment images
Below are the Binary-based OpenStack deployment images
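Illustrative URL patterns for the source-based and binary-based lists above; confirm the exact repository names against the Trilio release notes for your version:

docker.io/trilio/{{ kolla_base_distro }}-source-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-source-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-source-trilio-horizon-plugin:{{ triliovault_tag }}

docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}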
Activate the login into dockerhub for Trilio tagged containers.
Please get the Dockerhub login credentials from Trilio Sales/Support team
Run the below command from the directory with the multinode file to pull the required images.
Run the below command from the directory with the multinode file to start the upgrade process.
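Assuming the default inventory file name multinode, the two commands above could look like this:

kolla-ansible -i multinode pull      # pull the required Trilio container images
kolla-ansible -i multinode deploy    # push the new configuration and containers to the cloud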
Verify on the controller and compute nodes that the Trilio containers are in UP state.
Following is a sample output of the commands from controller and compute nodes. triliovault_tag will have the value matching the OpenStack release where the deployment is being done.
Following are the default haproxy conf parameters set against triliovault datamover api service.
These values work best for the triliovault dmapi service. It is not recommended to change these parameter values. However, in some exceptional cases, if any of the above parameter values need to be changed, the same can be done on the kolla-ansible server in the following file.
After editing, run kolla-ansible deploy command again to push these changes to openstack cloud.
Post kolla-ansible deploy, to verify the changes, please check following file, available on all controller/haproxy nodes.
In T4O 4.2 and later releases, deriving of the mount point has been changed. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2 and onwards releases.
The Trilio Ansible OpenStack playbook can be run to uninstall the Trilio services.
To cleanly remove the Trilio Datamover API container run the following Ansible playbook.
Remove the tvault-dmapi_hosts
and tvault_compute_hosts
entries from /etc/openstack_deploy/openstack_user_config.yml
Remove Trilio Datamover API settings from /etc/openstack_deploy/user_variables.yml
Go inside galera container.
Login as root user in mysql database engine.
Drop dmapi database.
Drop dmapi user
Go inside rabbitmq container.
Delete dmapi user.
Delete dmapi vhost.
Remove /etc/haproxy/conf.d/datamover_service
file.
Remove HAproxy configuration entry from /etc/haproxy/haproxy.cfg
file.
Restart the HAproxy service.
List all VMs running on the KVM node
Destroy the Trilio VMs
Undefine the Trilio VMs
Delete the Trilio VM disk from the KVM host storage
Filename and location:
This file is used only when the user wants to configure 'multiple IP/endpoints based NFS share' as a backup target for Trilio. In all other cases like single IP NFS, S3 this file does not get used. Follow regular install doc.
If the user is using multiple-IP/endpoints based NFS share as a backup target for triliovault then triliovault mounts any one IP/endpoint on a given compute node for a given NFS share. Users can distribute NFS share IPs/endpoints across compute nodes.
'triliovault_nfs_map_input.yml
' file is a tool that users can use to distribute/load balance NFS share endpoints across compute nodes in a given cloud.
Here, the User has ‘one’ NFS share exposed with three IP addresses. 192.168.1.34, 192.168.1.35, 192.168.1.33
Share directory path is: /var/share1
So, this NFS share supports the following full paths that clients can mount:
There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern
The remaining 2 node hostnames do not follow any format/pattern.
Now the mapping file will look like this
Compute node IP range used here: 172.30.3.11-40 and 172.30.4.40, 172.30.4.50 Total of 32 compute nodes
Use the following command to get compute hostnames. Check the ‘Name' column. Use these exact hostnames in 'triliovault_nfs_map_input.yml
' file.
In the following command output, ‘overcloudtrain1-novacompute-0' and ‘overcloudtrain1-novacompute-1' are correct hostnames.
Compute hostnames/IPs should match in kolla-ansible inventory file and triliovault_nfs_map_input.yml.
If IP addresses are used in the kolla-ansible inventory file, then the same IP addresses should be used in the ‘triliovault_nfs_map_input.yml' file too. If the kolla-ansible deploy command looks like the following,
kolla-ansible -i multinode deploy
then, the inventory file is 'multinode'. Generally, it is available at “/root/multinode“.
Compute hostnames/IPs should match in the Openstack Ansible inventory file and triliovault_nfs_map_input.yml
If IP addresses have been used in the Openstack-ansible inventory file then you should use the same IP addresses in the ‘triliovault_nfs_map_input.yml' file too.
Generally inventory file available at /etc/openstack_deploy/openstack_user_config.yml
Openstack Ansible deploy command looks like following,
openstack-ansible os-tvault-install.yml
Openstack Ansible automatically picks up values that the user has set in the file openstack_user_config.yml
For example: RHOSP16.1 to RHOSP16.2 upgrade
Without any changes to TrilioVault, the customer needs to complete the RHOSP upgrade process first.
Check if the TrilioVault release currently deployed on the RHOSP cloud supports the new RHOSP release. If it does not, identify the next TrilioVault release which supports the new RHOSP release.
Follow all the steps.
Please ensure following points before starting the upgrade process:
Either 4.1 GA OR any hotfix patch against 4.1 should be already deployed for performing upgrades mentioned in the current document.
No snapshot OR restore to be running.
Global job scheduler should be disabled.
wlm-cron should be disabled (on primary T4O node).
pcs resource disable wlm-cron
Check : systemctl status wlm-cron OR pcs resource show wlm-cron
Additional step : To ensure that cron is actually stopped, search for any lingering processes against wlm-cron and kill them. [Cmd : ps -ef | grep -i workloadmgr-cron]
If the backup target is Ceph S3 with SSL and SSL certificates are self-signed or authorized by private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, user needs to rename his CA chain cert file to 's3-cert.pem' and copy it into directory - 'triliovault-cfg-scripts/redhat-director-scripts/redhat-director-scripts/puppet/trilio/files'.
In this step, we are going to pull triliovault container images to the user’s registry.
Trilio containers are pushed to ‘Dockerhub'. The registry URL is ‘docker.io’. Following are the triliovault container pull URLs.
Trilio container URLs for TripleO Train CentOS7:
There are two registry methods available in the TripleO Openstack Platform.
Remote Registry
Local Registry
Identify which method you are using. Below we have explained both methods to pull and configure TrilioVault's container images for overcloud deployment.
If you are using the 'Remote Registry' method follow this section. You don't need to pull anything. You just need to populate the following container URLs in trilio env yaml.
Populate 'environments/trilio_env.yaml' file with triliovault container urls. Changes look like the following.
If you are using 'local registry' on undercloud, follow this section.
Run the following script. Script pulls the triliovault containers and updates the triliovault environment file with URLs.
The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in ‘environments/trilio_env.yaml’. Verify the changes using the following command.
On Undercloud node, change directory
Edit file 'triliovault_nfs_map_input.yml
' in the current directory and provide compute host and NFS share/ip map.
Get the compute hostnames from the following command. Check ‘Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml
' file.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only
If pip isn't available please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
The result will be in file - 'triliovault_nfs_map_output.yml
'
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
Append this output map file to 'triliovault-cfg-scripts
/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
'
Validate the changes in file 'triliovault-cfg-scripts
/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
'
Include the environment file (trilio_nfs_map.yaml
) in overcloud deploy command with '-e' option as shown below.
Ensure that MultiIPNfsEnabled
is set to true in
trilio_env.yaml file and that nfs is used as backup target.
Refer old 'trilio_env.yaml' - (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) file and update new trilio_env.yaml file.
Fill triliovault details in file - '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml', triliovault environment file is self explanatory. Fill details of backup target, verify image urls and other details.
A new separate triliovault service is introduced in the Trilio 4.3 release for the Trilio Horizon plugin. Users need to add the following service to the roles_data.yaml file, and this service needs to be co-located with the openstack horizon service.
OS::TripleO::Services::TrilioHorizon
Use the following heat environment file and roles data file in overcloud deploy command
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains overcloud roles data with Trilio roles added. This file need not be changed, you can use the old role_data.yaml file
Use the correct trilio endpoint map file as per your keystone endpoint configuration.
- Instead of tls-endpoints-public-dns.yaml
this file, use ‘environments/trilio_env_tls_endpoints_public_dns.yaml’
- Instead of tls-endpoints-public-ip.yaml
this file, use ‘environments/trilio_env_tls_endpoints_public_ip.yaml’
- Instead of tls-everywhere-endpoints-dns.yaml
this file, use ‘environments/trilio_env_tls_everywhere_dns.yaml’
Deploy command with triliovault environment file looks like following.
Make sure Trilio dmapi and horizon containers(shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. You need to revisit the above steps
Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. You need to revisit the above steps.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin
Trilio components will be deployed using puppet scripts.
Workaround:
Note : Below mentioned steps required only if target backend is NFS.
Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
The Global-Job-Scheduler is disabled
wlm-cron
is disabled on the Trilio Appliance
Access to the Gemfury repository to fetch new packages \
Note: For a single IP-based NFS share as a backup target, refer to this rolling upgrade document for Ansible Openstack. Users need to follow the Ansible Openstack installation document if they have a multiple IP-based NFS share.
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Add the Gemfury repository on each of the DMAPI containers, Horizon containers & Compute nodes.
Create a file /etc/apt/sources.list.d/fury.list and add the below line to it.
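The line follows the standard apt format for a Gemfury repository; the repository URL below is a placeholder and must be replaced with the URL provided by Trilio:

deb [trusted=yes] https://<gemfury-repository-url>/ /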
The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
Add Trilio repo on each of the DMAPI containers, Horizon containers & Compute nodes.
Modify the file /etc/yum.repos.d/trilio.repo and add the below line in it.
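A minimal sketch of such a repo entry; the baseurl is a placeholder for the repository URL provided by Trilio:

[trilio]
name=Trilio Repository
baseurl=https://<trilio-repository-url>/
enabled=1
gpgcheck=0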
The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
The following steps represent the best practice procedure to upgrade the DMAPI service.
Login to DMAPI container
Take a backup of the DMAPI configuration in /etc/dmapi/
use apt list --upgradeable to identify the package used for the dmapi service
Update the DMAPI package
restore the backed-up config files into /etc/dmapi/
Restart the DMAPI service
Check the status of the DMAPI service
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
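A hedged sketch of these steps; the package name (python3-dmapi) and service name (dmapi) are assumptions and may differ in your environment, so cross-check them with the package list and the unit files inside the container:

cp -R /etc/dmapi /etc/dmapi.bak                    # back up the DMAPI configuration
apt list --upgradable | grep -i dmapi              # identify the dmapi package
apt-get install --only-upgrade python3-dmapi -y    # package name is an assumption
cp -R /etc/dmapi.bak/. /etc/dmapi/                 # restore the backed-up config files
systemctl restart dmapi                            # service name is an assumption
systemctl status dmapi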
The following steps represent the best practice procedure to update the Horizon plugin.
Login to Horizon Container
use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient and tvault-horizon-plugin
Install the tvault-horizon-plugin package in the required python version
Install the workloadmgrclient package
Install the contegoclient
Restart the Horizon webserver
Check the installed version of the workloadmgrclient
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
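A hedged sketch of these steps; the python3 package names and the webserver service name (apache2) are assumptions to verify against your Horizon container:

apt list --upgradable | grep -iE 'workloadmgr|contego|horizon'
apt-get install --only-upgrade python3-tvault-horizon-plugin -y   # package names are assumptions
apt-get install --only-upgrade python3-workloadmgrclient -y
apt-get install --only-upgrade python3-contegoclient -y
systemctl restart apache2                                         # restart the Horizon webserver
apt policy python3-workloadmgrclient                              # check the installed client version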
The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.
Login into the compute node
Take a backup of the config files at /etc/tvault-contego/ and /etc/tvault-object-store (if S3)
Unmount storage mount path
Upgrade the tvault-contego package in the required python version
(S3 only) Upgrade the s3-fuse-plugin package
Restore the config files
(S3 only) Restart the tvault-object-store service
Restart the tvault-contego service
Check the status of the service(s)
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
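A hedged sketch for an NFS-backed compute node; the package name and the exact mount path are assumptions to verify on the node:

cp -R /etc/tvault-contego /etc/tvault-contego.bak         # back up the config files
df -h | grep triliovault-mounts                           # identify the NFS mount path
umount /var/triliovault-mounts/<base64-hash>              # unmount the backup target
apt-get install --only-upgrade python3-tvault-contego -y  # package name is an assumption
cp -R /etc/tvault-contego.bak/. /etc/tvault-contego/      # restore the config files
systemctl restart tvault-contego
systemctl status tvault-contego
df -h | grep triliovault-mounts                           # verify the mount point is back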
Take a backup of the config files
Check the mount path of the NFS storage using the command df -h and unmount the path using the umount command.
e.g.
Upgrade the Trilio packages:
Deb-based (Ubuntu):
RPM-based (CentOS):
Restore the config files, restart the service and verify the mount point
Take a backup of the config files
Check the mount path of the S3 storage using the command df -h and unmount the path using the umount command.
Upgrade the Trilio packages
Deb-based (Ubuntu):
RPM-based (CentOS):
Restore the config files, restart the service and verify the mount point
The following haproxy cfg parameters are recommended for optimal performance of the dmapi service. The file is located on the controller at /etc/haproxy/haproxy.cfg.
Remove the below content, if present, from the file /etc/openstack_deploy/user_variables.yml on the ansible host.
Add the below lines at the end of the file /etc/openstack_deploy/user_variables.yml on the ansible host.
Update Haproxy configuration using the below command on the ansible host.
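Assuming a standard openstack-ansible installation under /opt/openstack-ansible (path is an assumption), the haproxy playbook can be re-run as follows:

cd /opt/openstack-ansible/playbooks
openstack-ansible haproxy-install.yml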
T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2
The container needs to be cleaned on all nodes where the triliovault_datamover_api container is running.
The Kolla Openstack inventory file helps to identify the nodes with the service.
Following steps need to be done to clean the triliovault_datamover_api container:
Stop the triliovault_datamover_api container.
Remove the triliovault_datamover_api container.
Clean the /etc/kolla/triliovault-datamover-api directory.
Clean the log directory of the triliovault_datamover_api container (a command sketch is shown below).
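A sketch of these cleanup steps using the docker CLI; the log directory shown is an assumption and depends on how Kolla logging is configured on your hosts:

docker stop triliovault_datamover_api
docker rm triliovault_datamover_api
rm -rf /etc/kolla/triliovault-datamover-api
rm -rf /var/log/kolla/triliovault-datamover-api   # log path is an assumption

The same pattern applies to the triliovault_datamover container on the compute nodes described next.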
The container needs to be cleaned on all nodes where the triliovault_datamover container is running.
The Kolla Openstack inventory file helps to identify the nodes with the service.
Following steps need to be done to clean the triliovault_datamover container:
Stop the triliovault_datamover container.
Remove the triliovault_datamover container.
Clean the /etc/kolla/triliovault-datamover directory.
Clean the log directory of the triliovault_datamover container.
The Trilio Datamover API entries need to be cleaned on all haproxy nodes.
The Kolla Openstack inventory file helps to identify the nodes with the service.
Following steps need to be done to clean the haproxy container:
Delete all Trilio related entries from:
Trilio entries can be found in:
Run deploy command to replace the Trilio Horizon container with original Kolla Ansible Horizon container.
Trilio created a dmapi service with dmapi user.
Trilio Datamover API service has its own database in the Openstack database.
Login to the Openstack database as the root user or a user with similar privileges.
Delete the dmapi database and user (an example is shown below).
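A sketch of the cleanup, assuming the database and user are both named dmapi as created during installation; adjust the user's host part if it differs:

mysql -u root -p -e "DROP DATABASE dmapi;"
mysql -u root -p -e "DROP USER 'dmapi'@'%';"
mysql -u root -p -e "FLUSH PRIVILEGES;"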
List all VMs running on the KVM node
Destroy the Trilio VMs
Undefine the Trilio VMs
Delete the TrilioVault VM disk from the KVM host storage (see the example below)
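A sketch using virsh; the VM name and disk path are placeholders that depend on how the appliance was created:

virsh list --all                                        # list all VMs on the KVM node
virsh destroy <trilio-vm-name>                          # stop the Trilio VM
virsh undefine <trilio-vm-name>                         # remove the VM definition
rm -f /var/lib/libvirt/images/<trilio-vm-disk>.qcow2    # delete the VM disk (path is an assumption)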
When using a secure HTTPS endpoint for non-AWS S3 storage (for example Ceph), you should validate the Certificate Authority (CA) by uploading the corresponding CA certificate. The certificate can be uploaded in the "OS API Certificate" section, under the "Upload Client Certificate" subsection, as explained in .
Trilio supports the upgrade of charms and Trilio components from older releases (4.0 onwards) to the latest release. More information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found .
Installs and manages Trilio Controller services.
Installs and manages the Trilio Datamover API service.
Installs and manages the Trilio Datamover service.
Installs and manages the Trilio Horizon Plugin.
More detailed information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found .
Follow the steps mentioned in this to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.
Please refer for detailed steps to set up the mount bind.
Trilio containers are pushed to 'RedHat Container Registry'. Registry URL is ''. The Trilio container URLs are as follows:
Please refer to the to see which containers are available.
Please refer to the to see which containers are available.
Please refer to the to see which containers are available.
Pull the Trilio containers on the Red Hat Satellite using the given
For more details about the trilio_env.yaml please check .
Edit input map file and fill all the details. Refer to the for details about the structure.
Please follow to set up the mount bind for RHOSP.
In T4O 4.2 and later releases, calculation of the mount point has been changed. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2 and T4O 4.2 onwards releases. Please follow to set up the mount bind for Canonical OpenStack.
The triliovault_nfs_map_input.yml is explained .
Please follow to ensure that Backups taken from T4O 4.1 or older can be used with T4O 4.2 and onwards releases.
Other complex examples are available on github at
Install the TrilioVault release identified in step 2.1. Refer to the TrilioVault install for RHOSP; get the correct deployment document for the Triliovault release from the Trilio support team.
Edit input map file and fill all the details. Refer to the for details about the structure.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights:
Undercloud jobs puppet task certmonger_certificate[haproxy-external-cert] fails with Unrecognized parameter or wrong value type
APPLY the fix directly on the setup. It's not merged in train centos7.
PR:
The fix is needed in the following file on controller and compute nodes
/usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb
Please refer for detailed steps to set up the mount bind.
Please follow for detailed steps to set up mount bind.
To cross-verify the uninstallation undo all steps done in and .
/usr/local/share/kolla-ansible/ansible/roles/ : there is a role named triliovault
/etc/kolla/globals.yml : Trilio entries had been appended at the end of the file
/etc/kolla/passwords.yml : Trilio entries had been appended at the end of the file
/usr/local/share/kolla-ansible/ansible/site.yml : Trilio entries had been appended at the end of the file
/root/multinode : Trilio entries had been appended at the end of this example inventory file
4.0 GA (4.0.92) -> 4.0 SP1 (4.0.115)
4.0 GA (4.0.92) -> 4.1 latest hotfix
4.0 GA (4.0.92) -> 4.2 GA (4.2.64)
4.1 GA (4.1.94) -> 4.1 latest Hotfix
4.1 Hotfix -> 4.1 latest Hotfix
4.1 GA (4.1.94) -> 4.2 GA (4.2.64)
4.1 Hotfix -> 4.2 GA (4.2.64)
4.2 GA (4.2.64) -> 4.3.0
4.2 Hotfix -> 4.3.0
Victoria: 4.3.2-victoria (ubuntu, centos)
Wallaby: 4.3.2-wallaby (ubuntu, centos)
Yoga: 4.3.2-yoga (ubuntu, centos)
Zed: 4.3.2-zed (ubuntu, rocky)
The workloadmgr CLI client is provided as rpm and deb packages.
It has been tested against the following operating systems:
CentOS7, CentOS8
Ubuntu 18.04, Ubuntu 20.04
Installing the workloadmgr client will automatically install all required Openstack clients as well.
Furthermore, the installation of the workloadmgr client integrates the client into the global openstack python client, if available.
The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.
The following steps need to be done to prepare the installation of the workloadmgr client:
Add required repositories
epel-release
for CentOS7: centos-release-openstack-stein
for CentOS8: centos-release-openstack-train
install base packages
yum -y install epel-release
for CentOS7: yum -y install centos-release-openstack-stein
for CentOS8: yum -y install centos-release-openstack-train
There are two possibilities for how the workloadmgr client packages can be installed.
The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the Trilio appliance. These clients will always work with their respective Trilio versions.
The workloadmgr client can be directly downloaded using the following command:
For CentOS7:
wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm
For CentOS8: wget http://<TVM-IP>:8085/yum-repo/queens/python3-workloadmgrclient-<Trilio-Version>-<TVault-Release>.noarch.rpm
The yum package manager is used to install the workloadmgr client package:
yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm
An example installation can be found below:
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo
Enter the following details into the repository file:
Install the workloadmgr client issuing the following command:
For CentOS7: yum install workloadmgrclient
For CentOS8: yum install python-3-workloadmgrclient-el8
An example installation can be found below:
The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.
There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.
There are two possibilities for how the workloadmgr client packages can be installed.
The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the Trilio appliance. These clients will always work with their respective Trilio versions.
The workloadmgr client can be directly downloaded using the following command:
For Python2: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python-workloadmgrclient_<Trilio-Version>_all.deb
For Python3: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb
The apt package manager is used to install the workloadmgr client package:
For Python2: apt-get install ./python-workloadmgrclient_<Trilio-Version>_all.deb -y
For Python3: apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y
An example installation can be found below:
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list
Enter the following details into the repository file:
Run apt update to make the new repository available.
The apt package manager is used to install the workloadmgr client package:
For Python2: apt-get install python-workloadmgrclient
For Python3: apt-get install python3-workloadmgrclient
An example installation can be seen below:
To configure the banner shown upon accessing the Trilio Appliance GUI do the following.
Login into Trilio Appliance console
edit the banner.yaml at /etc/tvault-config/banner.yaml
restart tvault-config service
The content of the banner.yaml looks as follows and can be edited as required:
In case the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.
To reset the password to its default do the following:
The dashboard login will be reset to:
The Trilio Appliance can be reinitialized, which will delete all workload related values from the Trilio database.
To reinitialize the Trilio Appliance do:
Login into the Trilio Dashboard
Click on "admin" in the upper right corner to open the submenu
Choose "Reinitialize"
Verify that you want to reinitialize the Trilio
It is possible to download the Trilio logs directly through the Trilio web gui.
To download logs through the Trilio web gui:
Login into the Trilio web gui
Go to "Logs"
Choose the log to be downloaded
Each log for every Trilio Appliance can be downloaded separately, or a zip of all logfiles can be created and downloaded.
triliovault_tag (<triliovault_tag>): Use the triliovault tag as per your Kolla openstack version. The exact tag is mentioned at the start of this document.
horizon_image_full (uncomment to use): By default, the Trilio Horizon container would not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container.
triliovault_docker_username (<dockerhub-login-username>): Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team.
triliovault_docker_password (<dockerhub-login-password>): Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team.
triliovault_docker_registry (Default: docker.io)
triliovault_backup_target (nfs, amazon_s3, ceph_s3): 'nfs' if the backup target is NFS; 'amazon_s3' if the backup target is Amazon S3; 'ceph_s3' if the backup target type is S3 but not Amazon S3.
multi_ip_nfs_enabled (yes/no, Default: no): This parameter is valid only if you want to use multiple IP/endpoint based NFS shares as the backup target for TrilioVault.
dmapi_workers (Default: 16): If the dmapi_workers field is not present in the config file, the default value will be equal to the number of cores present on the node.
triliovault_nfs_shares (<NFS-IP/FQDN>:/<NFS path>): Only with nfs for triliovault_backup_target. The user needs to provide the NFS share path, e.g. 192.168.122.101:/opt/tvault.
triliovault_nfs_options (Default: 'nolock,soft,timeo=180,intr,lookupcache=none'; for Cohesity nfs: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'): Only with nfs for triliovault_backup_target. Keep the default values if unclear.
triliovault_s3_access_key (S3 Access Key): Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 access key.
triliovault_s3_secret_key (S3 Secret Key): Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 secret key.
triliovault_s3_region_name (S3 Region Name): Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 region or keep the default if no region is required.
triliovault_s3_bucket_name (S3 Bucket name): Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 bucket.
triliovault_s3_endpoint_url (S3 Endpoint URL): Valid for other_s3_compatible only.
triliovault_s3_ssl_enabled (True/False): Only with ceph_s3 for triliovault_backup_target. Set to true if the endpoint is on HTTPS.
triliovault_s3_ssl_cert_file_name (s3-cert-pem): Only with ceph_s3 for triliovault_backup_target. If SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority, the user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not exist already.
triliovault_copy_ceph_s3_ssl_cert (True/False): Set to true if ceph_s3 is used for triliovault_backup_target and SSL is enabled on the S3 endpoint URL with certificates that are self-signed or issued by a private authority.
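A minimal globals.yml sketch for an NFS backup target, using placeholder values for the parameters above (the single-underscore variable names reflect the parameter names as used by the Trilio Kolla scripts; verify them against your triliovault-cfg-scripts copy):

triliovault_tag: "<triliovault_tag>"
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
triliovault_docker_registry: "docker.io"
triliovault_backup_target: "nfs"
multi_ip_nfs_enabled: "no"
triliovault_nfs_shares: "192.168.122.101:/opt/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
dmapi_workers: 16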
Please ensure to complete the upgrade of all the Trilio components on the Openstack controller & compute nodes before starting the rolling upgrade of Trilio.
The mentioned Gemfury repository should be accessible from Trilio VM.
Either 4.2 GA OR any hotfix patch against 4.2 should be already deployed for performing upgrades mentioned in the current document.
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job-scheduler should be disabled.
wlm-cron should be disabled and any lingering process should be killed.
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Download the hf_upgrade.sh script on all T4O nodes where the upgrade is to be done.
Run the following command to download and Install the upgraded packages
Run the following command to enable the wlm-cron service
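A sketch, assuming wlm-cron is the pacemaker resource name used on the appliance:

pcs resource enable wlm-cron
pcs status   # wlm-cron should be reported as Started on one node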
Here the packages need to be downloaded on a separate host which has internet access.
The script ./hf_upgrade.sh can be used with the below-mentioned option to download the required packages.
Copy the downloaded 4.3-offlinePkgs.tar.gz package and hf_upgrade.sh script to all the Trilio nodes.
Now access the appliance VMs and install the copied offline packages on each of the Trilio appliances by running the below-mentioned command:
Once the installation is done, run the following command to enable the wlm-cron service:
Verify all wlm services on all Trilio nodes
Make sure all wlm services are up and running from python version 3.8.x
Check the mount point using “df -h” command
Grafana dashboard shows the correct wlm service status on all T4O nodes.
Enable Global Job Scheduler
Additional check for wlm-cron on the primary node
Red Hat OpenStack, TripleO, and Kolla Ansible Openstack are using the nova UID/GID of 42436 inside their containers instead of 162:162 which is the standard in other Openstack environments.
Please verify that the nova UID/GID on the Trilio Appliance is still 42436, to do so follow the below document provided here:
Trilio for OpenStack is generating a base64 hash value for every NFS backup target connected to the T4O solution. This enables T4O to mount multiple NFS backup targets to the same T4O installation.
The mountpoints are generated utilizing a hash value inside the mountpoint, providing a unique mount for every NFS backup target.
This mountpoint is then used inside the incremental backups to point to the qcow2 backing files. The backing file path is required as a full path and can not be set as a relative path. This is a limitation of the qcow2 format.
T4O 4.2 has changed how the hash value gets calculated. T4O 4.1 and prior calculated the hash value out of the complete NFS path provided as shown in the example below.
T4O 4.2 is now only considering the NFS directory part for the hash value as shown below.
It is therefore necessary to make older backups taken by T4O 4.1 or prior available for T4O 4.2 to restore.
This can be done by one of two methods:
Rebase all backups and change the backing file path
Make the old mount point available again and point it to the new one using mount bind
Trilio provides a script that takes care of the rebase procedure. This script can be downloaded from the following location.
Copy and use the script from the Trilio appliance as the nova user.
The nova user is required, as this user owns the backup files, and using any other user will change the ownership and lead to unrestorable backups.
The script requires the complete path to the workload directory on the backup target.
This needs to be repeated until all workloads have been rebased.
This method generates a second mount point based on the old hash value calculation and then mounts the new mount point to the old one. By doing this mount bind, both mount points will be available and point to the same backup target.
To use this method the following information needs to be available:
old mount path
new mount path
Examples of how to calculate the old and new mountpoint base64 hash values are shown below.
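A sketch using the NFS share example from earlier in this document (192.168.122.101:/opt/tvault); the old hash is calculated over the full NFS export string and the new hash only over the directory part:

# Old mountpoint hash value (full NFS export string):
echo -n "192.168.122.101:/opt/tvault" | base64
# New mountpoint hash value (directory part only):
echo -n "/opt/tvault" | base64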
The mount bind needs to be done for all Trilio appliances and Datamover services.
To enable the mount bind on the Trilio appliance follow these steps:
Create the old mountpoint directory: mkdir -p <old_mount_path>
Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>
Set permissions for the mountpoint: chmod 777 <old_mount_path>
An example is given below.
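A sketch of the mount bind on the appliance, assuming the /var/triliovault-mounts/ prefix used by Trilio and the example NFS share above:

OLD_MOUNT=/var/triliovault-mounts/$(echo -n "192.168.122.101:/opt/tvault" | base64)
NEW_MOUNT=/var/triliovault-mounts/$(echo -n "/opt/tvault" | base64)
mkdir -p "$OLD_MOUNT"                    # create the old mountpoint directory
mount --bind "$NEW_MOUNT" "$OLD_MOUNT"   # bind the new mount path to the old one
chmod 777 "$OLD_MOUNT"                   # set permissions for the mountpoint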
The following steps need to be done on the overcloud compute node. They don't need to be done inside any container.
To enable the mount bind on the compute node follow these steps:
Create the old mountpoint directory: mkdir -p <old_mount_path>
Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>
Set permissions for the mountpoint: chmod 777 <old_mount_path>
The following steps need to be done on the overcloud compute node. They don't need to be done inside any container.
To enable the mount bind on the compute node follow these steps:
Create the old mountpoint directory: mkdir -p <old_mount_path>
Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>
Set permissions for the mountpoint: chmod 777 <old_mount_path>
The following steps need to be done on the overcloud compute node. They don't need to be done inside any container.
To enable the mount bind on the compute node follow these steps:
Create the old mountpoint directory: mkdir -p <old_mount_path>
Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>
Set permissions for the mountpoint: chmod 777 <old_mount_path>
Canonical OpenStack does the creation of the mountpoint and the mount bind through Juju using the following commands.
To create the mountpoint, if it doesn't already exist:
To create the mount bind
Trilio is using a base64 hash for the mount point of NFS Backup targets. This hash makes sure, that multiple NFS Shares can be used with the same Trilio installation.
This base64 hash is part of the Trilio incremental backups as an absolute path of the backing files. This requires the usage of mount bind during a DR scenario or quick migration scenario.
In the case that there is time for a thorough migration there is another option to change the backing file and make the Trilio backups available on a different NFS Share. This option is updating the backing file to the new NFS Share mount point.
Trilio provides a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.
The Shell script is publicly available at:
The following requirements need to be met before the change of the backing file can be attempted.
The Trilio Appliance has been reconfigured with the new NFS Share
The Openstack environment has been reconfigured with the new NFS Share
The workloads are available on the new NFS Share
The workloads are owned by nova:nova user
The shell script changes one workload at a time.
The shell script has to be run as the nova user; otherwise the ownership will change and the backup can no longer be used by Trilio.
Run the following command, with /var/triliovault-mounts/<base64>/ being the new NFS mount path and workload_<workload_id> being the workload to rebase:
The shell script generates a log file at the following location:
The log file will not get overwritten when the script is run multiple times. Each run of the script appends to the existing log file.
The Trilio Appliance Dashboard gives an overview of the running services and their status inside the cluster. The dashboard can be accessed using the virtual IP.
It shows for each Trilio Appliance the Status of the following Trilio services:
wlm-workloads
wlm-scheduler
wlm-api
wlm-cron
To give administrators an overview of the HA status, the dashboard also shows the service status for:
Pacemaker
RabbitMQ
MySQL Galera Cluster
The first step is to remove the datamover container and to unmount the old mounts. This is necessary to make sure, that the new datamover container with the new backend target is not getting any interference from the old backup target.
Edit the globals.yml file to contain the new backup target.
The Trilio appliance can be reconfigured at any time to adjust the Trilio cluster to any changes in the Openstack environment or the general backup solution.
To reconfigure the Trilio Cluster go to the "Configure" page. The configure page shows the current configuration of the Trilio cluster.
The configuration page also gives access to the ansible playbooks of the last successful configuration.
To start the reconfiguration of the Trilio Cluster click "Reconfigure" at the end of the table.
Learn about encrypting Trilio workloads with Barbican
Integrating Trilio for OpenStack 4.2 with Barbican facilitates encryption support for the qcow2-data segment of Trilio backups. However, the JSON files housing the backed-up OpenStack metadata remain unencrypted.
This capability necessitates the presence of the OpenStack Barbican service. Absence of the Barbican service will result in the omission of encryption options for Workloads within the Horizon interface.
Trilio for OpenStack (T4O) 4.2 exclusively retrieves secrets from Barbican, without generating, modifying, or deleting any secrets within the Barbican platform.
Barbican secrets are indispensable for executing backups or restorations in encryption-enabled Workloads. The onus lies on the OpenStack project user to supply these secrets and guarantee their accurate availability.
In order to employ encrypted Workloads, the Trilio trustee role must be capable of interacting with the Barbican service to access and retrieve secrets from Barbican. By default, only the 'admin' and 'creator' roles are endowed with these permissions.
Encryption availability for a Workload is confined to the Workload creation stage. Post-creation, the encryption status of a Workload is irreversible; it cannot be altered or toggled.
Algorithm: AES-256 (All supported types utilize this algorithm)
Mode: cbc
content type: application/octet-stream
payload-content-encoding: base64
secret type: opaque
payload: plaintext
For encrypted workloads, Barbican should be enabled on Openstack, and while configuring T4O 4.2 the trustee role should be kept as creator.
Additionally, every user who will be interacting with TrilioVault for any operation should also have the creator role assigned.
A secret can be created via the OpenStack CLI by following the steps below.
Source the rc file of the user who is going to create the encrypted workload.
Create any supported type of secret and fetch the secret UUID using the OpenStack secret CLI.
Below is an example of creating a symmetric secret type and fetching the secret UUID.
Generate a new 256-bit key using order create
()[root@overcloudtrain1-controller-0 /]# openstack secret order create --name secret2 --algorithm aes --mode ctr --bit-length 256 --payload-content-type=application/octet-stream key
+----------------+-------------------------------------------------------------------------------------------+
| Field | Value |
+----------------+-------------------------------------------------------------------------------------------+
| Order href | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d |
| Type | Key |
| Container href | N/A |
| Secret href | None |
| Created | None |
| Status | None |
| Error code | None |
| Error message | None |
+----------------+-------------------------------------------------------------------------------------------+
b. View the details of the order to identify the location of the generated key, shown here as the Secret href value:
()[root@overcloudtrain1-controller-0 /]# openstack secret order get https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d
+----------------+--------------------------------------------------------------------------------------------+
| Field | Value |
+----------------+--------------------------------------------------------------------------------------------+
| Order href | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d |
| Type | Key |
| Container href | N/A |
| Secret href | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c |
| Created | 2023-07-12T11:38:17+00:00 |
| Status | ACTIVE |
| Error code | None |
| Error message | None |
+----------------+--------------------------------------------------------------------------------------------+
c. Fetch the secret UUID via the below command (use the Secret href):
()[root@overcloudtrain1-controller-0 /]# openstack secret get https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c
+---------------+--------------------------------------------------------------------------------------------+
| Field | Value |
+---------------+--------------------------------------------------------------------------------------------+
| Secret href | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c |
| Name | secret2 |
| Created | 2023-07-12T11:38:17+00:00 |
| Status | ACTIVE |
| Content types | {'default': 'application/octet-stream'} |
| Algorithm | aes |
| Bit length | 256 |
| Secret type | symmetric |
| Mode | ctr |
| Expiration | None |
+---------------+--------------------------------------------------------------------------------------------+
()[root@overcloudtrain1-controller-0 /]#
Note down the last value of the Secret href URL, which is the UUID (8-4-4-4-12 format). For example, the secret UUID for the Secret href URL https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c will be f2c96fe2-6ae7-4985-b98c-e571ba05403c. This UUID will be used further for creating the encrypted workload.
Login to the OpenStack Horizon dashboard with the user who created the secret and go to Project -> Backup -> Workloads.
Click on Create Workload.
Select the checkbox Enable Encryption.
Provide the UUID noted in the above steps in the Secret UUID text box.
Follow the usual procedure for the further tabs (Workload member, Schedule, Policy & Options) and click on Create.
The workload will be created and the value of the Encryption field will be True.
The Trilio database follows an older OpenStack database schema. This schema does not discard anything; instead it sets the deleted flag for any data that is no longer valid or has been deleted.
This leads to ever-growing databases, which can lead to performance degradation.
To counter this a database clean-up and optimization tool has been made available.
The following Trilio services are providing certificates for secured access to the Trilio solution.
The TVault-Config service and the Nginx Resource for the Grafana Dashboard are using the same certificate.
The certificate used is a symlink to a host-specific certificate. Each Trilio VM has its own self-signed certificate by default, which is recreated every time the TVault-Config service is restarted.
When the certificate for the TVault-Config and Nginx (Grafana) is to be changed to a customer-chosen certificate, it is required to deactivate the recreation of the certificates upon service restart.
Login into the Trilio VM via SSH
Edit the following file:
/home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/tvault_config_bottle.py
Look for create_ssl_certificates() in the main function
Comment out create_ssl_certificates()
Repeat for all nodes of the Trilio cluster
The resulting main function will look like this:
Afterward, the certificates can be replaced manually by overwriting the files.
Once the certificates have been replaced by the desired ones restart the TVault-Config service and the Nginx pcs resource.
The certificate provided by the Nginx for the wlm-api service is set during configuration when HTTPS endpoints are configured for the Trilio appliance. This certificate is provided to the end-user or Openstack every time an API call to the Trilio solution is sent.
The certificate and its related private key can be changed through the OS API certificate tab.
In this tab is the section to Upload Server certificate | Private key. Use this section to update the wlm-api certificate as required.
To gracefully shutdown/restart the Trilio cluster the following steps are recommended.
It is recommended to verify that no snapshots or restores are running on the Trilio Cluster.
Stopping or restarting the Trilio cluster will cancel all actively running backup or restore jobs. These jobs will be marked as errored after the system has come up again.
This can be verified using the following two commands:
The Trilio cluster is using the pacemaker service for setting the VIP(s) of the cluster and controlling the active node for the wlm-cron service. The identified node will be the last to shut down in case that the whole cluster gets shut down.
This can be checked using the following command:
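Assuming the standard pacemaker tooling on the appliance, this can be checked with:

pcs status   # shows the node currently holding the VIP(s) and the wlm-cron resource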
In the following example the master node is tvm1.
A single node in the cluster can be shut down or restarted without issues. All services will come up and the RabbitMQ and Galera services will rejoin the remaining cluster.
When the master node gets shutdown or restarted the VIP(s) and the wlm-cron service will switch to one of the remaining cluster nodes.
To speed up the shutdown/restart process it is recommended to stop the Trilio services, the RabbitMQ service, and the MariaDB service on the node.
After the services have been stopped the node can be restarted or shut down using standard Linux commands.
Restarting the whole cluster node by node follows the same procedure as restarting a single node, with the difference that each restarted node needs to be fully started again before the next node can be restarted.
When the complete cluster needs to get stopped and restarted at the same time the following procedure needs to be completed.
The procedure on a high level is:
Shutdown the two slave nodes
Shutdown the master node
Start the master node
Enable the Galera cluster
Start the two slave nodes
Before shutting down the two slave nodes it is recommended to stop running Trilio services, the RabbitMQ server, and the MariaDB on the nodes.
Afterward, the nodes can be shut down.
Before shutting down the master node it is recommended to stop the running Trilio services, the RabbitMQ server, the MariaDB, the wlm-cron and the VIP(s) resources in Pacemaker.
Afterward, the node can be shut down.
The first server that is booted will be the master node. It is highly recommended that the old master node is booted first again.
Login into the freshly started master node and run the following command. This will restart the Galera cluster with this node as master.
After the master node has been booted and the Galera cluster started, the remaining nodes can be started and will automatically rejoin the Trilio cluster.
A Snapshot is a single Trilio backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
The List of Snapshots for the chosen Workload contains the following additional information:
Creation Time
Name of the Snapshot
Description of the Snapshot
Total amount of Restores from this Snapshot
Total amount of succeeded Restores
Total amount of failed Restores
Snapshot Type
Snapshot Size
Snapshot Status
Snapshots are automatically created by the Trilio scheduler. If necessary, or in case the scheduler is deactivated, it is possible to create a Snapshot on demand.
There are 2 possibilities to create a snapshot on demand.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.
To reach the Snapshot Overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
The Snapshot Details Tab shows the most important information about the Snapshot.
Snapshot Name / Description
Snapshot Type
Time Taken
Size
Which VMs are part of the Snapshot
for each VM in the Snapshot
Instance Info - Name & Status
Security Group(s) - Name & Type
Flavor - vCPUs, Disk & RAM
Networks - IP, Networkname & Mac Address
Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size
Misc - Original ID of the VM
The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.
The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.
Creation Time
Last Update time
Snapshot ID
Workload ID of the Workload containing the Snapshot
Once a Snapshot is no longer needed, it can be safely deleted from a Workload.
There are 2 possibilities to delete a Snapshot.
To delete a single Snapshot through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Delete Snapshot"
Confirm by clicking "Delete"
To delete one or more Snapshots through the Snapshot overview do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Check the checkbox for each Snapshot that shall be deleted
Click "Delete Snapshots"
Confirm by clicking "Delete"
Ongoing Snapshots can be canceled.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to cancel
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click "Cancel" on the same line as the identified Snapshot
Confirm by clicking "Cancel"
By default the Trilio GUI is available on all NICs on port 443.
To limit this to only one IP the following steps need to be applied.
The Trilio Appliance provides by default the possibility of 4 VIPs.
A general VIP which can be used for everything
A public VIP for the public endpoint
An internal VIP for the internal endpoint
An admin VIP for the admin endpoint
Should an additional VIP be required to restrict access to the Trilio Dashboard, the new VIP needs to be created as a new resource inside the PCS cluster.
When the new dashboard_ip has been created or decided, then the next step is to set up the proxy forwarding inside Nginx, which will make the Trilio GUI available through port 8000.
All of the following steps need to be done on all Trilio appliances of the cluster.
Create a new conf file at /etc/nginx/conf.d/tvault-dashboard.conf. Replace the variables dashboard_ip and virtual_ip as configured or decided.
Edit /etc/nginx/nginx.conf and uncomment the line #include /etc/nginx/conf.d/*.conf;
check nginx syntax: nginx -t
reload nginx conf: nginx -s reload
Verify whether the new cluster resource is visible using the pcs resource command and by accessing the dashboard_ip.
The configured dashboard_ip will always terminate at the nginx service on port 8000 and will then be forwarded to the local dashboard service on port 443.
This configuration limits the required access to the local dashboard service to the Trilio appliance cluster itself. All other connections on port 443 can be dropped.
The following commands will set the required iptable rules.
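A hedged sketch of such rules; interface names, CIDRs and rule ordering must be adapted to the actual appliance network layout:

# allow the GUI only via the dashboard VIP on port 8000
iptables -A INPUT -d <dashboard_ip>/32 -p tcp --dport 8000 -j ACCEPT
iptables -A INPUT -p tcp --dport 8000 -j DROP
# allow port 443 only from the Trilio appliance cluster itself, drop everything else
iptables -A INPUT -s <appliance_cluster_cidr> -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP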
At this point the Trilio GUI is only reachable on the dashboard_ip on port 8000. Accessing the Trilio GUI through any other IP or on port 443 is not allowed.
If users want to use a different container registry for the triliovault containers, then the user can edit this value. In that case, the user first needs to manually pull triliovault containers from and push them to the other registry.
You can check the usage of this script .
Download the hf_upgrade.sh script and use the script to download the upgraded packages.
You can check the usage of this script .
Please check for reconfiguring the Trilio Appliance
Please check for Red Hat Openstack Platform
Please check for Canonical Openstack
Please check for Kolla Ansible Openstack
Please check for Ansible Openstack
Follow the to learn about the globals.yml Trilio variables.
Follow with the new backup target.
Follow the guide afterwards.
When the reconfiguration is required to switch to an external database it is necessary to and configure it from scratch.
Refer to this guide for more information about
For migration of encrypted workload please follow:
For upgrade from older release to 4.2 please follow:
For Trilio configuration please follow:
--workload_id <workload_id>: Filter results by workload_id
--tvault_node <host>: List all the snapshot operations scheduled on a tvault node (Default=None)
--date_from <date_from>: From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is taken by default.
--date_to <date_to>: To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day, inclusive/exclusive results for date_from and date_to.
--all {True,False}: List all snapshots of all the projects (valid for admin user only)
<workload_id>: ID of the workload to snapshot.
--full: Specify if a full snapshot is required.
--display-name <display-name>: Optional snapshot name. (Default=None)
--display-description <display-description>: Optional snapshot description. (Default=None)
Please refer to the User Guide to learn more about Restores.
<snapshot_id>: ID of the snapshot to be shown
--output <output>: Option to get additional snapshot details. Specify --output metadata for snapshot metadata, --output networks for snapshot vms networks, --output disks for snapshot vms disks
<snapshot_id>: ID of the snapshot to be deleted
<snapshot_id>: ID of the snapshot to be canceled
passphrase: modes cbc, xts; content type text/plain; payload-content-encoding None; payload plaintext
symmetric keys: modes ctr, cbc, xts; content type application/octet-stream; payload-content-encoding base64; payload encoded with base64
opaque: modes cbc, ctr, xts; content type text/plain; payload-content-encoding None; payload plaintext
TVault-Config, port 443: Webservice providing the TrilioVault Dashboard
Nginx (wlm-api), port 8780: Provides the VIP for the wlm-api service
Nginx (Grafana), port 3001: VIP for the dashboard of the Grafana service running on the TrilioVault VM
Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.
A schedule is defined by:
Status (Enabled/Disabled)
Start Day/Time
End Day
Hrs between 2 snapshots
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
Uncheck "Enabled"
Click "Update"
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
check "Enabled"
Click "Update"
To modify a schedule the workload itself needs to be modified.
This system is used during all backup and restore features.
As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the Scheduler on the Workload list page.
A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.
To reach the list of Restores for a Snapshot follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
To reach the detailed Restore overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
Identify the restore to show
Click the restore name
The Restore Details Tab shows the most important information about the Restore.
Name
Description
Restore Type
Status
Time taken
Size
Progress Message
Progress
Host
Restore Options
List of VMs restored
restored VM Name
restored VM Status
restored VM ID
The Misc tab provides additional Metadata information.
Creation Time
Restore ID
Snapshot ID containing the Restore
Workload
Once a Restore is no longer needed, it can be safely deleted from a Workload.
There are 2 possibilities to delete a Restore.
To delete a single Restore through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Click "Delete Restore" in the line of the restore in question
Confirm by clicking "Delete Restore"
To delete one or more Restores through the Restore list do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Enter the Snapshot by clicking the Snapshot name
Navigate to the Restore tab
Check the checkbox for each Restore that shall be deleted
Click "Delete Restore" in the menu above
Confirm by clicking "Delete Restore"
Ongoing Restores can be canceled.
To cancel a Restore in Horizon follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Identify the ongoing Restore
Click "Cancel Restore" in the line of the restore in question
Confirm by clicking "Cancel Restore"
The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:
be located in the same cluster in the same datacenter
use the same storage domain
connect to the same network
have the same flavor
The user can't change any Metadata.
There are 2 possibilities to start a One Click Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click "One Click Restore" in the same line as the identified Snapshot
(Optional) Provide a name / description
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "One Click Restore"
(Optional) Provide a name / description
Click "Create"
The Selective Restore is the most complex restore Trilio has to offer. It allows to adapt the restored VMs to the exact needs of the User.
With the selective restore the following things can be changed:
Which VMs are getting restored
Name of the restored VMs
Which networks to connect with
Which Storage domain to use
Which DataCenter / Cluster to restore into
Which flavor the restored VMs will use
There are 2 possibilities to start a Selective Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.
It allows the user to restore only the data of a selected Volume, which is part of a backup.
There are 2 possibilities to start an Inplace Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
The workloadmgr client CLI is using a restore.json file to define the restore parameters for the selective and the inplace restore.
An example for a selective restore of this restore.json is shown below. A detailed analysis and explanation is given afterwards.
Before the exact details of the restore are to be provided it is necessary to provide the general metadata for the restore.
The openstack block starts the exact definition of the restore.
The Selective Restore requires a lot of information to be able to execute the restore as desired.
This information is divided into 3 components:
instances
restore_topology
networks_mapping
This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.
Each instance requires the following information
The root disk needs to be at least as big as the root disk of the backed up instance was.
The following example describes a single instance with all values.
Do not mix network topology restore together with network mapping.
To activate a network topology restore set:
To activate network mapping set:
When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:
The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.
No network information is required, but the fields have to exist with empty values for the restore to work.
Trilio can notify users via E-Mail upon the completion of backup and restore jobs.
The E-Mail will be sent to the owner of the Workload.
To use the E-mail notifications, two requirements need to be met.
Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.
As the E-Mail will be sent to the owner of the Workload, the Openstack User who created the workload is required to have an E-Mail address associated.
Trilio needs to know which E-Mail server to use, to send the E-mail notifications. Backup Administrators can do this in the "Backup Admin" area.
E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:
Login to Horizon
Navigate to the Backups
Navigate to Settings
Check/Uncheck the box for "Enable Email Alerts"
The following screenshots show example E-mails send by Trilio.
The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.
The file search tab is part of every workload overview. To reach it follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload a file search shall be done in
Click the workload name to enter the Workload overview
Click File Search to enter the file search tab
A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.
To run a file search the following elements need to be decided and configured
Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.
The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.
The File Path has to start with a '/'
Example File Path for all files inside /etc : /etc/*
"Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.
There are 3 possibilities for a pre-filtering:
All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots
Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.
Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.
After the pre-filtering is done all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.
To start a File Search the following elements need to be set:
A VM to search in has to be chosen
A valid File Path provided
At least one Snapshot to search in selected
Once those have been set click "Search" to start the file search.
Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.
After a short time the results are presented in a tabular format, grouped by Snapshot and by Volume inside the Snapshot.
For each found file or folder the following information is provided:
POSIX permissions
Amount of links pointing to the file or folder
User ID who owns the file or folder
Group ID assigned to the file or folder
Actual size in Bytes of the file or folder
Time of creation
Time of last modification
Time of last access
Full path to the found file or folder
Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.
In complex environments it is sometimes necessary to restart a single service or the complete solution. Restarting the complete node where a service is running is rarely possible or even the ideal solution.
This page describes the services run by Trilio and how to restart them.
The Trilio Appliance is the controller of Trilio. Most services on the Appliance are running in a High Availability mode on a 3-node cluster.
The wlm-api service takes the API calls against the Trilio Appliance. It is running in active-active mode on all nodes of the Trilio cluster.
To restart the wlm-api service run on each Trilio node:
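A minimal example, assuming the default systemd unit name on the appliance:
systemctl restart wlm-api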
The wlm-scheduler service is taking job requests and identifies which Trilio node should take the request. It is running in active-active mode on all nodes of the Trilio cluster.
To restart the wlm-scheduler service run on each Trilio node:
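Assuming the default systemd unit name:
systemctl restart wlm-scheduler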
The wlm-workloads service is the task worker of Trilio executing all jobs given to the Trilio node. It is running in active-active mode on all nodes of the Trilio cluster.
To restart the wlm-workloads service run on each Trilio node:
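Assuming the default systemd unit name:
systemctl restart wlm-workloads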
The wlm-cron service is responsible for starting scheduled Backups according to the configuration of Tenant Workloads. It is running in active-passive mode and controlled by the pacemaker cluster.
To restart the wlm-cron service run on the Trilio node with the VIP assigned:
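A sketch, assuming the pacemaker resource is named wlm-cron; verify the actual name with pcs status first:
pcs resource restart wlm-cron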
The Trilio appliance is running 1 to 4 virtual IPs on the Trilio cluster. These are controlled by the pacemaker cluster and provided through NGINX.
To restart these resources, the pacemaker NGINX resource needs to be restarted:
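A sketch of the procedure; the NGINX resource name differs per deployment, so identify it first and substitute it into the restart command:
pcs status
pcs resource restart <nginx_resource_name>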
The Trilio cluster is using RabbitMQ as messaging service. It is running in active-active mode on all nodes of the Trilio cluster.
To restart a RabbitMQ node run on each Trilio node:
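A typical restart and health check, assuming the standard rabbitmq-server unit:
systemctl restart rabbitmq-server
rabbitmqctl cluster_status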
When the complete cluster is stopped and restarted, it is important to keep the order of nodes in mind. The last node to be stopped needs to be the first node to be started.
The Galera Cluster is managing the Trilio MariaDB database. It is running in active-active mode on all nodes of the Trilio cluster.
When restarting Galera two different use-cases need to be considered:
Restarting a single node
Restarting the whole cluster
A single node can be restarted without any issues. It will automatically rejoin the cluster and sync against the remaining nodes.
The following commands will gracefully stop and restart the mysqld service.
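For example, assuming the service is registered as mysqld:
systemctl stop mysqld
systemctl start mysqld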
After a restart the cluster will start the syncing process. Do not restart node after node to achieve a complete cluster restart.
Check the cluster health after the restart.
Restarting a complete cluster requires some additional steps, as the Galera cluster is basically destroyed once all nodes have been shut down. It needs to be rebuilt afterwards.
First gracefully shutdown the Galera cluster on all nodes:
The second step is to identify the Galera node with the latest dataset. This can be achieved by reading the grastate.dat file on the Trilio nodes.
The value to check for is the seqno.
The node with the highest seqno is the node that contains the latest data. This node will also contain safe_to_bootstrap: 1 to indicate that the Galera cluster can be rebuilt from this node.
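The file can be inspected on each node; the path below assumes the default MariaDB data directory:
cat /var/lib/mysql/grastate.dat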
On the identified node the new cluster is getting generated with the following command:
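This is the standard Galera bootstrap command:
galera_new_cluster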
Running galera_new_cluster on the wrong node will lead to data loss as this command will set the node the command is issued on as the first node of the cluster. All nodes which join afterward will sync against the data of this first node.
After the command has been issued, the mysqld service is running on this node. Now the other nodes can be restarted one by one. The started nodes will automatically rejoin the cluster and sync against the master node. Once a synced status has been reached, each node is a primary node in the cluster.
Check the Cluster health after all services are up again.
Verify the cluster health by running the following commands inside each Trilio MariaDB. The values returned from these statements have to be the same for each node.
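For example, the following can be run on each node; a healthy cluster reports the same cluster size and a Synced state everywhere:
mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status','wsrep_local_state_comment');"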
Canonical Openstack does not use the Trilio Appliance. In Canonical environments the Trilio controller unit is part of the JuJu deployment as the workloadmgr container.
To restart the services inside this container the following commands are to be issued.
On all nodes:
On a single node:
The Trilio dmapi service runs on the Openstack controller nodes. Depending on the Openstack distribution Trilio is installed on, different commands are issued to restart the dmapi service.
RHOSP13 is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
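A sketch, assuming the default container name trilio_dmapi; confirm the actual name with docker ps first:
docker restart trilio_dmapi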
RHOSP16 is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
Canonical is running the Trilio services in JuJu controlled LXD containers. The dmapi service can be restarted by issuing the following command from the MAAS node.
Kolla-Ansible Openstack is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
Ansible Openstack is running the Trilio services as LXD containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
The Trilio datamover service runs on the Openstack compute nodes. Depending on the Openstack distribution Trilio is installed on, different commands are issued to restart the datamover service.
RHOSP13 is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.
RHOSP16 is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.
Canonical is running the Trilio services in JuJu controlled LXD containers. The datamover service can be restarted by issuing the following command from the MAAS node.
Kolla-Ansible Openstack is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node running the datamover service.
Ansible Openstack is running the Trilio datamover service directly on the compute node. The datamover service can be restarted by issuing the following command on the compute node.
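For example, since the datamover runs as the tvault-contego service on the compute node:
systemctl restart tvault-contego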
A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed. But each VM can only be part of one Workload.
Using encrypted Workload will lead to longer backup times. The following timings have been seen in Trilio labs:
Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB
For unencrypted WL : 62 min
For encrypted WL : 82 min
Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB
For unencrypted WL : 10 min
For encrypted WL : 18 min
To view all available workloads of a project inside Horizon do:
Login to Horizon
Navigate to Backups
Navigate to Workloads
The overview in Horizon lists all workloads with the following additional information:
Creation time
Workload Name
Workload description
Total amount of Snapshots inside this workload
Total amount of succeeded Snapshots
Total amount of failed Snapshots
Workload Type
Status of the Workload
To create a workload inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Click "Create Workload"
Provide Workload Name and Workload Description on the first tab "Details"
Choose between Serial or Parallel workload on the first tab "Details"
Choose the Policy if available to use on the first tab "Details"
Choose if the Workload is encrypted on the first tab "Details"
Provide the secret UUID if Workload is encrypted on the first tab "Details"
Choose the VMs to protect on the second Tab "Workload Members"
Decide for the schedule of the workload on the Tab "Schedule"
Provide the Retention policy on the Tab "Policy"
Choose the Full Backup Interval on the Tab "Policy"
If required check "Pause VM" on the Tab "Options"
Click create
The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.
A workload contains a lot of information, which can be seen in the workload overview.
To enter the workload overview inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
The Workload Details tab provides the most important general information about the workload:
Name
Description
Availability Zone
List of protected VMs including the information of qemu guest agent availability
The status of the qemu-guest-agent only shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.
The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.
From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.
The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:
Scheduler Enabled / Disabled
Start Date / Time
End Date / Time
RPO
Time till next Snapshot run
Retention Policy and Value
Full Backup Interval policy and value
The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need for a restore.
The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:
Creation time
last update time
Workload ID
Workload Type
Workloads can be modified in all components to match changing needs.
To edit a workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Modify the workload as desired - All parameters except workload type can be changed
Click "Update"
Once a workload is no longer needed it can be safely deleted.
To delete a workload do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to be deleted
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Delete Workload"
Confirm by clicking "Delete Workload" yet again
Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to unlock
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Unlock Workload"
Confirm by clicking "Unlock Workload" yet again
In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases it is possible to reset the Workload.
The Workload reset will:
Cancel all ongoing tasks
Delete all existing Openstack Trilio Snapshots from the protected VMs
recalculate the next Snapshot time
take a full backup at the next Snapshot
To reset a Workload do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to reset
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Reset Workload"
Confirm by clicking "Reset Workload" yet again
Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while a snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.
The following cloud images are supported:
Ubuntu Bionic (18.04)
Ubuntu Focal (20.04)
CentOS 8
CentOS 8 Stream
RHEL 7
RHEL 8
RHEL 9
It is recommended to apply these steps once to the chosen cloud image and then upload the modified cloud image to Glance.
Create an Openstack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.
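A sketch of the image creation, assuming the Ubuntu Focal cloud image file as the base; tvault_recovery_manager=yes is the property the mount dialog checks for, and hw_qemu_guest_agent=yes enables the guest agent channel:
openstack image create --disk-format qcow2 --container-format bare --file focal-server-cloudimg-amd64.img --property hw_qemu_guest_agent=yes --property tvault_recovery_manager=yes file-recovery-manager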
Spin up an instance from that image. It is recommended to have at least 8GB RAM for the mount operation; bigger Snapshots can require more RAM.
install and activate qemu-guest-agent
Edit /etc/sysconfig/qemu-ga and remove the following from the BLACKLIST_RPC section
Disable SELINUX in /etc/sysconfig/selinux
Install python3 and lvm2
Reboot the Instance
install and activate qemu-guest-agent
Verify the loaded path of qemu-guest-agent
Follow this path when systemctl returns the following loaded path
Edit /etc/init.d/qemu-guest-agent and add the Freeze-Hook file path to the daemon args
Follow this path when systemctl returns the following loaded path
Edit the qemu-guest-agent systemd file
Add the following lines
Restart qemu-guest-agent service
Install Python3
Reboot the VM
Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.
It is possible to run the mounting process against any Openstack instance. During this process the instance will be rebooted.
Always mount Snapshots to File Recovery Manager instances only.
Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.
There are 2 possibilities to mount a Snapshot in Horizon.
To mount a Snapshot through the Snapshot list follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Mount Snapshot"
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
To mount a Snapshot through the File Search results follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the File Search tab
Identify the Snapshot to be mounted
Click "Mount Snapshot" for the chosen Snapshot
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
The File Recovery Manager is a normal Linux based Openstack instance.
It can be accessed via SSH or SSH based tools like FileZilla or WinSCP.
The mounted Snapshot can be found at the following path:
/home/ubuntu/tvault-mounts/mounts/
Each VM in the Snapshot has its own directory using the VM_ID as the identifier.
Sometimes a Snapshot stays mounted for a longer time and it needs to be identified which Snapshots are currently mounted.
There are 2 possibilities to identify mounted Snapshots inside Horizon.
Login to Horizon
Navigate to Compute
Navigate to Instances
Identify the File Recovery Manager Instance
Click on the Name of the File Recovery Manager Instance to bring up its details
On the Overview tab look for Metadata
Identify the value for mounted_snapshot_url
The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.
Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to unmount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Click "Unmount Snapshot"
This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.
The chosen scenario is following an actively used Trilio customer environment.
There are two Openstack clouds available "Openstack Cloud A" and Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations are writing their Backups to NFS targets. "Trilio A" is writing to "NFS A1" and "Trilio B" is writing to "NFS B1". The NFS Volumes used are getting synced against another NFS Volume on the other side. "NFS A1" is syncing with "NFS B2" and "NFS B1" is syncing with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.
This scenario will cover the Disaster Recovery of a single Workload and a complete Cloud. All processes are done by the Openstack administrator.
This runbook will assume that the following is true:
"Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license
"Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs
"Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side
Access to a user with the admin role permissions on domain level
One of the Openstack clouds is down/lost
For ease of writing, this runbook will assume from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".
When shared Tenant networks are used, the following additional requirement applies beyond the floating IP: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.
A single Workload can do a Disaster Recovery in this Scenario, while both Clouds are still active. To do so the following high-level process needs to be followed:
Copy the Workload directories to the configured NFS Volume
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Clean up
As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".
It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.
Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:
In case the Workload ID is not known, the available metadata inside the Workload directories can be used to identify the correct Workload.
The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.
Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path> If even one character in the NFS-Share path is different between the provided NFS-Share paths a completely different hash value is generated.
Workloads, that have been moved between NFS-Shares, require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards a mount bind is used to make the workloads data accessible over the old and the new mount path. The following example shows the process of how to successfully identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
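For example, the two hash values used below can be reproduced from the NFS-Share paths of this scenario:
echo -n 10.10.2.20:/upstream_source | base64   # MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
echo -n 10.20.3.22:/upstream_target | base64   # MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0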
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.
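A sketch for this scenario, assuming the synced data from NFS A1 is already mounted under the Target Cloud's hash path and needs to be exposed under the Source Cloud's hash path on every compute node:
mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
A matching fstab entry to keep the bind across reboots could look like:
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0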
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation, are from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share, that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
To allow project owners to work with the workloads as well, they get assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.
List all Snapshots of the workload to restore to identify the snapshot to restore (see the example after this list)
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
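As an example for the first step, the snapshot list can be limited to the reassigned workload; the --workloadid filter is documented in the CLI reference later in this guide:
workloadmgr snapshot-list --workloadid <workload_id>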
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery Process has been successfully completed it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.
Delete the workload that got restored.
The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.
To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
This Scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery the following high-level process needs to be followed:
Reconfigure the Target Trilio installation
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Reconfigure the Target Trilio installation back to the original one
Clean up
Before the Disaster Recovery Process can start, it is necessary to make the backups that are to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region are on hold, and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list (see the sketch after this list). A space is not necessary, but can be set.
Write and close the workloadmgr.conf
Restart the wlm-workloads service
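A sketch of the resulting line in workloadmgr.conf, assuming the NFS shares are configured via the vault_storage_nfs_export option:
vault_storage_nfs_export = <NFS_B1_share>,<NFS_B2_share>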
Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To add the NFS B2 to the Trilio Datamovers manually the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list. A space is not necessary, but can be set.
Write and close the tvault-contego.conf
Restart the tvault-contego service
Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path> If even one character in the NFS-Share path is different between the provided NFS-Share paths a completely different hash value is generated.
Workloads, that have been moved between NFS-Shares, require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards a mount bind is used to make the workloads data accessible over the old and the new mount path. The following example shows the process of how to successfully identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation, are from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share, that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
To allow project owners to work with the workloads as well, they get assigned to a user with the backup trustee role that exists in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.
List all Snapshots of the workload to restore to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery Process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region are on hold, and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the workloadmgr.conf
Restart the wlm-workloads service
Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To remove the NFS B2 from the Trilio Datamovers manually the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the tvault-contego.conf
Restart the tvault-contego service
After the Disaster Recovery Process has been successfully completed and the Trilio installation reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.
The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.
To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.
As long as the Trilio Workloads are existing on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.
Notify users of Workloads being available
Trilio incremental Snapshots involve a backing file to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.
Trilio is using qcow2 backing files for this feature:
As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.
Trilio is using the base64 hashing algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.
When the path of the backing file is not available on the Trilio appliance and the Compute nodes, restores of incremental backups will fail.
The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.
Running the mount --bind command will make the necessary path available until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit the fstab.
This system is used during all backup and restore features.
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
Trusts can only be worked with via CLI
Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.
The following Quotas can be set:
Number of Workloads a Project is allowed to have
Number of Snapshots a Project is allowed to have
Number of VMs a Project is allowed to protect
Amount of Storage a Project is allowed to use on the Backup Target
The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.
Workload Quotas are managed like any other Project Quotas.
Login into Horizon as user with admin role
Navigate to Identity
Navigate to Projects
Identify the Project to modify or show the quotas on
Use the small arrow next to "Manage Members" to open the submenu
Choose "Modify Quotas"
Navigate to "Workload Manager"
Edit Quotas as desired
Click "Save"
Trilio is providing several different Quotas. The following command allows listing those.
Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quotas a Tenant has to comply with.
The following command will show the details of a provided Quota Type.
The following command will create a Quota for a given project and set the provided value.
The following command lists all Trilio Quotas set for a given project.
The following command shows the details about a provided allowed Quota.
The following command shows how to update the value of an already existing allowed Quota.
The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.
Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:
OpenStack User - The OpenStack User-ID is assigned to a Workload
OpenStack Project - The OpenStack Project-ID is assigned to a Workload
OpenStack Cloud - The Trilio Serviceuser-ID is assigned to a Workload
This ownership ensures that only the owners of a Workload are able to work with it.
OpenStack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.
Workload import allows a user to import Workloads directly from the Backup Target into the Trilio database.
The Workload import is designed to import Workloads, which are owned by the OpenStack Cloud.
It will not import or list any Workloads that are owned by a different cloud.
To get a list of importable Workloads use the following CLI command:
To import Workloads into the Trilio database use the following CLI command:
For every workload-importworkloads command triggered, the system generates a job ID, which is displayed after successful command execution.
The respective import process can be tracked further using this job ID in a CLI provided starting with the T4O-4.3 release. Users can add all workloads to be imported in a single workload-importworkloads command, or trigger multiple such commands. For every command triggered, the system generates one unique job ID.
CLI details below:
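For example, using the option syntax listed in the CLI reference at the end of this guide:
workloadmgr workload-importworkloads --workloadids <workload_id_1> --workloadids <workload_id_2>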
The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage, but not known to the Trilio installation, is considered orphaned.
Furthermore, a distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads migrated from a different cloud.
The following CLI command provides the list of orphaned workloads:
OpenStack administrators are able to reassign a Workload to a new owner. This involves the possibility to migrate a Workload from one cloud to another or between projects.
Reassigning a workload only changes the database of the target Trilio installation. When the Workload was previously managed by a different Trilio installation, that installation will not be updated.
Use the following CLI command to reassign a Workload:
A sample mapping file with explanations is shown below:
Trilio provides Backup-as-a-Service, which allows Openstack Users to manage and control their backups themselves. This doesn't eradicate the need for a Backup Administrator, who has an overview of the complete Backup Solution.
To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backup-Admin area in Horizon in addition to the API and CLI.
To access the Backups-Admin area follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
The Backups-Admin area provides the following features.
It is possible to reduce the shown information down to a single tenant, making it possible to see the exact impact the chosen Tenant has.
The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:
Storage Usage (nfs only)
Number of protected VMs compared to number of existing VMs
Number of currently running Snapshots
Status of TVault Nodes
Status of Contego Nodes
This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.
The following information is shown:
User-ID that owns the Workload
Project that contains the Workload
Workload name
Workload Type
Availability Zone
Amount of protected VMs
Performance information about the last 30 backups
How much data was backed up (green bars)
How long did the Backup take (red line)
Piechart showing amount of Full (Blue) Backups compared to Incremental (Red) Backups
Number of successful Backups
Number of failed Backups
Storage used by that Workload
Which Backup target is used
When is the next Snapshot run
What is the general interval of the Workload
Scheduler Status including a Switch to deactivate/activate the Workload
Administrators often need to figure out, where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps in these tasks by providing the following information:
Storage used by a Tenant
VMs protected by a Tenant
It is possible to drill down to see the same information per workload and finally per protected VM.
This tab displays information about the Trilio cluster nodes. The following information is shown:
Node name
Node ID
Trilio Version of the node
IP Address
Active Controller Node (True/False)
Status of the Node
This tab displays information about the Trilio contego service. The following information is shown:
Service-Name
Compute Node the service is running on
Zone
Service Status from Openstack perspective (enabled/disabled)
Version of the Service
General Status
last time the Status was updated
This tab displays information about the backup target storage. It contains the following information:
Storage Name
Capacity of the storage
Total utilization of the storage
Status of the storage
Statistic information
Percentage all storages are used
Percentage how much storage is used for full backups
Amount of Full backups versus Incremental backups
Audit logs provide the sequence of workload related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:
Time of the entry
What task has been done
Project the task has performed in
User that performed the task
The Audit log can be searched for strings to find, for example, only entries done by a specific user.
Additionally, the shown timeframe can be changed as necessary.
The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.
The following information about an active license is shown:
Organization (License name)
License ID
Purchase date - when was the license created
License Expiry Date
Maintenance Expiry Date
License value
License Edition
License Version
License Type
Description of the License
Evaluation (True/False)
Trilio will stop all activities once a license is no longer valid or expired.
The policy tab gives Administrators the possibility to work with workload policies.
This tab manages all global settings for the whole cloud. Trilio has two types of settings:
Email settings
Job scheduler settings.
These settings will be used by Trilio to send email reports of snapshots and restores to users.
Configuring the Email settings is a must-have to provide Email notification to Openstack users.
The following information is required to configure the email settings:
SMTP Server
SMTP username
SMTP password
SMTP port
SMTP timeout
Sender email address
To work with email settings through CLI use the following commands:
To set an email setting for the first time or after deletion use:
To update an already set email setting through CLI use:
To show an already set email setting use:
To delete a set email setting use:
The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.
To activate/deactivate the Global Job Scheduler through the Backups-Admin area:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
Navigate to the Settings tab
Click "Disable/Enable Job Scheduler"
Check or Uncheck the box for "Job Scheduler Enabled"
Confirm by clicking on "Change"
The Global Job Scheduler can be controlled through CLI as well.
To get the status of the Global Job Scheduler use:
To deactivate the Global Job Scheduler use:
To activate the Global Job Scheduler use:
Trilio’s tenant driven backup service gives tenants control over backup policies. However, sometimes it may be too much control to tenants and the cloud admins may want to limit what policies are allowed by tenants. For example, a tenant may become overzealous and only uses full backups every 1 hr interval. If every tenant were to pursue this backup policy, it puts a severe strain on cloud infrastructure. Instead, if cloud admin can define predefined backup policies and each tenant is only limited to those policies then cloud administrators can exert better control over backup service.
Workload policy is similar to nova flavor where a tenant cannot create arbitrary instances. Instead, each tenant is only allowed to use the nova flavors published by the admin.
To see all available Workload policies in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
The following information is shown in the policy tab for each available policy:
Creation time
name
description
status
set interval
set retention type
set retention value
To create a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Click new policy
provide a policy name on the Details tab
provide a description on the Details tab
provide the RPO in the Policy tab
Choose the Snapshot Retention Type
provide the Retention value
Choose the Full Backup Interval
Click create
To edit a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to edit
click on "Edit policy" at the end of the line of the chosen policy
edit the policy as desired - all values can be changed
Click "Update"
To assign or remove a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Add/Remove Projects"
Choose projects to add or remove by using the plus/minus buttons
Click "Apply"
To delete a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Delete Policy"
Confirm by clicking "Delete"
--workloadid <workloadid>
Requires at least one workload ID. Specify the ID of the workload whose scheduler is to be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
--workloadid <workloadid>
Requires at least one workload ID. Specify the ID of the workload whose scheduler is to be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
Please follow this procedure to .
Trilio is using the Keystone trust mechanism, which enables the Trilio service user to act in the name of another Openstack user.
<workload_id>
ID of the workload to validate
--snapshot_id <snapshot_id>
ID of the Snapshot to show the restores of
<restore_id>
ID of the restore to be shown
--output <output>
Option to get additional restore details, Specify --output metadata for restore metadata,--output networks --output subnets --output routers --output flavors
<restore_id>
ID of the restore to be deleted
<restore_id>
ID of the restore to be deleted
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace values in it.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace values in it.
The restore.json requires a lot of information about the backed-up resources. All required information can be gathered from the Snapshot details.
name
the name of the restore
description
the description of the restore
oneclickrestore <True/False>
If the restore is a oneclick restore. Setting this to True will override all other settings and a One Click Restore is started.
restore_type <oneclick/selective/inplace>
defines the restore that is intended
type openstack
defines that the restore is into an openstack cloud.
id
original id of the instance
include <True/False>
Set True when the instance shall be restored
name
new name of the instance
availability_zone
Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"
Nics
list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:
id
ID of the Neutron port to use
mac_address
Mac Address of the Neutron port
ip_address
IP Address of the Neutron port
network
network the port is assigned to. Contains the following information:
id
ID of the network the Neutron port is part of
subnet
subnet the port is assigned to. Contains the following information:
id
ID of the network the Neutron port is part of
vdisks
List of all Volumes that are part of the instance. Each Volume requires the following information:
id
Original ID of the Volume
new_volume_type
The Volume Type to use for the restored Volume. Leave empty for Volume Type None
availability_zone
The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova
flavor
Defines the Flavor to use for the restored instance. Contains the following information:
ram
How much RAM the restored instance will have (in MB)
ephemeral
How big the ephemeral disk of the instance will be (in GB)
vcpus
How many vcpus the restored instance will have available
swap
How big the Swap of the restored instance will be (in MB). Leave empty for none.
disk
Size of the root disk the instance will boot with
id
ID of the flavor that matches the provided information
networks
list of snapshot_network and target_network pairs
snapshot_network
the network backed up in the snapshot, contains the following:
id
Original ID of the network backed up
subnet
the subnet of the network backed up in the snapshot, contains the following:
id
Original ID of the subnet backed up
target_network
the existing network to map to, contains the following
id
ID of the network to map to
subnet
the subnet of the network backed up in the snapshot, contains the following:
id
ID of the subnet to map to
id
ID of the instance inside the Snapshot
restore_boot_disk
Set to True if the boot disk of that VM shall be restored.
include
Set to True if at least one Volume from this instance shall be restored
vdisks
List of disks, that are connected to the instance. Each disk contains:
id
Original ID of the Volume
restore_cinder_volume
set to true if the Volume shall be restored
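The following partial sketch only illustrates how the fields described above fit together for a selective restore; the authoritative structure is the template shipped at /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json, and the exact nesting should be taken from there:
{
  "name": "DR restore",
  "description": "selective restore of one instance",
  "oneclickrestore": false,
  "restore_type": "selective",
  "type": "openstack",
  "openstack": {
    "instances": [
      {
        "id": "<original instance id>",
        "include": true,
        "name": "restored-instance",
        "availability_zone": "",
        "nics": [],
        "vdisks": [
          {"id": "<original volume id>", "new_volume_type": "", "availability_zone": ""}
        ],
        "flavor": {"ram": 2048, "ephemeral": 0, "vcpus": 1, "swap": "", "disk": 20, "id": "<flavor id>"}
      }
    ],
    "networks_mapping": {
      "networks": [
        {
          "snapshot_network": {"id": "<original network id>", "subnet": {"id": "<original subnet id>"}},
          "target_network": {"id": "<target network id>", "subnet": {"id": "<target subnet id>"}}
        }
      ]
    }
  }
}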
<vm_id>
ID of the VM to be searched
<file_path>
Path of the file to search for
--snapshotids <snapshotid>
Search only in specified snapshot ids snapshot-id: include the instance with this UUID
--end_filter <end_filter>
Displays the last snapshots, for example the last 10 snapshots; default 0 means all snapshots are displayed
--start_filter <start_filter>
Displays snapshots starting from a given position, for example starting from snapshot 5; default 0 means starting from the first snapshot
--date_from <date_from>
From date in format 'YYYY-MM-DDTHH:MM:SS' eg 2016-10-10T00:00:00, If time isn't specified then it takes 00:00 by default
--date_to <date_to>
To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day, with inclusive/exclusive results for date_from and date_to
RabbitMQ is a complex system in itself. This guide will only provide the basic commands to do a restart of a node and check the health of the cluster afterward. For complete documentation of how to restart RabbitMQ, please follow the official RabbitMQ documentation.
Galera Cluster is a complex system in itself. This guide will only provide the basic commands to do a restart of a node and check the health of the cluster afterward. For complete documentation of how to restart Galera clusters, please follow the official Galera Cluster documentation.
--all {True,False}
List all workloads of all projects (valid for admin user only)
--nfsshare <nfsshare>
List all workloads of nfsshare (valid for admin user only)
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--workload-type-id
Workload Type ID is required
--source-platform
Workload source platform is required. The supported platform is 'openstack'
--instance
Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule
Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'
--metadata
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign to the workload
--encryption <True/False>
Enable/Disable encryption for this workload
--secret-uuid <secret_uuid>
UUID of the Barbican secret to be used for the workload
Please refer to the Admin Guide and User Guide to learn more about those.
<workload_id>
ID/name of the workload to show
--verbose
option to show additional information about the workload
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--instance <instance-id=instance-uuid>
Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule <key=key-name>
Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. If don't specify timezone, then by default it takes your local machine timezone 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'
--metadata <key=key-name>
Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign
<workload_id>
ID of the workload to edit
All Snapshots need to be deleted before the workload gets deleted. Please refer to the User Guide to learn how to delete Snapshots.
<workload_id>
ID/name of the workload to delete
--database_only <True/False>
Set to True to delete the workload from the database only. (Default=False)
<workload_id>
ID of the workload to unlock
<workload_id>
ID/name of the workload to reset
<snapshot_id>
ID of the Snapshot to be mounted
<mount_vm_id>
ID of the File Recovery Manager instance to mount the Snapshot to.
--workloadid <workloadid>
Restrict the list to snapshots in the provided workload
<snapshot_id>
ID of the snapshot to unmount.
The reassigned workload can be restored using Horizon following the procedure described .
This script can be found here:
To add the NFS-Vol2 to the Trilio Appliance cluster, the Trilio can either be reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
The reassigned workload can be restored using Horizon following the procedure described .
To add the NFS-Vol2 to the Trilio Appliance cluster, the Trilio can either be reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
This script can be found here:
and Trilio for the target cloud
Trilio is using the Keystone trust mechanism, which enables the Trilio service user to act in the name of another Openstack user.
<trust_id> ID of the trust to show
<role_name>
Name of the role that trust is created for
--is_cloud_trust {True,False}
Set to True if creating a cloud admin trust. When creating a cloud trust, use the same user and tenant that were used to configure Trilio and keep the role as admin.
<trust_id>
ID of the trust to be deleted
<quota_type_id>
ID of the Quota Type to show
<quota_type_id>
ID of the Quota Type to be created
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<project_id>
Project to list the Quotas from
<allowed_quota_id>
ID of the allowed Quota to show.
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<allowed_quota_id>
ID of the allowed Quota to update
<allowed_quota_id>
ID of the allowed Quota to delete
--project_id <project_id>
List only the workloads belonging to the given project.
--workloadids <workloadid>
Specify workload ids to import only specified workloads. Repeat option for multiple workloads.
--jobid <jobId>
The ID returned by the workload-importworkloads CLI command.
--migrate_cloud {True,False}
Set to True if you want to list workloads from other clouds as well. Default is False.
--generate_yaml {True,False}
Set to True to generate an output file in YAML format, which can then be used as input for the workload reassign API.
--old_tenant_ids <old_tenant_id>
Specify the old tenant IDs from which workloads need to be reassigned to the new tenant. Specify multiple times to choose Workloads from multiple tenants.
--new_tenant_id <new_tenant_id>
Specify the new tenant ID to which workloads need to be reassigned from the old tenant. Only one target tenant can be specified.
--workload_ids <workload_id>
Specify the workload_ids that need to be reassigned to the new tenant. If not provided, all the workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.
--user_id <user_id>
Specify the user ID to which workloads need to be reassigned from the old tenant. Only one target user can be specified.
--migrate_cloud {True,False}
Set to True to reassign workloads from other clouds as well. Default is False.
--map_file
Provide the file path (relative or absolute), including the file name, of the reassign map file. It provides the list of old workloads mapped to new tenants. The format of this file is YAML.
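A hedged example of a reassignment call built from the options above; the IDs are placeholders and the subcommand name is assumed to be workload-reassign-workloads:
workloadmgr workload-reassign-workloads \
  --old_tenant_ids <old_tenant_id> \
  --new_tenant_id <new_tenant_id> \
  --workload_ids <workload_id> \
  --user_id <user_id> \
  --migrate_cloud False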
Please refer to the Admin Guide to learn more about how to create and use Workload Policies.
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
Settings type; set to email_settings
--is-public
Sets if the setting can be seen publicly; set to False
--is-hidden
Sets if the setting will always be hidden; set to False
--metadata
Metadata for the setting. Not required for email settings
<name>
name of the setting. Take it from the list below.
<value>
value of the setting. Take the value type from the list below.
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
Settings type; set to email_settings
--is-public
Sets if the setting can be seen publicly; set to False
--is-hidden
Sets if the setting will always be hidden; set to False
--metadata
Metadata for the setting. Not required for email settings
<name>
name of the setting. Take it from the list below.
<value>
value of the setting. Take the value type from the list below.
--get_hidden
Show hidden settings (True) or not (False). Not required for email settings; use False if set.
<setting_name>
name of the setting to show Take from the list below
<setting_name>
name of the setting to delete Take from the list below
<policy_id>
Id of the policy to show
--policy-fields <key=key-name>
Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter the number of incremental snapshots between full backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--display-description <display_description>
Optional policy description. (Default=No description)
--metadata <key=keyname>
Specify a key-value pair to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
<display_name>
the name the policy will get
--display-name <display-name>
Name of the policy
--display-description <display_description>
Optional policy description. (Default=No description)
--policy-fields <key=key-name>
Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter the number of incremental snapshots between full backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--metadata <key=keyname>
Specify a key-value pair to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
<policy_id>
ID of the policy to update
--add_project <project_id>
ID of the project to assign policy to. Use multiple times to assign multiple projects.
--remove_project <project_id>
ID of the project to remove policy from. Use multiple times to remove multiple projects.
<policy_id>
policy to be assigned or removed
<policy_id>
ID of the policy to be deleted
smtp_default___recipient
String
admin@example.net
smtp_default___sender
String
admin@example.net
smtp_port
Integer
587
smtp_server_name
String
Mailserver_A
smtp_server_username
String
admin
smtp_server_password
String
password
smtp_timeout
Integer
10
smtp_email_enable
Boolean
True
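A hedged example of creating one of the SMTP settings listed above through the CLI; the setting-create subcommand name is an assumption and the exact syntax may differ between releases:
workloadmgr setting-create --type email_settings --is-public False --is-hidden False \
  smtp_server_name Mailserver_A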
To use the workloadmgr CLI tool on the Trilio appliance, it is only necessary to activate the virtual environment of the workloadmgr.
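A minimal sketch, assuming the virtual environment lives under /home/stack/myansible (the path may differ on your appliance):
source /home/stack/myansible/bin/activate   # activate the workloadmgr virtual environment
workloadmgr workload-list                   # the CLI is now available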
Troubleshooting inside a complex environment like OpenStack can be very time-consuming. The following tips will help to speed up the troubleshooting process and identify root causes.
Openstack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.
The Trilio Cluster is the Controller of Trilio. It receives all Workload related requests from the users.
Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.
During a backup process, the Trilio cluster is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls to the OpenStack endpoints of the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Cluster writes it as JSON files on the Backup Target.
The Trilio cluster also issues the Cinder snapshot commands.
During a restore process, the Trilio cluster reads the VM metadata from its database and uses it to create the shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.
The dmapi service is the connector between the Trilio cluster and the datamover running on the compute nodes.
The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the Nova API database and requests the compute host of a given VM.
Once the compute host has been identified, the dmapi service forwards the command from the Trilio Cluster to the datamover running on the identified compute host.
The datamover is the Trilio service running on the compute nodes.
Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.
The datamover is controlling the freeze and thaw of VMs as well as the actual movement of the data.
Trilio is reading and writing on the Backup Target as nova:nova.
The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise, backups or restores may fail with permission or file-not-found errors.
Alternative ways to achieve this goal are possible, as long as all required nodes can fully write and read as nova:nova on the Backup Target.
It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
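A sketch for verifying this, assuming the Backup Target is mounted under /var/triliovault-mounts (adjust the path to your environment); run it on the Trilio nodes and on every compute node:
id nova                                          # uid and gid must match across all nodes
ls -ld /var/triliovault-mounts                   # mount point should be owned by nova:nova
sudo -u nova touch /var/triliovault-mounts/.rw_test && sudo -u nova rm /var/triliovault-mounts/.rw_test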
On Cohesity NFS if an Input/Output error is observed, then increase the timeo and retrans parameter values in your NFS options.
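For example, NFS options along these lines (values are illustrative and should be tuned for your environment):
nolock,soft,timeo=600,retrans=10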
Log in to all datamover containers and add uxsock_timeout with a value of 60000 (equal to 60 seconds) inside /etc/multipath.conf, then restart the datamover container.
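A sketch of the corresponding /etc/multipath.conf addition; placing it inside the defaults section is an assumption:
defaults {
    uxsock_timeout 60000
}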
Trilio is using RBAC to allow the usage of Trilio features to users.
This trustee role is absolutely required and can not be overwritten using the admin role.
It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during creation of Workloads, backups or restores.
Trilio is creating Cinder Snapshots and temporary Cinder Volumes. The Openstack Quotas need to allow that.
Every disk that is getting backed up requires one temporary Cinder Volume.
Every Cinder Volume that is getting backed up requires two Cinder Snapshots. The second Cinder Snapshot is temporary and is used to calculate the incremental.
Once Trilio is configured use virtual IP to access its dashboard. If service status panels on the dashboard page are not visible then access the virtual IP on port 3001 (https://<T4O-VIP>:3001/) and accept the SSL exception, and then refresh the dashboard page.
Migration within the same cloud to a different owner:
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create encrypted workload in Project A in Domain A via User A. Take snapshot.
Reassign workload to new owner
Load the RC file of User A and provide read-only rights to the new owner through an ACL:
openstack acl user add --user <userB_id> <secret_href> --insecure
Migration between clouds:
Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B
Steps used:
Create a secret for Project A in Domain A via User A.
Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.
Reassign workload to Cloud B - Domain B — Project B — User B
Load RC file of User B.
Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.
Create token via “openstack token issue --insecure”
Add the migrated workload's metadata to the new secret (provide the issued token as the X-Auth-Token and the workload ID in the metadata, as shown below).
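A hedged example using the Barbican secret metadata API; the endpoint address, port, and the exact metadata key name are assumptions:
curl -k -X PUT https://<barbican_endpoint>:9311/v1/secrets/<new_secret_uuid>/metadata \
  -H "X-Auth-Token: <token from 'openstack token issue'>" \
  -H "Content-Type: application/json" \
  -d '{"metadata": {"workload_id": "<migrated_workload_id>"}}'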
Trilio is composed of multiple services, which can be checked in case of any errors.
Trilio is using 4 main services on the Trilio Appliance:
wlm-api
wlm-scheduler
wlm-workloads
wlm-cron
Those can be verified to be up and running using the systemctl status command.
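For example:
systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron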
The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.
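A quick check, assuming the standard pacemaker and nginx tooling is available on the appliance:
pcs status                 # pacemaker cluster state
systemctl status nginx     # nginx service state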
Checking the availability of the Trilio API on the chosen endpoints is recommended.
The following example curl command lists the available workload-types and verifies that the connection is available and working:
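A sketch of such a check; the workload_types path is an assumption, so adjust it to the endpoint configured in your environment:
curl -k -X GET "https://<tvm_address>:8780/v1/<tenant_id>/workload_types/detail" \
  -H "X-Auth-Token: <token>" -H "Accept: application/json"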
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
In order to check the dmapi service, go to the dmapi container residing on the controller nodes and run the command below.
The datamover service is running on each compute node. Log in to the compute node and run the command below.
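A sketch of both checks; the process and service names here are assumptions and may differ per distribution:
ps aux | grep dmapi                  # inside the dmapi container on the controller
systemctl status tvault-contego      # datamover service on the compute node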
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.
Run the following command on "nova-compute" nodes and make sure the container is in a started state.
Run the following command on horizon nodes and make sure the container is in a started state.
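For example (container names other than triliovault_datamover_api are assumptions):
podman ps | grep -E 'triliovault_datamover_api|triliovault_datamover|horizon'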
Run the following command on MAAS nodes and make sure all Trilio units, such as trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, and trilio-wlm, are in an active state.
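For example:
juju status | grep trilio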
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.
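For example:
docker ps | grep -E 'trilio|horizon'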
Please check dmapi endpoints on overcloud node.
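For example:
openstack endpoint list | grep dmapi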
The Trilio Cluster contains multiple log files.
The main log is workloadmgr-workloads.log, which contains all logs about ongoing and past Trilio backup and restore tasks. It can be found at:
/var/log/workloadmgr/workloadmgr-workloads.log
The next important log is the workloadmgr-api.log, which contains all logs about API calls received by the Trilio Cluster. It can be found at:
/var/log/workloadmgr/workloadmgr-api.log
The log for the third service is the workloadmgr-scheduler.log, which contains all logs about the internal job scheduling between Trilio nodes in the Trilio Cluster.
/var/log/workloadmgr/workloadmgr-scheduler.log
The last but not least service running on the Trilio Nodes is the wlm-cron service, which is controlling the scheduled automated backups.
/var/log/workloadmgr/workloadmgr-workloads.log
When S3 is used as the backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.
/var/log/workloadmgr/s3vaultfuse.py.log
The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:
/var/log/containers/trilio-datamover-api/dmapi.log
The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:
/var/log/containers/trilio-datamover/tvault-contego.log
In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:
/var/log/containers/trilio-datamover/tvault-object-store.log
The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:
/var/log/kolla/trilio-datamover-api/dmapi.log
The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:
/var/log/kolla/triliovault-datamover/tvault-contego.log
In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:
/var/log/kolla/trilio-datamover/tvault-object-store.log
The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running.
Log into the dmapi container using lxc-attach
command (example below).
lxc-attach -n controller_dmapi_container-a11984bf
The log file is then located under:
/var/log/dmapi/dmapi.log
The log for the Trilio Datamover service is typically located on the compute nodes and the logs can be found here:
/var/log/tvault-contego/tvault-contego.log
In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:
/var/log/tvault-object-store/tvault-object-store.log
The Trilio solution is using the qcow2 backing file to provide full synthetic backups.
Especially when the NFS backup target is used, there are scenarios in which this backing file needs to be updated to a new mount path.
To make this process easier and more streamlined, Trilio provides the following rebase tool.
Frequently Asked Questions about Trilio for OpenStack
Answer: NO
Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an Inplace Restore, where we only recover the data without creating new instances.
When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.
By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.
Answer: YES
Trilio can restore the VMs MAC address, however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.
In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.
When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.
By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.
Example of Selective Restore with original MAC (and IP address):
In this example, we have taken a Trilio backup of a VM called prod-1.
The VM is deleted and we perform a Selective Restore of a VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu:
Trilio then restores the VM with the original MAC address:
If you leave the option as "Choose next available IP address", a new MAC address will be assigned to the VM instead, as Neutron maps MAC addresses to IP addresses on the subnet; a new IP will therefore result in a new MAC address.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail
Lists Restores with details
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Provides all details about the specified Restore
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Deletes the specified Restore
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel
Cancels an ongoing Restore
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information.
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/search
Starts a File Search with the given parameters
POST
https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>
Starts a filesearch with the given parameters
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Provides the list of all workloads for the given tenant/project id
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Creates a workload in the provided Tenant/Project with the given details.
Workload create requires a Body in json format, to provide the requested information.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Shows all details of a specified workload
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Modifies a workload in the provided Tenant/Project with the given details.
Workload modify requires a Body in json format, to provide the information about the values to modify.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Deletes the specified Workload.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock
Unlocks the specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset
Resets the defined workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount
Mounts a Snapshot to the provided File Recovery Manager
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list
Provides the list of all Snapshots mounted in a Tenant
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list
Provides the list of all Snapshots mounted in a specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount
Unmounts a Snapshot of the provided File Recovery Manager
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause
Disables the scheduler of a given Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume
Enables the scheduler of a given Workload
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Scheduler trust for a given Workload
All following API commands require an Authentication token against a user with admin-role in the authentication project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler
Requests the status of the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable
Requests disabling the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable
Requests enabling the Global Job Scheduler
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types
Lists all available Quota Types
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>
Requests the details of a Quota Type
POST
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Creates an allowed Quota with the given parameters
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Lists all allowed Quotas for a given project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>
Shows details for a given allowed Quota
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>
Updates an allowed Quota with the given parameters
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>
Deletes a given allowed Quota
E-Mail Notification Settings are done through the settings API. Use the values from the following table to set Email Notifications up through API.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Creates a Trilio setting.
Setting create requires a Body in json format, to provide the requested information.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Shows all details of a specified setting
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Modifies the provided setting with the given details.
Setting modify requires a Body in json format, to provide the information about the values to modify.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Deletes the specified setting.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots
Lists all Snapshots.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
When creating a Snapshot it is possible to provide additional information
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Shows the details of a specified Snapshot
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Deletes a specified Snapshot
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel
Cancels the Snapshot process of a given Snapshot
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to fetch the Restores from
snapshot_id
string
ID of the Snapshot to fetch the Restores from
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
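A hedged example of calling this endpoint with the headers listed above; passing snapshot_id as a query parameter to filter the list is an assumption:
curl -k -X GET "https://<tvm_address>:8780/v1/<tenant_id>/restores/detail?snapshot_id=<snapshot_id>" \
  -H "X-Auth-Token: <token>" \
  -H "X-Auth-Project-Id: <project_name>" \
  -H "Accept: application/json" \
  -H "User-Agent: python-workloadmgrclient"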
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to fetch the restore from
restore_id
string
ID of the restore to show
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to fetch the Restore from
restore_id
string
ID of the Restore to be deleted
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of the Trilio service
tenant_id
string
ID of the Tenant/Project to fetch the Restore from
restore_id
string
ID of the Restore to cancel
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
snapshot_id
string
ID of the snapshot to restore
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
snapshot_id
string
ID of the snapshot to restore
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
snapshot_id
string
ID of the snapshot to restore
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to run the search in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to run the search in
search_id
string
ID of the File Search to get
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to fetch the workloads from
nfs_share
string
lists workloads located on a specific nfs-share
all_workloads
boolean
admin role required - True lists workloads of all tenants/projects
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to create the workload in
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
workload_id
string
ID of the Workload to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project where to find the workload in
workload_id
string
ID of the Workload to modify
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Workload in
workload_id
string
ID of the Workload to delete
database_only
boolean
True leaves the Workload data on the Backup Target
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Workload in
workload_id
string
ID of the Workload to unlock
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Workload in
workload_id
string
ID of the Workload to reset
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project the Snapshot is located in
snapshot_id
string
ID of the Snapshot to mount
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant to search for mounted Snapshots
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgr
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant to search for mounted Snapshots
workload_id
string
ID of the Workload to search for mounted Snapshots
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgr
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project the Snapshot is located in
snapshot_id
string
ID of the Snapshot to dismount
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project the Workload is located in
workload_id
string
ID of the Workload to disable the Scheduler in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project to work in
quota_type_id
string
ID of the Quota Type to show
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
ID of the Tenant/Project to create the allowed Quota in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
ID of the Tenant/Project to list allowed Quotas from
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to show
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to update
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to work in
<allowed_quota_id>
string
ID of the allowed Quota to delete
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
smtp_default___recipient
email_settings
String
admin@example.net
smtp_default___sender
email_settings
String
admin@example.net
smtp_port
email_settings
Integer
587
smtp_server_name
email_settings
String
Mailserver_A
smtp_server_username
email_settings
String
admin
smtp_server_password
email_settings
String
password
smtp_timeout
email_settings
Integer
10
smtp_email_enable
email_settings
Boolean
True
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work with
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
setting_name
string
Name of the setting to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work with
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Workload in
setting_name
string
Name of the setting to delete
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Projects to fetch the Snapshots from
host
string
host name of the TVM that took the Snapshot
workload_id
string
ID of the Workload to list the Snapshots of
date_from
string
starting date of Snapshots to show. Format: YYYY-MM-DDTHH:MM:SS
date_to
string
ending date of Snapshots to show. Format: YYYY-MM-DDTHH:MM:SS
all
boolean
admin role required - True lists all Snapshots of all Workloads
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot in
workload_id
string
ID of the Workload to take the Snapshot in
full
boolean
True creates a full Snapshot
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot from
snapshot_id
string
ID of the Snapshot to show
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to find the Snapshot in
snapshot_id
string
ID of the Snapshot to delete
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to find the Snapshot in
snapshot_id
string
ID of the Snapshot to cancel
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Requests the list of available Workload Policies
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Requests the details of a given policy
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
policy_id
string
ID of the Policy to show
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>
Requests the lists of Policies assigned to a Project.
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
project_id
string
ID of the Project to fetch assigned Policies from
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Creates a Policy with the given parameters
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Updates a Policy with the given information
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
policy_id
string
ID of the Policy to update
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Assigns the given Policy to or removes it from Projects
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of the Tenant/Project to do the restore in
policy_id
string
ID of the Policy to assign
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Deletes a given Policy
tvm_address
string
IP or FQDN of Trilio service
tenant_id
string
ID of Tenant/Project
policy_id
string
ID of the Policy to delete
X-Auth-Project-Id
string
Project to authenticate against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Provides the lists of trusts for the given Tenant.
tvm_name
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant / Project to fetch the trusts from
is_cloud_admin
boolean
true/false
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Creates a trust in the provided Tenant/Project with the given details.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to create the Trust for
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Shows all details of a specified trust
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Trust
trust_id
string
ID of the Trust to show
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Deletes the specified trust.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant where to find the Trust in
trust_id
string
ID of the Trust to delete
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication Token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Trust of a given Workload.
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Project/Tenant where to find the Workload
workload_id
string
ID of the Workload to validate the Trust of
X-Auth-Project-Id
string
Project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads
Provides the list of all importable workloads
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work in
project_id
string
restricts the output to the given project
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads
Provides the list of all orphaned workloads
tvm_address
string
IP or FQDN of Trilio Service
tenant_id
string
ID of the Tenant/Project to work in
migrate_cloud
boolean
True also shows Workloads from different clouds
X-Auth-Project-Id
string
project to run the authentication against
X-Auth-Token
string
Authentication token to use
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads
Imports all or the provided workloads
tvm_address
string
IP or FQDN of the Trilio Service
tenant_id
string
ID of the Tenant/Project to take the Snapshot in
X-Auth-Project-Id
string
Project to run authentication against
X-Auth-Token
string
Authentication token to use
Content-Type
string
application/json
Accept
string
application/json
User-Agent
string
python-workloadmgrclient
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads/progress
Track Workload Import Progress against jobid
jobid
int
jobid returned by workload-importworkloads CLI command.