Trilio release 4.0 introduces new features and capabilities including
Openstack Train support
Trilio Openstack Quota integration
Openstack CLI integration
CLI restore of networks only
CLI restore of security groups only
Disk integrity check after Snapshots
Performance Enhancements for big environments
Enablement of rolling upgrades
Trilio 4.0.92 is the GA release of Trilio 4.0. Trilio 4.0.115 is the first patch release of Trilio 4.0.
Trilio 4.0 continues to support new OpenStack versions and distributions, allowing Trilio customers to stay up to date with OpenStack releases.
Trilio 4.0 introduces full support for OpenStack Train across the four supported distributions. In addition, Trilio 4.0 of course supports long-term releases, giving OpenStack users of those releases the possibility to use the latest Trilio functionalities.
Trilio for OpenStack integrates natively with OpenStack, and we are happy to announce that Trilio has made another important step in this direction.
With the possibility to create project quotas for workloads, VMs and storage usage, OpenStack administrators can now clearly control how Trilio is used by any tenant.
Combining the quota feature with the already available policies enables OpenStack administrators to define exactly what an OpenStack project can and cannot do with Trilio.
Trilio has been providing its own Python-based CLI client, the workloadmgr client.
This client has followed the OpenStack standards for CLI clients and now also follows the established standard of integrating all Python OpenStack clients into the single OpenStack Python CLI client.
Trilio 4.0 enables the usage of the OpenStack client for the most common commands required during daily work with Trilio.
Trilio's restore capabilities already allowed restoring any VM and its attached resources as required. Still, it was always necessary to restore the complete VM to get the networks or security groups tied to that VM or backup.
Trilio 4.0 introduces the possibility to restore only the networks or only the security groups directly from the CLI.
Trilio has always protected the integrity of its Snapshots. However, when Trilio identified a failure in the integrity, the backup job failed, and not only was the Snapshot with the failed integrity lost but the following backups as well.
To overcome this limitation, Trilio enhanced the already available disk integrity check and now runs it at the end of each Snapshot. When it is detected that the disk integrity of a Snapshot is not given, the next Snapshot will automatically be a full Snapshot.
This ensures that future backups again have a reliable disk integrity.
With the success of Trilio around the world, it was only a question of time until Trilio would be tested against a really big environment with thousands of API calls per second.
We are happy to say that our solution kept up to the task, but it still showed potential to improve even more. That improvement comes with a small change on the Trilio Appliance, which allows load-balancing of incoming API calls across the complete Trilio cluster instead of a single master node.
Trilio has been part of some OpenStack environments for years, going through the process of updating and upgrading together with the environment and standalone.
So far, this upgrade process was always a complete uninstallation and reinstallation of the Trilio solution, which was very time-consuming and frustrating for OpenStack administrators.
Trilio 4.0 changes the way the Trilio Appliance is built, to allow updates and upgrades of the complete Trilio solution.
The exact process will be announced together with the first patch release of Trilio 4.0.
This release contains the following known issues which are tracked for a future update.
The workloadmgr Quota feature is still fully supported through CLI.
Observation:
It has been observed that on non-Kolla, non-RHOSP setups, such as OpenStack-Ansible, the nova user id is not the same as the assumed default (162).
It has also been observed that the id assigned to nova conflicted with the id of a system user on the TVM; this created a situation where OpenStack had to be redeployed.
Workaround:
Update permissions of /var/triliovault-mounts to 755
Snapshots are getting stuck when wlm-workloads service stops during a backup process
Observation:
When the wlm-workloads service gets stopped on the node with an active snapshot process, the process gets stuck without erroring.
Workaround:
Restart the wlm-workloads service on the Trilio cluster; the snapshot will then throw an error.
Observation:
VM volumes stored on Ceph are successfully excluded from backup if desired
The restore does create an empty Ceph volume
The created empty Ceph volume is not attachable or formattable
Observation:
For every restore, the metadata config_drive is set as a blank value
No impact on restored VMs known
Workaround:
delete metadata config_drive
or set desired value
Observation:
Workload gets imported successfully
original User and Project still exist (ownership stays same)
All schedulers trusts are defined as disabled
Workaround:
Edit any (just 1) workload, modify the scheduler policy and submit
trust will be recreated
Observation:
TVault re-configuration while adding nodes to an existing TVM cluster fails at "Configuring Trilio Cluster"
The reason is that the previous MySQL password was not working and MySQL root access has to be reset.
Workaround:
remove /root/.my.cnf file on already configured TVM and reconfigure it
Observation:
After TVault re-configuration following the addition of 2 more nodes to an existing TVM cluster ("import workloads" was not selected), the databases do not sync against the already existing TVM.
It is expected that while adding the 2 new nodes, the DB on node1 gets synced with the 2 new nodes and that the existing workloads are available after the reconfiguration on the new 3-node TVM cluster.
Workaround:
Run workload import from CLI
Observation:
Deactivating the primary node of a TVM cluster deactivates scheduler of associated workloads.
Workaround:
restart wlm-cron service on new primary node
Observation:
VM was set with metadata exclude_boot_disk_from_backup set to true
The restored instance showed that the data was nevertheless backed up and restored
Observation:
Workload with scheduler trust established gets reassigned to new project
scheduler trust is getting disabled
Workaround:
Edit the workload to enable scheduler
Observation:
using workloadmgr commands with Openstack CLI still requires OS_TENANT_ID and OS_USER_DOMAIN_ID
Workaround:
edit rc file and add required fields
It is recommended to think about the following elements prior to the installation of Trilio for Openstack.
Trilio uses Cinder snapshots for calculating full and incremental backups. For full backups, Trilio creates Cinder snapshots for all the volumes in the backup job. It then leaves these Cinder snapshots behind for calculating the incremental backup image during next backup. During an incremental backup operation it creates new Cinder snapshots, calculates the changed blocks between the new snapshots and the old snapshots that were left behind during full/previous backups. It then deletes the old snapshots but leaves the newly created snapshots behind. So, it is important that each tenant who is availing Trilio backup functionality has sufficient Cinder snapshot quotas to accommodate these additional snapshots. The guideline is to add 2 snapshots for every volume that is added to backups to volume snapshot quotas for that tenant. You may also increase the volume quotas for the tenant by the same amount because Trilio briefly creates a volume from snapshot to read data from the snapshot for backup purposes. During a restore process, Trilio creates additional instances and Cinder volumes. To accommodate restore operations, a tenant should have sufficient quota for Nova instances and Cinder volumes. Otherwise restore operations will result in failures.
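These quota adjustments can be made with the standard OpenStack CLI. A hedged example (the project ID and the numbers are placeholders; size them to the volumes included in your backups):
openstack quota set --snapshots 40 --volumes 40 <project_id>
openstack quota set --instances 20 <project_id>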
AWS S3 object consistency model includes:
Read-after-write
Read-after-update
Read-after-delete
Each of them describes how an object reaches its consistent state after it is created, updated, or deleted. None of them provides strong consistency, and there is a lag before an object reaches the consistent state. Though Trilio employs mechanisms to work around the limitations of the eventual consistency of AWS S3, when an object reaches its consistent state is not deterministic. There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write reaches consistency faster than the other IO patterns. Our solution is designed to maximize the read-after-write IO pattern. The time in which an object reaches eventual consistency also depends on the AWS region. For example, the aws-standard region does not have a strong consistency model compared to us-east or us-west. We suggest using these regions when creating S3 buckets for Trilio. Though the read-after-update IO pattern is hard to avoid completely, we employ ample delays in accessing objects to accommodate larger durations for objects to get into a consistent state. However, in rare occasions, backups may still fail and need to be restarted.
Trilio can be deployed as a single node or a three node cluster. It is highly recommended to deploy Trilio as a three node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP for the cluster, for both single node and three node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and for registering the Trilio service endpoint in the Keystone service catalog.
Trilio integrates natively with OpenStack. This means that Trilio communicates completely through APIs using the OpenStack endpoints. Trilio also generates its own OpenStack endpoints. In addition, the Trilio appliance and the compute nodes write to and read from the backup target. These points affect the network planning for the Trilio installation.
Openstack knows 3 types of endpoints:
Public Endpoints
Internal Endpoints
Admin Endpoints
Each of these endpoint types is designed for a specific purpose. Public endpoints are meant to be used by the Openstack end-users to work with Openstack. Internal endpoints are meant to be used by the Openstack services to communicate with each other. Admin endpoints are meant to be used by Openstack administrators.
Of those 3 endpoint types, only the admin endpoint sometimes contains APIs which are not available on any other endpoint type.
To learn more about Openstack endpoints please visit the official Openstack documentation.
Trilio is communicating with all services of Openstack on a defined endpoint type. Which endpoint type Trilio is using to communicate with Openstack is decided during the configuration of the Trilio appliance.
There is one exception: The Trilio Appliance always requires access to the Keystone admin endpoint.
The following network requirements can be identified this way:
Trilio appliance needs access to the Keystone admin endpoint on the admin endpoint network
Trilio appliance needs access to all endpoints of one type
Trilio recommends providing the Trilio appliance full access to all OpenStack endpoints, following the OpenStack standards and best practices.
Trilio is generating its own endpoints as well. These endpoints are pointing towards the Trilio Appliance directly. This means that using those endpoints will not send the API calls towards the Openstack Controller nodes first, but directly to the Trilio VM.
Following the Openstack standards and best practices, it is therefore recommended to put the Trilio endpoints on the same networks as the already existing Openstack endpoints. This allows to extend the purpose of each endpoint type to the Trilio service:
The public endpoint to be used by Openstack users when using Trilio CLI or API
The internal endpoint to communicate with the Openstack services
The admin endpoint to use the required admin only APIs of Keystone
The Trilio solution is using backup target storage to securely place the backup data. Trilio is dividing its backup data into two parts:
Metadata
Volume Disk Data
The first type of data is generated by the Trilio appliance through communicating with the Openstack Endpoints. All metadata that is stored together with a backup is written by the Trilio Appliance to the backup target in the json format.
The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service is reading the Volume Data from the Cinder or Nova storage and transferring this data as qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.
The network requirements are therefore:
The Trilio appliance needs access to the backup target
Every compute node needs access to the backup target
Most Trilio customers follow the OpenStack standards and best practices of having the public, internal, and admin endpoints on separate networks. They also typically do not yet have a network that can access the desired backup target.
The starting network configuration typically looks like this:
Following the OpenStack standards and Trilio's recommendation, the Trilio Appliance is placed on all those 3 networks. Further, access to the backup target is required by the Trilio Appliance and the compute nodes; in this example this is achieved by adding a 4th network.
The resulting network configuration would look like this:
It is of course possible to combine networks as necessary. As long as the required network access is available, Trilio will work.
Each Openstack installation is different and so is the network configuration. There are endless possibilities of how to configure the Openstack network and how to implement the Trilio appliance into this network. The following three examples have been seen in production:
The first example is from a manufacturing company, which wanted to split the networks by function and decided to put the Trilio backup target on the internal network as the backup and recovery function was identified as an Openstack internal solution. This example looks complex but integrates Trilio just as recommended.
The second example is from a financial institute that wanted to be sure that the OpenStack users have no direct uncontrolled network access to the OpenStack infrastructure. Following this example requires additional work, as the internal HA-Proxy needs to be configured to correctly translate the API calls towards the Trilio appliance.
The third example is from a service company that was forced to treat Trilio as an external 3rd party solution, as we require a virtual machine running outside of Openstack. This kind of network configuration requires good planning on the Trilio endpoints and firewall rules.
Trilio, by TrilioData, is a native OpenStack service that provides policy-based comprehensive backup and recovery for OpenStack workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS and AWS S3 compatible storage. With Trilio and its single click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection and integrity.
With the use of Trilio's VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload, consisting of compute resources, network configurations and storage data, as one unit. It also takes incremental backups that only capture the changes made since the last backup. Incremental snapshots save time and storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as below:
Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume, and the incremental backups only include the blocks of data changed since the last backup, our backup process is efficient and stores backup images efficiently on the backup media.
Faster and more reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just the click of a button.
Easy migration of workloads between clouds. Trilio captures all the details of your application, so our migration includes your entire application stack without leaving anything to guesswork.
Lower Total Cost of Ownership through policy and automation. Our tenant-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.
Trilio has four main software components:
Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.
Trilio API is a Python module that is an extension to the nova-api service. This module is installed on all OpenStack controller nodes.
Trilio Datamover is a Python module that is installed on every OpenStack compute node.
Trilio horizon plugin is installed as an add-on to Horizon servers. This module is installed on every server that runs the Horizon service.
The Trilio Appliance is not supported as an instance inside Openstack.
The Trilio Appliance gets delivered as a qcow2 image, which gets attached to a virtual machine.
Trilio supports KVM-based hypervisors on x86 architectures, with the following properties:
Software | Supported |
---|---|
libvirt | 2.0.0 and above |
QEMU | 2.0.0 and above |
qemu-img | 2.6.0 and above |
The recommended size of the VM for the Trilio Appliance is:
When running Trilio in production, a 3-node cluster of the Trilio appliance is recommended for high availability and load balancing.
Resource | Value |
---|---|
vCPU | 8 |
RAM | 24 GB |
The qcow2 image itself defines the 40GB disk size of the VM.
In the rare case of the Trilio Appliance database or log files getting larger than 40GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the Trilio Appliance.
In addition to the Trilio Appliance, Trilio contains components which are installed directly into OpenStack itself.
Additionally, it is necessary to have the nfs-common packages installed on the compute nodes when using the NFS protocol for the backup target.
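A hedged example for installing the package (the package name differs between distributions; on RHEL/CentOS based compute nodes the equivalent package is nfs-utils):
apt-get install -y nfs-common      # Ubuntu/Debian based compute nodes
yum install -y nfs-utils           # RHEL/CentOS based compute nodes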
For Canonical Openstack it is not necessary to spin up the Trilio VM.
The Trilio Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor.
This guide shows the tested way to spin up the Trilio Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards.
The Trilio appliance is utilizing cloud-init to provide the initial network and user configuration.
Cloud-init reads its information either from a metadata server or from a provided cd image. Trilio is utilizing the cd image.
To create the cloud-init image it is required to have genisoimage available.
Cloud-init uses two files for its metadata.
The first file is called meta-data and contains the information about the network configuration.
Below is an example of this file.
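The following is a minimal sketch; the instance-id, IP addresses and interface name are placeholders for your environment, and local-hostname stays localhost as noted below:
instance-id: trilio-vm01
local-hostname: localhost
network-interfaces: |
  auto eth0
  iface eth0 inet static
  address 172.20.4.151
  netmask 255.255.255.0
  gateway 172.20.4.1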
Keep the hostname localhost. The hostname gets changed through the configuration step. Changing the hostname will lead to the tvault-config service not properly starting, blocking further configuration.
The instance-id has to match the VM name in virsh
The second file is called user-data and contains small scripts and information to set up, for example, the user passwords.
Below is an example of this file.
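The following is a minimal cloud-config sketch; the password value is a placeholder, and any additional user setup depends on your environment:
#cloud-config
password: <initial-password>
chpasswd: { expire: False }
ssh_pwauth: True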
Both files, meta-data and user-data, are needed to create a working cloud-init image, even when one of them is empty.
The image is created using genisoimage following this general command:
genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data>
An example of this command is shown below.
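For example, assuming both files are stored under /opt/cloudinit/:
genisoimage -output cloud-init.iso -volid cidata -joliet -rock /opt/cloudinit/user-data /opt/cloudinit/meta-data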
After the cloud-init image has been created, the Trilio appliance can be spun up on the desired KVM server.
Extract the Trilio QCOW2 tar file using the following command :
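A hedged example, assuming the downloaded archive name (replace it with the actual file name from the portal):
tar -xvf <trilio-appliance-qcow2-archive>.tar.gz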
See below an example command for how to spin up the Trilio appliance using virsh and the created iso image.
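One possible way to do this is via virt-install (the resulting domain can then be managed with virsh). Disk paths and the bridge name are assumptions, the VM name matches the instance-id used in the meta-data file, and the sizing follows the recommended 8 vCPU / 24 GB RAM:
virt-install --name trilio-vm01 --vcpus 8 --ram 24576 \
  --disk path=/var/lib/libvirt/images/<trilio-appliance>.qcow2,format=qcow2,bus=virtio \
  --disk path=/var/lib/libvirt/images/cloud-init.iso,device=cdrom \
  --import --network bridge=br0,model=virtio --graphics vnc --noautoconsole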
It is of course possible to spin up the Trilio appliance without a cloud-init iso-image. It will spin up with default values.
Once the Trilio appliance is up and running with its initial configuration, it is recommended to uninstall cloud-init.
If cloud-init is not uninstalled, it will rerun the network configuration upon every boot, setting the network configuration back to DHCP if no metadata is provided.
To uninstall cloud-init, follow the example below.
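A hedged example, assuming a yum based appliance operating system:
yum remove -y cloud-init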
Each OpenStack distribution comes with a set of supported operating systems. Please check the support matrix below to see which OpenStack distribution is supported with which operating system.
The Trilio Appliance qcow2 image can be downloaded from the Trilio customer portal. Please contact your Trilio sales or technical lead to get access to the portal.
RHOSP13 | NFSv3 | Supported | Red Hat Director |
RHOSP16.0 | NFSv3 | Supported | Red Hat Director |
Canonical Queens | NFSv3 | Not supported | JuJu Charms |
Canonical Rocky | NFSv3 | Not supported | JuJu Charms |
Canonical Stein | NFSv3 | Not supported | JuJu Charms |
Canonical Train | NFSv3 | Not supported | JuJu Charms |
Ansible Openstack Train | NFSv3 | Supported | Manually |
Kolla Openstack Train | NFSv3 | Not supported | Manually |

RHOSP13 | NFSv3 | Supported | Red Hat Director |
RHOSP16.0 | NFSv3 | Supported | Red Hat Director |
RHOSP16.1 | NFSv3 | Supported | Red Hat Director |
Canonical Queens | NFSv3 | Not supported | JuJu Charms |
Canonical Rocky | NFSv3 | Not supported | JuJu Charms |
Canonical Stein | NFSv3 | Not supported | JuJu Charms |
Canonical Train | NFSv3 | Not supported | JuJu Charms |
Ansible Openstack Train | NFSv3 | Supported | Manually |
Kolla Openstack Train | NFSv3 | Not supported | Manually |
The Red Hat Openstack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Depending on whether the RHOSP environment is already installed or is getting installed for the first time, different steps are required to deploy Trilio.
All commands need to be run as user 'stack'
The following commands upload the Trilio puppet module to the overcloud. The actual upload happens upon the next deployment.
GitHub branches to use:
Trilio 4.0 GA == stable/4.0, Trilio 4.0 SP1 == v4.0maintenance
Trilio consists of multiple services. Add these services to your roles_data.yaml.
In case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
This service needs to share the same role as the nova-api service.
In case of the pre-defined roles, the nova-api service runs on the role Controller.
In case of custom-defined roles, it is necessary to use the role the nova-api service is using.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
All commands need to be run as user 'stack'
Trilio containers are pushed to 'RedHat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull urls are given below.
Trilio Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:<container-tag>
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<container-tag>
Trilio horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<container-tag>
'4.0.92' is the Trilio 4.0 GA build version; its container tag is 4.0.92-rhosp16. '4.0.115' is the Trilio 4.0 SP1 build version; its container tag is 4.0.115-rhosp16.
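For reference, a manual pull of the SP1 containers could look like the following; it assumes a prior podman login against registry.connect.redhat.com with valid Red Hat credentials:
podman pull registry.connect.redhat.com/trilio/trilio-datamover:4.0.115-rhosp16
podman pull registry.connect.redhat.com/trilio/trilio-datamover-api:4.0.115-rhosp16
podman pull registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.0.115-rhosp16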
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.
Populate the trilio_env_osp16.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env_osp16.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to pull the Trilio containers to the undercloud registry. Trilio provides shell scripts which will pull the containers from 'registry.connect.redhat.com' to the undercloud and updates the trilio_env_osp16.yaml with the values for the datamover and datamover api containers.
The script assumes that the undercloud container registry is running on port 8787. If the registry is running on a different port, the script needs to be adjusted manually.
The changes can be verified using the following commands.
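A hedged way to verify, assuming the undercloud registry listens on port 8787 as noted above:
grep -i trilio trilio_env_osp16.yaml
curl -s http://localhost:8787/v2/_catalog | grep trilio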
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env_osp16.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
Provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still it is recommended to verify the container URLs.
The following information is additionally required:
Network for the datamover api
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
In case of S3
s3 type {amazon_s3/ceph_s3}
Access key
Secret key
S3 Region name
S3 Bucket name
S3 Endpoint URL
S3 SSL Enabled {true/false}
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env_osp16.yaml
roles_data.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
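A hedged sketch of such a command; the template paths are assumptions, and any environment files already used by the existing deployment must remain part of the command as well:
openstack overcloud deploy --templates \
  -e <existing-environment-files> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/trilio_env_osp16.yaml \
  -r /home/stack/templates/roles_data.yaml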
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
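On RHOSP 16 the services run as podman containers; a quick check on the respective overcloud nodes could be:
podman ps -a | grep -E 'trilio|horizon'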
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes.
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.
Once RHOSP16.0 Installation steps have completed successfully, follow the instructions below to now configure the Trilio Appliance
In RHOSP, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes (a command sketch follows the list):
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
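A command sketch for these steps; the script name and download URL are placeholders that come from Trilio:
curl -O <trilio_nova_userid_script_url>
chmod +x <script_name>.sh
./<script_name>.sh
id nova    # expected result: uid=42436(nova) gid=42436(nova)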
Follow this documentation to configure Trilio Appliance.
It is necessary to first configure the Trilio appliance, before the steps of this section can be done.
RHOSP16 is using a different mount point in its datamover containers than other OpenStack distributions. It is necessary to adjust the mountpoint of the Trilio nodes to match this. If the mountpoints are not aligned, files created by the datamover and read by the Trilio appliance will not match in their paths, and backup and restore processes will fail.
Please follow these steps to align the mountpoints (a combined command sketch follows the list):
Edit /etc/workloadmgr/workloadmgr.conf file
Set parameter 'vault_data_directory' to '/var/lib/nova/triliovault-mounts'
create the directory for the mountpoint
assign the created directory to nova:nova
unmount the old mountpoint
Update the Trilio configurator
Restart the Trilio services
Verify the mountpoint has been configured correctly
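A combined command sketch for these steps; it omits the configurator update, and the Trilio service names used for the restart are assumptions, so verify both against your installation:
vi /etc/workloadmgr/workloadmgr.conf     # set vault_data_directory = /var/lib/nova/triliovault-mounts
mkdir -p /var/lib/nova/triliovault-mounts
chown nova:nova /var/lib/nova/triliovault-mounts
umount /var/triliovault-mounts/*         # unmount the old mountpoint(s)
systemctl restart wlm-api wlm-scheduler wlm-workloads
df -h | grep triliovault-mounts          # verify the new mountpoint is in use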
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors:
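For example:
openstack stack failures list overcloud --long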
Further commands that can help identify any errors.
In case the Trilio datamover api container fails to start or is stuck restarting, check these logs:
In case of Trilio datamover container failing to start or being stuck in restart, check these logs:
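A hedged way to locate and inspect these containers on the affected node:
podman ps -a | grep trilio
podman logs <trilio_container_name_or_id>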
If Cinder backend is Ceph it is necessary to manually add the ceph details to tvault-contego.conf on all compute nodes.
The file can be found here:
/var/lib/config-data/puppet-generated/triliodm/etc/tvault-contego/tvault-contego.conf
Add the following information:
The same block of information can be found in the nova.conf file.
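As an illustration, the relevant block typically has the following shape; copy the real values from the [libvirt] section of nova.conf instead of these placeholders:
[libvirt]
images_rbd_pool = <rbd pool used by nova/cinder>
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = <ceph user, e.g. cinder>
rbd_secret_uuid = <secret uuid from nova.conf>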
Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:
Install the Trilio dm-api service on the control plane
Install the Trilio datamover service on the compute plane
Install the Trilio Horizon plugin into the Horizon service
How these steps look in detail depends on the OpenStack distribution Trilio is installed in. Each supported OpenStack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from the beginning to the end.
In Kolla, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all compute nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
The Trilio Datamover API container should be deployed on all nodes where the nova_api container is running. In a standard deployment, these nodes are the OpenStack controller nodes.
The very first step is to pull the container image from docker.io.
Login to docker using credentials: triliodocker/triliopassword
Login to docker and pull the Trilio Datamover API container.
Example command for train openstack on ubuntu platform with triliovault 4.0 GA release: docker pull docker.io/trilio/ubuntu-source-trilio-datamover-api:4.0.92-train
In this part of the process, the configuration file for the Trilio Datamover API is created.
The following steps need to be done:
Create config directory
get default config file dmapi.conf
edit dmapi.conf
copy nova.conf to config directory
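A hedged sketch of these steps; the download URL for the default dmapi.conf is a placeholder provided by Trilio:
mkdir -p /etc/kolla/trilio-datamover-api/
curl -o /etc/kolla/trilio-datamover-api/dmapi.conf <trilio_default_dmapi_conf_url>
cp /etc/kolla/nova-api/nova.conf /etc/kolla/trilio-datamover-api/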
The dmapi.conf located in /etc/kolla/trilio-datamover-api/ needs to be edited to adjust to the OpenStack environment.
Nearly all required values can be copied from the nova.conf located at:
/etc/kolla/nova-api/
Follow comments inside the dmapi.conf to learn which parameters are the minimum needed.
An example dmapi.conf can be seen here:
For CentOS, the nova user needs ownership of the datamover api log directory, whereas for Ubuntu, the dmapi user needs ownership of the datamover api log directory.
Now the trilio-datamover-api container can be deployed and started.
To verify the deployment was successful, check the container status using docker ps.
The Trilio Datamover container should be deployed on all nodes where the nova_compute container is running. In a standard deployment, these nodes are the OpenStack compute nodes.
At this stage it is necessary to know if the deployment shall use NFS or S3 protocol for the backup target.
The very first step is to pull the container image from docker.io.
Login to docker using credentials: triliodocker/triliopassword
Login to docker and pull the Trilio Datamover container.
Example command for train openstack on ubuntu platform with Trilio 4.0 GA release: docker pull docker.io/trilio/ubuntu-source-trilio-datamover:4.0.92-train
In this part of the process, the configuration file for the Trilio Datamover is created.
The following steps need to be done:
Create config directory
copy nova.conf to config directory
get default config file tvault-contego.conf
edit tvault-contego.conf
Edit the /etc/kolla/trilio-datamover/tvault-contego.conf config file to provide NFS/S3 details as per the selected backup storage.
In case of an NFS backup target, only the NFS share details need to be provided. No other configuration parameters need to be edited, unless you know their details.
If ceph is getting used for cinder/nova, the correct permissions for ceph.conf and keyrings files need to be assigned. The trilio_datamover container will be using ceph.conf and keyring files with the 'nova' user.
If the nova/cinder backend is Ceph, you need to add the Ceph user and keyring details to the /etc/kolla/trilio-datamover/tvault-contego.conf file. Add the following sections to the tvault-contego.conf file. In the provided example, Ceph's 'cinder' user is configured for Trilio read/write operations.
Mount /etc/ceph on trilio_datamover container in read only mode. Check docker run command provided in the next step. The ceph user (example -'cinder') should have read and write permissions on ceph pool used for nova/cinder backend. Verify nova user (uid - 42436) on trilio_datamover container is able to read ceph user's keyring file and ceph.conf after mounting /etc/ceph on the container. Set appropriate permissions for /etc/ceph/ files on the host itself.
If the cloud does not use 'ceph' storage for nova/cinder, remove '/etc/ceph' volume mount option from below commands.
To verify the deployment was successful, check the container status using docker ps.
Trilio Horizon plugin needs to be installed inside the OpenStack horizon container. Once installed, the Trilio dashboard will be visible in OpenStack Horizon.
The following steps need to be done:
Download installation shell script
Run the shell script
Edit Horizon settings
Restart the Horizon container
To download the shell script directly into the Horizon container do:
You have to run the script inside the Horizon container as root.
Run the shell script and restart horizon container. This will restart apache service, which may enforce a log out of the container.
The following line needs to be added to 'local_settings' of OpenStack's Horizon to enable the workloadmanager quota feature in the Horizon dashboard.
To apply the changes, restart the Horizon container.
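Assuming the default Kolla container name:
docker restart horizon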
This issue has not been observed in all CentOS based Kolla Train installations. Please verify before disabling grafana repository.
The Grafana yum repository has an issue on the latest Horizon containers of OpenStack (not Trilio). To confirm the issue, run yum repolist; it will fail. Use the following command to disable the grafana repository.
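A hedged example; the repository id is assumed to be grafana, check yum repolist for the exact id:
yum-config-manager --disable grafana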
The Trilio horizon install script will ask for Horizon's openstack_dashboard directory path if it is not at the default location - /usr/share/openstack-dashboard
For train ubuntu bionic, it's : /var/lib/kolla/venv/lib/python2.7/site-packages
If the Trilio Horizon tabs are not accessible but OpenStack Horizon is working fine, make sure that the endpoints for the service 'TrilioVaultWLM' are created correctly. The root cause of this issue is typically that SSL is enabled on all three endpoint types of the 'TrilioVaultWLM' service.
If SSL is enabled only on the public 'keystone' service endpoints, then create the 'TrilioVaultWLM' service endpoints in the same fashion. Endpoints for the service 'TrilioVaultWLM' get created during the Trilio configuration step. If these endpoints need to be edited, reconfigure the Trilio appliance.
To make 'snapshot mount' functionality work, the cloud administrator needs to complete the following steps.
Identify backup target mount point on Trilio VM
install nfs-common on nova_compute and nova_libvirt containers
Mount backup target nfs share on nova_compute and nova_libvirt containers
The following command will provide the active mountpoint on the Trilio VM
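For example (the sample output line reflects the values used below):
df -h | grep triliovault-mounts
# 192.168.1.33:/mnt/tvault   ...   /var/triliovault-mounts/MTkyLjE2OC4xLjMzOi9tbnQvdHZhdWx0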
This example gives the following information:
Backup target is NFS share: 192.168.1.33:/mnt/tvault
Mountpoint is: /var/triliovault-mounts/MTkyLjE2OC4xLjMzOi9tbnQvdHZhdWx0
It is necessary to install the nfs-common package on both the nova_compute and nova_libvirt containers.
Mount the backup target nfs share on 'nova_compute' and 'nova_libvirt' container at exactly same mount point as done on triliovault VM.
Create the mountpoint directory as necessary.
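A hedged sketch for Ubuntu based Kolla containers, using the NFS share and hashed directory from the example above; repeat the same steps for the nova_libvirt container, and note that the package installation requires repository access from within the container:
docker exec -it -u root nova_compute apt-get update
docker exec -it -u root nova_compute apt-get install -y nfs-common
docker exec -it -u root nova_compute mkdir -p /var/triliovault-mounts/MTkyLjE2OC4xLjMzOi9tbnQvdHZhdWx0
docker exec -it -u root nova_compute mount -t nfs 192.168.1.33:/mnt/tvault /var/triliovault-mounts/MTkyLjE2OC4xLjMzOi9tbnQvdHZhdWx0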
If any triliovault container is stuck in restarting state the following logs can be checked.
Possible issues for trilio-datamover container failure are for example NFS mount issues or S3 credentials might be wrong. If it's Amazon S3, then network connectivity between compute node and AWS s3 is needed. The docker logs will clearly tell the exact error.
If the above logs do not help, or if the containers are running well but backups fail, the following service logs will help:
If the Trilio Horizon tabs are not visible on Openstack, verify the following:
Make sure trilio horizon plugin is installed on OpenStack horizon container
Trilio configuration step needs to be completed to see the triliovault dashboard on OpenStack
Make sure correct openstack_dashboard directory got provided and the triliovault horizon plugin files got successfully copied there.
Trilio is an add on service to OpenStack cloud infrastructure and provides backup and disaster recovery functions for tenant workloads. Trilio is very similar to other openstack services including nova, cinder, glance, etc and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.
Trilio has four main software components:
Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.
Trilio API is a Python module that is installed on all OpenStack controller nodes where the nova-api service is running.
Trilio Datamover is a Python module that is installed on every OpenStack compute node.
Trilio horizon plugin is installed as an add-on to Horizon servers. This module is installed on every server that runs the Horizon service.
Trilio is both a provider and consumer in the OpenStack ecosystem. It uses other OpenStack services such as nova, cinder, glance, neutron, and keystone and provides its own service to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise Trilio provides its own public, internal and admin URLs.
This figure represents a typical network topology. Trilio exposes its public URL endpoint on public network and Trilio virtual appliances and data movers typically use either internal network or dedicated backup network for storing and retrieving backup images from backup store.
Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.
Those JuJu Charms are publicly available as Open Source Charms.
The charms are currently listed as tech-preview. They will be fully supported by Canonical with the next charm release.
Canonical Openstack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.
The following charms exist:
Installs and manages Trilio Controller services.
Installs and manages the Trilio Datamover API service.
Installs and manages the Trilio Datamover service.
Installs and manages the Trilio Horizon Plugin.
The documentation of the charms can be found here:
The installation of the Datamover API (dmapi for short) requires creating a new container, in which all necessary packages and the Trilio dmapi code are loaded.
Create lxc container for hosting dmapi service on controller nodes with below commands.
Add nova user and required directory on container controller_dmapi.
Add required packages on container controller_dmapi.
Copy nova.conf file from nova-api container to /etc/nova directory in dmapi container. Run the below command on controller nodes:
Create a new interface with specific ip for dmapi container.
Edit /var/lib/lxc/controller_dmapi/config and add below section as per network bridge available on the controller node.
Restart the container with the below commands.
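For example, using the container name created above:
lxc-stop -n controller_dmapi && lxc-start -n controller_dmapi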
Download and run the tvault-installation script inside the container.
The script is to be executed inside the dmapi container after the following changes have been made: comment the 2 lines below and add a line below NOVA_VERSION = 20, as nova-manage doesn't work in Ansible OpenStack.
Run the script
Verify in dmapi.conf the domain name for the nova service user under the keystone section. Check the field values for project_domain_name and user_domain_name and update them if they are not present in the keystone section.
If SSL is enabled then add the following section in dmapi.conf:
Verify below entries are there in keystone policy.json file
file path : /var/lib/lxc/controller_keystone_container/rootfs/etc/keystone
Once the above checks are verified, start the dmapi service.
Activate the virtual environment on the compute node.
After activating the virtual environment, find out the location of compute.filters file.
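For example:
find / -name compute.filters 2>/dev/null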
Download the installation script.
Modify install script to use the same location for creating trilio filters.
Also comment the 2 lines below and add a line below NOVA_VERSION = 20, as nova-manage doesn't work in Ansible Openstack.
Run the install script; if you get a prompt while installing, choose the default selection.
If non-default values are selected, it will overwrite the current configuration and will impact the nova-compute service.
Make sure the ExecStart value looks like below in the /etc/systemd/system/tvault-contego.service file.
Use below commands to restart and verify the service.
Use below command and check if nfs/s3 storage is mounted or not.
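A hedged example, using the service name from the unit file above:
systemctl restart tvault-contego
systemctl status tvault-contego
mount | grep -i vault    # the NFS share or S3 fuse mount of the backup target should be listed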
List running containers on controller nodes and login to horizon container using the below command.
Install curl package on the Horizon container if not present.
Activate virtual environment on horizon container
Download script to install horizon plugin on horizon container and run install script
Install script will ask for the dashboard folder, provide below path
Verify installation using below commands
Refer to the keystone haproxy settings for dmapi haproxy.
Haproxy config file can be found here: /etc/haproxy/haproxy.cfg
A sample configuration is shown below.
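An illustrative sample only; the dmapi port 8784 is an assumption and IPs/hostnames are placeholders, so model the block on the existing keystone entries of your haproxy.cfg:
listen dmapi
    bind <external_vip_address>:8784
    balance source
    option httplog
    server <controller1> <controller1_ip>:8784 check
    server <controller2> <controller2_ip>:8784 check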
Check the syntax of the file and restart the service.
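For example:
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy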
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Depending on whether the RHOSP environment is already installed or is getting installed for the first time, different steps are required to deploy Trilio.
All commands need to be run as user 'stack'
The following command clones the Trilio configscripts github.
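A hedged example, assuming the repository name and using the GA branch mentioned in the release notes above (stable/4.0 for GA, v4.0maintenance for SP1):
git clone -b stable/4.0 https://github.com/trilioData/triliovault-cfg-scripts.git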
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/redhat-director-scripts/puppet/trilio/files'
The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.
Trilio consists of multiple services. Add these services to your roles_data.yaml.
In case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
This service needs to share the same role as the nova-api service.
In case of the pre-defined roles, the nova-api service runs on the role Controller.
In case of custom-defined roles, it is necessary to use the role the nova-api service is using.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
All commands need to be run as user 'stack'
Trilio containers are pushed to 'RedHat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull urls are given below.
Trilio Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.0.116-rhosp16.1
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.0.116-rhosp16.1
Trilio horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.0.116-rhosp16.1
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.
Populate the trilio_env_osp16.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env_osp16.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to pull the Trilio containers to the undercloud registry. Trilio provides shell scripts which will pull the containers from 'registry.connect.redhat.com' to the undercloud and updates the trilio_env_osp16.yaml with the values for the datamover and datamover api containers.
The script assumes that the undercloud container registry is running on port 8787. If the registry is running on a different port, the script needs to be adjusted manually.
The changes can be verified using the following commands.
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env_osp16.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
Provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still it is recommended to verify the container URLs.
The following information is additionally required:
Network for the datamover api
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
In case of S3
s3 type {amazon_s3/ceph_s3}
Access key
Secret key
S3 Region name
S3 Bucket name
S3 Endpoint URL
S3 SSL Enabled {true/false}
S3 SSL Cert
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env_osp16.yaml
roles_data.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes.
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.
Once RHOSP16.1 Installation steps have completed successfully, follow the instructions below to now configure the Trilio Appliance.
In RHOSP, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
It is necessary to first configure the Trilio appliance, before the steps of this section can be done.
RHOSP16 is using a different mount point in its datamover containers than other OpenStack distributions. It is necessary to adjust the mountpoint of the Trilio nodes to match this. If the mountpoints are not aligned, files created by the datamover and read by the Trilio appliance will not match in their paths, and backup and restore processes will fail.
Please follow these steps to align the mountpoints:
Edit /etc/workloadmgr/workloadmgr.conf file
Set parameter 'vault_data_directory' to '/var/lib/nova/triliovault-mounts'
create the directory for the mountpoint
assign the created directory to nova:nova
unmount the old mountpoint
Update the Trilio configurator
Restart the Trilio services
Verify the mountpoint has been configured correctly
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors:
Further commands that can help identify any errors.
In case the Trilio datamover api container fails to start or is stuck restarting, check these logs:
In case of Trilio datamover container failing to start or being stuck in restart, check these logs:
If Cinder backend is Ceph it is necessary to manually add the ceph details to tvault-contego.conf on all compute nodes.
The file can be found here:
/var/lib/config-data/puppet-generated/triliodm/etc/tvault-contego/tvault-contego.conf
Add the following information:
The same block of information can be found in the nova.conf file.
Trilio configuration process is using ansible scripts. Ansible, in the last few years, has grown in popularity as a preferred configuration management tool and Trilio uses ansible play books extensively to configure the Trilio cluster. To troubleshoot Trilio configuration issues, user should have basic understanding of ansible playbook output.
Ansible modules are inherently idempotent and hence Trilio configuration can run any number of times to change or reconfigure Trilio cluster.
Once the VM is booted, point your browser (Chrome or Firefox) to Trilio node IP address.
This will bring you to the Trilio Dashboard, which contains the Trilio configurator.
The user is: admin The default password is: password
After the very first login, you are requested to change the admin password.
Unlike previous versions of Trilio, the current version only requires you to configure the cluster once, and the Trilio dashboard provides cluster-wide management capability.
OpenStack endpoints can be configured to use TLS. In such a configuration the Trilio appliance needs to trust the certificates provided by the OpenStack endpoints.
To achieve this trust it is required to upload the OpenStack certificate bundle through the OS API certificate tab of the Trilio appliance Dashboard.
The certificate bundle is located on the controller nodes of the OpenStack installation.
The default paths for each distribution are as follows:
The uploaded certificates can be verified on the Trilio appliance at the following location.
Upon login into an unconfigured Trilio Appliance, the shown page is the configurator. The configurator requires some information about the Trilio Appliance, Openstack and Backup Storage.
The Trilio Cluster needs to be integrated into an existing environment to be able to operate correctly. This block asks for the information about the Trilio Cluster operating details.
Controller Nodes
This is the list of Trilio virtual appliance IP addresses along with their hostnames.
Format: comma separated list with pairs combined through '='
Example: 172.20.4.151=tvault-104-1,172.20.4.152=tvault-104-2,172.20.4.153=tvault-104-3’
The Trilio Cluster supports only 1 node and 3 node clusters.
Virtual IP Address
This is the Trilio cluster IP address which is mandatory
Format: IP/Subnet
Example: 172.20.4.150/24
The Virtual IP is mandatory even for single node clusters and has to be different from any IP given at the Controller Nodes.
Name Server
List of nameservers, primarily used to resolve OpenStack service endpoints.
Format: comma separated list
example: 10.10.10.1,172.20.4.1
If defining OpenStack endpoint hostnames in the /etc/hosts file on the VM is preferred over a DNS solution you may set the nameserver to 0.0.0.0, the default gateway.
Domain Search Order
The domain the Trilio Cluster will use.
Format: comma separated list
example: trilio.io,trilio.demo
NTP Servers
NTP servers the Trilio Cluster will use
format: comma separated list
example: 0.pool.ntp.org,10.10.10.10
Timezone
Timezone the Trilio Cluster will use internally
format: pre-populated list
example: UTC
The Trilio appliance integrates with one OpenStack environment. This block asks for the information required to access and connect with the OpenStack cloud.
Keystone Admin URL
The Keystone admin endpoint mainly used during configuration
format: URL
example: https://keystone.trilio.io:35357/v3
Keystone Public/Internal URL
The URL type defines which endpoint type will communicate with the Openstack endpoints
format: URL
example: https://internal.trilio.io:5000/v3
When FQDNs are used for the Keystone endpoints, it is necessary to configure at least one DNS server before the configuration.
Otherwise the validation of the OpenStack credentials will fail.
Administrator
Username of an account with the domain admin role
format: String
example: admin
Password
password for the user provided before
format: String
example: password
Admin Tenant
The tenant to be used together with the provided user
Region
Openstack Region the user and tenant are located in
format: String
example: RegionOne
Domain ID
domain the provided user and tenant are located in
format: ID
example: default
Trustee Role
The Openstack role required to be able to use Trilio functionalities
The Trilio configurator verifies after every entry if it is possible to login into Openstack using the provided credentials.
This verification will fail until all entries are set and correct.
When the verification is successful it is possible to choose the trustee role and no error message is shown.
Trilio requires domain admin role access. To provide domain admin role to a user, the following command can be used:
openstack role add --domain <domain id> --user <username> admin
This block is requesting the necessary information about the backup target that the Trilio installation will be used to store and read backups.
The very first field in this block decides the protocol used to connect with Backup Storage, NFS or S3.
NFS Export
Path under which the NFS Volumes to be used can be found
format: comma separated list of NFS Volumes paths
example: 10.10.2.20:/upstream,10.10.5.100:/nfs2
NFS Options
NFS options used by the Trilio Cluster when mounting the NFS Exports
format: NFS options
example: nolock,soft,timeo=180,intr,lookupcache=none
Please use the predefined NFS options and only change them when it is known that changes are necessary.
Trilio is testing against the predefined NFS options.
S3 Compatible
Switch between Amazon and Ceph
format: predefined list
example: Amazon S3
Use Ceph S3 for any non AWS S3 Storage.
Access Key
Access Key necessary to login into the S3 storage
format: access key
example: SFHSAFHPFFSVVBSVBSZRF
Secret Key
Secret Key necessary to login into the S3 storage
format: secret key
example: bfAEURFGHsnvd3435BdfeF
Region
Configured Region for the S3 Bucket
format: String
example: us-east
Bucket Name
Name of the bucket to be used as Backup target
format: string
example: Trilio-backup
(CEPH S3 ONLY) Endpoint URL
URL to be used to reach and access the provided S3 compatible storage
format: URL
example: objects.trilio.io
If you are upgrading either from older versions of Trilio or reinstalling the appliance for maintenance reasons, please check this box during the configuration. Trilio is a stateless appliance and all the state is securely saved on the NFS/S3 storage. So, during the upgrade process, the user will need to import all backup job records back from the NFS/S3 storage to the appliance MySQL database. By checking this box, the configuration automatically imports all backup records.
Workloads that are not assigned to a still existing tenant will fail their import and need to be reassigned manually once the configuration is done.
At the end of the configurator forms is the option to activate the advanced settings. Activating this option provides the possibility to configure the Keystone endpoints used for the Datamover API and Trilio.
It is recommended to verify the datamover api settings against the ones configured during installation of the Trilio components.
If these endpoints already exist in Keystone, the values are prefilled and cannot be changed. In case a change is required, delete the old Keystone endpoints first.
Providing a URL with https activates the TLS-enabled configuration, which requires the upload of certificates and the corresponding private key.
Once all entries have been set and all validations are error free the configurator can be started.
Click Finish
Reconfirm in the pop-up that you want to start the configuration
Wait for the configurator to finish
Some elements of the configurator take time. Even when it looks like the configurator is stuck, please wait until the configurator finishes. Should the configurator not have finished after 6 hours, please contact Trilio Support for help.
The configurator uses Ansible and a few Trilio internal API calls. After each configuration block, or after the configurator has finished, it is possible to view the Ansible output.
At the end of a successful configuration the configurator forwards to the configured VIP.
After the Trilio VM has been configured and all components are installed, the license can be applied.
The license can be applied either through the admin-tab in Horizon or the CLI
To apply the license through Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to License
Click "Update License"
Click "Choose File"
choose license-file on client system
click "Apply"
To apply the license through the CLI, provide the license file as the following argument:
<license_file>
path to the license file
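A minimal CLI sketch, assuming the workloadmgr client's license-create subcommand is used to upload the license file (verify the exact subcommand with workloadmgr help):
workloadmgr license-create <license_file>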
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job scheduler should be disabled.
The mentioned gemfury repository should be accessible from TVault VM.
Add the gemfury repo on each dmapi container, horizon container, and compute node to get the updated packages.
Create a file /etc/apt/sources.list.d/fury.list and add the below line to it.
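The exact gemfury URL depends on the Trilio release; a sketch of the line with the repository path left as a placeholder:
deb [trusted=yes] https://apt.fury.io/<trilio-gemfury-repo>/ /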
Use the below commands to get a list of updated packages available on the configured repositories.
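For example, on an apt-based node:
apt update
apt list --upgradable | grep -E 'dmapi|tvault|workloadmgr'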
Login to dmapi container from the controller node using the below command.
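A sketch, assuming an LXC-based deployment where the container name contains 'dmapi':
lxc-attach -n $(lxc-ls -1 | grep dmapi)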
Take a backup of the following file on each dmapi container(s).
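Assuming the dmapi configuration lives at /etc/dmapi/dmapi.conf (adjust to your environment), a simple backup looks like this:
cp /etc/dmapi/dmapi.conf /etc/dmapi/dmapi.conf.bak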
Add the gemfury repo and upgrade the dmapi package using the below command. Select the appropriate package depending on the python version used.
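A sketch of the upgrade, assuming the package is named dmapi (python2) or python3-dmapi (python3):
apt install --only-upgrade python3-dmapi    # or: apt install --only-upgrade dmapi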
Restore the backed-up config files
Now restart and check the service tvault-datamover-api on controller.
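For example:
systemctl restart tvault-datamover-api
systemctl status tvault-datamover-api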
For the Horizon plugin upgrade, we need to upgrade below two packages.
python3-tvault-horizon-plugin python3-workloadmgrclient
Login to the horizon container from the controller node using the below command.
Add the gemfury repo and upgrade the tvault-horizon-plugin & workloadmgrclient packages using the below command. Select the appropriate package depending on the python version used.
Restart the apache2 service and verify the workloadmgrclient version using the below commands.
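A sketch using the package names listed above:
apt install --only-upgrade python3-tvault-horizon-plugin python3-workloadmgrclient
systemctl restart apache2
workloadmgr --version    # assumes the client supports --version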
Take a backup of following file on each compute node(s) for nfs storage backend.
Upgrade the tvault-contego package using the below command. Select the appropriate package depending on the python version used.
Restore the backed-up config files
Now restart and check the service tvault-contego on compute node(s).
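For example (assuming the python3 package name):
apt install --only-upgrade python3-tvault-contego
systemctl restart tvault-contego
systemctl status tvault-contego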
Please make sure all configuration files are unchanged.
Take a backup of following file on each compute node(s) for s3 storage backend.
Upgrade the tvault-contego package using below command. Select the appropriate package depending on the python version used.
Note: If you get a prompt while installing, choose the default selection.
Restore the backed-up config files
Now restart and check the service tvault-contego on compute node(s).
Please make sure all configuration files are unchanged.
Container trilio_datamover_api needs to be redeployed. Follow the below steps on all controller nodes:
Backup Existing conf files/folders
Stop and Remove existing container trilio_datamover_api
Pull trilio_datamover_api container image:
Run datamover api container.
Verify deployment
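A rough sketch of the redeploy flow on a Docker-based controller; the run command must reuse the same parameters (volumes, network, environment) as the original deployment, so only the surrounding steps are shown here:
docker stop trilio_datamover_api && docker rm trilio_datamover_api
docker pull <registry>/trilio/trilio-datamover-api:<container-tag>
# re-run the container with the original deployment parameters, then verify:
docker ps | grep trilio_datamover_api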
Login to horizon container (on controller node)
Add gemfury repo on horizon to get updated packages.
Create a file /etc/apt/sources.list.d/fury.list and add below line in it.
Use the below commands to get a list of the updated packages available on the repos.
For Horizon plugin upgrade, following packages need to be upgraded.
python3-tvault-horizon-plugin python3-workloadmgrclient
Run following commands to upgrade the tvault-horizon-plugin & workloadmgrclient packages. Select the appropriate package depending on python version used.
Restart docker container (from controller node) and verify the workloadmgrclient version (inside horizon container).
Container trilio_datamover needs to be redeployed. Follow the below steps on all Compute nodes:
Backup Existing conf files/folders
Stop & remove existing trilio_datamover container
Pull Trilio Datamover container image using the following command:
Run datamover container.
Verify Deployment of trilio_datamover
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job scheduler should be disabled.
The mentioned gemfury repository should be accessible from TVault VM.
Add trilio repo on each dmapi, horizon containers & compute node(s) to get updated packages.
Modify the file /etc/yum.repos.d/trilio.repo and add the below line to it.
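A sketch of the repository stanza, with the base URL left as a placeholder for the release-specific Trilio repository:
[trilio]
name=Trilio Repository
baseurl=<trilio-yum-repo-url>
enabled=1
gpgcheck=0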
Use the below commands to get a list of the updated packages available on the repos.
Login to dmapi container from controller node using below command.
Take a backup of following file on each dmapi container(s).
Add the trilio repo and upgrade the dmapi package using the below command. Select the appropriate package depending on the python version used.
You can check installed package using below command.
Upgrade the dmapi package using below command.
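For example, assuming the RPM is named dmapi:
rpm -qa | grep dmapi
yum upgrade dmapi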
Restore the backed-up config files
Now restart and check the service tvault-datamover-api on the controller.
We need to upgrade the below two packages to upgrade the horizon plugin.
tvault-horizon-plugin workloadmgrclient
Login to horizon container from controller node using below command.
Add the trilio repo and upgrade the tvault-horizon-plugin & workloadmgrclient packages using the below command. Select the appropriate package depending on the python version used.
Depending on the output of the above command, upgrade the appropriate packages.
Restart the httpd service and verify the workloadmgrclient version using below commands.
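For example:
systemctl restart httpd
workloadmgr --version    # assumes the client supports --version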
Take a backup of following file on each compute node(s) for nfs storage backend.
Add the trilio repo and upgrade the tvault-contego package using the below command. Select the appropriate package depending on the python version used.
Depending on the output of the above command, upgrade the appropriate packages.
Restore the backed-up config files
Now restart the service tvault-contego. Verify the status of the service and check the mount point.
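For example:
systemctl restart tvault-contego
systemctl status tvault-contego
df -h | grep -i trilio    # the backup target mount should be listed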
Take a backup of following file on each compute node(s) for s3 storage backend.
Upgrade the tvault-contego and s3fuse-plugin packages using below command. Select the appropriate package depending on the python version used.
Note: If you get a prompt while installing, choose the default selection.
Restore the backed-up config files
Now restart and check the service tvault-contego on compute node(s).
Container trilio_datamover_api needs to be redeployed. Follow the below steps on all controller nodes:
Backup Existing conf files/folders
Stop and Remove existing container trilio_datamover_api
Pull trilio_datamover_api container image
Run datamover api container.
Verify deployment
Container trilio_datamover needs to be redeployed. Follow the below steps on all Compute nodes:
Backup Existing conf files/folders
Stop & remove existing trilio_datamover container
Pull Trilio Datamover container image using the following command:
Run datamover container.
Verify Deployment of trilio_datamover
Login to horizon container
Add trilio repo on each controller(s) to get updated packages.
Create a file /etc/yum.repos.d/trilio.repo and add below line in it.
Use the below commands to get a list of the updated packages available on the repos.
For Horizon plugin upgrade, following packages need to be upgraded.
tvault-horizon-plugin workloadmgrclient
Select the appropriate package depending on python version used.
Depending on the output of the above command, upgrade the appropriate packages
Restart docker container (from controller node) and verify the workloadmgrclient version (inside horizon container).
Please refer to https://triliodata.atlassian.net/wiki/spaces/TRIL/pages/2055864321/Upgrade+of+Trilio+components+on+RHOSP
It is possible to configure Cinder to have multiple configurations and keyrings for CEPH.
In this case, the Trilio Datamover file needs to be extended with the CEPH information.
For Trilio to be able to work in such an environment it is required to put copies of each of these configurations and keyrings into a separate directory, which is then made known to the Trilio Datamover inside a [ceph]
block in the tvault-contego.conf.
A tvault-contego.conf file with the extended [ceph] block would look like this.
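A rough illustration only; the option name ceph_dir below is hypothetical, please take the exact option names from the Trilio reference tvault-contego.conf:
[ceph]
# hypothetical option pointing to the directory holding the copied ceph.conf files and keyrings
ceph_dir = /etc/ceph/trilio-ceph-configs/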
The Red Hat Openstack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Depending on whether the RHOSP environment is already installed or is being installed for the first time, different steps are required to deploy Trilio.
All commands need to be run as user 'stack'
The following commands upload the Trilio puppet module to the overcloud. The actual upload happens upon the next deployment.
Github branches to use:
Trilio 4.0 GA == stable/4.0
Trilio 4.0 SP1 == v4.0maintenance
Trilio consists of multiple services. Add these services to your roles_data.yaml.
In case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
This service needs to share the same role as the nova-api service.
In case of the pre-defined roles, the nova-api service runs on the role Controller.
In case of custom-defined roles, it is necessary to use the role the nova-api service is using.
Add the following line to the identified role:
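The service name below is the one typically shipped with Trilio's TripleO templates; please verify it against the roles_data examples in triliovault-cfg-scripts:
- OS::TripleO::Services::TrilioDatamoverApi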
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
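The service name below is the one typically shipped with Trilio's TripleO templates; please verify it against the roles_data examples in triliovault-cfg-scripts:
- OS::TripleO::Services::TrilioDatamover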
All commands need to be run as user 'stack'
Trilio containers are pushed to 'RedHat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull urls are given below.
Trilio Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:<container-tag>
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<container-tag>
Trilio horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<container-tag>
'4.0.92' is the Trilio 4.0 build version. Container tag: 4.0.92-rhosp13
'4.0.115' is the Trilio 4.0 SP1 build version. Container tag: 4.0.115-rhosp13
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml and overcloud_images.yaml with the RedHat Registry URLs to the right containers.
Populate the trilio_env.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to pull the Trilio containers to the undercloud registry. Trilio provides shell scripts which will pull the containers from 'registry.connect.redhat.com' to the undercloud and updates the trilio_env.yaml with the values for the datamover and datamover api containers.
The script assumes that the undercloud container registry is running on port 8787. If the registry is running on a different port, the script needs to be adjusted manually.
The changes can be verified in the trilio_env.yaml.
In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env.yaml with container urls for:
Trilio Datamover container
Trilio Datamover api container
In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.
Provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still it is recommended to verify the container URLs.
The following information is required additionally:
Network for the datamover api
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
In case of S3
s3 type {amazon_s3/ceph_s3}
Access key
Secret key
S3 Region name
S3 Bucket name
S3 Endpoint URL
S3 SSL Enabled {true/false}
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
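A sketch of such a command; the template paths and any additional environment files are placeholders for the ones used in the original deployment:
openstack overcloud deploy --templates \
  -e /home/stack/templates/trilio_env.yaml \
  -r /home/stack/templates/roles_data.yaml \
  -e <other environment files of the original deployment>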
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes.
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
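For example, on RHOSP13 (Docker-based); the exact container names can differ slightly per release:
docker ps | grep trilio    # controllers: datamover api and horizon containers; computes: datamover container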
Once RHOSP13 Installation steps have completed successfully, follow the instructions below to now configure the Trilio Appliance.
In RHOSP, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
Follow this documentation to configure Trilio Appliance.
It is necessary to first configure the Trilio appliance, before the steps of this section can be done.
RHOSP13 uses a different mount point in its datamover containers than other Openstack distributions. It is necessary to adjust the mountpoint of the Trilio Nodes to match this. If the mountpoints are not aligned, files created by the datamover and read by the Trilio appliance will not match in their paths, and backup and restore processes will fail.
Please follow these steps to align the mountpoints:
Edit /etc/workloadmgr/workloadmgr.conf file
Set parameter 'vault_data_directory' to '/var/lib/nova/triliovault-mounts'
create the directory for the mountpoint
assign the created directory to nova:nova
unmount the old mountpoint
Update the Trilio configurator
Restart the Trilio services
Verify the mountpoint has been configured correctly
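A consolidated sketch of the steps above; the old mountpoint path is an assumption and should be taken from the previous vault_data_directory value:
# in /etc/workloadmgr/workloadmgr.conf
# vault_data_directory = /var/lib/nova/triliovault-mounts
mkdir -p /var/lib/nova/triliovault-mounts
chown nova:nova /var/lib/nova/triliovault-mounts
umount /var/triliovault-mounts    # assumed previous mountpoint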
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors:
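For example:
openstack stack failures list overcloud --long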
Further commands that can help identify any errors:
In case the trilio datamover api container is failing to start or is stuck in restart, check these logs:
In case of Trilio datamover container failing to start or being stuck in restart, check these logs:
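The exact log directory and container names can vary per release; locating them on the affected node usually looks like this:
ls /var/log/containers/ | grep -i trilio
docker logs trilio_datamover_api    # container names are assumptions, check 'docker ps'
docker logs trilio_datamover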
If Cinder backend is Ceph it is necessary to manually add the ceph details to tvault-contego.conf on all compute nodes.
The file can be found here:
/var/lib/config-data/puppet-generated/triliodm/etc/tvault-contego/tvault-contego.conf
Add the following information:
The same block of information can be found in the nova.conf file.
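An illustrative sketch; the actual values must be copied from the [libvirt] section of nova.conf on the same compute node:
[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <secret-uuid-from-nova.conf>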
You may want to decommission Trilio as a service for your OpenStack. In this approach, you may not have the need to keep backups that were created and you would like to do a complete uninstall of the product.
Delete all instances of Trilio.
On nova controller node, uninstall tvault-contego-api module.
On each nova compute node, uninstall tvault-contego data mover.
Uninstall tvault-horizon-plugin from the horizon dashboard node
Login as Admin to horizon dashboard and delete triliovault user.
You can also clear all the contents from the NFS share or S3 bucket that is configured as backup target for Trilio.
To uninstall the Trilio components, you need to modify the deploy command and update the cloud using the new command.
Remove the trilio_env_osp16.yaml and roles_data.yaml from overcloud deploy command and then run it. After the changes the overcloud deployment command will look like this:
Note: Run all the commands with 'stack' user.
Clone updated repo
If your backup target is Ceph S3 with self-signed certs: If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/redhat-director-scripts/puppet/trilio/files'
Upload triliovault puppet module
Copy old trilio env file to new repository
Trilio containers are pushed to 'RedHat Container Registry'. Registry URL is 'registry.connect.redhat.com'. Following are the triliovault container pull urls.
Trilio container urls for RHOSP13:
Trilio container urls for RHOSP16:
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Identify which method you are using. Below we have explained all three methods to pull and configure trilioVault's container images for overcloud deployment.
If you are using the 'Remote Registry' method follow this section. You don't need to pull anything. You just need to populate the following container urls in the trilio env yaml.
If it's RHOSP13, populate the 'trilio_env.yaml' file with the triliovault container urls. The changes look like the following.
If it's RHOSP16, populate the 'trilio_env_osp16.yaml' file with the triliovault container urls. The changes look like the following.
If you are using 'local registry' on undercloud, follow this section.
Note: Run the following script from the undercloud node as the 'stack' user
If it's RHOSP13, run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the urls.
If it's RHOSP13, the user needs to set the horizon image url to the triliovault horizon image url in overcloud_images.yaml. The changes look like the following.
If it's RHOSP16, run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the urls.
The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image urls in trilio_env_osp16.yaml. Verify the changes using the following command.
If you are using 'Satellite Server' for container registry, follow this section.
Following are the pull urls for the Trilio containers. You need to pull these containers onto the Redhat Satellite server that you already have.
Trilio container urls for RHOSP13:
Trilio container urls for RHOSP16:
Pull the above containers as per your RHOSP release and push them to the Redhat Satellite server. For this, you can follow the same process as for other openstack containers.
After this, if it's RHOSP13, populate the 'trilio_env.yaml' file with the urls. The changes look like the following.
If it's RHOSP13, the user needs to set the horizon image url to the triliovault horizon image url in overcloud_images.yaml. The changes look like the following.
If it's RHOSP16, populate the 'trilio_env_osp16.yaml' file with the triliovault container urls. The changes look like the following.
Run your overcloud deploy command again. It will update the triliovault containers on the overcloud.
The command looks like the following.
1. Verify the triliovault datamover api container got updated on nova-api nodes
2. Verify the triliovault datamover container got updated on nova-compute nodes
3. Verify the triliovault horizon container got updated on horizon nodes
Please ensure to complete the upgrade of all the TVault components on Openstack controller & compute nodes before starting the rolling upgrade of TVM.
The mentioned gemfury repository should be accessible from TVault VM.
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job-scheduler should be disabled.
Take a backup of the conf files on all TVM nodes.
Activate the virtual environment on all TVM nodes.
Export new TVault version PYPI url:
Run the following command on all TVM nodes to upgrade s3fuse and its dependent packages.
Run the following command on all TVM nodes to upgrade tvault-configurator and its dependent packages.
Run the upgrade command on all TVM nodes to upgrade workloadmgr and its dependent packages (workloadmgrclient, contegoclient, etc)
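A rough sketch of these steps; the virtualenv path and the index URL mechanism are placeholders for the values of your appliance and the announced PyPI location:
source <path-to-tvm-virtualenv>/bin/activate
export PIP_INDEX_URL=<new TVault version PYPI url>    # assumed way of pointing pip at the new repository
pip install --upgrade s3fuse tvault-configurator workloadmgr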
Update wlm-cron service entries
If Reconfigure is NOT planned, please perform following steps on all TVM nodes, else skip.
Update the following two parameters in the wlm-cron.service systemd file (/etc/systemd/system/wlm-cron.service):
And once done, use the following command to reload the service file:
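For example:
systemctl daemon-reload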
Maria DB changes
If Reconfigure is planned
Remove Galera clustered-flag from all TVM nodes & proceed with reconfigure
If Reconfigure is NOT planned
Increase the max SQL connections limit by doing the following steps:
Edit /etc/my.cnf.d/server.cnf file on each TVM node
Add the parameter max_connections=5000 under [mysqld] section
Stop and Start MariaDB service on each node one by one
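A sketch of the change described above:
# /etc/my.cnf.d/server.cnf
[mysqld]
max_connections = 5000
# then, one node at a time:
systemctl stop mariadb && systemctl start mariadb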
Restore the backed-up config files
Restart the following services on all node(s) using the respective commands
Enable Global Job Scheduler
Restart pcs resources only on the primary node
Verify the status of the services
Note: tvault-object-store will run only if TVault configured with S3 backend storage
Additional check for wlm-cron on primary node
Check the mount point using “df -h” command
After the installation and configuration of Trilio for Openstack has succeeded, the following steps can be done to verify that the Trilio installation is healthy.
Trilio uses 4 main services on the Trilio Appliance:
wlm-api
wlm-scheduler
wlm-workloads
wlm-cron
Those can be verified to be up and running using the systemctl status
command.
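For example:
systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron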
The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.
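For example:
pcs status
systemctl status nginx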
Checking the availability of the Trilio API on the chosen endpoints is recommended.
The following example curl command lists the available workload-types and verifies that the connection is available and working:
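A sketch of such a request; the port and project ID are placeholders and the endpoint should be taken from the Keystone service catalog:
curl -k -X GET https://<TVM-VIP>:8780/v1/<project_id>/workload_types/detail \
  -H "X-Auth-Token: <token>" -H "Content-Type: application/json"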
Please check the API guide for more commands and how to generate the X-Auth-Token.
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
In order to check the dmapi service go to dmapi container which is residing on controller nodes and run below command
The datamover service is running on each compute node. Logging to compute node and run below command
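For example:
# inside the dmapi container
systemctl status tvault-datamover-api
# on the compute node
systemctl status tvault-contego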
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.
Run the following command on "nova-compute" nodes and make sure the container is in a started state.
Run the following command on horizon nodes and make sure the container is in a started state.
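For example (docker on RHOSP13, podman on RHOSP16); the exact container names can differ per release:
docker ps | grep trilio
podman ps | grep trilio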
Run the following command on MAAS nodes and make sure all trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in active state
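For example:
juju status | grep -i trilio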
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.
Please check dmapi endpoints on overcloud node.
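For example:
openstack endpoint list | grep dmapi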
Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.
Trilio ships with an Ubuntu-based File Manager image. This image includes a web based file manager application that helps with browsing and downloading files when a snapshot is mounted. Before you create a File Manager instance, the File Manager (a.k.a. Recovery Manager) image needs to be uploaded to Glance. The cloud administrator can upload the image to Glance and mark it public so that every tenant has access to the image.
The File Manager image includes a qemu guest agent. The Trilio data mover communicates with the qemu agent to map backup images and discover file systems in the backup images. Since the File Manager image includes the qemu guest agent, the image should be uploaded to Glance with the property hw_qemu_guest_agent=yes so Nova will create a qmp channel when creating an instance from this image.
The Trilio horizon plugin uses a special property tvault_recovery_manager; when set to yes it will filter out instances that don't have this property. If a tenant has a huge number of instances, setting this property on the image will help the Trilio horizon plugin present only the instances whose image has this property.
In this release only launching the File Manager instance with the virtio bus is supported, so please set hw_disk_bus to virtio. Use the following command to upload the image to Glance, mark it as a public image, and set the properties.
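A sketch of such an upload; the image file name and image name are placeholders:
openstack image create "File Manager" --disk-format qcow2 --container-format bare --public \
  --property hw_qemu_guest_agent=yes --property tvault_recovery_manager=yes \
  --property hw_disk_bus=virtio --file <file-manager-image>.qcow2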
The workloadmgr CLI client is provided as rpm and deb packages.
It has been tested against the following operating systems:
CentOS7, CentOS8
Ubuntu 18.04, Ubuntu 20.04
Installing the workloadmgr client will automatically install all required Openstack clients as well.
Further, the installation of the workloadmgr client will integrate the client into the global openstack python client, if available.
The required connection strings and package names can be found on the Trilio Dashboard under the Downloads tab.
The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.
The following steps need to be done to prepare the installation of the workloadmgr client:
Add required repositories
epel-release
for CentOS7: centos-release-openstack-stein
for CentOS8: centos-release-openstack-train
install base packages
yum -y install epel-release
for CentOS7: yum -y install centos-release-openstack-stein
for CentOS8: yum -y install centos-release-openstack-train
These repositories are required to fulfill the following dependencies:
On CentOS7 Python2: python-pbr,python-prettytable,python2-requests,python2-simplejson,python2-six,pytz,PyYAML,python2-openstackclient
On CentOS8 Python3: python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient
There are 2 possibilities for how the workloadmgr client packages can be installed.
The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.
The workloadmgr client can be directly downloaded using the following command:
For CentOS7:
wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm
For CentOS8: wget http://<TVM-IP>:8085/yum-repo/queens/python3-workloadmgrclient-<Trilio-Version>-<TVault-Release>.noarch.rpm
To identify the Trilio Version and Trilio release login into the Trilio Dashboard and check the upper left corner.
The yum package manager is used to install the workloadmgr client package:
yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm
An example installation can be found below:
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo
Enter the following details into the repository file:
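A sketch of the repository file contents, following the URL pattern of the appliance repository shown above:
[trilio]
name=Trilio Repository
baseurl=http://<TVM-IP>:8085/yum-repo/queens/
enabled=1
gpgcheck=0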
Install the workloadmgr client issuing the following command:
For CentOS7: yum install workloadmgrclient
For CentOS8: yum install python3-workloadmgrclient-el8
An example installation can be found below:
The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.
There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.
There are 2 possibilities for how the workloadmgr client packages can be installed.
The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.
The workloadmgr client can be directly downloaded using the following command:
For Python2:
curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python-workloadmgrclient_<Trilio-Version>_all.deb
For Python3: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb
To identify the Trilio Version and Trilio release login into the Trilio Dashboard and check the upper left corner.
The apt package manager is used to install the workloadmgr client package:
For Python2: apt-get install ./python-workloadmgrclient_<Trilio-Version>_all.deb -y
For Python3: apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y
An example installation can be found below:
To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:
Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list
Enter the following details into the repository file:
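A sketch of the repository line; the exact repository URL for your Trilio release is provided on the Trilio Dashboard Downloads tab:
deb [trusted=yes] <trilio-apt-repo-url> /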
run apt update
to make the new repository available.
The apt package manager is used to install the workloadmgr client package:
For Python2: apt-get install python-workloadmgrclient
For Python3: apt-get install python3-workloadmgrclient
An example installation can be seen below:
In case the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.
To reset the password to its default do the following:
The dashboard login will be reset to:
To change the Trilio GUI password do:
Login into the Trilio Dashboard
Click on "admin" in the upper right corner to open the submenu
Choose "Reset Password"
Set the new Trilio password
The Trilio appliance can be reconfigured at any time to adjust the Trilio cluster to any changes in the Openstack environment or the general backup solution.
To reconfigure the Trilio Cluster go to the "Configure" page. The configure page shows the current configuration of the Trilio cluster.
The configuration page also gives access to the ansible playbooks of the last successful configuration.
To start the reconfiguration of the Trilio Cluster click "Reconfigure" at the end of the table.
Follow the Configuring Trilio guide afterwards.
Once the Trilio configurator has started, it needs to run through successfully to continue to use Trilio.
The cluster will not roll back to its last working state in case of any errors.
The Trilio Appliance can be reinitialized, which will delete all workload related values from the Trilio database.
To reinitialize the Trilio Appliance do:
Login into the Trilio Dashboard
Click on "admin" in the upper right corner to open the submenu
Choose "Reinitialize"
Verify that you want to reinitialize the Trilio
Like all Openstack services, Trilio uses a technical Openstack user.
The password for this user can be changed through the Trilio web gui.
To change it use the Service Password option on the main menu.
The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.
The file search tab is part of every workload overview. To reach it follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload a file search shall be done in
Click the workload name to enter the Workload overview
Click File Search to enter the file search tab
A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.
To run a file search the following elements need to be decided and configured
Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.
VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.
The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.
The File Path has to start with a '/'
Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'
The file search does not go into deeper directories and always searches on the directory provided in the File Path
Example File Path for all files inside /etc : /etc/*
"Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.
There are 3 possibilities for a pre-filtering:
All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots
Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.
Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.
After the pre-filtering is done all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.
When no Snapshot is chosen the file search will not start.
To start a File Search the following elements need to be set:
A VM to search in has to be chosen
A valid File Path provided
At least one Snapshot to search in selected
Once those have been set click "Search" to start the file search.
Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.
After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.
For each found file or folder the following information are provided:
POSIX permissions
Amount of links pointing to the file or folder
User ID who owns the file or folder
Group ID assigned to the file or folder
Actual size in Bytes of the file or folder
Time of creation
Time of last modification
Time of last access
Full path to the found file or folder
Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.
A Snapshot is a single Trilio backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
The List of Snapshots for the chosen Workload contains the following additional information:
Creation Time
Name of the Snapshot
Description of the Snapshot
Total amount of Restores from this Snapshot
Total amount of succeeded Restores
Total amount of failed Restores
Snapshot Type
Snapshot Size
Snapshot Status
Snapshots are automatically created by the Trilio scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.
There are 2 possibilities to create a snapshot on demand.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that shall create a Snapshot
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Click "Create Snapshot"
Provide a name and description for the Snapshot
Decide between Full and Incremental Snapshot
Click "Create"
Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.
To reach the Snapshot Overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
The Snapshot Details Tab shows the most important information about the Snapshot.
Snapshot Name / Description
Snapshot Type
Time Taken
Size
Which VMs are part of the Snapshot
for each VM in the Snapshot
Instance Info - Name & Status
Security Group(s) - Name & Type
Flavor - vCPUs, Disk & RAM
Networks - IP, Networkname & Mac Address
Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size
Misc - Original ID of the VM
The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.
The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.
Creation Time
Last Update time
Snapshot ID
Workload ID of the Workload containing the Snapshot
Once a Snapshot is no longer needed, it can be safely deleted from a Workload.
The retention policy will automatically delete the oldest Snapshots according to the configured policy.
You have to delete all Snapshots to be able to delete a Workload.
Deleting a Trilio Snapshot will not delete any Openstack Cinder Snapshots. Those need to be deleted separately if desired.
There are 2 possibilities to delete a Snapshot.
To delete a single Snapshot through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Delete Snapshot"
Confirm by clicking "Delete"
To delete one or more Snapshots through the Snapshot overview do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Check the checkbox for each Snapshot that shall be deleted
Click "Delete Snapshots"
Confirm by clicking "Delete"
Ongoing Snapshots can be canceled.
Canceled Snapshots will be treated like errored Snapshots
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to cancel
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click "Cancel" on the same line as the identified Snapshot
Confirm by clicking "Cancel"
A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed. But each VM can only be part of one Workload.
To view all available workloads of a project inside Horizon do:
Login to Horizon
Navigate to Backups
Navigate to Workloads
The overview in Horizon lists all workloads with the following additional information:
Creation time
Workload Name
Workload description
Total amount of Snapshots inside this workload
Total amount of succeeded Snapshots
Total amount of failed Snapshots
Workload Type
Status of the Workload
To create a workload inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Click "Create Workload"
Provide Workload Name and Workload Description on the first tab "Details"
Choose between Serial or Parallel workload on the first tab "Details"
Choose the Policy if available to use on the first tab "Details"
Choose the VMs to protect on the second Tab "Workload Members"
Decide for the schedule of the workload on the Tab "Schedule"
Provide the Retention policy on the Tab "Policy"
Choose the Full Backup Interval on the Tab "Policy"
If required check "Pause VM" on the Tab "Options"
Click create
The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.
A workload contains a lot of information, which can be seen in the workload overview.
To enter the workload overview inside Horizon do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to show the details on
Click the workload name to enter the Workload overview
The Workload Details tab provides you with the most important general information about the workload:
Name
Description
Availability Zone
List of protected VMs including the information of qemu guest agent availability
The status of the qemu-guest-agent just shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.
It is possible to navigate to the protected VM directly from the list of protected VMs.
The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.
From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.
The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:
Scheduler Enabled / Disabled
Start Date / Time
End Date / Time
RPO
Time till next Snapshot run
Retention Policy and Value
Full Backup Interval policy and value
The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders on Snapshots without the need of a restore.
Please refer to the File Search User Guide to learn more about this feature.
The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:
Creation time
last update time
Workload ID
Workload Type
Workloads can be modified in all components to match changing needs.
Editing a Workload will set the User, who edits the Workload, as the new owner.
To edit a workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Modify the workload as desired - All parameters except workload type can be changed
Click "Update"
Once a workload is no longer needed it can be safely deleted.
To delete a workload do the following steps:
Login to Horizon
Navigate to the Backups
Navigate to Workloads
Identify the workload to be deleted
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Delete Workload"
Confirm by clicking "Delete Workload" yet again
Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.
It is highly recommended to use this feature only as a last resort, in case backups or restores are stuck without failing or a restore is required while a backup is running.
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to unlock
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Unlock Workload"
Confirm by clicking "Unlock Workload" yet again
In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating a Workload in such cases, it is possible to reset a Workload.
The Workload reset will:
Cancel all ongoing tasks
Delete all existing Openstack Trilio Snapshots from the protected VMs
recalculate the next Snapshot time
take a full backup at the next Snapshot
To reset a Workload do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to reset
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Reset Workload"
Confirm by clicking "Reset Workload" yet again
A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.
To reach the list of Restores for a Snapshot follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
To reach the detailed Restore overview follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restores tab
Identify the restore to show
Click the restore name
The Restore Details Tab shows the most important information about the Restore.
Name
Description
Restore Type
Status
Time taken
Size
Progress Message
Progress
Host
Restore Options
The Restore Options are the restore.json provided to Trilio.
List of VMs restored
restored VM Name
restored VM Status
restored VM ID
The Misc tab provides additional Metadata information.
Creation Time
Restore ID
Snapshot ID containing the Restore
Workload
Once a Restore is no longer needed, it can be safely deleted from a Workload.
Deleting a Restore will only delete the Trilio information about this Restore. No Openstack resources are getting deleted.
There are 2 possibilities to delete a Restore.
To delete a single Restore through the submenu follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Click "Delete Restore" in the line of the restore in question
Confirm by clicking "Delete Restore"
To delete one or more Restores through the Restore list do the following:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to show
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshots in the Snapshot list
Enter the Snapshot by clicking the Snapshot name
Navigate to the Restore tab
Check the checkbox for each Restore that shall be deleted
Click "Delete Restore" in the menu above
Confirm by clicking "Delete Restore"
Ongoing Restores can be canceled.
To cancel a Restore in Horizon follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to delete
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the Snapshot Name
Navigate to the Restore tab
Identify the ongoing Restore
Click "Cancel Restore" in the line of the restore in question
Confirm by clicking "Cancel Restore"
The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:
be located in the same cluster in the same datacenter
use the same storage domain
connect to the same network
have the same flavor
The user can't change any Metadata.
The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one VM still exists, the One Click Restore will fail.
The One Click Restore will automatically update the Workload to protect the restored VMs.
There are 2 possibilities to start a One Click Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click "One Click Restore" in the same line as the identified Snapshot
(Optional) Provide a name / description
Click "Create"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "One Click Restore"
(Optional) Provide a name / description
Click "Create"
The Selective Restore is the most complex restore Trilio has to offer. It allows adapting the restored VMs to the exact needs of the user.
With the selective restore the following things can be changed:
Which VMs are getting restored
Name of the restored VMs
Which networks to connect with
Which Storage domain to use
Which DataCenter / Cluster to restore into
Which flavor the restored VMs will use
The Selective Restore is always available and doesn't have any prerequisites.
The Selective Restore will automatically update the Workload to protect the created instance in case the original instance no longer exists.
There are 2 possibilities to start a Selective Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Selective Restore"
Configure the Selective Restore as desired
Click "Restore"
The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.
It allows the user to restore only the data of a selected Volume, which is part of a backup.
The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio is checking this by the saved Object-ID.
The Inplace Restore will not create any new Openstack resources. Please use one of the other restore options if new Volumes or VMs are required.
There are 2 possibilities to start an Inplace Restore.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot
Click on "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to be restored
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the Snapshot to be restored
Click the Snapshot Name
Navigate to the "Restores" tab
Click "Inplace Restore"
Configure the Inplace Restore as desired
Click "Restore"
The workloadmgr client CLI is using a restore.json file to define the restore parameters for the selective and the inplace restore.
An example for a selective restore of this restore.json is shown below. A detailed analysis and explanation is given afterwards.
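A minimal sketch of such a restore.json, assuming the commonly used field names; the exact schema should be taken from the examples shipped with the workloadmgr client:
{
  "name": "Selective Restore via CLI",
  "description": "restore of selected instances into a mapped network",
  "oneclickrestore": false,
  "restore_type": "selective",
  "type": "openstack",
  "openstack": {
    "instances": [ ... ],
    "restore_topology": false,
    "networks_mapping": { "networks": [ ... ] }
  }
}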
Before the exact details of the restore are to be provided it is necessary to provide the general metadata for the restore.
openstack
starts the exact definition of the restore
The Selective Restore requires a lot of information to be able to execute the restore as desired.
This information is divided into 3 components:
instances
restore_topology
networks_mapping
This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.
Even when VMs are not to be restored, they are required inside the restore.json to allow a clean execution of the restore.
Each instance requires the following information
All further information is only required when the instance is part of the restore.
To use the next free IP available in the network, set nics to an empty list [ ]
Using an empty list for nics combined with the Network Topology Restore, the restore will automatically restore the original IP address of the instance.
The root disk needs to be at least as big as the root disk of the backed up instance was.
The following example describes a single instance with all values.
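A sketch of one instance entry; the field names are assumptions based on the structure described above and should be checked against the shipped examples:
{
  "id": "<original-vm-id>",
  "include": true,
  "name": "restored-vm-01",
  "availability_zone": "nova",
  "nics": [],
  "flavor": { "ram": 2048, "vcpus": 2, "disk": 20, "ephemeral": 0, "swap": 0 },
  "vdisks": [ { "id": "<original-volume-id>", "new_volume_type": "<volume-type>", "availability_zone": "nova" } ]
}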
Do not mix network topology restore together with network mapping.
To activate a network topology restore set:
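For example, assuming the field name used in the sketch above:
"restore_topology": true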
To activate network mapping set:
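For example, assuming the field name used in the sketch above:
"restore_topology": false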
When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:
The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.
When the boot disk is at the same time a Cinder Disk, both values need to be set true.
No network information is required, but the fields have to exist with empty values for the restore to work.
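A rough sketch of the openstack block of an inplace restore.json under the same assumptions; the per-disk flag names are assumptions:
"openstack": {
  "instances": [
    { "id": "<vm-id>",
      "include": true,
      "restore_boot_disk": true,
      "vdisks": [ { "id": "<volume-id>", "restore_cinder_volume": true } ] }
  ],
  "networks_mapping": { "networks": [] }
}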
Starting with Trilio for Openstack 4.0, Trilio for Openstack allows in-place upgrades.
The following versions can be upgraded to each other:
Old | New |
---|---|
The upgrade process contains upgrading the Trilio appliance and the Openstack components and is dependent on the underlying operating system.
The Upgrade of Trilio for Canonical Openstack is managed through the charms.
The Trilio Appliance Dashboard gives an overview of the running services and their Status inside the Cluster.
It shows for each Trilio Appliance the Status of the following Trilio services:
wlm-workloads
wlm-scheduler
wlm-api
The wlm-scheduler and wlm-api services run on only one Trilio appliance at a time. Seeing them shown as inactive on the other nodes is not an error.
To give administrators an overview of the HA status, the dashboard also shows the service status for:
Pacemaker
RabbitMQ
MySQL Galera Cluster
Every Trilio appliance provides the possibility to download the following elements:
Shell-Scripts usable to install Trilio components
Workloadmgr Python client
Trilio Logs
To download the shell scripts:
Login into the Trilio web gui
Go to "Downloads"
Click on the script to be downloaded
tvault-contego-install.sh: used to install the nova-api extension and the datamover
tvault-contego-install.answer: used for the automated installation method
tvault-horizon-plugin-install.sh: used to install the Trilio Horizon plugin
To download the workloadmgr python client:
Login into the Trilio web gui
Go to "Downloads"
Copy the link provided for the workloadmgr client package
Download the workloadmgr client using any web browser or CLI tool (for example wget)
The workloadmgr client package is provided as deb or rpm.
It is possible to download the Trilio logs directly through the Trilio web gui.
To download logs through the Trilio web gui:
Login into the Trilio web gui
Go to "Logs"
Choose the log to be downloaded
Each log for every Trilio Appliance can be downloaded separately
or a zip of all logfiles can be created and downloaded
This will download the current log files. Already rotated logs need to be downloaded through SSH from the Trilio appliance directly. All logs, including rotated old logs, can be found at:
/var/log/workloadmgr/
<vm_id>
ID of the VM to be searched
<file_path>
Path of the file to search for
--snapshotids <snapshotid>
Search only in the specified snapshot IDs. snapshot-id: include the snapshot with this UUID
--end_filter <end_filter>
Displays the last snapshots, for example the last 10 snapshots. The default of 0 displays all snapshots.
--start_filter <start_filter>
Displays snapshots starting from the given position, for example starting from snapshot 5. The default of 0 starts from the first snapshot.
--date_from <date_from>
From date in the format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is used by default.
--date_to <date_to>
To date in the format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day (inclusive/exclusive results for date_from and date_to).
--workload_id <workload_id>
Filter results by workload_id
--tvault_node <host>
List all the snapshot operations scheduled on a tvault node (Default=None)
--date_from <date_from>
From date in the format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is used by default.
--date_to <date_to>
To date in the format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day (inclusive/exclusive results for date_from and date_to).
--all {True,False}
List all snapshots of all projects (valid for admin user only)
<workload_id>
ID of the workload to snapshot.
--full
Specify if a full snapshot is required.
--display-name <display-name>
Optional snapshot name. (Default=None)
--display-description <display-description>
Optional snapshot description. (Default=None)
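A sketch of triggering a Snapshot with these options; the workload-snapshot subcommand name is an assumption and the workload ID is a placeholder:

```bash
# Trigger a manual full snapshot of a workload (subcommand name assumed)
workloadmgr workload-snapshot <workload_id> \
    --full \
    --display-name "pre-maintenance" \
    --display-description "Manual full snapshot before maintenance"
```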
Please refer to the User Guide to learn more about Restores.
<snapshot_id>
ID of the snapshot to be shown
--output <output>
Option to get additional snapshot details, Specify --output metadata for snapshot metadata, Specify --output networks for snapshot vms networks, Specify --output disks for snapshot vms disks
<snapshot_id>
ID of the snapshot to be deleted
<snapshot_id>
ID of the snapshot to be canceled
--all {True,False}
List all workloads of all projects (valid for admin user only)
--nfsshare <nfsshare>
List all workloads of nfsshare (valid for admin user only)
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--workload-type-id
Workload Type ID is required
--source-platform
Workload source platform is required. The supported platform is 'openstack'
--instance
Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule
Specify the following key-value pairs for jobschedule. Specify the option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'
--metadata
Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign to the workload
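A sketch of a workload create call using these options; the subcommand name is an assumption and all IDs are placeholders:

```bash
# Create a workload protecting two instances with a daily schedule (subcommand name assumed)
workloadmgr workload-create \
    --display-name "db-tier" \
    --display-description "Protects the database VMs" \
    --workload-type-id <workload_type_id> \
    --source-platform openstack \
    --instance instance-id=<vm1-uuid> \
    --instance instance-id=<vm2-uuid> \
    --jobschedule start_time="2:30 PM" \
    --jobschedule interval="24 hr" \
    --jobschedule snapshots_to_retain=7 \
    --policy-id <policy_id>
```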
Please refer to the User Guide to learn more about those.
<workload_id>
ID/name of the workload to show
--verbose
option to show additional information about the workload
--display-name
Optional workload name. (Default=None)
--display-description
Optional workload description. (Default=None)
--instance <instance-id=instance-uuid>
Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
--jobschedule <key=key-name>
Specify the following key-value pairs for jobschedule. Specify the option multiple times to include multiple keys. If no timezone is specified, the local machine timezone is used by default. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'
--metadata <key=key-name>
Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
--policy-id <policy_id>
ID of the policy to assign
<workload_id>
ID of the workload to edit
All Snapshots need to be deleted before the workload gets deleted. Please refer to the User Guide to learn how to delete Snapshots.
<workload_id>
ID/name of the workload to delete
--database_only <True/False>
Set to True to delete from the database only. (Default=False)
<workload_id>
ID of the workload to unlock
<workload_id>
ID/name of the workload to reset
--snapshot_id <snapshot_id>
ID of the Snapshot to show the restores of
<restore_id>
ID of the restore to be shown
--output <output>
Option to get additional restore details. Specify --output metadata for restore metadata; further options are --output networks, --output subnets, --output routers, and --output flavors
<restore_id>
ID of the restore to be deleted
<restore_id>
ID of the restore to be deleted
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide the file path (relative or absolute) including the file name. By default the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json is read. You can use this file for reference or replace the values in this file.
<snapshot_id>
ID of the snapshot to restore.
--display-name <display-name>
Optional name for the restore.
--display-description <display-description>
Optional description for restore.
--filename <filename>
Provide the file path (relative or absolute) including the file name. By default the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json is read. You can use this file for reference or replace the values in this file.
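A sketch of invoking such a restore with a prepared file; the snapshot-selective-restore subcommand name is an assumption:

```bash
# Run a selective restore using a prepared restore.json (subcommand name assumed)
workloadmgr snapshot-selective-restore <snapshot_id> \
    --display-name "selective-restore-01" \
    --filename /home/stack/restore.json
```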
The restore.json requires detailed information about the backed-up resources. All required information can be gathered from the Snapshot details.
name
the name of the restore
description
the description of the restore
oneclickrestore <True/False>
Defines whether the restore is a One Click restore. Setting this to True overrides all other settings and starts a One Click Restore.
restore_type <oneclick/selective/inplace>
defines the restore that is intended
type openstack
defines that the restore is into an openstack cloud.
id
original id of the instance
include <True/False>
Set True when the instance shall be restored
name
new name of the instance
availability_zone
Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"
Nics
list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:
id
ID of the Neutron port to use
mac_address
Mac Address of the Neutron port
ip_address
IP Address of the Neutron port
network
network the port is assigned to. Contains the following information:
id
ID of the network the Neutron port is part of
subnet
subnet the port is assigned to. Contains the following information:
id
ID of the subnet the Neutron port is part of
vdisks
List of all Volumes that are part of the instance. Each Volume requires the following information:
id
Original ID of the Volume
new_volume_type
The Volume Type to use for the restored Volume. Leave empty for Volume Type None
availability_zone
The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova
flavor
Defines the Flavor to use for the restored instance. Contains the following information:
ram
How much RAM the restored instance will have (in MB)
ephemeral
How big the ephemeral disk of the instance will be (in GB)
vcpus
How many vcpus the restored instance will have available
swap
How big the Swap of the restored instance will be (in MB). Leave empty for none.
disk
Size of the root disk the instance will boot with
id
ID of the flavor that matches the provided information
networks
list of snapshot_network and target_network pairs
snapshot_network
the network backed up in the snapshot, contains the following:
id
Original ID of the network backed up
subnet
the subnet of the network backed up in the snapshot, contains the following:
id
Original ID of the subnet backed up
target_network
the existing network to map to, contains the following
id
ID of the network to map to
subnet
the subnet of the target network, contains the following:
id
ID of the subnet to map to
id
ID of the instance inside the Snapshot
restore_boot_disk
Set to True if the boot disk of that VM shall be restored.
include
Set to True if at least one Volume from this instance shall be restored
vdisks
List of disks, that are connected to the instance. Each disk contains:
id
Original ID of the Volume
restore_cinder_volume
set to true if the Volume shall be restored
4.0 GA (4.0.92)
4.0 SP1 (4.0.115)
Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.
As long as the Trilio Workloads are existing on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.
Notify users of Workloads being available
This procedure is designed to be applicable to all Openstack installations using Trilio. It is to be used as a starting point to develop the exact Disaster Recovery process of a specific environment.
In case the workloads shall be restored instead of only notifying the users, it is necessary to have a user in each Project that has the necessary privileges to restore.
Trilio incremental Snapshots use a backing file pointing to the prior backup, which makes every Trilio incremental backup a synthetic full backup.
Trilio is using qcow2 backing files for this feature:
As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.
Trilio is using the base64 hashing algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.
When the path of the backing file is not available on the Trilio appliance and the Compute nodes, restores of incremental backups will fail.
The tested and recommended method to make the backing files available is to create the required directory path and use mount --bind to make the path available for the backups.
Running the mount --bind command makes the necessary path available only until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit the fstab.
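A minimal sketch of this procedure; the NFS-Share, the resulting hash, and the mount locations are illustrative only:

```bash
# Calculate the base64 hash Trilio derives from an NFS-Share path
echo -n "10.10.2.20:/upstream" | base64
# -> MTAuMTAuMi4yMDovdXBzdHJlYW0=

# Create the expected directory path and bind an existing mount to it
mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
mount --bind /existing/nfs/mountpoint /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=

# To survive reboots, add a matching bind entry to /etc/fstab
```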
To use the workloadmgr CLI tool on the Trilio appliance, it is only necessary to activate the virtual environment of the workloadmgr service.
An rc-file to authenticate against Openstack is also required.
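A sketch of the required steps; the virtual environment path and rc-file name are placeholders, use the actual locations of the installation:

```bash
# Activate the workloadmgr virtual environment (path is a placeholder)
source /path/to/workloadmgr-virtualenv/bin/activate

# Source an rc-file with the Openstack credentials to authenticate
source ~/openstack-admin-rc.sh

# The workloadmgr CLI is now usable
workloadmgr workload-list
```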
Trilio is composed of multiple services, which can be checked in case of any errors.
Trilio is using 4 main services on the Trilio Appliance:
wlm-api
wlm-scheduler
wlm-workloads
wlm-cron
Those can be verified to be up and running using the systemctl status command.
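A quick check of all four services could look like this:

```bash
# Verify the Trilio services on the appliance
systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron
```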
The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.
Checking the availability of the Trilio API on the chosen endpoints is recommended.
The following example curl command lists the available workload-types and verifies that the connection is available and working:
Please check the API guide for more commands and how to generate the X-Auth-Token.
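As a minimal connectivity check along the same lines, the workloads endpoint documented in the API reference can be queried; address, tenant ID, and token handling are placeholders:

```bash
# Obtain a token and query the Trilio API (endpoint values are placeholders)
TOKEN=$(openstack token issue -f value -c id)
curl -k -H "X-Auth-Token: $TOKEN" \
    "https://<tvm_address>:8780/v1/<tenant_id>/workloads"
```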
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
To check the dmapi service, go to the dmapi container, which resides on the controller nodes, and run the command below.
The datamover service runs on each compute node. Log in to the compute node and run the command below.
The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.
Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.
Run the following command on "nova-compute" nodes and make sure the container is in a started state.
Run the following command on horizon nodes and make sure the container is in a started state.
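A sketch of these three checks; use docker ps on releases that still use Docker, and note that container names other than triliovault_datamover_api are assumptions:

```bash
# On the nova-api (controller) nodes
podman ps | grep triliovault_datamover_api

# On the nova-compute nodes
podman ps | grep triliovault_datamover

# On the horizon nodes
podman ps | grep horizon
```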
Run the following command on MAAS nodes and make sure all trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in an active state.
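A quick way to check this is filtering the juju status output, for example:

```bash
# Verify all Trilio units are active
juju status | grep trilio
```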
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.
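A sketch of these checks; the container names are assumptions and may differ between releases:

```bash
# On controller nodes: dmapi and the Trilio-enabled horizon container
docker ps | grep -E 'triliovault|horizon'

# On compute nodes: the Trilio datamover container
docker ps | grep triliovault_datamover
```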
Please check dmapi endpoints on overcloud node.
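For example:

```bash
# List the dmapi Keystone endpoints
openstack endpoint list | grep dmapi
```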
The Trilio Cluster contains multiple log files.
The main log is workloadmgr-workloads.log, which contains all logs about ongoing and past Trilio backup and restore tasks. It can be found at:
/var/log/workloadmgr/workloadmgr-workloads.log
The next important log is the workloadmgr-api.log, which contains all logs about API calls received by the Trilio Cluster. It can be found at:
/var/log/workloadmgr/workloadmgr-api.log
The log for the remaining service is the workloadmgr-scheduler.log, which contains all logs about the internal job scheduling in the Trilio Cluster.
/var/log/workloadmgr/workloadmgr-scheduler.log
The logs for the nova-api extension can be found here:
/var/log/nova/nova-api.log
The logs for the datamover can be found here:
/var/log/nova/tvault-contego.log
Troubleshooting inside a complex environment like Openstack can be very time-consuming. The following tips help to speed up the troubleshooting process and identify root causes.
Openstack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.
The Trilio Cluster is the Controller of Trilio. It receives all Workload related requests from the users.
Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.
During a backup process, the Trilio cluster is also responsible for gathering the metadata about the backed-up VMs and networks from the Openstack environment. It sends API calls to the Openstack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Cluster writes it as json files to the Backup Target.
The Trilio cluster also sends the Cinder Snapshot command.
During a restore process, the Trilio cluster reads the VM metadata from its database and uses the metadata to create the shell for the restore. It sends API calls to the Openstack environment to create the necessary resources.
The dmapi service is the connector between the Trilio cluster and the datamover running on the compute nodes.
The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova api database and requests the compute host of a provided VM.
Once the compute host has been identified, the dmapi forwards the command from the Trilio Cluster to the datamover running on the identified compute host.
The datamover is the Trilio service running on the compute nodes.
Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.
The datamover is controlling the freeze and thaw of VMs as well as the actual movement of the data.
Trilio is reading and writing on the Backup Target as nova:nova.
The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise backup or restores may fail with permission or file not found issues.
Alternative ways to achieve this are possible, as long as all required nodes can fully read and write as nova:nova on the Backup Target.
It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
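A sketch of such a verification; the mount path is illustrative:

```bash
# Compare the uid/gid of nova on the Trilio appliance and on the compute nodes
id nova

# Check ownership of the Backup Target mount and the workload directories
ls -ld /var/triliovault-mounts/
ls -l  /var/triliovault-mounts/

# Re-align ownership if required
chown -R nova:nova /var/triliovault-mounts/
```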
Trilio uses RBAC together with a dedicated backup trustee role to control which users are allowed to use Trilio features.
This trustee role is absolutely required and can not be overwritten using the admin role.
It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during creation of Workloads, backups or restores.
Trilio is creating Cinder Snapshots and temporary Cinder Volumes. The Openstack Quotas need to allow that.
Every disk that is backed up requires one temporary Cinder Volume.
Every Cinder Volume that is backed up requires two Cinder Snapshots. The second Cinder Snapshot is temporary and used to calculate the incremental.
Trilio’s tenant-driven backup service gives tenants control over backup policies. However, this can sometimes be too much control, and cloud admins may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and run only full backups at a 1-hour interval. If every tenant were to pursue this backup policy, it would put a severe strain on the cloud infrastructure. If, instead, the cloud admin defines predefined backup policies and each tenant is limited to those policies, cloud administrators can exert better control over the backup service.
Workload policy is similar to nova flavor where a tenant cannot create arbitrary instances. Instead, each tenant is only allowed to use the nova flavors published by the admin.
To see all available Workload policies in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
The following information is shown in the policy tab for each available policy:
Creation time
name
description
status
set interval
set retention type
set retention value
To create a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
Click new policy
provide a policy name on the Details tab
provide a description on the Details tab
provide the RPO in the Policy tab
Choose the Snapshot Retention Type
provide the Retention value
Choose the Full Backup Interval
Click create
To edit a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to edit
click on "Edit policy" at the end of the line of the chosen policy
edit the policy as desired - all values can be changed
Click "Update"
To assign or remove a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Add/Remove Projects"
Choose projects to add or remove by using the plus/minus buttons
Click "Apply"
To delete a policy in Horizon follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin
Navigate to Trilio
Navigate to Policy
identify the policy to assign/remove
click on the small arrow at the end of the line of the chosen policy to open the submenu
click "Delete Policy"
Confirm by clicking "Delete"
Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.
The following Quotas can be set:
Number of Workloads a Project is allowed to have
Number of VMs a Project is allowed to protect
Amount of Storage a Project is allowed to use on the Backup Target
The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train releases include the Horizon integration of the Quota feature.
Workload Quotas are managed like any other Project Quotas.
Login into Horizon as user with admin role
Navigate to Identity
Navigate to Projects
Identify the Project to modify or show the quotas on
Use the small arrow next to "Manage Members" to open the submenu
Choose "Modify Quotas"
Navigate to "Workload Manager"
Edit Quotas as desired
Click "Save"
Trilio is providing several different Quotas. The following command allows listing those.
Trilio 4.0 and 4.0 SP1 do not yet have the Quota Types Snapshots and Volumes integrated. Using those will not generate any Quotas a Tenant has to comply with.
The following command will show the details of a provided Quota Type.
The following command will create a Quota for a given project and set the provided value.
The high watermark is automatically set to 80% of the allowed value when set via Horizon.
A created Quota will generate an allowed_quota object with its own ID. This ID is needed when continuing to work with the created Quota.
The following command lists all Trilio Quotas set for a given project.
The following command shows the details about a provided allowed Quota.
The following command shows how to update the value of an already existing allowed Quota.
The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.
Trilio can notify users via E-Mail upon the completion of backup and restore jobs.
The E-Mail will be sent to the owner of the Workload.
To use the E-mail notifications, two requirements need to be met.
Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.
As the E-Mail will be sent to the owner of the Workload, the Openstack User who created the Workload needs to have an E-Mail address associated.
Trilio needs to know which E-Mail server to use, to send the E-mail notifications. Backup Administrators can do this in the "Backup Admin" area.
E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:
Login to Horizon
Navigate to the Backups
Navigate to Settings
Check/Uncheck the box for "Enable Email Alerts"
The following screenshots show example E-Mails sent by Trilio.
Trilio provides Backup-as-a-Service, which allows Openstack Users to manage and control their backups themselves. This doesn't eradicate the need for a Backup Administrator, who has an overview of the complete Backup Solution.
To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backups-Admin area in Horizon in addition to the API and CLI.
To access the Backups-Admin area follow these steps:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
The Backups-Admin area provides the following features.
It is possible to reduce the shown information to a single tenant, in order to see the exact impact the chosen Tenant has.
The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:
Storage Usage (nfs only)
Number of protected VMs compared to number of existing VMs
Number of currently running Snapshots
Status of TVault Nodes
Status of Contego Nodes
The status of the nodes is populated when the services are running and in good state.
This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.
The following information is shown:
User-ID that owns the Workload
Project that contains the Workload
Workload name
Workload Type
Availability Zone
Amount of protected VMs
Performance information about the last 30 backups
How much data was backed up (green bars)
How long the Backup took (red line)
Pie chart showing the amount of Full (Blue) Backups compared to Incremental (Red) Backups
Number of successful Backups
Number of failed Backups
Storage used by that Workload
Which Backup target is used
When the next Snapshot will run
The general interval of the Workload
Scheduler Status including a Switch to deactivate/activate the Workload
Administrators often need to figure out where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps with these tasks by providing the following information:
Storage used by a Tenant
VMs protected by a Tenant
It is possible to drill down to see the same information per workload and finally per protected VM.
The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.
This tab displays information about the Trilio cluster nodes. The following information is shown:
Node name
Node ID
Trilio Version of the node
IP Address
Active Controller Node (True/False)
Status of the Node
The Virtual IP is shown as its own node. It is typically shown directly below the current active Controller Node.
This tab displays information about the Trilio contego service. The following information is shown:
Service-Name
Compute Node the service is running on
Zone
Service Status from Openstack perspective (enabled/disabled)
Version of the Service
General Status
last time the Status was updated
This tab displays information about the backup target storage. It contains the following information:
Storage Name
Clicking on the Storage name provides an overview of all workloads stored on that storage.
Capacity of the storage
Total utilization of the storage
Status of the storage
Statistic information
Percentage of all storage that is used
Percentage of storage used for full backups
Amount of Full backups versus Incremental backups
Audit logs provide the sequence of workload-related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:
Time of the entry
What task has been done
Project the task has performed in
User that performed the task
The Audit log can be searched for strings, for example to find only entries done by a specific user.
Additionally, the shown timeframe can be changed as necessary.
The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.
A license validation is automatically done when opening the tab.
The following information about an active license is shown:
Organization (License name)
License ID
Purchase date - when the license was created
License Expiry Date
Maintenance Expiry Date
License value
License Edition
License Version
License Type
Description of the License
Evaluation (True/False)
Trilio will stop all activities once a license is no longer valid or expired.
The policy tab gives Administrators the possibility to work with workload policies.
Please refer to Workload Policies in the Admin guide to learn more about how to create and use Workload Policies.
This tab manages all global settings for the whole cloud. Trilio has two types of settings:
Email settings
Job scheduler settings.
These settings will be used by Trilio to send email reports of snapshots and restores to users.
Configuring the Email settings is a must-have to provide Email notification to Openstack users.
The following information is required to configure the email settings:
SMTP Server
SMTP username
SMTP password
SMTP port
SMTP timeout
Sender email address
A test email can be sent directly from the configuration page.
To work with email settings through CLI use the following commands:
To set an email setting for the first time or after deletion use:
To update an already set email setting through CLI use:
To show an already set email setting use:
To delete a set email setting use:
The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.
To activate/deactivate the Global Job Scheduler through the Backups-Admin area:
Login to Horizon using admin user.
Click on Admin Tab.
Navigate to Backups-Admin Tab.
Navigate to Trilio page.
Navigate to the Settings tab
Click "Disable/Enable Job Scheduler"
Check or Uncheck the box for "Job Scheduler Enabled"
Confirm by clicking on "Change"
The Global Job Scheduler can be controlled through CLI as well.
To get the status of the Global Job Scheduler use:
To deactivate the Global Job Scheduler use:
To activate the Global Job Scheduler use:
Trilio allows you to view or download a file from the Snapshot. Any changes to the files or directories while a Snapshot is mounted are temporary and are discarded when the Snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a Snapshot follow these steps.
Trilio ships with an Ubuntu-based file manager image. This image includes a web-based file manager application that helps with browsing and downloading files while a Snapshot is mounted. Before you create a File Manager instance, the File Manager (a.k.a. Recovery Manager) image needs to be uploaded to Glance.
Ask the Openstack administrator about the name and location of the File Recovery image.
To be able to mount a Snapshot a File Recovery Manager needs to be available in the tenant.
The File Recovery Manager is a normal Openstack instance, which requires at least a 40GB boot disk.
In addition, the following metadata entries are required upon boot:
hw_qemu_guest_agent=yes
tvault-recovery-manager=yes
It is recommended to provide the File Recovery Manager instance with a Floating IP and appropriate Security Groups. Appropriate would be to restrict access to ports 80, 443, and 22.
It is recommended to spin up the file recovery manager using a cloud-init User Data script or Key Pairs to configure SSH access.
The size of the Snapshot to mount has direct impact on the RAM needed by the File Recovery Manager. A 1TB backup for example requires 32GB RAM to mount successfully.
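A sketch of booting such an instance with the required metadata; image, flavor, network, and key names are placeholders:

```bash
# Boot the File Recovery Manager with the required metadata properties
openstack server create \
    --image <file-recovery-manager-image> \
    --flavor <flavor-with-40GB-disk-and-enough-RAM> \
    --network <tenant-network> \
    --key-name <keypair> \
    --property hw_qemu_guest_agent=yes \
    --property tvault-recovery-manager=yes \
    file-recovery-manager
```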
Mounting a Snapshot to a File Recovery Manager provides read access to all data located in the mounted Snapshot.
Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.
It is possible to run the mounting process against any Openstack instance. During this process the instance will be rebooted.
Always mount Snapshots to File Recovery Manager instances only.
There are 2 possibilities to mount a Snapshot in Horizon.
To mount a Snapshot through the Snapshot list follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Identify the searched Snapshot in the Snapshot list
Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu
Click "Mount Snapshot"
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
To mount a Snapshot through the File Search results follow these steps:
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the File Search tab
Identify the Snapshot to be mounted
Click "Mount Snapshot" for the chosen Snapshot
Choose the File Recovery Manager instance to mount to
Confirm by clicking "Mount"
Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:
tvault_recovery_manager=yes
The File Recovery Manager is a normal Openstack instance running a webserver. It can be accessed through
the Openstack VNC console (requires password access)
SSH
HTTP on port 80
The Webserver provides a File Commander application, which will show the mounted Snapshot and provides the possibility to navigate through it and download Files as necessary.
When using SSH or the Openstack VNC console, the mounted Snapshot can be found under:
/home/ubuntu/tvault-mounts/mounts/
Sometimes a Snapshot stays mounted for a longer time and it needs to be identified which Snapshots are mounted.
There are 2 possibilities to identify mounted Snapshots inside Horizon.
Login to Horizon
Navigate to Compute
Navigate to Instances
Identify the File Recovery Manager Instance
Click on the Name of the File Recovery Manager Instance to bring up its details
On the Overview tab look for Metadata
Identify the value for mounted_snapshot_url
The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.
This value only gets updated, when a new Snapshot is mounted.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.
Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows the Trilio retention policy to purge the formerly mounted Snapshot.
Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.
Login to Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload that contains the Snapshot to mount
Click the workload name to enter the Workload overview
Navigate to the Snapshots tab
Search for the Snapshot that has the option "Unmount Snapshot"
Click "Unmount Snapshot"
Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.
A schedule is defined by:
Status (Enabled/Disabled)
Start Day/Time
End Day
Hrs between 2 snapshots
To disable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
Uncheck "Enabled"
Click "Update"
To enable the scheduler of a single Workload in Horizon do the following steps:
Login to the Horizon
Navigate to Backups
Navigate to Workloads
Identify the workload to be modified
Click the small arrow next to "Create Snapshot" to open the sub-menu
Click "Edit Workload"
Navigate to the tab "Schedule"
check "Enabled"
Click "Update"
To modify a schedule the workload itself needs to be modified.
Please follow this procedure to modify the workload.
Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.
This system is used during all backup and restore features.
As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the Scheduler on the Workload list page.
Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.
This system is used during all backup and restore features.
Openstack Administrators should never have the need to directly work with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
Trusts can only be worked with via CLI
<trust_id> ID of the trust to show
<role_name>
Name of the role that trust is created for
--is_cloud_trust {True,False}
Set to True if creating a cloud admin trust. While creating the cloud trust, use the same user and tenant which were used to configure Trilio and keep the role admin.
This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.
The chosen scenario follows an actively used Trilio customer environment.
There are two Openstack clouds available, "Openstack Cloud A" and "Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations write their Backups to NFS targets. "Trilio A" writes to "NFS A1" and "Trilio B" writes to "NFS B1". The NFS Volumes used are synced against another NFS Volume on the other side. "NFS A1" syncs with "NFS B2" and "NFS B1" syncs with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.
This scenario will cover the Disaster Recovery of a single Workload and a complete Cloud. All processes are done by the Openstack administrator.
This runbook will assume that the following is true:
"Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license
"Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs
"Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side
Access to a user with the admin role permissions on domain level
One of the Openstack clouds is down/lost
For ease of writing, this runbook will assume from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".
When shared Tenant networks are used, beyond the Floating IP the following additional requirement applies: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.
In this scenario a single Workload can go through Disaster Recovery while both Clouds are still active. To do so, the following high-level process needs to be followed:
Copy the Workload directories to the configured NFS Volume
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Clean up
This process only shows how to get a Workload from "Openstack Cloud A" to "Openstack Cloud B". The vice versa process is similar.
As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".
It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.
Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:
In case the Workload ID is not known, the available Metadata inside the Workload directories can be used to identify the correct Workload.
The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.
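A sketch of this step, assuming both NFS Volumes are temporarily mounted on the Trilio VM; the mount points and the workload ID are placeholders:

```bash
# Copy the workload directory with all subdirectories and files to the target NFS Volume
rsync -a /mnt/nfs_a1/workload_<workload_id> /mnt/nfs_b1/

# Adjust ownership to nova:nova on the target
chown -R nova:nova /mnt/nfs_b1/workload_<workload_id>
```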
Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path> If even one character in the NFS-Share path is different between the provided NFS-Share paths a completely different hash value is generated.
Workloads, that have been moved between NFS-Shares, require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards a mount bind is used to make the workloads data accessible over the old and the new mount path. The following example shows the process of how to successfully identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.
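A sketch of these steps with the values used in this runbook:

```bash
# Calculate the hash of the source NFS-Share
echo -n "10.10.2.20:/upstream_source" | base64
# -> MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# On every compute node: create the source mount path and bind the existing target mount to it
mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 \
             /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# Example /etc/fstab entry to keep the bind mount across reboots
# /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl none bind 0 0
```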
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation, are from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share, that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
To allow project owners to work with the workloads as well, they get assigned to a user with the backup trustee role existing in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
The reassigned workload can be restored using Horizon following the procedure described here.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.
List all Snapshots of the workload to restore to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.
Delete the workload that got restored.
The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.
To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
This scenario covers the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or completely lost. To do the disaster recovery the following high-level process needs to be followed:
Reconfigure the Target Trilio installation
Make the right Mount-Paths available
Reassign the Workload
Restore the Workload
Reconfigure the Target Trilio installation back to the original one
Clean up
Before the Disaster Recovery Process can start, it is necessary to make the backups to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region will be on hold and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
To add the NFS-Vol2 to the Trilio Appliance cluster, Trilio can either be fully reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list. A space is not necessary, but can be used.
Write and close the workloadmgr.conf
Restart the wlm-workloads service
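A sketch of the change; the file location and the option name are assumptions, and the NFS paths follow this runbook:

```bash
# /etc/workloadmgr/workloadmgr.conf (assumed location and option name)
# vault_storage_nfs_export = 10.20.3.22:/upstream_target,10.10.2.20:/upstream_source

# Afterwards restart the workloads service
systemctl restart wlm-workloads
```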
Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To add the NFS B2 to the Trilio Datamovers manually the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Add NFS B2 to it as a comma-separated list. A space is not necessary, but can be used.
Write and close the tvault-contego.conf
Restart the tvault-contego service
Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.
The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which is calculated upon the configuration of a Trilio installation for each provided NFS-Share.
This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path> If even one character in the NFS-Share path is different between the provided NFS-Share paths a completely different hash value is generated.
Workloads, that have been moved between NFS-Shares, require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.
Afterwards a mount bind is used to make the workloads data accessible over the old and the new mount path. The following example shows the process of how to successfully identify the necessary mount points and create the mount bind.
The used hash values can be calculated using the base64 tool in any Linux distribution.
Based on the identified base64 hash values the following paths are required on each Compute node.
/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
and
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
In the scenario of this runbook the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.
To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.
Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.
To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.
Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation, are from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share, that is not assigned to any existing project in the Cloud the Trilio installation is protecting.
The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.
To allow project owners to work with the workloads as well, they get assigned to a user with the backup trustee role existing in the target project.
Now that all information has been gathered, the workload can be reassigned to the target project.
After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.
The reassigned workload can be restored using Horizon following the procedure described here.
This runbook will continue on the CLI only path.
To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.
List all Snapshots of the workload to restore to identify the snapshot to restore
Get Snapshot Details with network details for the desired snapshot
Get Snapshot Details with disk details for the desired Snapshot
The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
To do the actual restore use the following command:
To verify the success of the restore from a Trilio perspective the restore status is checked.
After the Disaster Recovery Process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.
During the reconfiguration process all backups of the Target Region will be on hold and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.
To remove the NFS-Vol2 from the Trilio Appliance cluster, Trilio can either be fully reconfigured to use only the original NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.
Edit the workloadmgr.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the workloadmgr.conf
Restart the wlm-workloads service
Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.
To remove the NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.
Edit the tvault-contego.conf
Look for the line defining the NFS mounts
Delete NFS B2 from the comma-separated list
Write and close the tvault-contego.conf
Restart the tvault-contego service
After the Disaster Recovery Process has been successfully completed and the Trilio installation has been reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.
The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.
To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.
Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.
This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase
After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:
Openstack User - The Openstack User-ID is assigned to a Workload
Openstack Project - The Openstack Project-ID is assigned to a Workload
Openstack Cloud - The Trilio Serviceuser-ID is assigned to a Workload
Openstack Users can update the User ownership of a Workload by modifying the Workload.
This ownership secures, that only the owners of a Workload are able to work with it.
Openstack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.
The Workload import allows importing Workloads existing on the Backup Target into the Trilio database.
The Workload import is designed to import Workloads, which are owned by the Cloud.
It will not import or list any Workloads that are owned by a different cloud.
To get a list of importable Workloads use the following CLI command:
--project_id <project_id>
List only workloads belonging to the given project.
To import Workloads into the Trilio database use the following CLI command:
--workloadids <workloadid>
Specify workload ids to import only specified workloads. Repeat option for multiple workloads.
The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage, but not known to the Trilio installation, is considered orphaned.
A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that were migrated from a different cloud.
The following CLI command provides the list of orphaned workloads:
Running this command against a Backup Target with many Workloads can take some time. Trilio reads the complete Storage and verifies every found Workload against the Workloads known in the database.
Openstack administrators are able to reassign a Workload to a new owner. This includes the possibility to migrate a Workload from one cloud to another or between projects.
Reassigning a Workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation is not updated.
Use the following CLI command to reassign a Workload:
A sample mapping file with explanations is shown below:
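The original sample file is not reproduced here; the sketch below is only illustrative and assumes that the keys of the map file mirror the reassign CLI options described later in this document.

```yaml
# Illustrative reassign map file - key names are assumptions derived from the CLI options
reassign_mappings:
  - old_tenant_ids: ["<old_tenant_id_1>", "<old_tenant_id_2>"]  # projects the Workloads belonged to
    new_tenant_id: "<new_tenant_id>"                            # single target project
    user_id: "<target_user_id>"                                 # new owning user of the Workloads
    workload_ids: ["<workload_id>"]                             # omit to reassign all Workloads of the old tenants
    migrate_cloud: False                                        # True when taking over Workloads from another cloud
```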
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail
Lists Restores with details
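A minimal request against this endpoint could look like the following sketch; the token, address and IDs are placeholders and only the headers documented for this endpoint are used.

```bash
# List all Restores of a project, including details
curl -X GET "https://$tvm_address:8780/v1/$tenant_id/restores/detail" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"
```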
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Provides all details about the specified Restore
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>
Deletes the specified Restore
GET
https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel
Cancels an ongoing Restore
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information.
The One-Click restore requires a body to provide all necessary information in json format.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Starts a restore according to the provided information
The One-Click restore requires a body to provide all necessary information in json format.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots
Lists all Snapshots.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Creates a Snapshot of the given Workload. When creating a Snapshot it is possible to provide additional information.
This Body is completely optional
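Because the body is optional, a Snapshot can be triggered without one; the sketch below uses placeholders and omits the optional body fields.

```bash
# Trigger a Snapshot of the given Workload without the optional body
curl -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/$workload_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json"
```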
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Shows the details of a specified Snapshot
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>
Deletes a specified Snapshot
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel
Cancels the Snapshot process of a given Snapshot
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Provides the list of all workloads for the given tenant/project id
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads
Creates a workload in the provided Tenant/Project with the given details.
Workload create requires a Body in json format, to provide the requested information.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Shows all details of a specified workload
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Modifies a workload in the provided Tenant/Project with the given details.
Workload modify requires a Body in json format, to provide the information about the values to modify.
All values in the body are optional.
Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>
Deletes the specified Workload.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock
Unlocks the specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset
Resets the defined workload
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types
Lists all available Quota Types
GET
https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>
Requests the details of a Quota Type
POST
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Creates an allowed Quota with the given parameters
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>
Lists all allowed Quotas for a given project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>
Shows details for a given allowed Quota
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>
Updates an allowed Quota with the given parameters
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>
Deletes a given allowed Quota
E-Mail Notification Settings are managed through the settings API. Use the values from the following table to set up Email Notifications through the API.
Setting name | Settings Type | Value type | example |
---|---|---|---|
POST
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Creates a Trilio setting.
Setting create requires a Body in json format, to provide the requested information.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Shows all details of a specified setting
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/settings
Modifies the provided setting with the given details.
Setting modify requires a Body in json format, to provide the information about the values to modify.
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>
Deletes the specified setting.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Requests the list of available Workload Policies
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Requests the details of a given policy
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>
Requests the lists of Policies assigned to a Project.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy
Creates a Policy with the given parameters
PUT
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Updates a Policy with the given information
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>
Assigns a Policy to one or more Projects
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>
Deletes a given Policy
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount
Mounts a Snapshot to the provided File Recovery Manager
GET
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list
Provides the list of all Snapshots mounted in a Tenant
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list
Provides the list of all Snapshots mounted in a specified Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount
Unmounts a Snapshot of the provided File Recovery Manager
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause
Disables the scheduler of a given Workload
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume
Enables the scheduler of a given Workload
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Scheduler trust for a given Workload
All of the following API commands require an authentication token of a user with the admin role in the authentication project.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler
Requests the status of the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable
Requests disabling the Global Job Scheduler
POST
https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable
Requests enabling the Global Job Scheduler
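A sketch of the three calls, using a token of a user with the admin role; all values are placeholders.

```bash
# Show the current status of the Global Job Scheduler
curl -X GET "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler" -H "X-Auth-Token: $admin_token"

# Disable and re-enable the Global Job Scheduler
curl -X POST "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler/disable" -H "X-Auth-Token: $admin_token"
curl -X POST "https://$tvm_address:8780/v1/$tenant_id/global_job_scheduler/enable" -H "X-Auth-Token: $admin_token"
```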
POST
https://$(tvm_address):8780/v1/$(tenant_id)/search
Starts a File Search with the given parameters
GET
https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>
Provides the status and results of a given File Search
Openstack Administrators should never need to work directly with the trusts created.
The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Provides the lists of trusts for the given Tenant.
POST
https://$(tvm_address):8780/v1/$(tenant_id)/trusts
Creates a trust in the provided Tenant/Project with the given details.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Shows all details of a specified trust
DELETE
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>
Deletes the specified trust.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>
Validates the Trust of a given Workload.
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads
Provides the list of all importable workloads
GET
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads
Provides the list of all orphaned workloads
POST
https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads
Imports all or the provided workloads
<policy_id>
Id of the policy to show
--policy-fields <key=key-name>
Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval': '1 hr'; 'retention_policy_type': 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'; 'retention_policy_value': '30'; 'fullbackup_interval': '-1' (Enter the number of incremental snapshots between full backups, 1 to 999, '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--display-description <display_description>
Optional policy description. (Default=No description)
--metadata <key=keyname>
Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
<display_name>
The name the policy will get
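Putting these options together, a policy creation could look like the following sketch; the subcommand name is an assumption, the options are the ones documented above.

```bash
# Create an hourly policy that keeps 30 Snapshots and takes a full Snapshot every 2 incrementals
workloadmgr policy-create \
  --policy-fields interval='1 hr' \
  --policy-fields retention_policy_type='Number of Snapshots to Keep' \
  --policy-fields retention_policy_value='30' \
  --policy-fields fullbackup_interval='2' \
  --display-description 'Hourly backups, 30 Snapshots retained' \
  Hourly-Policy
```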
--display-name <display-name>
Name of the policy
--display-description <display_description>
Optional policy description. (Default=No description)
--policy-fields <key=key-name>
Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval': '1 hr'; 'retention_policy_type': 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'; 'retention_policy_value': '30'; 'fullbackup_interval': '-1' (Enter the number of incremental snapshots between full backups, 1 to 999, '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
--metadata <key=keyname>
Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value
<policy_id>
ID of the policy to update
--add_project <project_id>
ID of the project to assign policy to. Use multiple times to assign multiple projects.
--remove_project <project_id>
ID of the project to remove policy from. Use multiple times to remove multiple projects.
<policy_id>
policy to be assigned or removed
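A sketch of assigning and removing the policy in a single call; the subcommand name is an assumption, the options are as documented above.

```bash
# Assign the policy to one project and remove it from another
workloadmgr policy-assign \
  --add_project <project_id_to_add> \
  --remove_project <project_id_to_remove> \
  <policy_id>
```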
<policy_id>
ID of the policy to be deleted
<quota_type_id>
ID of the Quota Type to show
<quota_type_id>
ID of the Quota Type to be created
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<project_id>
Project to list the Quotas from
<allowed_quota_id>
ID of the allowed Quota to show.
<allowed_value>
Value to set for this Quota Type
<high_watermark>
Value to set for High Watermark warnings
<project_id>
Project to assign the quota to
<allowed_quota_id>
ID of the allowed Quota to update
<allowed_quota_id>
ID of the allowed Quota to delete
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
settings type set to email_settings
--is-public
sets if the setting can be seen publicly set to False
--is-hidden
sets if the setting will always be hidden set to False
--metadata
sets if the setting can be seen publicly Not required for email settings
<name>
name of the setting. Take from the list below.
<value>
value of the setting. Take the value type from the list below.
--description
Optional description (Default=None) Not required for email settings
--category
Optional setting category (Default=None) Not required for email settings
--type
settings type set to email_settings
--is-public
sets if the setting can be seen publicly set to False
--is-hidden
sets if the setting will always be hidden set to False
--metadata
sets if the setting can be seen publicly Not required for email settings
<name>
name of the setting Take from the list below
<value>
value of the setting Take value type from the list below
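As a sketch, creating one of the e-mail notification settings listed further below could look like this; the subcommand name is an assumption, the options and the name/value pair follow the description above.

```bash
# Create the SMTP port setting used for e-mail notifications
workloadmgr setting-create \
  --type email_settings \
  --is-public False \
  --is-hidden False \
  smtp_port 587
```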
--get_hidden
Show hidden settings (True) or not (False) if set. Not required for email settings; use False.
<setting_name>
name of the setting to show Take from the list below
<setting_name>
name of the setting to delete Take from the list below
Setting name | Value type | example |
---|---|---|
<snapshot_id>
ID of the Snapshot to be mounted
<mount_vm_id>
ID of the File Recovery Manager instance to mount the Snapshot to.
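A sketch of the mount call using the two documented arguments; the subcommand name is an assumption.

```bash
# Mount a Snapshot to a running File Recovery Manager instance
workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
```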
--workloadid <workloadid>
Restrict the list to snapshots in the provided workload
<snapshot_id>
ID of the snapshot to unmount.
--workloadid <workloadid>
Requires at least one workload ID. Specify the ID of the workload whose scheduler should be disabled. Specify the option multiple times to include multiple workloads: --workloadid <workloadid> --workloadid <workloadid>
--workloadid <workloadid>
Requires at least one workload ID. Specify the ID of the workload whose scheduler should be enabled. Specify the option multiple times to include multiple workloads: --workloadid <workloadid> --workloadid <workloadid>
<workload_id>
ID of the workload to validate
<trust_id>
ID of the trust to be deleted
--migrate_cloud {True,False}
Set to True if you want to list workloads from other clouds as well. Default is False.
--generate_yaml {True,False}
Set to True if you want to generate an output file in YAML format, which can further be used as input for the workload reassign API.
--old_tenant_ids <old_tenant_id>
Specify the old tenant IDs from which workloads need to be reassigned to the new tenant. Specify multiple times to choose Workloads from multiple tenants.
--new_tenant_id <new_tenant_id>
Specify the new tenant ID to which workloads need to be reassigned from the old tenant. Only one target tenant can be specified.
--workload_ids <workload_id>
Specify the workload_ids which need to be reassigned to the new tenant. If not provided, all the workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.
--user_id <user_id>
Specify the user ID to which workloads need to be reassigned from the old tenant. Only one target user can be specified.
--migrate_cloud {True,False}
Set to True if you want to reassign workloads from other clouds as well. Default is False.
--map_file
Provide the file path (relative or absolute) including the file name of the reassign map file. It provides the list of old workloads mapped to new tenants. The format of this file is YAML.
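A sketch of a reassign call that moves all Workloads of one tenant to a new tenant and user; the subcommand name is an assumption, the options are the ones documented above.

```bash
# Reassign every Workload of the old tenant to the new tenant and owner
workloadmgr workload-reassign-workloads \
  --old_tenant_ids <old_tenant_id> \
  --new_tenant_id <new_tenant_id> \
  --user_id <user_id> \
  --migrate_cloud False
```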
Setting name | Value type | example |
---|---|---|
smtp_default___recipient | String | admin@example.net |
smtp_default___sender | String | admin@example.net |
smtp_port | Integer | 587 |
smtp_server_name | String | Mailserver_A |
smtp_server_username | String | admin |
smtp_server_password | String | password |
smtp_timeout | Integer | 10 |
smtp_email_enable | Boolean | True |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restores from |
snapshot_id | string | ID of the Snapshot to fetch the Restores from |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the restore from |
restore_id | string | ID of the restore to show |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
restore_id | string | ID of the Restore to be deleted |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of the Trilio service |
tenant_id | string | ID of the Tenant/Project to fetch the Restore from |
restore_id | string | ID of the Restore to cancel |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
snapshot_id | string | ID of the snapshot to restore |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Projects to fetch the Snapshots from |
host | string | host name of the TVM that took the Snapshot |
workload_id | string | ID of the Workload to list the Snapshots of |
date_from | string | starting date of Snapshots to show, Format: YYYY-MM-DDTHH:MM:SS |
 | string | ending date of Snapshots to show, Format: YYYY-MM-DDTHH:MM:SS |
all | boolean | admin role required - True lists all Snapshots of all Workloads |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot in |
workload_id | string | ID of the Workload to take the Snapshot in |
full | boolean | True creates a full Snapshot |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot from |
snapshot_id | string | ID of the Snapshot to show |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to find the Snapshot in |
snapshot_id | string | ID of the Snapshot to delete |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to find the Snapshot in |
snapshot_id | string | ID of the Snapshot to cancel |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to fetch the workloads from |
nfs_share | string | lists workloads located on a specific nfs-share |
all_workloads | boolean | admin role required - True lists workloads of all tenants/projects |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to create the workload in |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project where to find the workload in |
workload_id | string | ID of the Workload to modify |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to delete |
database_only | boolean | True leaves the Workload data on the Backup Target |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to unlock |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |

Name | Type | Description |
---|---|---|
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
workload_id | string | ID of the Workload to reset |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project to work in |
quota_type_id | string | ID of the Quota Type to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | ID of the Tenant/Project to create the allowed Quota in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | ID of the Tenant/Project to list allowed Quotas from |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to update |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to work in |
<allowed_quota_id> | string | ID of the allowed Quota to delete |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
smtp_default___recipient | email_settings | String | admin@example.net |
smtp_default___sender | email_settings | String | admin@example.net |
smtp_port | email_settings | Integer | 587 |
smtp_server_name | email_settings | String | Mailserver_A |
smtp_server_username | email_settings | String | admin |
smtp_server_password | email_settings | String | password |
smtp_timeout | email_settings | Integer | 10 |
smtp_email_enable | email_settings | Boolean | True |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work with |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
setting_name | string | Name of the setting to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work with |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Workload in |
setting_name | string | Name of the setting to delete |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
policy_id | string | ID of the Policy to show |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
project_id | string | ID of the Project to fetch assigned Policies from |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
policy_id | string | ID of the Policy to update |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to do the restore in |
policy_id | string | ID of the Policy to assign |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
policy_id | string | ID of the Policy to delete |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant to search for mounted Snapshots |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgr |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant to search for mounted Snapshots |
workload_id | string | ID of the Workload to search for mounted Snapshots |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgr |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project the Snapshot is located in |
snapshot_id | string | ID of the Snapshot to dismount |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to run the search in |
search_id | string | ID of the File Search to get |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to create the Trust for |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to show |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant where to find the Trust in |
trust_id | string | ID of the Trust to delete |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication Token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Project/Tenant where to find the Workload |
workload_id | string | ID of the Workload to validate the Trust of |
X-Auth-Project-Id | string | Project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work in |
migrate_cloud | boolean | True also shows Workloads from different clouds |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of the Trilio Service |
tenant_id | string | ID of the Tenant/Project to take the Snapshot in |
X-Auth-Project-Id | string | Project to run authentication against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project the Snapshot is located in |
snapshot_id | string | ID of the Snapshot to mount |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of Tenant/Project the Workload is located in |
workload_id | string | ID of the Workload to disable the Scheduler in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio service |
tenant_id | string | ID of the Tenant/Project to run the search in |
X-Auth-Project-Id | string | Project to authenticate against |
X-Auth-Token | string | Authentication token to use |
Content-Type | string | application/json |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_name | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant / Project to fetch the trusts from |
is_cloud_admin | boolean | true/false |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |
tvm_address | string | IP or FQDN of Trilio Service |
tenant_id | string | ID of the Tenant/Project to work in |
project_id | string | restricts the output to the given project |
X-Auth-Project-Id | string | project to run the authentication against |
X-Auth-Token | string | Authentication token to use |
Accept | string | application/json |
User-Agent | string | python-workloadmgrclient |