Trilio for OpenStack Architecture

Backup-as-a-Service

Trilio is a data protection project providing Backup-as-a-Service.

Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery functions for tenant workloads. Trilio is very similar to other OpenStack services, including nova, cinder, and glance, and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.

Main Components

Trilio has four main software components:

  1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM hosts.

  2. Trilio API is a python module that is installed on all OpenStack controller nodes where the nova-api service is running.

  3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

  4. Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

Service Endpoints

Trilio is both a provider and a consumer in the OpenStack ecosystem. It uses other OpenStack services such as nova, cinder, glance, neutron, and keystone and provides its own service to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal and admin URLs.

Network Topology

This figure represents a typical network topology. Trilio exposes its public URL endpoint on the public network, and Trilio virtual appliances and data movers typically use either the internal network or a dedicated backup network for storing and retrieving backup images from the backup store.

About Trilio for OpenStack

Trilio, by TrilioData, is a native OpenStack service that provides policy-based comprehensive backup and recovery for OpenStack workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments, including NFS and AWS S3 compatible storage. With Trilio and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection and integrity.

With the use of Trilio's VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload consisting of compute resources, network configurations and storage data as one unit. It also takes incremental backups that only capture the changes that were made since the last backup. Incremental snapshots save time and storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as below:

  1. Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume and the incremental backups only include the changed blocks of data since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.

  2. Faster and more reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just a click of a button.

  3. Easy migration of workloads between clouds. Trilio captures all the details of your application, and hence our migration includes your entire application stack without leaving anything to guesswork.

  4. Lower Total Cost of Ownership through policy and automation. Our tenant-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

Trilio architecture overview
Service endpoints overview
Example network topology

Uninstall Trilio

The uninstallation of Trilio depends on the OpenStack distribution it is installed in. The high-level process is the same for all distributions.

  1. Uninstall the Horizon Plugin or the Trilio Horizon container

  2. Uninstall the datamover-api container

  3. Uninstall the datamover

  4. Delete the Trilio Appliance Cluster

Advanced Ceph configuration

Ceph is the most common open-source solution to provide block storage through OpenStack Cinder.

Ceph is a very flexible solution. Some of Ceph's configuration possibilities require additional steps in the Trilio solution.

Uninstalling from Canonical OpenStack

Trilio is not providing the Juju Charms to deploy Trilio 4.1 in Canonical OpenStack. At the time of release, the Juju Charms have not yet been updated to Trilio 4.1. We will update this page once the Charms are available.

Installing on Canonical Openstack

Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.

Those JuJu Charms are publicly available as Open Source Charms.

Trilio does not provide the Juju Charms to deploy Trilio 4.1 in Canonical OpenStack. These are developed and maintained by Canonical.

Preparing the installation

It is recommended to think about the following elements prior to the installation of Trilio for Openstack.

Tenant Quotas

Trilio uses Cinder snapshots for calculating full and incremental backups. For full backups, Trilio creates Cinder snapshots for all the volumes in the backup job. It then leaves these Cinder snapshots behind for calculating the incremental backup image during the next backup. During an incremental backup operation it creates new Cinder snapshots, calculates the changed blocks between the new snapshots and the old snapshots that were left behind during the full/previous backups, and then deletes the old snapshots but leaves the newly created snapshots behind. So, it is important that each tenant who is using Trilio backup functionality has a sufficient Cinder snapshot quota to accommodate these additional snapshots. The guideline is to add 2 snapshots for every volume that is added to backups to the volume snapshot quota for that tenant. You may also increase the volume quota for the tenant by the same amount, because Trilio briefly creates a volume from a snapshot to read data from the snapshot for backup purposes. During a restore process, Trilio creates additional instances and Cinder volumes. To accommodate restore operations, a tenant should have sufficient quota for Nova instances and Cinder volumes. Otherwise restore operations will result in failures.
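As an illustration, a minimal sketch of adjusting these quotas with the standard OpenStack CLI is shown below; the project name and the numbers are placeholders chosen for a project with 10 backed-up volumes, not values mandated by Trilio.

# Hypothetical example: 10 volumes in backups.
# Guideline: +2 snapshots per backed-up volume, +1 volume per backed-up volume
# (for the temporary volume Trilio creates from the snapshot during backup).
openstack quota set --snapshots 30 --volumes 20 <project_name>

# Verify the resulting quotas for the project
openstack quota show <project_name>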

Installing Trilio Components

Once the Trilio VM or the cluster of Trilio VMs has been spun up, the installation process can begin. This process contains the following steps:

  1. Install the Trilio dm-api service on the control plane.

  2. Install the Trilio datamover service on the compute plane.

  3. Install the Trilio Horizon plugin into the Horizon service.

Set Trilio GUI login banner

To configure the banner shown upon accessing the Trilio Appliance GUI, do the following:

  1. Login to the Trilio Appliance console

  2. Edit the banner.yaml at /etc/tvault-config/banner.yaml

  3. Restart the tvault-config service

The content of the banner.yaml looks as follows and can be edited as required:

Reconfigure the Trilio Cluster

The Trilio appliance can be reconfigured at any time to adjust the Trilio cluster to any changes in the Openstack environment or the general backup solution.

To reconfigure the Trilio Cluster go to the "Configure" page. The configure page shows the current configuration of the Trilio cluster.

The configuration page also gives access to the ansible playbooks of the last successful configuration.

To start the reconfiguration of the Trilio Cluster click "Reconfigure" at the end of the table.

Follow the guide afterwards.

Once the Trilio configurator has started, it needs to run through successfully to continue to use Trilio.

Reset the Trilio GUI password

In case the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.

To reset the password to its default, do the following:

The dashboard login will be reset to:

Using the workloadmgr CLI tool on the Trilio Appliance

To use the workloadmgr CLI tool on the Trilio appliance, it is only necessary to activate the virtual environment of the workloadmgr.

An rc-file to authenticate against OpenStack is required.
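A minimal sketch of such a session is shown below; the virtual environment path is taken from the examples in this guide, while the rc-file name and the workload-list call are placeholders for the credentials file and workloadmgr command actually used.

# Activate the workloadmgr virtual environment on the Trilio appliance
source /home/stack/myansible/bin/activate

# Source an rc-file with valid OpenStack credentials (file name is an example)
source ~/openstack-admin-rc

# Any workloadmgr command can now be used, e.g. listing the workloads
workloadmgr workload-list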

AWS S3 eventual consistency

AWS S3 object consistency model includes:

  1. Read-after-write

  2. Read-after-update

  3. Read-after-delete

Each of them describes how an object reaches its consistent state after it is created, updated or deleted. None of them provides strong consistency, and there is a lag time for an object to reach the consistent state. Though Trilio employs mechanisms to work around the limitations of the eventual consistency of AWS S3, when an object reaches its consistent state is not deterministic. There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write reaches consistency faster than the other IO patterns, and our solution is designed to maximize the read-after-write IO pattern. The time in which an object reaches eventual consistency also depends on the AWS region. For example, the aws-standard region does not have as strong a consistency model as us-east or us-west, so we suggest using these regions when creating S3 buckets for Trilio. Though the read-after-update IO pattern is hard to avoid completely, we employ ample delays in accessing objects to accommodate larger durations for objects to get into a consistent state. However, in rare occasions, backups may still fail and need to be restarted.

Trilio Cluster

Trilio can be deployed as a single node or a three-node cluster. It is highly recommended to deploy Trilio as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP for the cluster, for both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and to register the Trilio service endpoint in the Keystone service catalog.

How these steps look in detail is dependent on the Openstack distribution that Trilio is installed in. Each supported Openstack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from the beginning to the end.

The cluster will not roll back to its last working state in case of any errors.

When the reconfiguration is required to switch to an external database it is necessary to reinitialize the Trilio appliance and configure it from scratch.

Configuring Trilio

Reinitialize Trilio

The Trilio Appliance can be reinitialized, which will delete all workload related values from the Trilio database.

To reinitialize the Trilio Appliance do:

  • Login into the Trilio Dashboard

  • Click on "admin" in the upper right corner to open the submenu

  • Choose "Reinitialize"

  • Verify that you want to reinitialize Trilio

Change the Trilio GUI password

Change Trilio Dashboard password

To change the Trilio GUI password do:

  • Login into the Trilio Dashboard

  • Click on "admin" in the upper right corner to open the submenu

  • Choose "Reset Password"

  • Set the new Trilio password

[root@TVM1 ~]# source /home/stack/myansible/bin/activate
(myansible) [root@TVM1 ~]# cd /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator
(myansible) [root@TVM1 tvault_configurator]# python recreate_conf.py
(myansible) [root@TVM1 tvault_configurator]# systemctl restart tvault-config
Username: admin
Password: password
source /home/stack/myansible/bin/activate

Additions for multiple Ceph users

It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types. Or to have the nova boot volumes and cinder block volumes controlled by different users.

In the case of multiple Ceph users, it is required to adapt the keyring extension in the tvault-contego.conf inside the Ceph block.

The following example will try all files with the extension keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task.

[DEFAULT]

vault_storage_type = nfs
vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none


vault_data_directory_old = /var/triliovault
vault_data_directory = /var/trilio/triliovault-mounts
log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
debug = False
verbose = True
max_uploads_pending = 3
max_commit_pending = 3

dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//

[dmapi_database]
connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi


[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes

[ceph]
keyring_ext = .keyring
ceph_dir = /etc/ceph/

[contego_sys_admin]
helper_command = sudo /usr/bin/privsep-helper


[conductor]
use_local = True

[oslo_messaging_rabbit]
ssl = false

[cinder]
http_retries = 10
header: 
header_color: blue 
body_text_color: "#DC143C" 
body_text: 
header_font_size: 25px 
body_text_font_size: 22px
Canonical Openstack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.

The following charms exist:

  • trilio-wlm ➡️ Installs and manages Trilio Controller services.

  • trilio-dm-api ➡️ Installs and manages the Trilio Datamover API service.

  • trilio-data-mover ➡️ Installs and manages the Trilio Datamover service.

  • trilio-horizon-plugin ➡️ Installs and manages the Trilio Horizon Plugin.

The documentation of the charms can be found here:

Apply the Trilio license

After the Trilio VM has been configured and all components are installed, the license can be applied.

The license can be applied either through the admin tab in Horizon or through the CLI.

Apply license through Horizon

To apply the license through Horizon follow these steps:

  1. Login to Horizon using admin user.

  2. Click on Admin Tab.

  3. Navigate to Backups-Admin

  4. Navigate to Trilio

  5. Navigate to License

  6. Click "Update License"

  7. Click "Choose File"

  8. Choose the license file on the client system

  9. Click "Apply"

Apply license through CLI

  • <license_file> ➡️ path to the license file

Upgrade Trilio

Starting with Trilio for OpenStack 4.0, Trilio for OpenStack allows in-place upgrades.

The following versions can be upgraded to each other:

Old ➡️ New

4.0 GA (4.0.92) ➡️ 4.0 SP1 (4.0.115)

4.0 GA (4.0.92) ➡️ 4.1 GA (4.1.94)

4.1 GA (4.1.94) ➡️ 4.1 HF1 (4.1.94-hotfix1)

The upgrade process consists of upgrading the Trilio appliance and the OpenStack components and is dependent on the underlying operating system.

The Upgrade of Trilio for Canonical Openstack is managed through the charms.

E-Mail Notifications

Definition

Trilio can notify users via E-Mail upon the completion of backup and restore jobs.

The E-Mail will be sent to the owner of the Workload.

Requirements to activate E-Mail Notifications

To use the E-mail notifications, two requirements need to be met.

Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.

User E-Mail assigned

As the E-Mail will be sent to the owner of the Workload, the OpenStack user who created the workload needs to have an E-Mail address associated.

Trilio E-Mail Server configured

Trilio needs to know which E-Mail server to use to send the E-Mail notifications. Backup administrators can configure this in the "Backup Admin" area.

Activate/Deactivate the E-Mail Notifications

E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:

  1. Login to Horizon

  2. Navigate to the Backups

  3. Navigate to Settings

  4. Check/Uncheck the box for "Enable Email Alerts"

Example E-Mails

The following screenshots show example E-Mails sent by Trilio.

Managing Trusts

Trilio uses the OpenStack Keystone trust system, which enables the Trilio service user to act in the name of another OpenStack user.

This system is used during all backup and restore features.

Openstack Administrators should never have the need to directly work with the trusts created.

The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

Trusts can only be worked with via CLI

List all trusts

Show a trust

  • <trust_id> ➡️ ID of the trust to show

Create a trust

  • <role_name> ➡️Name of the role that trust is created for

  • --is_cloud_trust {True,False} ➡️ Set to true if creating a cloud admin trust. While creating the cloud trust, use the same user and tenant that were used to configure Trilio and keep the role admin.

Delete a trust

  • <trust_id> ➡️ ID of the trust to be deleted

Installing on Ansible Openstack Victoria

The installation of Trilio for OpenStack on Ansible OpenStack Victoria with Trilio 4.1 follows this procedure:

  1. Deploy the T4O-4.1 GA Appliance

  2. Upgrade to 4.1 HF5 packages on the appliance

  3. Deploy Trilio components on Openstack Victoria

Trilio Appliance Dashboard

The Trilio Appliance Dashboard gives an overview of the running services and their Status inside the Cluster. This dashboard is accessible through the virtual IP.

If the service status panels on the dashboard page are not visible, access the virtual IP on port 3001 (https://<TVO-VIP>:3001/), accept the SSL exception, and then refresh the dashboard page.

It shows for each Trilio Appliance the Status of the following Trilio services:

  • wlm-workloads

Download Trilio logs

It is possible to download the Trilio logs directly through the Trilio web gui.

To download logs through the Trilio web gui:

  • Login into the Trilio web gui

Disaster Recovery

Trilio Workloads are designed to allow disaster recovery without the need to back up the Trilio database.

As long as the Trilio Workloads exist on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.

Disaster Recovery Process

  1. Install and configure Trilio for the target cloud

General Troubleshooting Tips

Troubleshooting inside a complex environment like OpenStack can be very time-consuming. The following tips will help to speed up the troubleshooting process and identify root causes.

What is happening where

Openstack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.

Switch NFS Backing file

Trilio uses a base64 hash for the mount point of NFS backup targets. This hash makes sure that multiple NFS shares can be used with the same Trilio installation.

This base64 hash is part of the Trilio incremental backups as an absolute path of the backing files. This requires the usage of mount bind during a DR scenario or a quick migration scenario.

In the case that there is time for a thorough migration, there is another option: change the backing file and make the Trilio backups available on a different NFS share by updating the backing file to the new NFS share mount point.

Backing file change script

Trilio provides a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.

Update Trilio components on Openstack Victoria

  • Configure the Trilio appliance

  • Deploy the T4O-4.1 GA Appliance

    Please follow this deployment guide to spin up the base Trilio 4.1 GA appliance.

    Upgrade to the latest 4.1 HF Appliance

    Trilio supports Ansible OpenStack Victoria from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 Hotfix.

    Deploy Trilio components on Openstack

    Run the deployment of the components following this guide using the following values:

    Variable ➡️ Value

    Branch ➡️ hotfix-13-TVO/4.1

    Change parameter OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria

    Update Trilio components

    Follow this guide to update the packages on the OpenStack environment.

    Configure the Trilio Appliance

    Please follow this guide to configure the upgraded Trilio 4.1 appliance.

    Go to "Logs"

  • Choose the log to be downloaded

    • Each log for every Trilio Appliance can be downloaded separately

    • or a zip of all logfiles can be created and downloaded

  • This will download the current log files. Already rotated logs need to be downloaded through SSH from the Trilio appliance directly. All logs, including rotated old logs, can be found at:

    /var/log/workloadmgr/

    Trilio cluster

    The Trilio Cluster is the Controller of Trilio. It receives all Workload related requests from the users.

    Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.

    During a backup process

    During a backup process, the Trilio cluster is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls towards the OpenStack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Cluster writes it as json files to the Backup Target.

    The Trilio cluster also sends the Cinder snapshot commands.

    During a restore process

    During a restore process, the Trilio cluster reads the VM metadata from its database and uses the metadata to create the shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.

    dmapi

    The dmapi service is the connector between the Trilio cluster and the datamover running on the compute nodes.

    The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova api database, requesting the compute host of a provided VM.

    Once the compute host has been identified, the dmapi forwards the command from the Trilio Cluster to the datamover running on the identified compute host.

    datamover

    The datamover is the Trilio service running on the compute nodes.

    Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.

    The datamover is controlling the freeze and thaw of VMs as well as the actual movement of the data.

    Everything on the Backup Target is happening as user nova

    Trilio is reading and writing on the Backup Target as nova:nova.

    The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise, backups or restores may fail with permission or file-not-found issues.

    Alternative ways to achieve the goal are possible, as long as all required nodes can fully write and read as nova:nova on the Backup Target.

    It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.

    Trilio Trustee Role

    Trilio uses RBAC to allow the usage of Trilio features by users.

    This trustee role is absolutely required and can not be overwritten using the admin role.

    It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during creation of Workloads, backups or restores.

    Openstack Quotas

    Trilio creates Cinder snapshots and temporary Cinder volumes. The OpenStack quotas need to allow that.

    Every disk that is getting backed up requires one temporary Cinder volume.

    Every Cinder volume that is getting backed up requires two Cinder snapshots. The second Cinder snapshot is temporary and is used to calculate the incremental.

    Trilio Configurator

    Once Trilio is configured, use the virtual IP to access its dashboard. If the service status panels on the dashboard page are not visible, access the virtual IP on port 3001 (https://<TVO-VIP>:3001/), accept the exception, and refresh the dashboard page.

    4.1 GA (4.1.94) ➡️ 4.1 HF2 (4.1.94-hotfix2)

    4.1 HF1 (4.1.94-hotfix1) ➡️ 4.1 HF2 (4.1.94-hotfix2)

    Important log files

    On the Trilio Nodes

    The Trilio Cluster contains multiple log files.

    The main log is workloadmgr-workloads.log, which contains all logs about ongoing and past Trilio backup and restore tasks. It can be found at:

    /var/log/workloadmgr/workloadmgr-workloads.log

    The next important log is the workloadmgr-api.log, which contains all logs about API calls received by the Trilio Cluster. It can be found at:

    /var/log/workloadmgr/workloadmgr-api.log

    The log for the third service is the workloadmgr-scheduler.log, which contains all logs about the internal job scheduling between Trilio nodes in the Trilio Cluster.

    /var/log/workloadmgr/workloadmgr-scheduler.log

    The last but not least service running on the Trilio nodes is the wlm-cron service, which controls the scheduled automated backups. Its logs can be found at:

    /var/log/workloadmgr/workloadmgr-workloads.log

    In the case of using S3 as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.

    /var/log/workloadmgr/s3vaultfuse.py.log

    Canonical OpenStack has these logs inside the workloadmgr container.

    Trilio Datamover service logs on RHOSP

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/containers/trilio-datamover-api/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/containers/trilio-datamover/tvault-contego.log

    In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/containers/trilio-datamover/tvault-object-store.log

    Trilio Datamover service logs on Kolla Openstack

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/kolla/trilio-datamover-api/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/kolla/triliovault-datamover/tvault-contego.log

    In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/kolla/trilio-datamover/tvault-object-store.log

    Trilio Datamover service logs on Ansible Openstack

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running. Log into the dmapi container using lxc-attach command (example below).

    lxc-attach -n controller_dmapi_container-a11984bf

    The log file is then located under:

    /var/log/dmapi/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is typically located on the compute nodes and the logs can be found here:

    /var/log/tvault-contego/tvault-contego.log

    In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/tvault-object-store/tvault-object-store.log

    workloadmgr license-create <license_file>
  • Verify required mount-paths and create if necessary

  • Reassign Workloads

  • Notify users of Workloads being available

  • This procedure is designed to be applicable to all OpenStack installations using Trilio. It is to be used as a starting point to develop the exact disaster recovery process of a specific environment.

    In case the workloads shall be restored instead of notifying the users, it is necessary to have a user in each project that has the necessary privileges to restore.

    Mount-paths

    Trilio incremental Snapshots involve a backing file to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.

    Trilio is using qcow2 backing files for this feature:

    As can be seen in the example, the backing file is an absolute path, which makes it necessary that this path exists so the backing files can be accessed.

    Trilio is using the base64 hashing algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated using the provided NFS path.

    When the path of the backing file is not available on the Trilio appliance and the compute nodes, restores of incremental backups will fail.

    The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.

    Running the mount --bind command will make the necessary path available until the next reboot. If access to the path is required beyond a reboot, it is necessary to edit the fstab.

    Install
    Configure
    workloadmgr trust-list
    workloadmgr trust-show <trust_id>
    workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>
    workloadmgr trust-delete <trust_id>
    qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 21M
    cluster_size: 65536
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    # echo -n 10.10.2.20:/upstream | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW0=
    #mount --bind <mount-path1> <mount-path2>
    #vi /etc/fstab
    <mount-path1> <mount-path2>	none bind	0 0

    wlm-scheduler

  • wlm-api

  • wlm-cron

  • The wlm-cron service runs on only one Trilio appliance at a time. That it is shown as inactive on the other nodes is not an error.

    To give administrators an overview of the HA status, the dashboard also shows the service status for:

    • Pacemaker

    • RabbitMQ

    • MySQL Galera Cluster

    Downloading the shell script

    The Shell script is publicly available at:

    Pre-Requisites

    The following requirements need to be met before the change of the backing file can be attempted.

    • The Trilio Appliance has been reconfigured with the new NFS Share

      • Please check here for reconfiguring the Trilio Appliance

    • The Openstack environment has been reconfigured with the new NFS Share

      • Please check here for Red Hat Openstack Platform

      • Please check here for Canonical Openstack

      • Please check here for Kolla Ansible Openstack

      • Please check here for Ansible Openstack

    • The workloads are available on the new NFS Share

    • The workloads are owned by nova:nova user

    Usage

    The shell script is changing one workload at a time.

    The shell script has to be run as the nova user, otherwise the owner will get changed and the backup can not be used by Trilio.

    Run the following command:

    with

    • /var/triliovault-mounts/<base64>/ being the new NFS mount path

    • workload_<workload_id> being the workload to rebase

    Logging of the procedure

    The shell script is generating the following log file at the following location:

    The log file will not get overwritten when the script is run multiple times. Each run of the script will append to the existing log file.

    Screenshot of a notification E-mail for a successful Snapshot
    Screenshot of a notification E-Mail for a failed Snapshot
    Screenshot of a notification E-Mail for a successful Restore

    Requirements

    Trilio has four main software components:

    1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM hosts.

    2. Trilio API is a python module that is an extension to the nova api service. This module is installed on all OpenStack controller nodes.

    3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

    4. Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

    System requirements Trilio Appliance

    The Trilio Appliance is not supported as an instance inside Openstack.

    The Trilio Appliance gets delivered as a qcow2 image, which gets attached to a virtual machine.

    Trilio supports KVM-based hypervisors on x86 architectures, with the following properties:

    Software ➡️ Supported

    The recommended size of the VM for the Trilio Appliance is:

    When running Trilio in production, a 3-node cluster of the Trilio appliance is recommended for high availability and load balancing.

    Resource ➡️ Value

    The qcow2 image itself defines the 40GB disk size of the VM.

    In the rare case of the Trilio Appliance database or log files getting larger than 40GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the Trilio Appliance.

    Software Requirements

    In addition to the Trilio Appliance, Trilio contains components which are installed directly into OpenStack itself.

    Each OpenStack distribution comes with a set of supported operating systems. Please check the support matrix to see which OpenStack distribution is supported with which operating system.

    Additionally, it is necessary to have the nfs-common packages installed on the compute nodes in case the NFS protocol is used for the backup target.
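    A minimal sketch of installing the NFS client packages on a compute node; nfs-common is the Debian/Ubuntu package name, while RHEL/CentOS based nodes use the equivalent nfs-utils package.

    #For Ubuntu/Debian based compute nodes
    apt-get install nfs-common
    #For RHEL and CentOS based compute nodes (equivalent package)
    yum install nfs-utils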

    Installing on Kolla Victoria

    The installation of Trilio for OpenStack on Kolla Victoria with Trilio 4.1 follows this procedure:

    1. Deploy the T4O-4.1 GA Appliance

    2. Upgrade to 4.1 HF5 or higher on the appliance

    3. Deploy Trilio components of 4.1 HF5 or higher on the Kolla Openstack Victoria

    4. Configure the Trilio appliance

    Deploy the T4O-4.1 GA Appliance

    Please follow this deployment guide to spin up the base Trilio 4.1 GA appliance.

    Upgrade the T4O appliance to the latest 4.1 HF

    Trilio supports Kolla Victoria from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make the deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 hotfix.

    Deploy Trilio components of 4.1 HF11 or higher

    Run the deployment of the components following this guide, using the following values:

    Variable ➡️ Value

    Configure the Trilio Appliance

    Please follow this guide to configure the upgraded Trilio 4.1 appliance.

    Spinning up the Trilio VM

    For Canonical Openstack it is not necessary to spin up the Trilio VM.

    The Trilio Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor.

    This guide shows the tested way to spin up the Trilio Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards.

    Additions for multiple CEPH configurations

    It is possible to configure Cinder to have multiple configurations and keyrings for CEPH.

    In this case, the Trilio Datamover file needs to be extended with the CEPH information.

    For Trilio to be able to work in such an environment it is required to put copies of each of these configurations and keyrings into a separate directory, which is then made known to the Trilio Datamover inside a [ceph] block in the tvault-contego.conf.

    A tvault-contego.conf file with the extended [ceph] block would look like this.

    Trilio network considerations

    Trilio integrates natively with OpenStack. This means that Trilio communicates completely through APIs using the OpenStack endpoints. Trilio also generates its own OpenStack endpoints. In addition, the Trilio appliance and the compute nodes write to and read from the backup target. These points affect the network planning for the Trilio installation.

    Existing endpoints in Openstack

    Openstack knows 3 types of endpoints:

    Set network accessibility of Trilio GUI

    By default, the Trilio GUI is available on all NICs on port 443.

    To limit this to only one IP the following steps need to be applied.

    Network Setup

    The Trilio Appliance provides by default the possibility of 4 VIPs.

    Please ask your Trilio Customer Success Manager or Engineer.
    This page will be updated once the script is publicly available.
    ./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
    /tmp/backing_file_update.log

    libvirt ➡️ 2.0.0 and above

    QEMU ➡️ 2.0.0 and above

    qemu-img ➡️ 2.6.0 and above

    vCPU ➡️ 8

    RAM ➡️ 24 GB


    Branch ➡️ hotfix-13-TVO/4.1

    Tag ➡️ 4.1.94-hotfix-12-victoria

    [DEFAULT]
    
    vault_storage_type = nfs
    vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
    vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none
    
    
    vault_data_directory_old = /var/triliovault
    vault_data_directory = /var/trilio/triliovault-mounts
    log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
    debug = False
    verbose = True
    max_uploads_pending = 3
    max_commit_pending = 3
    
    dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//
    
    [dmapi_database]
    connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi
    
    
    [libvirt]
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = volumes
    
    [ceph]
    keyring_ext = .volumes.keyring
    ceph_dir = /etc/ceph/directory1/,/etc/ceph/directory2/
    
    [contego_sys_admin]
    helper_command = sudo /usr/bin/privsep-helper
    
    
    [conductor]
    use_local = True
    
    [oslo_messaging_rabbit]
    ssl = false
    
    [cinder]
    http_retries = 10
    
    Creating the cloud-init image

    The Trilio appliance is utilizing cloud-init to provide the initial network and user configuration.

    Cloud-init reads its information either from a metadata server or from a provided CD image. Trilio utilizes the CD image.

    Needed tools

    To create the cloud-init image it is required to have genisoimage available.

    Providing the Metadata

    Cloud-init uses two files for its metadata.

    The first file is called meta-data and contains the information about the network configuration. Below is an example of this file.

    Keep the hostname localhost. The hostname gets changed through the configuration step. Changing the hostname will lead to the tvault-config service not properly starting, blocking further configuration.

    The instance-id has to match the VM name in virsh

    The second file is called user-data and contains little scripts and information to set up for example the user passwords. Below is an example of this file.

    creating the image file

    Both files, meta-data and user-data, are needed. Even when one is empty, it is needed to create a working cloud-init image.

    The image is created using genisoimage, following this general command:

    genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data>

    An example of this command is shown below.

    Spinning up the Trilio appliance

    The Trilio Appliance qcow2 image can be downloaded from the Trilio customer portal. Please contact your Trilio sales or technical lead to get access to the portal.

    After the cloud-init image has been created, the Trilio appliance can be spun up on the desired KVM server.

    Extract the Trilio QCOW2 tar file using the following command:

    See below an example command, how to spin up the Trilio appliance using virsh and the created iso image.

    It is of course possible to spin up the Trilio appliance without a cloud-init iso-image. It will spin up with default values.

    Uninstalling cloud-init after first boot

    Once the Trilio appliance is up and running with its initial configuration, it is recommended to uninstall cloud-init.

    If cloud-init remains installed, it will rerun the network configuration upon every boot, setting the network configuration back to DHCP if no metadata is provided.

    To uninstall cloud-init, follow the example below.

    Updating the appliance to the latest minor version

    It is recommended to directly update the Trilio appliance to the latest version.

    To do so follow the minor update guide provided here:

    • Online update Trilio Appliance

    • Offline update Trilio Appliance

    A general VIP which can be used for everything
  • A public VIP for the public endpoint

  • An internal VIP for the internal endpoint

  • An admin VIP for the admin endpoint

  • Should an additional VIP be required to restrict the access of the Trilio Dashboard to this VIP, the new VIP needs to be created as a new resource inside the PCS cluster.

    Nginx setup

    When the new dashboard_ip has been created or decided, then the next step is to set up the proxy forwarding inside Nginx, which will make the Trilio GUI available through port 8000.

    All of the following steps need to be done on all Trilio appliances of the cluster.

    1. Create new conf file at /etc/nginx/conf.d/tvault-dashboard.conf. Replace variables dashboard_ip and virtual_ip as configured or decided.

    2. edit /etc/nginx/nginx.conf and uncomment line #include /etc/nginx/conf.d/*.conf;

    3. check nginx syntax: nginx -t

    4. reload nginx conf: nginx -s reload

    5. Verify if the new cluster resource is visible or not using pcs resource command and by accessing the dashboard_ip.

    Limit the access of the Dashboard

    The configured dashboard_ip will always end on the nginx service on port 8000 and will then be forwarded to the local dashboard service on port 443.

    This configuration limits the required access to the local dashboard service to the Trilio appliance cluster itself. All other connections on port 443 can be dropped.

    The following commands will set the required iptable rules.

    Verify the accessibility as required

    At this point, the Trilio GUI is only reachable on the dashboard_ip on port 8000. Accessing the Trilio GUI through any other IP or on port 443 is not allowed.

    #For RHEL and centos
    yum install genisoimage
    #For Ubuntu 
    apt-get install genisoimage
    [root@kvm]# cat meta-data
    instance-id: triliovault
    network-interfaces: |
       auto eth0
       iface eth0 inet static
       address 158.69.170.20
       netmask 255.255.255.0
       gateway 158.69.170.30
    
       dns-nameservers 11.11.0.51
    local-hostname: localhost
    [root@kvm]# cat user-data
    #cloud-config
    chpasswd:
      list: |
        root:password1
        stack:password2
      expire: False
    genisoimage  -output tvault-firstboot-config.iso -volid cidata -joliet -rock user-data meta-data
    tar Jxvf TrilioVault_file.tar.xz
    virt-install -n triliovault-vm  --memory 24576 --vcpus 8 \
    --os-type linux \
    --disk tvault-appliance-os-3.0.154.qcow2,device=disk,bus=virtio,size=40 \
    --network bridge=virbr0,model=virtio \
    --network bridge=virbr1,model=virtio \
    --graphics none \
    --import \
    --disk path=tvault-firstboot-config.iso,device=cdrom
    sudo yum remove cloud-init
    server {
        listen <dashboard_ip>:8000 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:443;
        }
    }
    server {
        listen <dashboard_ip>:3001 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:3001;
        }
    }
    
    pcs resource create dashboard_ip ocf:heartbeat:IPaddr2 ip=<new_vip> cidr_netmask=<netmask> nic=<new_nw_interface> op monitor interval=30s
    pcs constraint colocation add dashboard_ip virtual_ip
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 443 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP
    iptables -A INPUT -p tcp --dport 443 -j DROP
    https://<dashboard_ip>:8000
    Public Endpoints
  • Internal Endpoints

  • Admin Endpoints

  • Each of these endpoint types is designed for a specific purpose. Public endpoints are meant to be used by the Openstack end-users to work with Openstack. Internal endpoints are meant to be used by the Openstack services to communicate with each other. Admin endpoints are meant to be used by Openstack administrators.

    Out of those 3 endpoint types, only the admin endpoint sometimes contains APIs which are not available on any other endpoint type.

    To learn more about Openstack endpoints please visit the official Openstack documentation.

    Openstack endpoints required by Trilio

    Trilio is communicating with all services of Openstack on a defined endpoint type. Which endpoint type Trilio is using to communicate with Openstack is decided during the configuration of the Trilio appliance.

    There is one exception: The Trilio Appliance always requires access to the Keystone admin endpoint.

    The following network requirements can be identified this way:

    • Trilio appliance needs access to the Keystone admin endpoint on the admin endpoint network

    • Trilio appliance needs access to all endpoints of one type

    Recommendation: Provide access to all Openstack Endpoint types

    Trilio is recommending providing full access to all Openstack endpoints to the Trilio appliance to follow the Openstack standards and best practices.

    Trilio is generating its own endpoints as well. These endpoints are pointing towards the Trilio Appliance directly. This means that using those endpoints will not send the API calls towards the Openstack Controller nodes first, but directly to the Trilio VM.

    Following the Openstack standards and best practices, it is therefore recommended to put the Trilio endpoints on the same networks as the already existing Openstack endpoints. This allows to extend the purpose of each endpoint type to the Trilio service:

    • The public endpoint to be used by Openstack users when using Trilio CLI or API

    • The internal endpoint to communicate with the Openstack services

    • The admin endpoint to use the required admin only APIs of Keystone

    Backup target access required by Trilio

    The Trilio solution is using backup target storage to securely place the backup data. Trilio is dividing its backup data into two parts:

    1. Metadata

    2. Volume Disk Data

    The first type of data is generated by the Trilio appliance through communication with the OpenStack endpoints. All metadata that is stored together with a backup is written by the Trilio Appliance to the backup target in the json format.

    The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service is reading the Volume Data from the Cinder or Nova storage and transferring this data as qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.

    The network requirements are therefore:

    • The Trilio appliance needs access to the backup target

    • Every compute node needs access to the backup target

    Example of a typical Trilio network integration

    Most Trilio customers are following the Openstack standards and best practices to have the public, internal, and admin endpoints on separate networks. They also typically don't have any network yet, which can access the desired backup target.

    The starting network configuration typically looks like this:

    Typical Openstack Network configuration before Trilio gets installed

    Following the OpenStack standards and Trilio's recommendation, the Trilio Appliance is placed on all those 3 networks. Further, access to the backup target is required by the Trilio Appliance and the compute nodes; here this is done by adding a 4th network.

    The resulting network configuration would look like this:

    Typical Openstack network configuration with Trilio installed

    It is of course possible to combine networks as necessary. As long as the required network access is available, Trilio will work.

    Other examples of Trilio network integrations

    Each Openstack installation is different and so is the network configuration. There are endless possibilities of how to configure the Openstack network and how to implement the Trilio appliance into this network. The following three examples have been seen in production:

    The first example is from a manufacturing company, which wanted to split the networks by function and decided to put the Trilio backup target on the internal network as the backup and recovery function was identified as an Openstack internal solution. This example looks complex but integrates Trilio just as recommended.

    The split them all network example

    The second example is from a financial institute that wanted to be sure that the OpenStack users have no direct uncontrolled network access to the OpenStack infrastructure. Following this example requires additional work, as the internal HA-Proxy needs to be configured to correctly translate the API calls towards the Trilio appliance.

    The no trust network example

    The third example is from a service company that was forced to treat Trilio as an external 3rd party solution, as we require a virtual machine running outside of Openstack. This kind of network configuration requires good planning on the Trilio endpoints and firewall rules.

    Trilio as third party component network example

    Uninstalling from Ansible OpenStack

    Uninstall Trilio Services

    The Trilio Ansible OpenStack playbook can be run to uninstall the Trilio services.

    Destroy Trilio Datamover API container

    To cleanly remove the Trilio Datamover API container run the following Ansible playbook.

    Clean openstack_user_config.yml

    Remove the tvault-dmapi_hosts and tvault_compute_hosts entries from /etc/openstack_deploy/openstack_user_config.yml

    Remove Trilio haproxy settings in user_variables.yml

    Remove Trilio Datamover API settings from /etc/openstack_deploy/user_variables.yml

    Remove Trilio Datamover API inventory file

    Remove Trilio Datamover API service endpoints

    Delete Trilio Datamover API database and user

    • Go inside galera container.

    • Login as root user in mysql database engine.

    • Drop dmapi database.

    • Drop dmapi user
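    A minimal sketch of these database steps is shown below, assuming a MariaDB/Galera setup reachable via lxc-attach; the container name and the use of a root password prompt are placeholders to adjust to the environment.

    # Attach to the galera container (container name is an example)
    lxc-attach -n controller_galera_container-<id>

    # Drop the dmapi database and user from inside the container
    mysql -u root -p -e "DROP DATABASE IF EXISTS dmapi;"
    mysql -u root -p -e "DROP USER IF EXISTS 'dmapi'@'%';"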

    Remove dmapi rabbitmq user from rabbitmq container

    • Go inside rabbitmq container.

    • Delete dmapi user.

    • Delete dmapi vhost.
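    A minimal sketch of the RabbitMQ cleanup, assuming the user and vhost are both named dmapi; verify the actual names with rabbitmqctl list_users and rabbitmqctl list_vhosts first.

    # Confirm the dmapi entries before deleting them
    rabbitmqctl list_users
    rabbitmqctl list_vhosts

    # Delete the dmapi user and its vhost (names assumed)
    rabbitmqctl delete_user dmapi
    rabbitmqctl delete_vhost dmapi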

    Clean haproxy

    Remove /etc/haproxy/conf.d/datamover_service file.

    Remove HAproxy configuration entry from /etc/haproxy/haproxy.cfg file.

    Restart the HAproxy service.
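    A minimal sketch of these HAProxy steps; the file path is the one named above, while systemctl restart haproxy is the standard way to restart the service and may differ in containerized setups.

    # Remove the Trilio Datamover API entry
    rm /etc/haproxy/conf.d/datamover_service

    # After removing the Trilio entries from /etc/haproxy/haproxy.cfg,
    # restart HAProxy to apply the change
    systemctl restart haproxy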

    Remove certificates from Compute nodes

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrilioVault VM disk from KVM Host storage
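    A minimal sketch of these virsh steps on the KVM host; the VM name triliovault-vm and the disk path follow the spin-up example earlier in this guide and are placeholders for the actual names.

    # List all VMs on the KVM node and identify the Trilio VMs
    virsh list --all

    # Destroy (power off) and undefine each Trilio VM
    virsh destroy triliovault-vm
    virsh undefine triliovault-vm

    # Delete the Trilio VM disk from the KVM host storage (path is an example)
    rm /var/lib/libvirt/images/tvault-appliance-os-3.0.154.qcow2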

    Uninstalling from Kolla OpenStack

    Clean triliovault_datamover_api container

    The container needs to be cleaned on all nodes where the triliovault_datamover_api container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the triliovault_datamover_api container:

    Stop the triliovault_datamover_api container.

    Remove the triliovault_datamover_api container.

    Clean /etc/kolla/triliovault-datamover-api directory.

    Clean log directory of triliovault_datamover_api container.
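    A minimal sketch of these cleanup steps on a Kolla node, assuming docker as the container runtime; the log directory follows the path listed in the log file section of this guide. The same pattern applies to the triliovault_datamover container on the compute nodes.

    # Stop and remove the Trilio Datamover API container
    docker stop triliovault_datamover_api
    docker rm triliovault_datamover_api

    # Clean its configuration and log directories
    rm -rf /etc/kolla/triliovault-datamover-api
    rm -rf /var/log/kolla/trilio-datamover-api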

    Clean triliovault_datamover container

    The container needs to be cleaned on all nodes where the triliovault_datamover container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the triliovault_datamover container:

    Stop the triliovault_datamover container.

    Remove the triliovault_datamover container.

    Clean /etc/kolla/triliovault-datamover directory.

    Clean log directory of triliovault_datamover container.

    Clean haproxy of Trilio Datamover API

    The Trilio Datamover API entries need to be cleaned on all haproxy nodes. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the haproxy container:

    Clean Kolla Ansible deployment procedure

    Delete all Trilio related entries from:

    To cross-verify the uninstallation undo all steps done in and .

    Trilio entries can be found in:

    • /usr/local/share/kolla-ansible/ansible/roles/ ➡️ There is a role triliovault

    • /etc/kolla/globals.yml ➡️ Trilio entries had been appended at the end of the file

    • /etc/kolla/passwords.yml ➡️

    Revert to original Horizon container

    Run deploy command to replace the Trilio Horizon container with original Kolla Ansible Horizon container.

    Clean Keystone resources

    Trilio created a dmapi service with dmapi user.
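    A minimal sketch of removing those Keystone resources with the standard OpenStack CLI, assuming admin credentials are sourced; the service and user names follow the description above.

    # Remove the dmapi endpoints and service from the Keystone catalog
    openstack endpoint list --service dmapi -f value -c ID | xargs -n1 openstack endpoint delete
    openstack service delete dmapi

    # Remove the dmapi user
    openstack user delete dmapi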

    Clean Trilio database resources

    Trilio Datamover API service has its own database in the Openstack database.

    Login to the OpenStack database as the root user or a user with similar privileges.

    Delete dmapi database and user.

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the Trilio VM disk from KVM Host storage

    Workload Quotas

    Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.

    The following Quotas can be set:

    • Number of Workloads a Project is allowed to have

    • Number of Snapshots a Project is allowed to have

    • Number of VMs a Project is allowed to protect

    • Amount of Storage a Project is allowed to use on the Backup Target

    Work with Workload Quotas via Horizon

    The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.

    Workload Quotas are managed like any other Project Quotas.

    1. Login into Horizon as user with admin role

    2. Navigate to Identity

    3. Navigate to Projects

    4. Identify the Project to modify or show the quotas on

    5. Use the small arrow next to "Manage Members" to open the submenu

    6. Choose "Modify Quotas"

    7. Navigate to "Workload Manager"

    8. Edit Quotas as desired

    9. Click "Save"

    Screenshot of Horizon integration for Workload Manager Quotas

    Work with Workload Quotas via CLI

    List available Quota Types

    Trilio is providing several different Quotas. The following command allows listing those.

    Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quota a Tenant has to comply with.

    Show Quota Type Details

    The following command will show the details of a provided Quota Type.

    • <quota_type_id> ➡️ID of the Quota Type to show

    Create a Quota

    The following command will create a Quota for a given project and set the provided value.

    • <quota_type_id> ➡️ID of the Quota Type to be created

    • <allowed_value>➡️ Value to set for this Quota Type

    • <high_watermark>➡️ Value to set for High Watermark warnings

    • <project_id>➡️ Project to assign the quota to

    The high watermark is automatically set to 80% of the allowed value when set via Horizon.

    A created Quota will generate an allowed_quota_object with its own ID. This ID is needed when continuing to work with the created Quota.

    List allowed Quotas

    The following command lists all Trilio Quotas set for a given project.

    • <project_id>➡️ Project to list the Quotas from

    Show allowed Quota

    The following command shows the details about a provided allowed Quota.

    • <allowed_quota_id> ➡️ID of the allowed Quota to show.

    Update allowed Quota

    The following command shows how to update the value of an already existing allowed Quota.

    • <allowed_value>➡️ Value to set for this Quota Type

    • <high_watermark>➡️ Value to set for High Watermark warnings

    • <project_id>➡️ Project to assign the quota to

    • <allowed_quota_id> ➡️ ID of the allowed Quota to update

    Delete allowed Quota

    The following command will delete an allowed Quota and set the value of the connected Quota Type back to unlimited for the affected project.

    • <allowed_quota_id> ➡️ID of the allowed Quota to delete

    Schedulers

    Definition

    Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.

    A schedule is defined by:

    • Status (Enabled/Disabled)

    • Start Day/Time

    • End Day

    • Hrs between 2 snapshots

    Disable a schedule

    Using Horizon

    To disable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

    8. Uncheck "Enabled"

    9. Click "Update"

    Using CLI

    • --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>

    Enable a schedule

    Using Horizon

    To enable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

    8. Check "Enabled"

    9. Click "Update"

    Using CLI

    • --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler should be enabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>

    Modify a schedule

    To modify a schedule the workload itself needs to be modified.

    Please follow this procedure to modify the workload.

    Verify the scheduler trust is working

    Trilio is using the Openstack Keystone Trust system, which enables the Trilio service user to act in the name of another Openstack user.

    This system is used during all backup and restore features.

    Using Horizon

    As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the Scheduler on the Workload list page.

    Screenshot of a Workload with established scheduler trust

    Using CLI

    • <workload_id> ➡️ ID of the workload to validate

    Workload Import & Migration

    Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:

    • Openstack User - The Openstack User-ID is assigned to a Workload

    • Openstack Project - The Openstack Project-ID is assigned to a Workload

    • Openstack Cloud - The Trilio Serviceuser-ID is assigned to a Workload

    Openstack Users can update the User ownership of a Workload by modifying the Workload.

    This ownership ensures that only the owners of a Workload are able to work with it.

    Openstack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.

    Import workloads

    Workload import allows importing Workloads existing on the Backup Target into the Trilio database.

    The Workload import is designed to import Workloads, which are owned by the Cloud.

    It will not import or list any Workloads that are owned by a different cloud.

    To get a list of importable Workloads use the following CLI command:

    • --project_id <project_id> ➡️ List only workloads belonging to the given project.

    To import Workloads into the Trilio database use the following CLI command:

    • --workloadids <workloadid> ➡️ Specify workload ids to import only specified workloads. Repeat option for multiple workloads.

    Orphaned Workloads

    The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage, but not known to the Trilio installation, is considered orphaned.

    A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that are migrated from a different cloud.

    The following CLI command provides the list of orphaned workloads:

    • --migrate_cloud {True,False} ➡️ Set to True if you want to list workloads from other clouds as well. Default is False.

    • --generate_yaml {True,False} ➡️ Set to True if you want to generate the output file in YAML format, which can then be used as input for the workload reassign API.

    Running this command against a Backup Target with many Workloads can take some time, as Trilio reads the complete storage and verifies every Workload it finds against the Workloads known in the database.

    Reassigning Workloads

    Openstack administrators are able to reassign a Workload to a new owner. This involves the possibility to migrate a Workload from one cloud to another or between projects.

    Reassigning a workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.

    Use the following CLI command to reassign a Workload:

    • --old_tenant_ids <old_tenant_id> ➡️ Specify the old tenant ids from which workloads need to be reassigned to the new tenant. Specify multiple times to choose Workloads from multiple tenants.

    • --new_tenant_id <new_tenant_id> ➡️ Specify the new tenant id to which workloads need to be reassigned from the old tenant. Only one target tenant can be specified.

    • --workload_ids <workload_id> ➡️ Specify the workload_ids which need to be reassigned to the new tenant. If not provided, all workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.

    • --user_id <user_id> ➡️ Specify the user id to which workloads need to be reassigned from the old tenant. Only one target user can be specified.

    • --migrate_cloud {True,False} ➡️ Set to True if you want to reassign workloads from other clouds as well. Default is False.

    • --map_file ➡️ Provide the file path (relative or absolute) including the file name of the reassign map file. Provide a list of old workloads mapped to new tenants. The format for this file is YAML.

    A sample mapping file with explanations is shown below:

    Trilio 4.1 HF5 Release

    Release Versions

    Packages

    Name

    Trilio 4.1 HF10 Release

    Release Versions

    Packages

    Name

    Change Certificates used by Trilio

    The following Trilio services are providing certificates for secured access to the Trilio solution.

    Service            Port used   Description
    TVault-Config      443         Webservice providing the Trilio Dashboard
    Nginx (wlm-api)    8780        Provides the VIP for the wlm-api service
    Nginx (Grafana)    3001        VIP for the dashboard of the Grafana service running on the Trilio VM

    File Search

    Definition

    The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

    Navigating to the file search tab in Horizon

    Upgrading on Canonical Openstack

    Upgrading Trilio 4.0 to 4.1 using JuJu Charms

    For the major upgrade from 4.0 to 4.1 use the JuJu charms upgrade path.

    The charms will always install the latest version available of T4O 4.1. This will only work when upgrading from 4.0 to 4.1.

    Switch Backup Target on Kolla-ansible

    Unmount old mount point

    The first step is to remove the datamover container and to unmount the old mounts. This is necessary to make sure that the new datamover container with the new backup target does not get any interference from the old backup target.

    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-tvault-install.yml --tags "tvault-all-uninstall"

    The file search tab is part of every workload overview. To reach it follow these steps:
    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload a file search shall be done in

    5. Click the workload name to enter the Workload overview

    6. Click File Search to enter the file search tab

    Configuring and starting a file search in Horizon

    A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

    To run a file search the following elements need to be decided and configured

    Choose the VM the file search shall run against

    Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.

    VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

    Set the File Path

    The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

    The File Path has to start with a '/'

    Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'

    The file search does not go into deeper directories and always searches on the directory provided in the File Path

    Example File Path for all files inside /etc : /etc/*

    Define the Snapshots to search in

    "Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

    There are 3 possibilities for a pre-filtering:

    1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

    2. Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

    3. Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.

    After the pre-filtering is done, all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.

    When no Snapshot is chosen the file search will not start.

    Start the File Search and retrieve the results in Horizon

    To start a File Search the following elements need to be set:

    • A VM to search in has to be chosen

    • A valid File Path provided

    • At least one Snapshot to search in selected

    Once those have been set click "Search" to start the file search.

    Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.

    After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.

    For each found file or folder the following information is provided:

    • POSIX permissions

    • Number of links pointing to the file or folder

    • User ID who owns the file or folder

    • Group ID assigned to the file or folder

    • Actual size in Bytes of the file or folder

    • Time of creation

    • Time of last modification

    • Time of last access

    • Full path to the found file or folder

    Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.

    Doing a CLI File Search

    • <vm_id> ➡️ ID of the VM to be searched

    • <file_path> ➡️ Path of the file to search for

    • --snapshotids <snapshotid> ➡️ Search only in the specified snapshot ids. snapshot-id: include the snapshot with this UUID

    • --end_filter <end_filter> ➡️ Displays the last <end_filter> snapshots (for example, the last 10 snapshots); default 0 displays all snapshots

    • --start_filter <start_filter> ➡️ Displays snapshots starting from <start_filter> (for example, starting from snapshot 5); default 0 starts from the first snapshot

    • --date_from <date_from> ➡️ From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If the time isn't specified it defaults to 00:00

    • --date_to <date_to> ➡️ To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day, inclusive/exclusive results for date_from and date_to

    Update globals.yml

    Edit the globals.yml file to contain the new backup target.

    Follow the installation documentation to learn about the globals.yml Trilio variables.

    Deploy Trilio components with new backup target

    Verify the successful change in backup target

    Reconfigure the Trilio Appliance

    Follow the documentation to reconfigure the Trilio appliance with the new backup target.

    root@controller:~# kolla-ansible -i multinode deploy
    cd /opt/openstack-ansible/playbooks
    openstack-ansible lxc-containers-destroy.yml --limit "DMAPI CONTAINER NAME"
    #tvault-dmapi
    tvault-dmapi_hosts:
      infra-1:
        ip: 172.26.0.3
      infra-2:
        ip: 172.26.0.4
        
    #tvault-datamover
    tvault_compute_hosts:
      infra-1:
        ip: 172.26.0.7
      infra-2:
        ip: 172.26.0.8
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    rm /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
     source cloudadmin.rc
     openstack endpoint delete "internal datamover service endpoint_id"
     openstack endpoint delete "public datamover service endpoint_id"
     openstack endpoint delete "admin datamover service endpoint_id"
    lxc-attach -n "GALERA CONTAINER NAME"
    mysql -u root -p "root password"
    DROP DATABASE dmapi;
    DROP USER dmapi;
    lxc-attach -n "RABBITMQ CONTAINER NAME"
    rabbitmqctl delete_user dmapi
    rabbitmqctl delete_vhost /dmapi
    rm  /etc/haproxy/conf.d/datamover_service
    frontend datamover_service-front-1
        bind ussuriubuntu.triliodata.demo:8784 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        option httplog
        option forwardfor except 127.0.0.0/8
        reqadd X-Forwarded-Proto:\ https
        mode http
        default_backend datamover_service-back
    
    frontend datamover_service-front-2
        bind 172.26.1.2:8784
        option httplog
        option forwardfor except 127.0.0.0/8
        mode http
        default_backend datamover_service-back
    
    
    backend datamover_service-back
        mode http
        balance leastconn
        stick store-request src
        stick-table type ip size 256k expire 30m
        option forwardfor
        option httplog
        option httpchk GET / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck
    
    
        server controller_dmapi_container-bf17d5b3 172.26.1.75:8784 check port 8784 inter 12000 rise 1 fall 1
    systemctl restart haproxy
    rm -rf /opt/config-certs/rabbitmq
    rm -rf /opt/config-certs/s3
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    docker stop triliovault_datamover_api
    docker rm triliovault_datamover_api
    rm -rf /etc/kolla/triliovault-datamover-api
    rm -rf /var/log/kolla/triliovault-datamover-api/
    docker stop triliovault_datamover
    docker rm triliovault_datamover
    rm -rf /etc/kolla/triliovault-datamover
    rm -rf /var/log/kolla/triliovault-datamover/
    rm /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    docker restart haproxy
    kolla-ansible -i multinode deploy
    openstack service delete dmapi
    openstack user delete dmapi
    mysql -u root -p
    DROP DATABASE dmapi;
    DROP USER dmapi;
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    workloadmgr workload-get-importworkloads-list [--project_id <project_id>]
    workloadmgr workload-importworkloads [--workloadids <workloadid>]
    workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
                                                     [--generate_yaml {True,False}]
    workloadmgr workload-reassign-workloads
                                            [--old_tenant_ids <old_tenant_id>]
                                            [--new_tenant_id <new_tenant_id>]
                                            [--workload_ids <workload_id>]
                                            [--user_id <user_id>]
                                            [--migrate_cloud {True,False}]
                                            [--map_file <map_file>]
    reassign_mappings:
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    workloadmgr filepath-search [--snapshotids <snapshotid>]
                                [--end_filter <end_filter>]
                                [--start_filter <start_filter>]
                                [--date_from <date_from>]
                                [--date_to <date_to>]
                                <vm_id> <file_path>
    #check current mount point
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #Stop triliovault_datamover
    [root@compute ~]# docker stop triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #Delete triliovault_datamover
    [root@compute ~]# docker rm triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #unmount mount point
    [root@compute ~]# umount /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #check mount point is unmounted successfully 
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   26M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    [root@compute ~]#
    
    #Delete mounted dir from compute node
    [root@compute trilio]# rm -rf /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    ##Check that all Containers are up and running
    #Controller node
    root@controller:~# docker ps -a | grep trilio
    583b8d42ab42        trilio/ubuntu-binary-trilio-datamover-api:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               openstack-nova-api-triliodata-plugin
    3be25d3819ac        trilio/ubuntu-binary-trilio-horizon-plugin:4.1.36-ussuri   "dumb-init --single-…"   4 days ago          Up 4 days                               horizon
    
    #Compute node
    root@compute:~# docker ps -a | grep trilio
    bf52face23fb        trilio/ubuntu-binary-trilio-datamover:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               trilio-datamover
    
    ## Verify the backup target has been changed successfully
    # In case of switch to NFS
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #In case of switch to S3
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   34M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts
    
    ##Reverify in the triliovault_datamover containers
    [root@compute ~]# docker exec -it triliovault_datamover bash
    (triliovault-datamover)[nova@compute /]$ df -h
    Filesystem           Size  Used Avail Use% Mounted on
    overlay              280G   12G  269G   5% /
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    /dev/mapper/cl-root  280G   12G  269G   5% /etc/iscsi
    tmpfs                6.3G     0  6.3G   0% /var/triliovault/tmpfs
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts

    Type
    Version

    s3fuse

    python package

    4.1.94.4

    tvault-configurator

    python package

    4.1.94.7

    workloadmgr

    python package

    4.1.94.9

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-5-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-8-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-8-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-8-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-7-ussuri

    Changelog

    Contains all changes from HF1, HF2, HF3, and HF4.

    Fixes bugs and issues

    Post restore Windows VM boots into recovery console

    An issue has been fixed which prevented VMs running Windows to boot properly after a restore in the case of Nova boot volumes being used.

    Workload list can not be retrieved after manual changes in the Trilio database

    An issue has been fixed which prevented the successful pull of data from the Trilio database in case of incorrectly applied manual changes.

    admin endpoint type not honored by the configurator

    An issue has been fixed which prevented the successful usage of the Openstack admin endpoint network as the standard communication network for T4O

    Workloads and Snapshots stuck in delete status

    An issue has been fixed which prevented resetting the status of Workloads and Snapshots in case they are stuck in the deletion state.

    Enhancements

    Workload scheduler stability

    The workload scheduler stability has been enhanced to prevent the start of multiple scheduled jobs at the same time or delayed from its expected time.

    Type
    Version

    s3fuse

    python package

    4.1.94.6

    tvault-configurator

    python package

    4.1.94.11

    workloadmgr

    python package

    4.1.94.18

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-10-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-10-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-10-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-10-rhosp16.1

    RHOSP16.2 containers

    4.1.94-hotfix-10-rhosp16.2

    Changelog

    Fixed Bugs and issues

    Rebase permission error

    An issue has been fixed which prevented the correct rebase of T4O incremental backups in the case that root didn't have the required permissions on the backup target.


    Changing the certificate of TVault-Config and Nginx for Grafana Service

    The TVault-Config service and the Nginx Resource for the Grafana Dashboard are using the same certificate.

    The certificate used is a symlink to a host-specific certificate. Each Trilio VM has its own self-signed certificate by default which is getting recreated every time the TVault-Config service is restarted.

    When the certificate for the TVault-Config and Nginx (Grafana) is to be changed to a customer chosen certificate it is required to deactivate the recreation of the certificates upon service restart.

    Trilio is planning to change this behavior to make it easier for customers to change the certificate in the future.

    1. Login into the Trilio VM via SSH

    2. Edit the following file: /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/tvault_config_bottle.py

    3. Look for create_ssl_certificates() in the main function

    4. Comment out create_ssl_certificates()

    5. Repeat for all nodes of the Trilio cluster

    The resulting main function will look like this:

    Afterward, the certificates can be replaced manually by overwriting the files.

    Once the certificates have been replaced by the desired ones restart the TVault-Config service and the Nginx pcs resource.

    Changing the certificate used by Nginx for wlm-api service

    The certificate provided by the Nginx for the wlm-api service is set during configuration when HTTPS endpoints are configured for the Trilio appliance. This certificate is provided to the end-user or Openstack every time an API call to the Trilio solution is sent.

    To change the certificate through the configurator make sure to create HTTPS endpoints and upload the certificate and key using the advanced options of the configurator.

    Setting HTTPS at the advanced options

    The certificates can be changed manually if necessary.

    They are located under /opt/stack/data/cert/

    These certificates can be replaced manually and the Nginx resource restarted afterward.


    The following charms exist:
    • trilio-wlm ➡️ Installs and manages Trilio Controller services.

    • trilio-dm-api ➡️Installs and manages the Trilio Datamover API service.

    • trilio-data-mover ➡️ Installs and manages the Trilio Datamover service.

    • trilio-horizon-plugin ➡️ Installs and manages the Trilio Horizon Plugin.

    The documentation of the charms can be found in the charm-guide on docs.openstack.org (TrilioVault data protection).

    The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.

    It is recommended to run each of these steps in dry-run first.

    When any other packages but Trilio packages are getting updated, stop the upgrade procedure and contact your Trilio customer success manager.

    Upgrading the Trilio packages

    Trilio is releasing hotfixes, which require updating the packages inside the containers. These hotfixes can not be installed using the Juju charms as they don't require an update to the charms.

    Generic Pre-requisites

    1. Either 4.1 GA OR any hotfix patch against 4.1 should be already deployed for performing upgrades mentioned in the current document.

    2. No snapshot OR restore to be running.

    3. Global job scheduler should be disabled.

    4. wlm-cron should be disabled ( Following commands are to be run on MAAS node)

      1. If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode ( this command will fail for single node deployment) juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=true"

      2. Stop wlm-cron service juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"

      3. Ensure that no stale wlm-cron processes remain: juju exec [-m <model>] --application trilio-wlm "sudo ps -ef | grep [w]orkloadmgr-cron". If any stale process is found, it needs to be killed manually.

    5. The mentioned gemfury repository should be accessible from trilio units.

    Upgrade package on trilio units

    The deployed Trilio version is controlled by the triliovault-pkg-source charm configuration option. For each trilio charm it should point to the gemfury repository shown below. This can be checked via the juju config [-m <model>] <application> triliovault-pkg-source command output.
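    For example, the option can be inspected per application as below (a minimal sketch; the application names follow the charm list in this guide, and each command should return the gemfury repository line shown further below):

    juju config [-m <model>] trilio-wlm triliovault-pkg-source
    juju config [-m <model>] trilio-data-mover triliovault-pkg-source
    juju config [-m <model>] trilio-dm-api triliovault-pkg-source
    juju config [-m <model>] trilio-horizon-plugin triliovault-pkg-source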

    The preferred, recommended, and tested method to update the packages is through the Juju command line.

    Run the below commands from the MAAS node

    On Ubuntu Bionic environments

    On other (not Ubuntu Bionic) environments

    Check the trilio units' status in the juju status [-m <model>] | grep trilio output. All the trilio units should show the new package version.

    Update the DB schema

    Run the below command to update the schema

    Check the schema head with the below command. It should point to the latest schema head.

    Restart the apache2 service

    Run below command to restart the apache2 service on horizon container

    Post-Upgrade steps

    1. If the trilio-wlm nodes are HA enabled:

      1. Make sure the wlm-cron services are down after the pkg upgrade. Run the following command for the same: juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"

      2. Unset the cluster maintenance mode: juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=false"

    2. Make sure the wlm-cron service is up and running on any one node: juju exec [-m <model>] --application trilio-wlm "sudo systemctl status wlm-cron"

    3. Set the Global Job Scheduler to the original state.

    Troubleshooting

    If any trilio unit gets into an error state with the message

    hook failed: "update-status"

    Follow below steps


    Trilio 4.1 HF2 Release Notes

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Contains all changes from HF1.

    Fixed bugs and issues

    Restore of Cinder boot volumes with multi-attach activated

    An issue has been fixed that prevented a successful restore in the case of restoring a Cinder boot volume with a volume type that has the multi-attach functionality activated.

    Backup of many instances or of instances with long names

    An issue has been fixed which prevented the successful finish of the backup process for workloads with many protected instances or instances with long names.

    Blank Ansible output for Trilio Appliance reconfiguration

    An issue has been fixed which led to no visible Ansible logs upon reconfiguring the Trilio appliance.

    misleading SMTP timeout error message upon sending a test email

    An issue has been fixed which led to the SMTP configuration always throwing the misleading error smtp_timeout cannot be greater than 10 upon sending a test email.

    Security Group restore for remote security groups with identical rules

    An issue has been fixed which led to Security Groups not being restored when a remote Security Group had the exact same Security Group Rule as another Security Group in the chain.

    Failed Security Group Restore now leads to failed restore

    An issue has been fixed which led to a restore apparently completing successfully despite an error during the restore of the Security Groups.

    Trilio 4.1 HF3 Release Notes

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Contains all changes from HF1 + HF2.

    This Hotfix extends the Support Matrix of T4O 4.1 as follows:

    • Canonical Openstack Victoria based on Focal (20.04) Support

    • Kolla Ansible Openstack Victoria on Ubuntu 20.04 and CentOS8

    • Openstack Ansible Victoria on Ubuntu 20.04 and CentOS8

    • TripleO train on CentOS7 and CentOS8

    The installation into these environments requires upgrading the Trilio Appliance from 4.1 GA to 4.1 HF3 or higher

    Shutdown/Restart the Trilio cluster

    To gracefully shutdown/restart the Trilio cluster the following steps are recommended.

    Verify no snapshots or restores are running

    It is recommended to verify that no snapshots or restores are running on the Trilio Cluster.

    Stopping or restarting the Trilio cluster will cancel all actively running backup or restore jobs. These jobs will be marked as errored after the system has come up again.

    This can be verified using the following two commands:
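    A minimal sketch of such a check using the workloadmgr CLI (the in-progress status strings filtered for are assumptions):

    # list snapshots and restores and look for any that are still in progress
    workloadmgr snapshot-list | grep -iE "executing|running"
    workloadmgr restore-list  | grep -iE "executing|running"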

    Identify the master node for the VIP(s) and wlm-cron service

    The Trilio cluster is using the pacemaker service for setting the VIP(s) of the cluster and controlling the active node for the wlm-cron service. The identified node will be the last to shut down in case that the whole cluster gets shut down.

    This can be checked using the following command:

    In the following example, the master node is tvm1.
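    A minimal sketch of this check, assuming the pacemaker CLI is available on the appliance nodes:

    # the node shown as running the wlm-cron resource and the VIP(s) is the master node
    pcs status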

    Shutdown/Restart of a single node in the cluster

    A single node in the cluster can be shut down or restarted without issues. All services will come up and the RabbitMQ and Galera services will rejoin the remaining cluster.

    When the master node gets shutdown or restarted the VIP(s) and the wlm-cron service will switch to one of the remaining cluster nodes.

    Stop the services on the node

    To speed up the shutdown/restart process it is recommended to stop the Trilio services, the RabbitMQ service, and the MariaDB service on the node.
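    A minimal sketch of stopping these services on the node (the wlm-* and tvault-object-store service names are assumptions about the appliance service naming; adjust to the services present on your appliance):

    # stop the Trilio services on this node (service names are assumptions)
    systemctl stop wlm-api wlm-scheduler wlm-workloads tvault-object-store
    # stop the clustered infrastructure services on this node
    systemctl stop rabbitmq-server
    systemctl stop mariadb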

    The wlm-cron service and the VIP(s) are not getting stopped when only the master node gets rebooted or shut down. The pacemaker will automatically move the wlm-cron service and the VIP(s) to one of the remaining nodes.

    Shutdown/Restart the node

    After the services have been stopped the node can be restarted or shut down using standard Linux commands.

    Restarting the complete cluster node by node

    Restarting the whole cluster node by node follows the same procedure as restarting a single node, with the difference that each restarted node needs to be fully started again before the next node can be restarted.

    Shutdown/Restart the complete cluster as a whole

    When the complete cluster needs to get stopped and restarted at the same time the following procedure needs to be completed.

    The procedure on a high level is:

    • Shutdown the two slave nodes

    • Shutdown the master node

    • Start the master node

    • Enable the Galera cluster

    • Start the slave nodes

    Shutdown the two slave nodes

    Before shutting down the two slave nodes it is recommended to stop running Trilio services, the RabbitMQ server, and the MariaDB on the nodes.

    Afterward, the nodes can be shut down.

    Shutdown the master node

    Before shutting down the master node it is recommended to stop the running Trilio services, the RabbitMQ server, MariaDB, and the wlm-cron and VIP(s) resources in Pacemaker.
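    In addition to the service stops shown above, the pacemaker-controlled resources can be disabled; a minimal sketch (the wlm-cron resource name is used elsewhere in this guide, the VIP resource name depends on your configuration):

    # disable the wlm-cron resource cluster-wide
    pcs resource disable wlm-cron
    # identify the VIP resource name(s) from the cluster status, then disable them
    pcs status
    pcs resource disable <vip_resource_name>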

    Afterward, the node can be shut down.

    Start the master node

    The first server that gets booted will become the master node. It is highly recommended to boot the old master node first again.

    Not booting the old master node first can lead to data loss when the Galera Cluster is restarted.

    Enable the Galera cluster

    Login into the freshly started master node and run the following command. This will restart the Galera cluster with this node as master.
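    A minimal sketch, assuming a systemd-managed MariaDB Galera cluster on the appliance:

    # bootstrap the Galera cluster from this node
    galera_new_cluster
    # verify the cluster afterwards; the size will grow as the other nodes rejoin
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"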

    Start the slave nodes

    After the master node has been booted and the Galera cluster started, the remaining nodes can be started and will automatically rejoin the Trilio cluster.

    Trilio 4.1 HF9 Release

    This hotfix contains only a package update for the Trilio appliance. There are no new containers available compared to earlier releases.

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    200% cpu usage or spike in CLOSE_WAIT connections with S3 backup target

    An issue has been fixed which led to high CPU resource usage in case of a fluctuating connection to the S3 backup target.

    Trilio 4.1 HF7 Release

    Release Versions

    Packages

    Name

    Uninstalling from RHOSP

    Clean Trilio Datamover API service

    The following steps need to be run on all nodes, which have the Trilio Datamover API service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamoverApi.

    Once the role that runs the Trilio Datamover API service has been identified, the following commands will clean the nodes from the service.

    Trilio 4.1 HF11 Release

    Release Versions

    Packages

    Name

    Offline upgrade Trilio Appliance

    The offline upgrade of the Trilio Appliance is only recommended for hotfix upgrades. For major upgrades in offline environments, it is recommended to download the latest qcow2 image and redeploy the appliance.

    Generic Pre-requisites

    Trilio 4.1 HF6 Release

    Release Versions

    Packages

    Name

    File Search

    Start File Search

    POST https://$(tvm_address):8780/v1/$(tenant_id)/search

    Starts a File Search with the given parameters
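    A hedged example of calling this endpoint with curl (the X-Auth-Token header is the standard Keystone token header; the JSON body field names mirror the CLI file search options and are assumptions, not the authoritative schema):

    # vm_id, filepath and snapshot_ids are assumed field names mirroring the CLI options
    curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/search" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -d '{"vm_id": "<vm_id>", "filepath": "/etc/*", "snapshot_ids": []}'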

    workloadmgr project-quota-type-list
    workloadmgr project-quota-type-show <quota_type_id>
    workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
                                             --allowed-value allowed_value 
                                             --high-watermark high_watermark 
                                             --project-id project_id
    workloadmgr project-allowed-quota-list <project_id>
    workloadmgr project-allowed-quota-show <allowed_quota_id>
    workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
                                             [--high-watermark <high_watermark>]
                                             [--project-id <project_id>]
                                             <allowed_quota_id>
    workloadmgr project-allowed-quota-delete <allowed_quota_id>
    workloadmgr disable-scheduler --workloadids <workloadid>
    workloadmgr enable-scheduler --workloadids <workloadid>
    workloadmgr scheduler-trust-validate <workload_id>
    [root@TVM1 ssl]# cd /etc/tvault/ssl/
    [root@TVM1 ssl]# ls -lisa server*
     577678 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.crt -> TVM1.crt
     577672 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.key -> TVM1.key
    1178820 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.pem -> TVM1.pem
    def main():
        # configure the networking
        #create_ssl_certificates()
    
        http_thread = Thread(target=main_http)
        http_thread.daemon = True  # thread dies with the program
        http_thread.start()
    
        bottle.debug(True)
        srv = SSLWSGIRefServer(host='::', port=443)
        bottle.run(server=srv, app=app, quiet=False, reloader=False)
    [root@TVM1 ~]# systemctl restart tvault-config
    [root@TVM1 ~]# pcs resource restart lb_nginx-clone
    lb_nginx-clone successfully restarted
    [root@TVM1 ~]# cd /opt/stack/data/cert/
    [root@TVM1 cert]# ls -lisa workloadmgr*
     577678 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 workloadmgr.crt
     577672 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 workloadmgr.key
    [root@TVM1 ~]# pcs resource restart lb_nginx-clone
    lb_nginx-clone successfully restarted
    deb [trusted=yes] https://apt.fury.io/triliodata-4-1/ /
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-horizon-plugin python-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-contego s3-fuse-plugin'
    
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient python3-s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-horizon-plugin python3-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-contego python3-s3-fuse-plugin'
    
    trilio-data-mover      <package version>  active      3  trilio-data-mover      jujucharms    8  ubuntu
    trilio-dm-api          <package version>  active      1  trilio-dm-api          jujucharms    5  ubuntu
    trilio-horizon-plugin  <package version>  active      1  trilio-horizon-plugin  jujucharms    4  ubuntu
    trilio-wlm             <package version>  active      3  trilio-wlm             jujucharms    7  ubuntu
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini upgrade heads"
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini current"
    juju exec [-m <model>] --application trilio-horizon-plugin "systemctl restart apache2"
    Login to trilio unit and run "sudo dpkg --configure -a"
    It will ask for user input, hit enter and log out from the unit.
    From the MAAS node run the command "juju resolve <trilio unit name>"

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.8

    python3-tvault-contego

    deb package

    4.1.94.8

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.4

    python3-s3-fuse-plugin

    deb package

    4.1.94.4

    workloadmgr

    deb package

    4.1.94.9

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.8-4.1

    python3-tvault-contego

    rpm package

    4.1.94.8-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.4-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.4-4.1

    workloadmgrclient

    rpm pacakage

    4.1.94

    Kolla Ansible Victoria containers

    4.1.94-hotfix-5-victoria

    TripleO Train container

    4.1.94-hotfix-5-tripleo

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.9

    python3-tvault-contego

    deb package

    4.1.94.9

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.6

    python3-s3-fuse-plugin

    deb package

    4.1.94.6

    workloadmgr

    deb package

    4.1.94.18

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.9-4.1

    python3-tvault-contego

    rpm package

    4.1.94.9-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.6-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.6-4.1

    workloadmgrclient

    rpm pacakage

    4.1.94

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-9-ussuri

    Kolla Ansible Victoria containers

    4.1.94-hotfix-7-victoria

    TripleO Train container

    4.1.94-hotfix-7-tripleo

    python3-dmapi

    deb package

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.4

    python3-tvault-contego

    deb package

    4.1.94.4

    tvault-horizon-plugin

    deb package

    4.1.94.3

    python3-tvault-horizon-plugin

    deb package

    4.1.94.3

    s3-fuse-plugin

    deb package

    4.1.94.3

    python3-s3-fuse-plugin

    deb package

    4.1.94.3

    workloadmgr

    deb package

    4.1.94.5

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.4-4.1

    python3-tvault-contego

    rpm package

    4.1.94.4-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.3-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.3-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.3-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.3-4.1

    workloadmgrclient

    rpm package

    4.1.94

    s3fuse

    python package

    4.1.94.3

    tvault-configurator

    python package

    4.1.94.5

    workloadmgr

    python package

    4.1.94.5

    workloadmgrclient

    python package

    4.1.94

    dmapi

    deb package

    Gitbranch

    hotfix-2-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-2-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-2-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-2-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-2-ussuri

    4.1.94.3

    python3-dmapi

    deb package

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.6

    python3-tvault-contego

    deb package

    4.1.94.6

    tvault-horizon-plugin

    deb package

    4.1.94.3

    python3-tvault-horizon-plugin

    deb package

    4.1.94.3

    s3-fuse-plugin

    deb package

    4.1.94.3

    python3-s3-fuse-plugin

    deb package

    4.1.94.3

    workloadmgr

    deb package

    4.1.94.5

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.6-4.1

    python3-tvault-contego

    rpm package

    4.1.94.6-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.3-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.3-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.3-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.3-4.1

    workloadmgrclient

    rpm package

    4.1.94

    4.1.94-hotfix2-victoria

    TripleO Train container

    4.1.94-hotfix-2-tripleo

    s3fuse

    python package

    4.1.94.3

    tvault-configurator

    python package

    4.1.94.6

    workloadmgr

    python package

    4.1.94.5

    workloadmgrclient

    python package

    4.1.94

    dmapi

    deb package

    Gitbranch

    hotfix-3-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-4-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-4-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-4-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-4-ussuri

    4.1.94.3

    Kolla Ansible Victoria containers

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.9

    python3-tvault-contego

    deb package

    4.1.94.9

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.5

    python3-s3-fuse-plugin

    deb package

    4.1.94.5

    workloadmgr

    deb package

    4.1.94.17

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.9-4.1

    python3-tvault-contego

    rpm package

    4.1.94.9-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.5-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.5-4.1

    workloadmgrclient

    rpm pacakage

    4.1.94

    4.1.94-hotfix-8-ussuri

    Kolla Ansible Victoria containers

    4.1.94-hotfix-6-victoria

    TripleO Train container

    4.1.94-hotfix-6-tripleo

    s3fuse

    python package

    4.1.94.5

    tvault-configurator

    python package

    4.1.94.11

    workloadmgr

    python package

    4.1.94.17

    Gitbranch

    hotfix-9-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-9-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-9-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-9-rhosp16.1

    RHOSP16.2 containers

    4.1.94-hotfix-9-rhosp16.2

    dmapi

    Kolla Ansible Ussuri containers



    Run all commands as root or user with sudo permissions.

    Stop trilio_dmapi container.

    Remove trilio_dmapi container.

    Clean Trilio Datamover API service conf directory.

    Clean Trilio Datamover API service log directory.
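    A minimal sketch of these steps (RHOSP16.x uses podman, RHOSP13 uses docker; the conf and log directory locations are deployment-specific placeholders):

    podman stop trilio_dmapi
    podman rm trilio_dmapi
    # remove the service conf and log directories (paths are placeholders)
    rm -rf <trilio_dmapi_conf_directory>
    rm -rf <trilio_dmapi_log_directory>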

    Clean Trilio Datamover Service

    The following steps need to be run on all nodes, which have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.

    Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes from the service.

    Run all commands as root or user with sudo permissions.

    Stop trilio_datamover container.

    Remove trilio_datamover container.

    Unmount Trilio Backup Target on compute host.

    Clean Trilio Datamover service conf directory.

    Clean log directory of Trilio Datamover service.
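    A minimal sketch of these steps (container engine and directory placeholders as noted above; the mount path follows the pattern shown earlier in this guide):

    podman stop trilio_datamover
    podman rm trilio_datamover
    # unmount the backup target on the compute host
    umount /var/trilio/triliovault-mounts/<mount_directory>
    # remove the service conf and log directories (paths are placeholders)
    rm -rf <trilio_datamover_conf_directory>
    rm -rf <trilio_datamover_log_directory>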

    Clean Trilio haproxy resources

    The following steps need to be run on all nodes, which have the haproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.

    Once the role that runs the haproxy service has been identified, the following commands will clean the nodes from all Trilio resources.

    Run all commands as root or user with sudo permissions.

    Edit the following file inside the haproxy container and remove all Trilio entries.

    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

    An example of these entries is given below.

    Restart the haproxy container once all edits have been done.

    Clean Trilio Keystone resources

    Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
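    A minimal sketch, mirroring the Keystone cleanup shown in the Kolla section of this guide:

    # source the overcloud credentials first
    source overcloudrc
    openstack service delete dmapi
    openstack user delete dmapi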

    Clean Trilio database resources

    Trilio creates a database for the dmapi service. This database needs to be cleaned.

    Login into the database cluster

    Run the following SQL statements to clean the database.
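    A minimal sketch, matching the dmapi database cleanup shown for the other distributions:

    mysql -u root -p
    DROP DATABASE dmapi;
    DROP USER dmapi;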

    Revert overcloud deploy command

    Remove the following entries from roles_data.yaml used in the overcloud deploy command.

    • OS::TripleO::Services::TrilioDatamoverApi

    • OS::TripleO::Services::TrilioDatamover

    In the case that the overcloud deploy command used prior to the deployment of Trilio is still available, it can directly be used.

    Follow these steps to clean the overcloud deploy command from all Trilio entries.

    1. Remove trilio_env.yaml entry

    2. Remove the trilio endpoint map file. Replace it with the original map file if existing.

    Revert back to original RHOSP Horizon container

    Run the cleaned overcloud deploy command.
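    As an illustration only, a cleaned deploy command no longer references any Trilio files; the exact environment files depend on your deployment and the placeholders below are assumptions.

    # Sketch of a cleaned overcloud deploy command (illustrative only)
    openstack overcloud deploy --templates \
      -r /home/stack/templates/roles_data.yaml \
      -e <your existing environment files>
    # Note: -e trilio_env.yaml and the trilio endpoint map file are no longer included,
    # and roles_data.yaml no longer contains the two Trilio service entries listed above.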

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrilioVault VM disk from KVM Host storage

    Please ensure to complete the upgrade of all the TVault components on the Openstack controller & compute nodes before starting the rolling upgrade of TVO.

  • The mentioned gemfury repository should be accessible from a VM/Server.

  • Please ensure the following points before starting the upgrade process:

    • No snapshot OR restore to be running.

    • Global job-scheduler should be disabled.

    • wlm-cron should be disabled & any lingering process should be killed. (This should already have been done during Trilio components upgrade on Openstack)

      • pcs resource disable wlm-cron

      • Check: systemctl status wlm-cron OR pcs resource show wlm-cron

      • Additional step: To ensure that cron is actually stopped, search for any lingering processes against wlm-cron and kill them. [Cmd : ps -ef | grep -i workloadmgr-cron]

  • Download and copy packages from VM/server to TVM node(s)

    VM/Server must have internet connectivity and connectivity to Trilio gemfury repo

    Download the required system packages

    Download latest pip package

    Download Trilio packages

    Download the latest available version of the below-mentioned packages. To know more about the latest releases, check out the latest release note under this section.

    Export the index URL

    Download s3fuse package

    Download tvault-configurator dependent package

    Download workloadmgr and dependent package

    Download workloadmgrclient package

    Download contegoclient package

    Download oslo.messaging package

    Copy the downloaded packages from VM/Server to TVM node(s)

    All downloaded packages need to be copied from VM/server to all the TVM nodes.

    • Copy all the downloaded packages(listed below) from the VM/server to all the TVM nodes

      1. pip

      2. s3fuse

      3. tvault-configurator

      4. workloadmgr

      5. workloadmgrclient

      6. contegoclient

    Upgrade packages on all T4O Node(s)

    If any of the packages are already on the latest version, the upgrade for them won’t happen. Make sure to run the below commands from the directory where all the downloaded packages are present.

    Please refer to the versions of the downloaded packages for the placeholder <HF_VERSION> in the below sections.

    Preparation

    Take a backup of the configuration files

    Activate the virtual environment

    Upgrade system packages

    Run the following command on all TVM nodes to upgrade the pip package

    Upgrade s3fuse/tvault-object-store

    Run the following command on all TVM nodes to upgrade s3fuse

    Upgrade tvault-configurator

    Run the following command on all TVM nodes to upgrade tvault-configurator

    Upgrade workloadmgr

    Run the upgrade command on all TVM nodes to upgrade workloadmgr

    Upgrade workloadmgrclient

    Run the upgrade command on all TVM nodes to upgrade workloadmgrclient

    Upgrade contegoclient

    Run the upgrade command on all TVM nodes to upgrade contegoclient

    Set oslo.messaging version

    Using the latest available oslo.messaging version can lead to stuck RPC and API calls.

    It is therefore required to fix the oslo.messaging version on the TVM.

    Post Upgrade Steps

    Verify that the upgrade completed successfully.

    Match the installed versions with the respective latest downloaded versions.

    Restore the backed-up configuration files

    Restart following services on all node(s) using respective commands

    tvault-object-store restart required only if Trilio is configured with S3 backend storage

    Enable wlm-cron service on primary node through pcs cmd, if T4O is configured with Openstack

    Enable Global Job Scheduler

    Verify the status of the services, if T4O is configured with Openstack.

    tvault-object-store will run only if TVault is configured with S3 backend storage

    Additional check for wlm-cron on the primary node, if T4O is configured with Openstack.

    Check the mount point using the “df -h” command if T4O is configured with Openstack

    workloadmgr snapshot-list --all=True
    workloadmgr restore-list
    pcs status
    pcs status
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: tvm3 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Thu Aug 26 12:10:32 2021
    Last change: Thu Aug 26 08:02:51 2021 by root via crm_resource on tvm1
    
    3 nodes configured
    8 resource instances configured
    
    Online: [ tvm1 tvm2 tvm3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started tvm1
     wlm-cron       (systemd:wlm-cron):     Started tvm1
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ tvm1 ]
         Stopped: [ tvm2 tvm3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    reboot
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    pcs resource disable wlm-cron
    pcs resource disable lb_nginx-clone
    shutdown
    galera_new_cluster
    # For RHOSP13
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    docker stop trilio_dmapi
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    podman stop trilio_dmapi
    # For RHOSP13
    docker rm trilio_dmapi
    docker rm trilio_datamover_api_init_log
    docker rm trilio_datamover_api_db_sync
    
    # For RHOSP16 onwards
    podman rm trilio_dmapi
    podman rm trilio_datamover_api_init_log
    podman rm trilio_datamover_api_db_sync
    rm -rf /var/lib/config-data/puppet-generated/triliodmapi
    rm /var/lib/config-data/puppet-generated/triliodmapi.md5sum
    rm -rf /var/log/containers/trilio-datamover-api/
    # For RHOSP13
    docker stop trilio_datamover
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_datamover.service
    systemctl stop tripleo_trilio_datamover.service
    podman stop trilio_datamover
    # For RHOSP13
    docker rm trilio_datamover
    
    # For RHOSP16 onwards
    podman rm trilio_datamover
    ## Following steps applicable for all supported RHOSP releases.
    
    # Check triliovault backup target mount point
    mount | grep trilio
    
    # Unmount it
    -- If it's NFS	(COPY UUID_DIR from your compute host using above command)
    umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
    
    -- If it's S3
    umount /var/lib/nova/triliovault-mounts
    
    # Verify that it's unmounted		
    mount | grep trilio
    	
    df -h  | grep trilio
    
    # Remove mount point directory after verifying that backup target unmounted successfully.
    # Otherwise actual data from backup target may get cleaned.	
    
    rm -rf /var/lib/nova/triliovault-mounts
    rm -rf /var/lib/config-data/puppet-generated/triliodm/
    rm /var/lib/config-data/puppet-generated/triliodm.md5sum
    rm -rf /var/log/containers/trilio-datamover/
    listen trilio_datamover_api
      bind 172.25.3.60:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.3.60:8784 transparent
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      option httpchk
      option httplog
      server overcloud-controller-0.internalapi.localdomain 172.25.3.59:8784 check fall 5 inter 2000 rise 2
    # For RHOSP13
    docker restart haproxy-bundle-docker-0
    
    # For RHOSP16 onwards
    podman restart haproxy-bundle-podman-0
    openstack service delete dmapi
    openstack user delete dmapi
    ## On RHOSP13, run following command on node where database service runs
    docker exec -ti -u root galera-bundle-docker-0 mysql -u root
    
    ## On RHOSP16
    podman exec -it galera-bundle-podman-0 mysql -u root
    ## Clean database
    DROP DATABASE dmapi;
    
    ## Clean dmapi user
    => List 'dmapi' user accounts
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    +-------+-------------+
    | user  | host        |
    +-------+-------------+
    | dmapi | 172.25.2.10 |
    | dmapi | 172.25.2.8  |
    +-------+-------------+
    2 rows in set (0.00 sec)
    
    => Delete those user accounts
    MariaDB [mysql]> DROP USER [email protected];
    Query OK, 0 rows affected (0.82 sec)
    
    MariaDB [mysql]> DROP USER [email protected];
    Query OK, 0 rows affected (0.05 sec)
    
    => Verify that dmapi user got cleaned
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    Empty set (0.00 sec)
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    pip3 install --upgrade pip
    pip3 download pip
    export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/
    pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --no-cache-dir --no-deps
    #First install dependent package configobj
    pip3 install configobj
    pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --no-cache-dir --no-deps
    #First install dependent package jinja2
    pip3 install jinja2 
    pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --no-cache-dir --no-deps
    pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --no-cache-dir --no-deps
    pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --no-cache-dir --no-deps
    pip3 download oslo.messaging==12.1.6 --no-deps
    tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/conf/users.json 
    cp tvault_backup.tar.gz /root/
    source /home/stack/myansible/bin/activate
    pip3 install --upgrade pip-<downloaded-version>-py3-none-any.whl --no-deps
    systemctl stop tvault-object-store
    pip3 install --upgrade s3fuse-<HF_VERSION>.tar.gz --no-deps
    rm -rf /var/triliovault/*
    pip3 install --upgrade tvault_configurator-<HF_VERSION>.tar.gz --no-deps
    pip3 install --upgrade workloadmgr-<HF_VERSION>.tar.gz --no-deps
    pip3 install --upgrade workloadmgrclient-<HF_VERSION>.tar.gz --no-deps
    pip3 install --upgrade contegoclient-<HF_VERSION>.tar.gz --no-deps
    pip3 install ./oslo.messaging-12.1.6-py3-none-any.whl  --no-deps
    source /home/stack/myansible/bin/activate 
    pip3 list | grep <package_name> 
    cd /root 
    tar -xzvf tvault_backup.tar.gz -C /
    systemctl restart tvault-object-store 
    systemctl restart wlm-api 
    systemctl restart wlm-scheduler 
    systemctl restart wlm-workloads 
    systemctl restart tvault-config 
    pcs resource enable wlm-cron 
    
    ## run below command to check status of wlm-cron 
    pcs status
    ## On All nodes 
    systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded' 
    ## On primary node 
    pcs status 
    systemctl status wlm-cron
    ps -ef | grep workloadmgr-cron | grep -v grep 
    
    ## Above command should show only 2 processes running; sample below 
    
    [root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep 
    nova 8841 1 2 Jul28 ? 00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf 
    nova 8898 8841 0 Jul28 ? 00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    Type
    Version

    s3fuse

    python package

    4.1.94.4

    tvault-configurator

    python package

    4.1.94.7

    workloadmgr

    python package

    4.1.94.14

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-7-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-8-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-8-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-8-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-7-ussuri

    Changelog

    Contains all changes from previous hotfixes.

    Fixed bugs and issues

    Temporary Volumes left behind under certain circumstances

    An issue has been fixed which left the Trilio temporary Cinder Volumes behind when the upload was timing out or when the Trilio cluster got restarted.

    Setting invalid date through workload edit

    An issue has been fixed which allowed setting invalid dates through the workload edit command.

    Parallel workload performing significantly worse

    A race condition has been fixed which led to an exponential growth of required time with an increased amount of protected VMs.

    Restore of ports with port-security disabled

    An issue has been fixed which prevented a successful restore of Neutron ports that have the port-security functionality disabled.

    Type
    Version

    s3fuse

    python package

    4.1.94.7

    tvault-configurator

    python package

    4.1.94.11

    workloadmgr

    python package

    4.1.94.20

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-11-TVO/4.1

    RHOSP16.0 containers

    4.1.94-hotfix-13-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-13-rhosp16.1

    RHOSP16.2 containers

    4.1.94-hotfix-13-rhosp16.2

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-11-ussuri

    Changelog

    Fixed Bugs and issues

    If a Trilio-created Cinder snapshot is not in the available state, all subsequent backups fail.

    An issue has been fixed which led to backups ending unsuccessfully when Trilio wasn't able to delete older, no longer required Cinder snapshots.

    Multipath rescan commands not executing in a timely manner

    An issue has been fixed which led to failed detach and deletion of temporary volumes due to the rescan command not executing fast enough.

    Improved stability of the S3fuse plugin

    Several issues have been fixed which led to an unstable connection of the S3fuse plugin, leading to backups and restore failing during the data transfer phase.

    Enhanced support for Latin characters

    An issue has been fixed where the usage of Latin characters in the restore name or description led to the restore being unsuccessful.

    Type
    Version

    s3fuse

    python package

    4.1.94.4

    tvault-configurator

    python package

    4.1.94.7

    workloadmgr

    python package

    4.1.94.11

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-7-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-8-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-8-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-8-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-7-ussuri

    Changelog

    Contains all changes from previous hotfixes.

    Fixes bugs and issues

    Restore of tenant-shared security groups failed

    An issue has been fixed which prevented a successful restore when a security group was referring to a shared security group from a different tenant.

    Email alerts not working as intended

    Fixed an issue where email alerts were not sent in the case of a passwordless SMTP server.

    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to run the search in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:23:25 GMT
    Content-Type: application/json
    Content-Length: 244
    Connection: keep-alive
    X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.698534",
          "updated_at":null,
          "id":14,
          "deleted_at":null,
          "status":"executing",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":null,
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
    

    Body format

    Get File Search Results

    POST https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>

    Returns the status and results of the given file search

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to run the search in

    search_id

    string

    ID of the File Search to get

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
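    A request against this endpoint might look as follows. This is an illustrative sketch only: the token, project, and IDs are placeholders, and the HTTP method and headers follow the reference above.

    curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/search/$search_id" \
      -H "X-Auth-Project-Id: $project_name" \
      -H "X-Auth-Token: $token" \
      -H "Accept: application/json" \
      -H "User-Agent: python-workloadmgrclient"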

    Trilio 4.1 HF12 Release

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Fixed Bugs and issues

    • Datamover container restarting

      The /var/trilio/triliovault-mounts directory had incorrect ownership. This has been fixed through the deployment code.

    • Once the original project is deleted, the workload cannot be reassigned to a different UserID/ProjectID.

      The condition that checked whether the older tenant IDs are present in the newer tenant list has been removed.

    Trilio 4.1 HF4 Release

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Contains all changes from HF1, HF2 and HF3.

    Fixes bugs and issues

    Filesearch in ext2 filesystem

    A bug has been fixed which prevented the correct presentation of files on an ext2 filesystem when running a filesearch.

    Filesearch on CentOS8 and RHEL8 instances

    An issue has been fixed which prevented a correct filesearch on CentOS8 and RHEL8 backups with the xfs filesystem.

    Selective Restore without original Glance image being available

    An issue has been fixed which prevented a successful selective restore in case of the original Glance image not being available anymore.

    Creation of a new glance image during restore

    An issue has been fixed which prevented the successful creation of a Glance image during a restore.

    RHOSP13 installation mountpath ownership

    An issue has been fixed which set the wrong ownership to the Trilio mountpoint.

    Enhancements

    Configurable maximum S3 connection

    A new configuration parameter vault_s3_max_pool_connections has been added to adjust the number of pool connections. The default value is 500.

    This parameter can be set in the workloadmgr.conf on the Trilio appliance.

    In the case of a Canonical installation this parameter is to be set in the tvault-object-store.conf in the workloadmgr container.
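    For illustration, the parameter might be set as follows in workloadmgr.conf on the Trilio appliance. This is a minimal sketch; the section placement ([DEFAULT]) is an assumption and should match your existing configuration layout.

    # /etc/workloadmgr/workloadmgr.conf -- illustrative snippet only
    [DEFAULT]
    # maximum number of S3 pool connections (default 500)
    vault_s3_max_pool_connections = 500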

    Default value for s3 read timeout has been decreased

    The default value of the S3 read timeout configuration parameter has been reduced from 120 to 30.

    This parameter can be set in the workloadmgr.conf on the Trilio appliance.

    In the case of a Canonical installation this parameter is to be set in the tvault-object-store.conf in the workloadmgr container.

    NFS mount process of datamover enhanced

    A timeout with automatic process kill has been added to prevent high CPU usage from stale NFS mount operations in case of mounting errors.

    Snapshot mount on Ubuntu 20.04

    The documentation has been extended to support the usage of Ubuntu 20.04 cloud images for the Snapshot mount functionality.

    Trilio 4.1 HF8 Release

    Release Versions

    Packages

    Name
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:24:28 GMT
    Content-Type: application/json
    Content-Length: 819
    Connection: keep-alive
    X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.000000",
          "updated_at":"2020-11-09T13:23:48.000000",
          "id":14,
          "deleted_at":null,
          "status":"completed",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":"[
                          {
                             "ed4f29e8-7544-4e1c-af8a-a76031211926":[
                                {
                                   "/dev/vda1":[
                                      "/etc/hostname",
                                      "/etc/hosts"
                                   ],
                                   "/etc/hostname":{
                                      "dev":"2049",
                                      "ino":"32",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"1",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455255",
                                      "mtime":"1603455255",
                                      "ctime":"1603455255"
                                   },
                                   "/etc/hosts":{
                                      "dev":"2049",
                                      "ino":"127",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"37",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455257",
                                      "mtime":"1431011050",
                                      "ctime":"1431017172"
                                   }
                                }
                             ]
                          }
                      ]",
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
       }
    }
    {
       "file_search":{
          "start":<Integer>,
          "end":<Integer>,
          "filepath":"<Reg-Ex String>",
          "date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "snapshot_ids":[
             "<Snapshot-ID>"
          ],
          "vm_id":"<VM-ID>"
       }
    }
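    As an illustrative sketch only, a file search request using this body format could be sent as shown below. The search-start URL is an assumption derived from the result endpoint documented above, and all values are placeholders taken from the sample output.

    curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/search" \
      -H "X-Auth-Project-Id: $project_name" \
      -H "X-Auth-Token: $token" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d '{
            "file_search": {
               "filepath": "/etc/h*",
               "vm_id": "08dab61c-6efd-44d3-a9ed-8e789d338c1b"
            }
          }'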

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.8

    python3-tvault-contego

    deb package

    4.1.94.8

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.4

    python3-s3-fuse-plugin

    deb package

    4.1.94.4

    workloadmgr

    deb package

    4.1.94.14

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.8-4.1

    python3-tvault-contego

    rpm package

    4.1.94.8-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.4-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.4-4.1

    workloadmgrclient

    rpm package

    4.1.94

    Kolla Ansible Victoria containers

    4.1.94-hotfix-5-victoria

    TripleO Train container

    4.1.94-hotfix-5-tripleo

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.10

    python3-tvault-contego

    deb package

    4.1.94.10

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.7

    python3-s3-fuse-plugin

    deb package

    4.1.94.7

    workloadmgr

    deb package

    4.1.94.20

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.10-4.1

    python3-tvault-contego

    rpm package

    4.1.94.10-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.7-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.7-4.1

    workloadmgrclient

    rpm package

    4.1.94

    Kolla Ansible Victoria containers

    4.1.94-hotfix-9-victoria

    TripleO Train container

    4.1.94-hotfix-9-tripleo

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.8

    python3-tvault-contego

    deb package

    4.1.94.8

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.4

    python3-s3-fuse-plugin

    deb package

    4.1.94.4

    workloadmgr

    deb package

    4.1.94.11

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.8-4.1

    python3-tvault-contego

    rpm package

    4.1.94.8-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.4-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.4-4.1

    workloadmgrclient

    rpm package

    4.1.94

    Kolla Ansible Victoria containers

    4.1.94-hotfix-5-victoria

    TripleO Train container

    4.1.94-hotfix-5-tripleo


    User-Agent

    string

    python-workloadmgrclient

    tvault-contego

    deb package

    4.1.94.10

    python3-tvault-contego

    deb package

    4.1.94.10

    tvault-horizon-plugin

    deb package

    4.1.94.6-4.1

    python3-tvault-horizon-plugin

    deb package

    4.1.94.6

    s3-fuse-plugin

    deb package

    4.1.94.7

    python3-s3-fuse-plugin

    deb package

    4.1.94.7

    workloadmgr

    deb package

    4.1.94.21

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.10-4.1

    python3-tvault-contego

    rpm package

    4.1.94.10-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.6-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.6-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.7-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.7-4.1

    workloadmgrclient

    rpm package

    4.1.94

    RHOSP16.1 Containers

    Containers

    4.1.94-hotfix-15-rhosp16.1

    RHOSP16.2 Containers

    Containers

    4.1.94-hotfix-15-rhosp16.2

    RHOSP13 Containers

    Containers

    4.1.94-hotfix-15-rhosp13

    Kolla Containers

    Containers

    4.1.94-hotfix-12-ussuri

    4.1.94-hotfix-11-victoria

    TripleO Containers

    Containers

    4.1.94-hotfix-11-tripleo

    4.1.94-hotfix-9-tripleo

    4.1-RHOSP13-CONTAINER

    4.1.94-hotfix-15-rhosp13

    4.1-KOLLA-CONTAINER

    4.1.94-hotfix-12-ussuri

    4.1.94-hotfix-11-victoria

    4.1-TRIPLEO-CONTAINER

    4.1.94-hotfix-11-tripleo

    Trilio core functionality operations do not perform as expected when the master T4O node is powered off

    This was caused by a caching bug where the in-memory dictionary was not in sync with the service table in MySQL.

  • tvault-config service is in crashloop on 2 out of 3 nodes in T4O cluster

  • workload policy shows an incorrect start time

  • default_tvault_dashboard_tvo-tvm not available after yum update

  • Reassign of the workload from deleted project fails

  • s3fuse

    python package

    4.1.94.7

    tvault-configurator

    python package

    4.1.94.15

    workloadmgr

    python package

    4.1.94.22

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Gitbranch

    hotfix-12-TVO/4.1

    RHOSP16.1 containers

    4.1.94-hotfix-15-rhosp16.1

    RHOSP16.2 containers

    4.1.94-hotfix-15-rhosp16.2

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-11-ussuri

    Kolla Ansible Victoria containers

    4.1.94-hotfix-9-victoria

    4.1.94.3

    TripleO Train container

    python3-dmapi

    deb package

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.7

    python3-tvault-contego

    deb package

    4.1.94.7

    tvault-horizon-plugin

    deb package

    4.1.94.3

    python3-tvault-horizon-plugin

    deb package

    4.1.94.3

    s3-fuse-plugin

    deb package

    4.1.94.4

    python3-s3-fuse-plugin

    deb package

    4.1.94.4

    workloadmgr

    deb package

    4.1.94.8

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.7-4.1

    python3-tvault-contego

    rpm package

    4.1.94.7-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.3-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.3-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.4-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.4-4.1

    workloadmgrclient

    rpm package

    4.1.94

    4.1.94-hotfix-3-victoria

    TripleO Train container

    4.1.94-hotfix-3-tripleo

    s3fuse

    python package

    4.1.94.4

    tvault-configurator

    python package

    4.1.94.6

    workloadmgr

    python package

    4.1.94.8

    workloadmgrclient

    python package

    4.1.94

    dmapi

    deb package

    Gitbranch

    hotfix-4-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-5-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-5-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-5-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-5-ussuri

    4.1.94.3

    Kolla Ansible Victoria containers

    Type
    Version

    s3fuse

    python package

    4.1.94.4

    tvault-configurator

    python package

    4.1.94.11

    workloadmgr

    python package

    4.1.94.17

    dmapi

    deb package

    4.1.94.3

    python3-dmapi

    deb package

    Containers and Gitbranch

    Name
    Tag

    Gitbranch

    hotfix-8-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix-9-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-9-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-9-rhosp16.1

    RHOSP16.2 containers

    4.1.94-hotfix-9-rhosp16.2

    Qualifications

    The following OpenStack distributions and versions have been added to the Trilio support matrix.

    • Red Hat OpenStack 16.2

    Changelog

    Enhancements

    Support for Ceph NFS-Ganesha

    It is now possible to utilize the NFS-Ganesha gateway as backup target.

    Restore continues even when Security Group restore fails

    Failures while restoring Security Groups no longer lead to a complete failure of the restore. Failed Security Groups are logged to provide the required information for the next steps.

    Future versions and hotfixes will continue to improve the Security Group restore process and reduce the number of reasons why Security Groups can't be restored.

    Fixed bugs and issues

    AttributeError: 'unicode' object has no attribute 'get'

    An issue with multipathing environments has been fixed which led to failed backups with the AttributeError: 'unicode' object has no attribute 'get' error message.

    Restricting the IP address of the Trilio GUI led to logs not being downloadable

    The documentation about restricting IP access to the Trilio GUI has been updated to include the required port 3001 to enable the download of logs through the dashboard_ip.

    Multipath.conf file not present in Datamover container

    An issue has been fixed which prevented the correct placement of the multipath.conf file inside the Trilio Datamover container.

    Restart Trilio Services

    In complex environments it is sometimes necessary to restart a single service or the complete solution. Restarting the complete node a service is running on is rarely possible or the ideal solution.

    This page describes the services running by Trilio and how to restart those.

    Trilio Appliance Services

    The Trilio Appliance is the controller of Trilio. Most services on the Appliance are running in a High Availability mode on a 3-node cluster.

    wlm-api

    The wlm-api service takes the API calls against the Trilio Appliance. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-api service run on each Trilio node:

    wlm-scheduler

    The wlm-scheduler service is taking job requests and identifies which Trilio node should take the request. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-scheduler service run on each Trilio node:

    wlm-workloads

    The wlm-workloads service is the task worker of Trilio executing all jobs given to the Trilio node. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-workloads service run on each Trilio node:

    wlm-cron

    The wlm-cron service is responsible for starting scheduled Backups according to the configuration of Tenant Workloads. It is running in active-passive mode and controlled by the pacemaker cluster.

    To restart the wlm-cron service run on the Trilio node with the VIP assigned:

    VIP resources

    The Trilio appliance is running 1 to 4 virtual IPs on the Trilio cluster. These are controlled by the pacemaker cluster and provided through NGINX.

    To restart these resources, the pacemaker NGINX resource gets restarted:

    RabbitMQ

    The Trilio cluster is using RabbitMQ as messaging service. It is running in active-active mode on all nodes of the Trilio cluster.

    RabbitMQ is a complex system in itself. This guide will only provide the basic commands to restart a node and check the health of the cluster afterward. For complete documentation on how to restart RabbitMQ, please follow the official RabbitMQ documentation.

    To restart a RabbitMQ node run on each Trilio node:

    It is recommended to wait for the node to rejoin and sync with the cluster before restarting another RabbitMQ node.

    When the complete cluster is getting stopped and restarted it is important to keep the order of nodes in mind. The last node to be stopped needs to be the first node to be started.

    Galera Cluster (MariaDB)

    The Galera Cluster is managing the Trilio MariaDB database. It is running in active-active mode on all nodes of the Trilio cluster.

    Galera Cluster is a complex system in itself. This guide will only provide the basic commands to restart a node and check the health of the cluster afterward. For complete documentation on how to restart Galera clusters, please follow the official Galera documentation.

    When restarting Galera two different use-cases need to be considered:

    • Restarting a single node

    • Restarting the whole cluster

    Restarting a single node

    A single node can be restarted without any issues. It will automatically rejoin the cluster and sync against the remaining nodes.

    The following commands will gracefully stop and restart the mysqld service.

    After a restart the cluster will start the syncing process. Do not simply restart node after node to achieve a complete cluster restart; follow the dedicated procedure below instead.

    Check the cluster health after the restart.

    Restarting the complete cluster

    Restarting a complete cluster requires some additional steps, as the Galera cluster is basically destroyed once all nodes have been shut down. It needs to be rebuilt afterwards.

    First gracefully shutdown the Galera cluster on all nodes:

    The second step is to identify the Galera node with the latest dataset. This can be achieved by reading the grastate.dat file on the Trilio nodes.

    When this documentation is followed the last mysqld service that got shut down will be the one with the latest dataset.

    The value to check for is the seqno.

    The node with the highest seqno is the node that contains the latest data. This node will also contain safe_to_bootstrap: 1 to indicate that the Galera cluster can be rebuild from this node.

    On the identified node the new cluster is getting generated with the following command:

    Running galera_new_cluster on the wrong node will lead to data loss as this command will set the node the command is issued on as the first node of the cluster. All nodes which join afterward will sync against the data of this first node.

    After the command has been issued, the mysqld service is running on this node. Now the other nodes can be restarted one by one. The started nodes will automatically rejoin the cluster and sync against the master node. Once a synced status has been reached, each node is a primary node in the cluster.

    Check the Cluster health after all services are up again.

    Verify Health of the Galera Cluster

    Verify the cluster health by running the following commands inside each Trilio MariaDB. The values returned from these statements have to be the same for each node.

    Canonical workloadmgr container services

    Canonical Openstack is not using the Trilio Appliance. In Canonical environments the Trilio controller unit is part of the JuJu deployment as the workloadmgr container.

    To restart the services inside this container the following commands are to be issued.

    Single Node deployment

    HA deployment

    On all nodes:

    On a single node:

    Trilio dmapi service

    The Trilio dmapi service is running on the Openstack controller nodes. Depending on the Openstack distribution Trilio is installed on, different commands are issued to restart the dmapi service.

    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    RHOSP16

    RHOSP16 is running the Trilio services as podman containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    Canonical

    Canonical is running the Trilio services in JuJu controlled LXD containers. The dmapi service can be restarted by issuing the following command from the MAAS node.

    Kolla-Ansible Openstack

    Kolla-Ansible Openstack is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    Ansible Openstack

    Ansible Openstack is running the Trilio services as LXD containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    Trilio datamover service (tvault-contego)

    The Trilio datamover service is running on the Openstack compute nodes. Depending on the Openstack distribution Trilio is installed on, different commands are issued to restart the datamover service.

    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.

    RHOSP16

    RHOSP16 is running the Trilio services as podman containers. The datamover service can be restarted by issuing the following command on the compute node.

    Canonical

    Canonical is running the Trilio services in JuJu controlled LXD containers. The datamover service can be restarted by issuing the following command from the MAAS node.

    Kolla-Ansible Openstack

    Kolla-Ansible Openstack is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.

    Ansible Openstack

    Ansible Openstack is running the Trilio datamover service directly on the compute node. The datamover service can be restarted by issuing the following command on the compute node.

    Installing on TripleO Train

    1. Prepare for deployment

    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    Installing on Ansible Openstack Ussuri

    Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.

    Change the nova user id on the Trilio Nodes

    Trilio is by default using the nova user id and group id 997:998. Ansible Openstack does not always use these IDs; the 'nova' user id on the nova-compute containers is typically 162. The 'nova' user id on the Trilio nodes needs to be set to the same value as in the nova-compute containers. Do the following steps on all Trilio nodes in case the nova id is not 162:162:

    Snapshots

    Definition

    A Snapshot is a single Trilio backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.

    List of Snapshots

    Snapshot Mount

    Definition

    Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.9

    python3-tvault-contego

    deb package

    4.1.94.9

    tvault-horizon-plugin

    deb package

    4.1.94.4

    python3-tvault-horizon-plugin

    deb package

    4.1.94.4

    s3-fuse-plugin

    deb package

    4.1.94.4

    python3-s3-fuse-plugin

    deb package

    4.1.94.4

    workloadmgr

    deb package

    4.1.94.17

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.9-4.1

    python3-tvault-contego

    rpm package

    4.1.94.9-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.4-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.4-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.4-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.4-4.1

    workloadmgrclient

    rpm package

    4.1.94

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-8-ussuri

    Kolla Ansible Victoria containers

    4.1.94-hotfix-6-victoria

    TripleO Train container

    4.1.94-hotfix-6-tripleo

    Following backup target types are supported by Trilio

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key - Secret Key - Region - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name

    1.2] Clone triliovault-cfg-scripts repository

    The following steps are to be done on 'undercloud' node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.

    All commands need to be run as user 'stack' on undercloud node

    The following command clones the triliovault-cfg-scripts github repository.

    Please note that the Trilio Appliance needs to get updated to hf3 as well.

    1.3] If backup target type is 'Ceph based S3' with SSL:

    If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'.

    2] Upload Trilio puppet module

    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

    In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    All commands need to be run as user 'stack'

    3.1] Add Trilio Datamover Api Service to role data file

    This service needs to share the same role as the keystone and database service. In case of the pre-defined roles, these services run on the role Controller. In case of custom defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.

    Add the following line to the identified role:

    3.2] Add Trilio Datamover Service to role data file

    This service needs to share the same role as the nova-compute service. In case of the pre-defined roles, the nova-compute service runs on the role Compute. In case of custom defined roles, it is necessary to use the role the nova-compute service is using.

    Add the following line to the identified role:

    4] Prepare Trilio container images

    All commands need to be run as user 'stack'

    Refer to the below-mentioned value of the respective placeholder in this document. HOTFIX-TAG-VERSION : 4.1.94-hotfix-12-tripleo

    Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.

    CentOS7

    CentOS8

    There are two registry methods available in TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

    For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Dockerhub registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments

    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    Run the following script. Script pulls the triliovault containers and updates the triliovault environment file with URLs.

    Acceptable values for the below two parameters:

    OS_platform: [centos7, centos8]

    container_tool_available_on_undercloud: [docker, podman]

    The changes can be verified using the following commands.

    5] Fill in triliovault environment details

    Fill triliovault details in file - '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml', triliovault environment file is self explanatory. Fill details of backup target, verify image urls and other details.

    6] Install Trilio on Overcloud

    Use the following heat environment file and roles data file in overcloud deploy command

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains overcloud roles data with Trilio roles added.

    3. Use the correct trilio endpoint map file as per your keystone endpoint configuration.

      • Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'

      • Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'

      • Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'

    Deploy command with triliovault environment file looks like following.
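    An illustrative sketch is shown below; your actual command will include your own base environment files, and the roles_data.yaml path is an assumption.

    openstack overcloud deploy --templates \
      <your existing environment files> \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -r /home/stack/templates/roles_data.yaml
    # Use the trilio endpoint map file matching your keystone endpoint configuration (see point 3 above).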

    7] Verify deployment

    If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.

    7.1] On Controller node

    Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.

    Verify the haproxy configuration under:

    7.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    7.3] On the node with Horizon service

    Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.

    8] Deploy T4O-4.1 GA Appliance

    Please follow this deployment guide to spin up the base Trilio 4.1GA appliance.

    9] Upgrade to the latest 4.1 HF on the appliance

    Trilio supports TripleO Train from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 Hotfix.

    10] Configure the Trilio Appliance

    Please follow this guide to configure the upgraded Trilio 4.1 appliance.

    1. Download the shell script that will change the user-id

    2. Assign executable permissions

    3. Edit script to use the correct nova id

    4. Execute the script

    5. Verify that 'nova' user and group id has changed to the desired value
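    For orientation only, the manual equivalent of these steps could look roughly like the sketch below. The shipped shell script remains the supported method; the chown scope and the exact target id (162) are assumptions based on the defaults described above.

    # Run on each Trilio node; illustrative sketch only
    usermod -u 162 nova                              # change the 'nova' user id from the default 997 to 162
    groupmod -g 162 nova                             # change the 'nova' group id from the default 998 to 162
    find / -xdev -uid 997 -exec chown -h 162 {} \;   # re-own files still owned by the old uid
    find / -xdev -gid 998 -exec chgrp -h 162 {} \;   # re-own files still owned by the old gid
    id nova                                          # verify: uid=162(nova) gid=162(nova)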

    Prepare deployment host

    Clone triliovault-cfg-scripts from github repository on Ansible Host.

    Available values for <branch>:

    Openstack Version
    Branch

    Ussuri

    hotfix-13-TVO/4.1

    Victoria

    hotfix-13-TVO/4.1
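    For example, cloning the branch listed above on the Ansible host could look like this (repository URL as used elsewhere in this guide):

    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git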

    Copy Ansible roles and vars to required places.

    In case of installing on OSA Victoria, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to Victoria

    Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.

    Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml

    Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml

    Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.

    Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml

    Append the required details like Trilio Appliance IP address, Trilio package version, Openstack distribution, snapshot storage backend, SSL related information, etc.

    The possible package versions are:

    GA Trilio 4.1: 4.1.94

    Deploy Trilio components

    Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.

    If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:

    Verify the Trilio deployment

    Verify the triliovault datamover api service deployed and started well. Run the below commands on the controller node(s).

    Verify the triliovault datamover service deployed and started well on the compute node(s). Run the following command on the compute node(s).

    Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.

    Run the following command on Horizon container.

    Verify the haproxy settings on the controller node using the below commands.

    Update to the latest hotfix

    After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.

    To update the environment follow this procedure.

    systemctl restart wlm-api
    systemctl restart wlm-scheduler
    systemctl restart wlm-workloads
    pcs resource restart wlm-cron
    pcs resource restart lb_nginx-clone
    [root@TVM1 ~]# rabbitmqctl stop
    Stopping and halting node rabbit@TVM1 ...
    [root@TVM1 ~]# rabbitmq-server -detached
    Warning: PID file not written; -detached was passed.
    [root@TVM1 ~]# rabbitmqctl cluster_status
    Cluster status of node rabbit@TVM1 ...
    [{nodes,[{disc,[rabbit@TVM1,rabbit@TVM2,rabbit@TVM3]}]},
     {running_nodes,[rabbit@TVM2,rabbit@TVM3,rabbit@TVM1]},
     {cluster_name,<<"rabbit@TVM1">>},
     {partitions,[{rabbit@TVM2,[rabbit@TVM1,rabbit@TVM3]},
                  {rabbit@TVM3,[rabbit@TVM1,rabbit@TVM2]}]},
     {alarms,[{rabbit@TVM2,[]},{rabbit@TVM3,[]},{rabbit@TVM1,[]}]}]
    systemctl stop mysqld
    systemctl start mysqld
    systemctl stop mysqld
    cat /var/lib/mysql/grastate.dat
    
    # GALERA saved state
    version: 2.1
    uuid:    353e129f-11f2-11eb-b3f7-76f39b7b455d
    seqno:   213576545367
    safe_to_bootstrap: 1
    galera_new_cluster
    systemctl start mysqld
    MariaDB [(none)]> show status like 'wsrep_incoming_addresses';
    +--------------------------+-------------------------------------------------+
    | Variable_name            | Value                                           |
    +--------------------------+-------------------------------------------------+
    | wsrep_incoming_addresses | 10.10.2.13:3306,10.10.2.14:3306,10.10.2.12:3306 |
    +--------------------------+-------------------------------------------------+
    1 row in set (0.01 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_size';
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 3     |
    +--------------------+-------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_state_uuid';
    +--------------------------+--------------------------------------+
    | Variable_name            | Value                                |
    +--------------------------+--------------------------------------+
    | wsrep_cluster_state_uuid | 353e129f-11f2-11eb-b3f7-76f39b7b455d |
    +--------------------------+--------------------------------------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_local_state_comment';
    +---------------------------+--------+
    | Variable_name             | Value  |
    +---------------------------+--------+
    | wsrep_local_state_comment | Synced |
    +---------------------------+--------+
    1 row in set (0.01 sec)
    juju ssh <workloadmgr unit name>/<unit-number>
    Systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron
    juju ssh <workloadmgr unit name>/<unit-number>
    Systemctl restart wlm-api wlm-scheduler wlm-workloads
    juju ssh <workloadmgr unit name>/<unit-number>
    crm_resource --restart -r res_trilio_wlm_wlm_cron
    docker restart trilio_dmapi
    podman restart trilio_dmapi
    juju ssh <trilio-dm-api unit name>/<unit-number>
    sudo systemctl restart tvault-datamover-api
    docker restart triliovault_datamover_api
    lxc-stop -n <dmapi container name>
    lxc-start -n <dmapi container name>
    docker restart trilio_datamover
    podman restart trilio_datamover
    juju ssh <trilio-data-mover unit name>/<unit-number>
    sudo systemctl restart tvault-contego
    docker restart triliovault_datamover
    service tvault-contego restart
    cd /home/stack
    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover' 
    Trilio Datamover container:       docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    Trilio Datamover container:        docker.io/trilio/tripleo-train-centos8-trilio-datamover:<HOTFIX-TAG-VERSION>
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos8-trilio-datamover-api:<HOTFIX-TAG-VERSION>
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos8-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    
    # For Tripleo Train Centos8
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos8-trilio-datamover:<HOTFIX-TAG-VERSION>
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos8-trilio-datamover-api:<HOTFIX-TAG-VERSION>
       ContainerHorizonImage: docker.io/trilio/tripleo-train-centos8-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.1-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as shown in the following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 4.1.94-hotfix-1-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    
    # For Tripleo Train Centos8
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-datamover:<HOTFIX-TAG-VERSION>
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-datamover-api:<HOTFIX-TAG-VERSION>
       ContainerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>                  |
    
    -----------------------------------------------------------------------------------------------------
    ## For Centos8 Train
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-datamover:<HOTFIX-TAG-VERSION>                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-datamover-api:<HOTFIX-TAG-VERSION>                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>                 |
    
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>       kolla_start           5 days ago  Up 5 days ago         horizon
    curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    chmod +x nova_userid.sh
    vi nova_userid.sh  # change nova user_id and group_id to uid & gid present on compute nodes. 
    ./nova_userid.sh
    id nova
    git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/
    cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
    cp ansible/main-install.yml   /opt/openstack-ansible/playbooks/os-tvault-install.yml
    cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml
    - import_playbook: os-tvault-install.yml
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cat > /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml 
    component_skel:
      dmapi_api:
        belongs_to:
          - dmapi_all
    
    container_skel:
      dmapi_container:
        belongs_to:
          - tvault-dmapi_containers
        contains:
          - dmapi_api
    
    physical_skel:
      tvault-dmapi_containers:
        belongs_to:
          - all_containers
      tvault-dmapi_hosts:
        belongs_to:
          - hosts
    #tvault-dmapi
tvault-dmapi_hosts:   # Add controller details in this section as the tvault DMAPI resides on the controller nodes.
  infra-1:            # Controller host name.
    ip: 172.26.0.3    # IP address of the controller.
  infra-2:            # If there are multiple controllers, add their details in the same manner as shown for infra-2.
    ip: 172.26.0.4

#tvault-datamover
tvault_compute_hosts: # Add compute details in this section as the tvault datamover resides on the compute nodes.
  infra-1:            # Compute host name.
    ip: 172.26.0.7    # IP address of the compute node.
  infra-2:            # If there are multiple compute nodes, add their details in the same manner as shown for infra-2.
    ip: 172.26.0.8
    ##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
    #ip address of TVM
    IP_ADDRESS: sample_tvault_ip_address
    
    ##Time Zone
    TIME_ZONE: "Etc/UTC"
    
#Update the TVAULT package version here; the plugins of the mentioned version will be installed. Example: TVAULT_PACKAGE_VERSION: 3.3.36
TVAULT_PACKAGE_VERSION: 4.1.94  #GA build version
    
    # Update Openstack dist code name like ussuri etc.
    OPENSTACK_DIST: ussuri
    
#The following statement needs to be added to the nova sudoers file:
#nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
#This change is required for the Datamover; otherwise the Datamover will not work.
#Are you sure? Please set the variable to
#  UPDATE_NOVA_SUDOERS_FILE: proceed
#otherwise the ansible tvault-contego installation will exit.
UPDATE_NOVA_SUDOERS_FILE: proceed
    
    ##### Select snapshot storage type #####
    #Details for NFS as snapshot storage , NFS_SHARES should begin with "-".
    ##True/False
    NFS: False
    NFS_SHARES:
              - sample_nfs_server_ip1:sample_share_path
              - sample_nfs_server_ip2:sample_share_path
    
    #if NFS_OPTS is empty then default value will be "nolock,soft,timeo=180,intr,lookupcache=none"
    NFS_OPTS: ""
    
    #### Details for S3 as snapshot storage
    ##True/False
    S3: False
    VAULT_S3_ACCESS_KEY: sample_s3_access_key
    VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
    VAULT_S3_REGION_NAME: sample_s3_region_name
    VAULT_S3_BUCKET: sample_s3_bucket
    VAULT_S3_SIGNATURE_VERSION: default
    #### S3 Specific Backend Configurations
#### Provide one of the following two values in the s3_type variable; the string's case must match
#Amazon/Other_S3_Compatible
    s3_type: sample_s3_type
    #### Required field(s) for all S3 backends except Amazon
    VAULT_S3_ENDPOINT_URL: ""
    #True/False
    VAULT_S3_SECURE: True
    VAULT_S3_SSL_CERT: ""
    
    ###details of datamover API
    ##If SSL is enabled "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
    #DMAPI_ENABLED_SSL_APIS: dmapi
    ##If SSL is disabled "DMAPI_ENABLED_SSL_APIS" value should be empty.
    DMAPI_ENABLED_SSL_APIS: ""
    DMAPI_SSL_CERT: ""
    DMAPI_SSL_KEY: ""
    
#### If any service is using a Ceph backend, then set the ceph_backend_enabled value to True
    #True/False
    ceph_backend_enabled: False
    
    #Set verbosity level and run playbooks with -vvv option to display custom debug messages
    verbosity_level: 3
    cd /opt/openstack-ansible/playbooks
    
    # To create Dmapi container
    openstack-ansible lxc-containers-create.yml 
    
    #To Deploy Trilio Components
    openstack-ansible os-tvault-install.yml
    
    #To configure Haproxy for Dmapi
    openstack-ansible haproxy-install.yml
    openstack-ansible setup-infrastructure.yml --syntax-check
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml
    lxc-ls                                           # Check the dmapi container is present on controller node.
    lxc-info -s controller_dmapi_container-a11984bf  # Confirm running status of the container
systemctl status tvault-contego.service
systemctl status tvault-object-store  # If the storage backend is S3
df -h                                 # Verify the mount point is mounted on compute node(s)
lxc-attach -n controller_horizon_container-1d9c055c                                   # To log in to the horizon container
apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'              # For Ubuntu based container
yum list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'     # For CentOS based container
haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.
    Using Horizon
    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to show the details on

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    The List of Snapshots for the chosen Workload contains the following additional information:

    • Creation Time

    • Name of the Snapshot

    • Description of the Snapshot

• Total number of Restores from this Snapshot

  • Total number of succeeded Restores

  • Total number of failed Restores

    • Snapshot Type

    • Snapshot Size

    • Snapshot Status

    Using CLI

• --workload_id <workload_id> ➡️ Filter results by workload_id

• --tvault_node <host> ➡️ List all the snapshot operations scheduled on a tvault node (Default=None)

• --date_from <date_from> ➡️ From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is taken by default

• --date_to <date_to> ➡️ To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to control inclusive/exclusive results for date_from and date_to within the same day

• --all {True,False} ➡️ List all snapshots of all the projects (valid for admin user only)
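
For example, the following invocation (the workload ID is a placeholder) lists all Snapshots of a single workload taken within a given time frame:

workloadmgr snapshot-list --workload_id 4bafaa03-f69a-45d5-a6fc-ae0119c77974 \
                          --date_from 2016-10-01T00:00:00 --date_to 2016-10-31T23:59:59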

    Creating a Snapshot

Snapshots are automatically created by the Trilio scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.

    Using Horizon

    There are 2 possibilities to create a snapshot on demand.

    Possibility 1: From the Workloads overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that shall create a Snapshot

    5. Click "Create Snapshot"

    6. Provide a name and description for the Snapshot

    7. Decide between Full and Incremental Snapshot

    8. Click "Create"

    Possibility 2: From the Workload Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that shall create a Snapshot

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Click "Create Snapshot"

    8. Provide a name and description for the Snapshot

    9. Decide between Full and Incremental Snapshot

    10. Click "Create"

    Using CLI

    • <workload_id>➡️ID of the workload to snapshot.

    • --full➡️ Specify if a full snapshot is required.

    • --display-name <display-name>➡️Optional snapshot name. (Default=None)

    • --display-description <display-description>➡️Optional snapshot description. (Default=None)
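
For example, the following call (the workload ID is a placeholder) creates a full Snapshot on demand with a name and description:

workloadmgr workload-snapshot --full \
                              --display-name "before-maintenance" \
                              --display-description "Full snapshot before the maintenance window" \
                              4bafaa03-f69a-45d5-a6fc-ae0119c77974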

    Snapshot overview

Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

    Using Horizon

    To reach the Snapshot Overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the Snapshot Name

    Details Tab

    The Snapshot Details Tab shows the most important information about the Snapshot.

    • Snapshot Name / Description

    • Snapshot Type

    • Time Taken

    • Size

    • Which VMs are part of the Snapshot

    • for each VM in the Snapshot

      • Instance Info - Name & Status

      • Security Group(s) - Name & Type

      • Flavor - vCPUs, Disk & RAM

    Restores Tab

    The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

    Please refer to the Restores User Guide to learn more about Restores.

    Misc. Tab

    The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

    • Creation Time

    • Last Update time

    • Snapshot ID

    • Workload ID of the Workload containing the Snapshot

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be shown

• --output <output> ➡️ Option to get additional snapshot details. Specify --output metadata for snapshot metadata, --output networks for snapshot VM networks, or --output disks for snapshot VM disks
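
For example, the following call (the snapshot ID is a placeholder) shows the Snapshot together with the disks of the backed-up VMs:

workloadmgr snapshot-show --output disks ed4f29e8-7544-4e1c-af8a-a76031211926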

    Delete Snapshots

    Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

The retention policy will automatically delete the oldest Snapshots according to the configured policy.

    You have to delete all Snapshots to be able to delete a Workload.

    Deleting a Trilio Snapshot will not delete any Openstack Cinder Snapshots. Those need to be deleted separately if desired.

    Using Horizon

    There are 2 possibilities to delete a Snapshot.

    Possibility 1: Single Snapshot deletion through the submenu

    To delete a single Snapshot through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

    9. Click "Delete Snapshot"

    10. Confirm by clicking "Delete"

    Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

    To delete one or more Snapshots through the Snapshot overview do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshots in the Snapshot list

    8. Check the checkbox for each Snapshot that shall be deleted

    9. Click "Delete Snapshots"

    10. Confirm by clicking "Delete"

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be deleted

    Snapshot Cancel

    Ongoing Snapshots can be canceled.

    Canceled Snapshots will be treated like errored Snapshots

    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to cancel

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click "Cancel" on the same line as the identified Snapshot

    9. Confirm by clicking "Cancel"

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be canceled

    Create a File Recovery Manager Instance

It is recommended to apply these steps once to the chosen cloud image and then upload the modified cloud image to Glance.

    • Create an Openstack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.

• Spin up an instance from that image. It is recommended to have at least 8GB RAM for the mount operation. Bigger Snapshots can require more RAM.

    Steps to apply on CentOS and RHEL cloud-images

    • install and activate qemu-guest-agent

    • Edit /etc/sysconfig/qemu-ga and remove the following from BLACKLIST_RPC section

    • Disable SELINUX in /etc/sysconfig/selinux

    • Install python3 and lvm2

    • Reboot the Instance
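
A condensed sketch of these steps on a CentOS/RHEL cloud-image, assuming yum and systemd (package and file names as used elsewhere in this guide):

yum install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
# Edit /etc/sysconfig/qemu-ga manually and remove the guest-file-* entries from the BLACKLIST_RPC section
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
yum install -y python3 lvm2
reboot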

    Steps to apply on Ubuntu cloud-images

    • install and activate qemu-guest-agent

    • Verify the loaded path of qemu-guest-agent

    Loaded path init.d (Ubuntu 18.04)

    Follow this path when systemctl returns the following loaded path

    Edit /etc/init.d/qemu-guest-agent and add Freeze-Hook file path in daemon args

    Loaded path systemd (Ubuntu 20.04)

    Follow this path when systemctl returns the following loaded path

    Edit qemu-guest-agent systemd file

    Add the following lines

    Finalize the FRM on Ubuntu

    • Restart qemu-guest-agent service

    • Install Python3

    • Reboot the VM

    Mounting a Snapshot

Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.

It is possible to run the mounting process against any Openstack instance. During this process the instance will be rebooted.

    Always mount Snapshots to File Recovery Manager instances only.

To be able to successfully mount Windows (NTFS) Snapshots, NTFS filesystem support is required on the File Recovery Manager instance.

    Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.

    Using Horizon

    There are 2 possibilities to mount a Snapshot in Horizon.

    Through the Snapshot list

    To mount a Snapshot through the Snapshot list follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

    9. Click "Mount Snapshot"

    10. Choose the File Recovery Manager instance to mount to

    11. Confirm by clicking "Mount"

Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    Through the File Search results

    To mount a Snapshot through the File Search results follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the File Search tab

    7. Identify the Snapshot to be mounted

    8. Click "Mount Snapshot" for the chosen Snapshot

    9. Choose the File Recovery Manager instance to mount to

    10. Confirm by clicking "Mount"

Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    Using CLI

    • <snapshot_id> ➡️ ID of the Snapshot to be mounted

    • <mount_vm_id> ➡️ ID of the File Recovery Manager instance to mount the Snapshot to.

    Accessing the File Recovery Manager

    The File Recovery Manager is a normal Linux based Openstack instance.

It can be accessed via SSH or SSH based tools like FileZilla or WinSCP.

    SSH login is often disabled by default in cloud-images. Enable SSH login if necessary.

    The mounted Snapshot can be found at the following path:

    /home/ubuntu/tvault-mounts/mounts/

    Each VM in the Snapshot has its own directory using the VM_ID as the identifier.
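
For example, to browse the recovered files of a single VM (the VM ID is a placeholder):

ls /home/ubuntu/tvault-mounts/mounts/<VM_ID>/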

    Identifying mounted Snapshots

Sometimes a Snapshot is mounted for a longer time and it needs to be identified which Snapshots are currently mounted.

    Using Horizon

    There are 2 possibilities to identify mounted Snapshots inside Horizon.

    From the File Recovery Manager instance Metadata

    1. Login to Horizon

    2. Navigate to Compute

    3. Navigate to Instances

    4. Identify the File Recovery Manager Instance

    5. Click on the Name of the File Recovery Manager Instance to bring up its details

    6. On the Overview tab look for Metadata

    7. Identify the value for mounted_snapshot_url

    The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.

    This value only gets updated, when a new Snapshot is mounted.

    From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Search for the Snapshot that has the option "Unmount Snapshot"

    Using CLI

    • --workloadid <workloadid> ➡️ Restrict the list to snapshots in the provided workload

    Unmounting a Snapshot

    Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.

    Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows Trilio retention policy to purge the former mounted Snapshot.

    Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.

    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Search for the Snapshot that has the option "Unmount Snapshot"

    8. Click "Unmount Snapshot"

    Using the CLI

    • <snapshot_id> ➡️ ID of the snapshot to unmount.

    Compatibility Matrix

    Learn about Trilio Support for OpenStack Distributions

    The CentOS community has moved over to CentOS stream.

The support for CentOS8 ended on December 31st 2021. The official announcement can be found here.

CentOS7 is still supported and maintained until June 30th 2024.

Kolla Ansible environments running on CentOS8 are receiving continued limited support. This means that future updates from Trilio for Kolla Ansible environments on CentOS8 will use the latest available CentOS8 base containers and only the Trilio for OpenStack code gets updated. When the Kolla Ansible community provides CentOS Stream based containers, Trilio will provide CentOS Stream based containers as well.

    Trilio for OpenStack Compatibility Matrix

Trilio Release   RHOSP                  Canonical                                Ansible           Kolla             TripleO
4.1 HF10+        16.2, 16.1, 13         Victoria, Ussuri, Train, Stein, Queens   Victoria, Ussuri  Victoria, Ussuri  Train
4.1 HF8+         16.2, 16.1, 16.0, 13   Victoria, Ussuri, Train, Stein, Queens   Victoria, Ussuri  Victoria, Ussuri  Train
4.1 HF3+         16.1, 16.0, 13         Victoria, Ussuri, Train, Stein, Queens   Victoria, Ussuri  Victoria, Ussuri  Train
4.1 GA           16.1, 16.0, 13         Ussuri, Train, Stein, Queens             Ussuri            Ussuri            -

    NFS & S3 Support:

    All versions of Trilio for OpenStack support NFSv3 and S3 as backup targets.

    Supported OS:

    RHEL7 and RHEL8 for RHOSP; Ubuntu 18.04 and 20.04 for Canonical distributions; Ubuntu 18.04, 20.04, and CentOS Stream for Ansible OpenStack; Ubuntu 18.04, 20.04 and CentOS Stream for Kolla OpenStack; CentOS7 for TripleO Train.

    Deployment:

    RHOSP distributions are deployed using Red Hat Director, Canonical distributions are deployed using JuJu Charms, Ansible OpenStack and Kolla OpenStack distributions are deployed using Ansible, and TripleO Train is deployed using Heat.

    Compatibility Matrix Detailed View

Distribution/Version         4.1 HF10+  4.1 HF8+  4.1 HF3+  4.1 GA  OS                           NFS Support  S3 Support  Deployment
RHOSP 16.2                   Yes        Yes       -         -       RHEL8                        NFSv3        Supported   Red Hat Director
RHOSP 16.1                   Yes        Yes       Yes       Yes     RHEL8                        NFSv3        Supported   Red Hat Director
RHOSP 16.0                   -          Yes       Yes       Yes     RHEL8                        NFSv3        Supported   Red Hat Director
RHOSP 13                     Yes        Yes       Yes       Yes     RHEL7                        NFSv3        Supported   Red Hat Director
Canonical Victoria           Yes        Yes       Yes       Yes     Ubuntu 20.04                 NFSv3        Supported   JuJu Charms
Canonical Ussuri             Yes        Yes       Yes       Yes     Ubuntu 18.04/20.04           NFSv3        Supported   JuJu Charms
Canonical Train              Yes        Yes       Yes       Yes     Ubuntu 18.04                 NFSv3        Supported   JuJu Charms
Canonical Stein              Yes        Yes       Yes       Yes     Ubuntu 18.04                 NFSv3        Supported   JuJu Charms
Canonical Queens             Yes        Yes       Yes       Yes     Ubuntu 18.04                 NFSv3        Supported   JuJu Charms
Ansible Openstack Victoria   Yes        Yes       Yes       -       Ubuntu 20.04                 NFSv3        Supported   Ansible
Ansible Openstack Ussuri     Yes        Yes       Yes       Yes     Ubuntu 18.04/20.04           NFSv3        Supported   Ansible
Kolla Openstack Victoria     Yes        Yes       Yes       -       Ubuntu 20.04, CentOS Stream  NFSv3        Supported   Ansible
Kolla Openstack Ussuri       Yes        Yes       Yes       Yes     Ubuntu 18.04                 NFSv3        Supported   Ansible
TripleO Train                Yes        Yes       Yes       -       CentOS7                      NFSv3        Supported   Heat

    Upgrading on Ansible OpenStack

    Upgrading from Trilio 4.0 to Trilio 4.1

    Due to the new installation method of Trilio for Kolla OpenStack, it is required to reinstall the Trilio components running on the Kolla OpenStack nodes when upgrading from Trilio 4.0.

    The Trilio appliance can be upgraded as documented here.

    Upgrading from Trilio 4.1 to a higher version

    Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.

    Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • No Snapshot or Restore is running

    • Global job scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

    • Access to the gemfury repository to fetch new packages

    Deactivating the wlm-cron service

The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.

    Update the repositories

    Deb-based (Ubuntu)

Add the Gemfury repository on each dmapi container, horizon container & compute node.

Create the file /etc/apt/sources.list.d/fury.list and add the below line to it.

    The following commands can be used to verify the connection to the gemfury repository and to check for available packages.

    RPM-based (CentOS)

Add the Trilio repo on each dmapi container, horizon container & compute node.

Modify the file /etc/yum.repos.d/trilio.repo and add the below line in it.

    The following commands can be used to verify the connection to the Trilio rpm server and to check for available packages.

    Upgrade tvault-datamover-api package

    The following steps represent the best practice procedure to upgrade the dmapi service.

    1. Login to dmapi container

    2. Take a backup of the dmapi configuration in /etc/dmapi/

    3. use apt list --upgradeable to identify the package used for the dmapi service

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    Deb-based (Ubuntu)

    RPM-based (CentOS)

    Upgrade Horizon plugin

    The following steps represent the best practice procedure to update the Horizon plugin.

    1. Login to Horizon Container

2. use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient and Horizon plugin

    3. Install the tvault-horizon-plugin package in the required python version

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    Deb-based (Ubuntu)

    RPM-based (CentOS)

    Upgrade the tvault-contego service

    The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.

    1. Login into the compute node

    2. Take a backup of the config files in

      1. (NFS and S3) /etc/tvault-contego/

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    Deb-based (Ubuntu)

    RPM-based (CentOS)

    Advance settings/configuration

    Customize HAproxy cfg parameters for Trilio datamover api service

The following haproxy cfg parameters are recommended for optimal performance of the dmapi service. File location on the controller: /etc/haproxy/haproxy.cfg

    Parameters timeout client, timeout server and balance for DMAPI service

    If values were already updated during any of the previous releases, further steps can be skipped.

Remove the below content, if present, from the file /etc/openstack_deploy/user_variables.yml on the ansible host.

Add the below lines at the end of the file /etc/openstack_deploy/user_variables.yml on the ansible host.

Update the Haproxy configuration using the below command on the ansible host.

    Snapshot Mount

    Mount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount

    Mounts a Snapshot to the provided File Recovery Manager

Path Parameters

Name               Type    Description
tvm_address        string  IP or FQDN of Trilio service
tenant_id          string  ID of the Tenant/Project the Snapshot is located in
snapshot_id        string  ID of the Snapshot to mount

Headers

Name               Type    Description
X-Auth-Project-Id  string  Project to authenticate against
X-Auth-Token       string  Authentication token to use
Content-Type       string  application/json
Accept             string  application/json
User-Agent         string  python-workloadmgrclient

    Body Format
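
A curl sketch of this request is shown below; the address, tenant, snapshot ID, token and File Recovery Manager instance ID are placeholders, and the body follows the Body Format listed for this call:

curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: <token>" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"mount": {"mount_vm_id": "<mount_vm_id>", "options": {}}}'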

    List of mounted Snapshots in Tenant

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a Tenant

Path Parameters

Name               Type    Description
tvm_address        string  IP or FQDN of Trilio service
tenant_id          string  ID of the Tenant to search for mounted Snapshots

Headers

Name               Type    Description
X-Auth-Project-Id  string  Project to authenticate against
X-Auth-Token       string  Authentication token to use
Accept             string  application/json
User-Agent         string  python-workloadmgr

    List of mounted Snapshots in Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a specified Workload

Path Parameters

Name               Type    Description
tvm_address        string  IP or FQDN of Trilio service
tenant_id          string  ID of the Tenant to search for mounted Snapshots
workload_id        string  ID of the Workload to search for mounted Snapshots

Headers

Name               Type    Description
X-Auth-Project-Id  string  Project to authenticate against
X-Auth-Token       string  Authentication token to use
Accept             string  application/json
User-Agent         string  python-workloadmgr

    Dismount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount

    Unmounts a Snapshot of the provided File Recovery Manager

Path Parameters

Name               Type    Description
tvm_address        string  IP or FQDN of Trilio service
tenant_id          string  ID of the Tenant/Project the Snapshot is located in
snapshot_id        string  ID of the Snapshot to dismount

Headers

Name               Type    Description
X-Auth-Project-Id  string  Project to authenticate against
X-Auth-Token       string  Authentication token to use
Content-Type       string  application/json
Accept             string  application/json
User-Agent         string  python-workloadmgrclient

    Body Format

    Post Installation Health-Check

After the installation and configuration of Trilio for Openstack has succeeded, the following steps can be done to verify that the Trilio installation is healthy.

    Verify the Trilio Appliance

    Verify the services are up

    Install workloadmgr CLI client

    About the workloadmgr CLI client

    The workloadmgr CLI client is provided as rpm and deb packages.

It has been tested against the following operating systems:

• CentOS7, CentOS8

• Ubuntu 18.04, Ubuntu 20.04

     workloadmgr snapshot-list [--workload_id <workload_id>]
                               [--tvault_node <host>]
                               [--date_from <date_from>]
                               [--date_to <date_to>]
                               [--all {True,False}]
    workloadmgr workload-snapshot [--full] [--display-name <display-name>]
                                  [--display-description <display-description>]
                                  <workload_id>
    workloadmgr snapshot-show [--output <output>] <snapshot_id>
    workloadmgr snapshot-delete <snapshot_id>
    workloadmgr snapshot-cancel <snapshot_id>
    openstack image create \
    --file <File Manager Image Path> \
    --container-format bare \
    --disk-format qcow2 \
    --public \
    --property hw_qemu_guest_agent=yes \
    --property tvault_recovery_manager=yes \
    --property hw_disk_bus=virtio \
    tvault-file-manager
    guest-file-read
    guest-file-write
    guest-file-open
    guest-file-close
    SELINUX=disabled
    yum install python3 lvm2
    apt-get update
    apt-get install qemu-guest-agent
    systemctl enable qemu-guest-agent
    Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)
    DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"
    Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)
    systemctl edit qemu-guest-agent
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook
    systemctl restart qemu-guest-agent
    apt-get install python3
    workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
    workloadmgr snapshot-mounted-list [--workloadid <workloadid>]
    workloadmgr snapshot-dismount <snapshot_id>

    Networks - IP, Networkname & Mac Address

  • Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size

  • Misc - Original ID of the VM

  • Do a File Search


    Update the dmapi package
  • restore the backed-up config files into /etc/dmapi/

  • Restart the dmapi container

  • Check the status of the dmapi service

  • install the workloadmgrclient package
  • install the contegoclient

  • Restart the Horizon webserver

  • check the installed version of the workloadmgrclient

  • (S3 only) /etc/tvault-object-store
  • use apt list --upgradeable to identify the tvault-contego package used

  • Unmount backup storage

  • upgrade the tvault-contego package in the required python version

  • (S3 only) upgrade the s3-fuse-plugin package

  • restore the config files into /etc/tvault-contego/

  • (S3 only) Restart the tvault-object-store service

  • Restart the tvault-contego service

  • check the status


    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:29:03 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e

    Trilio is using 4 main services on the Trilio Appliance:

    • wlm-api

    • wlm-scheduler

    • wlm-workloads

    • wlm-cron

    Those can be verified to be up and running using the systemctl status command.

    Check the Trilio pacemaker and nginx cluster

    The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.

    Verify API connectivity of the Trilio Appliance

    Checking the availability of the Trilio API on the chosen endpoints is recommended.

    The following example curl command lists the available workload-types and verifies that the connection is available and working:
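
A sketch of such a check, assuming the workload-types detail endpoint and placeholder values for the address, tenant and token:

curl -k -X GET "https://<TVM-IP>:8780/v1/<tenant_id>/workload_types/detail" \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: <token>" \
     -H "Accept: application/json"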

    Please check the API guide for more commands and how to generate the X-Auth-Token.

    Verify Trilio components on OpenStack

    On OpenStack Ansible

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

In order to check the dmapi service, go to the dmapi container, which resides on the controller nodes, and run the below command:
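
A sketch of the check, using the container and service names used elsewhere in this guide:

lxc-attach -n controller_dmapi
systemctl status tvault-datamover-api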

    Datamover service (tvault-contego)

The datamover service is running on each compute node. Log in to the compute node and run the below command:
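
A sketch of the check, using the service name used elsewhere in this guide:

systemctl status tvault-contego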

    On Kolla Ansible OpenStack

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

Run the following command on “nova-api” nodes and make sure the “triliovault_datamover_api” container is in a started state.
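
A sketch of the check, assuming the Docker runtime (use podman where applicable):

docker ps | grep triliovault_datamover_api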

    Datamover service (tvault-contego)

    Run the following command on "nova-compute" nodes and make sure the container is in a started state.
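
A sketch of the check, assuming the Docker runtime (use podman where applicable):

docker ps | grep triliovault_datamover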

    Trilio Horizon integration

    Run the following command on horizon nodes and make sure the container is in a started state.
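
A sketch of the check, assuming the Docker runtime (use podman where applicable):

docker ps | grep horizon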

    On Canonical OpenStack

Run the following command on MAAS nodes and make sure all trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in an active state.
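
A sketch of the check, filtering the juju status output for the Trilio units:

juju status | grep -E 'trilio-data-mover|trilio-dm-api|trilio-horizon-plugin|trilio-wlm'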

    On Red Hat OpenStack and TripleO

    On controller node

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    On compute node

    Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.

    On overcloud

    Please check dmapi endpoints on overcloud node.
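
A sketch of the check, run from a node with access to the overcloud credentials:

openstack endpoint list | grep dmapi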


    Installing the workloadmgr client will automatically install all required Openstack clients as well.

Furthermore, the installation of the workloadmgr client will integrate the client into the global openstack python client, if available.

    The required connection strings and package names can be found on the Trilio Dashboard under the Downloads tab.

    Install workloadmgr client rpm package on CentOS7/8

    The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.

    Preparing the workloadmgr client installation

The following steps need to be done to prepare the installation of the workloadmgr client:

    1. Add required repositories

      1. epel-release

      2. for CentOS7: centos-release-openstack-stein

      3. for CentOS8: centos-release-openstack-train

    2. install base packages

      1. yum -y install epel-release

      2. for CentOS7: yum -y install centos-release-openstack-stein

      3. for CentOS8: yum -y install centos-release-openstack-train

    These repositories are required to fulfill the following dependencies:

    On CentOS7 Python2: python-pbr,python-prettytable,python2-requests,python2-simplejson,python2-six,pytz,PyYAML,python2-openstackclient

    On CentOS8 Python3: python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient

    Installing the workloadmgr client

    There are 2 possibilities for how the workloadmgr client packages can be installed.

    Download from the Trilio Appliance and install directly

The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For CentOS7: wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

For CentOS8: wget http://<TVM-IP>:8085/yum-repo/queens/python3-workloadmgrclient-<Trilio-Version>-<TVault-Release>.noarch.rpm

To identify the Trilio Version and Trilio release, log in to the Trilio Dashboard and check the upper left corner.

    The yum package manager is used to install the workloadmgr client package:

    yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

    An example installation can be found below:
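
A sketch of such an installation on CentOS7, with the version placeholders kept as-is:

wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm
yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm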

    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo and enter the following details into the repository file:

    Install the workloadmgr client issuing the following command:

For CentOS7: yum install workloadmgrclient For CentOS8: yum install python3-workloadmgrclient-el8

    An example installation can be found below:

    Install workloadmgr client deb packages on Ubuntu

    The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.

    Preparing the workloadmgr client installation

    There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.

    Installing the Workloadmgr client

    There are 2 possibilities for how the workloadmgr client packages can be installed.

    Download from the Trilio Appliance and install directly

The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For Python2: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python-workloadmgrclient_<Trilio-Version>_all.deb

    For Python3:curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb

To identify the Trilio Version and Trilio release, log in to the Trilio Dashboard and check the upper left corner.

    The apt package manager is used to install the workloadmgr client package:

    For Python2:apt-get install ./python-workloadmgrclient_<Trilio-Version>_all.deb -y For Python3:apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y

    An example installation can be found below:
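
A sketch of such an installation for Python3, with the version placeholder kept as-is:

curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb
apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y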

    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list and enter the following details into the repository file:

    run apt update to make the new repository available.

    The apt package manager is used to install the workloadmgr client package:

    For Python2:apt-get install python-workloadmgrclient For Python3:apt-get install python3-workloadmgrclient

    An example installation can be seen below:

    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset                                                                                                                                                                                                                                                                                                                                                           : disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monito                                                                                                                                                                                                                                                                                                                                                           r-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-int                                                                                                                                                                                                                                                                                                                                                           erval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr                                                                                                                                                                                                                                                                                                                                                           -cron
    
    deb [trusted=yes] https://apt.fury.io/triliodata-4-1/ /
    apt-get update
    apt list --upgradable
    [triliovault-4-1]
    name=triliovault-4-1
    baseurl=http://trilio:[email protected]:8283/triliodata-4-1/yum/
    gpgcheck=0
    enabled=1
    yum repolist
yum check-update
    lxc-attach -n controller_dmapi
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    apt list --upgradable
    apt install python3-dmapi --upgrade
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-attach -n controller_dmapi
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    yum list installed | grep dmapi
    yum check-update python3-dmapi
    yum upgrade python3-dmapi
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-attach -n controller_horizon_container-ead7cc60
    apt list --upgradable
    apt install python3-tvault-horizon-plugin --upgrade
    apt install python3-workloadmgrclient --upgrade
    apt install python3-contegoclient --upgrade
    systemctl restart apache2
    workloadmgr --version
    lxc-attach -n controller_horizon_container-ead7cc60
    yum list installed | grep trilio
    yum upgrade python3-contegoclient-el8 python3-tvault-horizon-plugin-el8 python3-workloadmgrclient-el8
    systemctl restart httpd
    workloadmgr --version
    tar -czvf  contego_config.tar.gz /etc/tvault-contego/ 
    apt list --upgradable
    (NFS only) umount /var/triliovault-mounts/<base64-hash>
    (S3 only) umount /var/triliovault-mounts
    apt install python3-tvault-contego --upgrade
    apt install python3-s3-fuse-plugin --upgrade
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-object-store
    systemctl restart tvault-contego
    systemctl status tvault-contego
    df -h
    tar -czvf  contego_config.tar.gz /etc/tvault-contego/ 
    yum list installed | grep tvault
    yum upgrade python3-tvault-contego
    yum upgrade python3-s3fuse-plugin
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-object-store
    systemctl restart tvault-contego
    systemctl status tvault-contego
    df -h
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cd /opt/openstack-ansible/playbooks
    openstack-ansible haproxy-install.yml
    {
       "mount":{
          "mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
          "options":{
             
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 16:03:49 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4
    {
       "mount": 
          {
              "options": null
          }
    }
    systemctl | grep wlm
      wlm-api.service          loaded active running   workloadmanager api service
      wlm-cron.service         loaded active running   Cluster Controlled wlm-cron
      wlm-scheduler.service    loaded active running   Cluster Controlled wlm-scheduler
      wlm-workloads.service    loaded active running   workloadmanager workloads service
    systemctl status wlm-api
    ######
    ● wlm-api.service - workloadmanager api service
       Loaded: loaded (/etc/systemd/system/wlm-api.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:41:19 UTC; 2 months 21 days ago
     Main PID: 4688 (workloadmgr-api)
       CGroup: /system.slice/wlm-api.service
               ├─4688 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9342 (workloadmgr-sch)
       CGroup: /system.slice/wlm-scheduler.service
               └─9342 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:51:05 UTC; 2 months 21 days ago
     Main PID: 606 (workloadmgr-wor)
       CGroup: /system.slice/wlm-workloads.service
               ├─ 606 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-cron
    ######
    ● wlm-cron.service - Cluster Controlled wlm-cron
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-cron.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9209 (workloadmgr-cro)
       CGroup: /system.slice/wlm-cron.service
               ├─9209 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: TVM1 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
    Last updated: Mon Jan 24 13:42:01 2022
    Last change: Tue Nov  2 19:07:04 2021 by root via crm_resource on TVM2
    
    3 nodes configured
    9 resources configured
    
    Online: [ TVM1 TVM2 TVM3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started TVM2
     wlm-cron       (systemd:wlm-cron):     Started TVM2
     wlm-scheduler  (systemd:wlm-scheduler):        Started TVM2
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ TVM2 ]
         Stopped: [ TVM1 TVM3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    curl http://10.10.2.34:8780/v1/8e16700ae3614da4ba80a4e57d60cdb9/workload_types/detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-workloadmgrclient" -H "Accept: application/json" -H "X-Auth-Token: gAAAAABe40NVFEtJeePpk1F9QGGh1LiGnHJVLlgZx9t0HRrK9rC5vqKZJRkpAcW1oPH6Q9K9peuHiQrBHEs1-g75Na4xOEESR0LmQJUZP6n37fLfDL_D-hlnjHJZ68iNisIP1fkm9FGSyoyt6IqjO9E7_YVRCTCqNLJ67ZkqHuJh1CXwShvjvjw"
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    lxc-attach -n <dmapi-container-name>    # go to the dmapi container
    root@controller-dmapi-container-08df1e06:~# systemctl status tvault-datamover-api.service
    ● tvault-datamover-api.service - TrilioData DataMover API service
         Loaded: loaded (/lib/systemd/system/tvault-datamover-api.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 11:53:39 UTC; 1 day 17h ago
       Main PID: 23888 (dmapi-api)
          Tasks: 289 (limit: 57729)
         Memory: 607.7M
         CGroup: /system.slice/tvault-datamover-api.service
                 ├─23888 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23893 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23894 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23895 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23896 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23897 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23898 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23899 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23900 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23901 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23902 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23903 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23904 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23905 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23906 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23907 /usr/bin/python3 /usr/bin/dmapi-api
                 └─23908 /usr/bin/python3 /usr/bin/dmapi-api
    
    Jan 12 11:53:39 controller-dmapi-container-08df1e06 systemd[1]: Started TrilioData DataMover API service.
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    
    root@compute:~# systemctl status tvault-contego
    ● tvault-contego.service - Tvault contego
         Loaded: loaded (/etc/systemd/system/tvault-contego.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-01-14 05:45:19 UTC; 2s ago
       Main PID: 1489651 (python3)
          Tasks: 19 (limit: 67404)
         Memory: 6.7G (max: 10.0G)
         CGroup: /system.slice/tvault-contego.service
                 ├─ 998543 /bin/qemu-nbd -c /dev/nbd45 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998772 /bin/qemu-nbd -c /dev/nbd73 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998931 /bin/qemu-nbd -c /dev/nbd100 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─ 999147 /bin/qemu-nbd -c /dev/nbd35 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─1371322 /bin/qemu-nbd -c /dev/nbd63 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─1371524 /bin/qemu-nbd -c /dev/nbd91 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 └─1489651 /openstack/venvs/nova-22.3.1/bin/python3 /usr/bin/tvault-contego --config-file=/etc/nova/nova.conf --config-file=/etc/tvault-contego/tvault-cont>
    
    Jan 14 05:45:19 compute systemd[1]: Started Tvault contego.
    Jan 14 05:45:20 compute sudo[1489653]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/nova-22.3.1/bin/nova-rootwrap /etc/nova/rootwrap.conf umou>
    Jan 14 05:45:20 compute sudo[1489653]: pam_unix(sudo:session): session opened for user root by (uid=0)
    Jan 14 05:45:21 compute python3[1489655]: umount: /var/triliovault-mounts/VHJpbGlvVmF1bHQ=: no mount point specified.
    Jan 14 05:45:21 compute sudo[1489653]: pam_unix(sudo:session): session closed for user root
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] CPU Control group m>
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] I/O Control Group m>
    lines 1-22/22 (END)
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    3f979c15cedc   trilio/centos-binary-trilio-datamover-api:4.2.50-victoria                     "dumb-init --single-…"   3 days ago    Up 3 days                         triliovault_datamover_api
    [root@compute1 ~]# docker ps | grep triliovault_datamover
    2f1ece820a59   trilio/centos-binary-trilio-datamover:4.2.50-victoria                        "dumb-init --single-…"   3 days ago    Up 3 days                        triliovault_datamover
    [root@controller ~]# docker ps | grep horizon
    4a004c786d47   trilio/centos-binary-trilio-horizon-plugin:4.2.50-victoria                    "dumb-init --single-…"   3 days ago    Up 3 days (unhealthy)             horizon
    root@jujumaas:~# juju status | grep trilio
    trilio-data-mover       4.2.51   active       3  trilio-data-mover       jujucharms    9  ubuntu
    trilio-dm-api           4.2.51   active       1  trilio-dm-api           jujucharms    7  ubuntu
    trilio-horizon-plugin   4.2.51   active       1  trilio-horizon-plugin   jujucharms    6  ubuntu
    trilio-wlm              4.2.51   active       1  trilio-wlm              jujucharms    9  ubuntu
      trilio-data-mover/8        active    idle            172.17.1.5                         Unit is ready
      trilio-data-mover/6        active    idle            172.17.1.6                         Unit is ready
      trilio-data-mover/7*       active    idle            172.17.1.7                         Unit is ready
      trilio-horizon-plugin/2*   active    idle            172.17.1.16                        Unit is ready
    trilio-dm-api/2*             active    idle   1/lxd/4  172.17.1.27     8784/tcp           Unit is ready
    trilio-wlm/2*                active    idle   7        172.17.1.28     8780/tcp           Unit is ready
    On RHOSP 13:                      docker ps | grep trilio-
    On RHOSP 16 onwards / TripleO:    podman ps | grep trilio-
    
    [root@overcloudtrain1-controller-0 heat-admin]# podman ps | grep trilio-
    e3530d6f7bec  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:4.2.47-rhosp16.1           kolla_start           2 weeks ago   Up 2 weeks ago          trilio_dmapi
    f93f7019f934  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:4.2.47-rhosp16.1          kolla_start           2 weeks ago   Up 2 weeks ago          horizon
    On RHOSP 13:                      docker ps | grep trilio-
    On RHOSP 16 onwards / TripleO:    podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    
    [root@controller ~]# wget http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    --2021-03-08 15:36:37--  http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    Connecting to 10.10.2.15:8085... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 155976 (152K) [application/x-rpm]
    Saving to: ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’
    
    100%[======================================>] 1,55,976    --.-K/s   in 0.001s
    
    2021-03-08 15:36:37 (125 MB/s) - ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’ saved [155976/155976]
    
    [root@controller ~]# yum install workloadmgrclient-4.0.115-4.0.noarch.rpm
    Loaded plugins: fastestmirror
    Examining workloadmgrclient-4.0.115-4.0.noarch.rpm: workloadmgrclient-4.0.115-4.0.noarch
    Marking workloadmgrclient-4.0.115-4.0.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.115-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package         Arch   Version     Repository                             Size
    ================================================================================
    Installing:
     workloadmgrclient
                     noarch 4.0.115-4.0 /workloadmgrclient-4.0.115-4.0.noarch 700 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package
    
    Total size: 700 k
    Installed size: 700 k
    Is this ok [y/d/N]: y
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : workloadmgrclient-4.0.115-4.0.noarch                         1/1
      Verifying  : workloadmgrclient-4.0.115-4.0.noarch                         1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.115-4.0
    
    Complete!
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-<Trilio-Release>/yum/
    enabled=1
    gpgcheck=0
    [root@controller ~]# cat /etc/yum.repos.d/trilio.repo
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-4.0/yum/
    enabled=1
    gpgcheck=0
    
    [root@controller ~]# yum install workloadmgrclient
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * base: centos-canada.vdssunucu.com.tr
     * centos-ceph-nautilus: mirror.its.dal.ca
     * centos-nfs-ganesha28: centos.mirror.colo-serv.net
     * centos-openstack-train: centos-canada.vdssunucu.com.tr
     * centos-qemu-ev: centos-canada.vdssunucu.com.tr
     * extras: centos-canada.vdssunucu.com.tr
     * updates: centos-canada.vdssunucu.com.tr
    base                                                        | 3.6 kB  00:00:00
    centos-ceph-nautilus                                        | 3.0 kB  00:00:00
    centos-nfs-ganesha28                                        | 3.0 kB  00:00:00
    centos-openstack-train                                      | 3.0 kB  00:00:00
    centos-qemu-ev                                              | 3.0 kB  00:00:00
    extras                                                      | 2.9 kB  00:00:00
    trilio                                                      | 2.9 kB  00:00:00
    updates                                                     | 2.9 kB  00:00:00
    (1/3): extras/7/x86_64/primary_db                           | 225 kB  00:00:00
    (2/3): centos-openstack-train/7/x86_64/primary_db           | 1.1 MB  00:00:00
    (3/3): updates/7/x86_64/primary_db                          | 5.7 MB  00:00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.116-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package               Arch      Version        Repository       Size
    ================================================================================
    Installing:
     workloadmgrclient     noarch    4.0.116-4.0    trilio           152 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package
    
    Total download size: 152 k
    Installed size: 700 k
    Is this ok [y/d/N]: y
    Downloading packages:
    workloadmgrclient-4.0.116-4.0.noarch.rpm                    | 152 kB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : workloadmgrclient-4.0.116-4.0.noarch                         1/1
      Verifying  : workloadmgrclient-4.0.116-4.0.noarch                         1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.116-4.0
    
    Complete!
    root@ubuntu:~# curl -Og6 http://10.10.2.15:8085/deb-repo/deb-repo/python3-workloadmgrclient_4.0.115_all.deb
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  116k  100  116k    0     0   899k      0 --:--:-- --:--:-- --:--:--  982k
    
    root@ubuntu:~# apt-get install ./python3-workloadmgrclient_4.0.115_all.deb -y
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'python3-workloadmgrclient' instead of './python3-workloadmgrclient_4.0.115_all.deb'
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...
    deb [trusted=yes] https://apt.fury.io/triliodata-<Trilio-Version>/ /
    root@ubuntu:~# cat /etc/apt/sources.list.d/fury.list
    deb [trusted=yes] https://apt.fury.io/triliodata-4-0/ /
    
    root@ubuntu:~# apt update
    Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
    Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
    Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
    Ign:5 https://apt.fury.io/triliodata-4-0  InRelease
    Ign:6 https://apt.fury.io/triliodata-4-0  Release
    Ign:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Get:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Fetched 84.0 kB in 12s (6930 B/s)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    All packages are up to date.
    
    root@ubuntu:~# apt-get install python3-workloadmgrclient
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...

    Online upgrade Trilio Appliance

    This describes the upgrade process from Trilio 4.0 or Trilio 4.0SP1 to Trilio 4.1 GA or its hotfix releases.

    Kolla Ansible Openstack only: The mount point for the Trilio Backup Target has changed in Trilio 4.1. A reconfiguration after the upgrade is required.

    Generic Pre-requisites

    The prerequisites should already be fulfilled from upgrading the Trilio components on the Controller and Compute nodes.

    • Please complete the upgrade of all Trilio components on the Openstack controller and compute nodes before starting the rolling upgrade of the TVM.

    • The mentioned Gemfury repository must be accessible from the TVault VM.

    • Please ensure the following points before starting the upgrade process:

    Deactivating the wlm-cron service

    The following set of commands disables the wlm-cron service and verifies that it has been completely shut down.

    Verify that the service is shut down with the below set of commands and compare against the expected output:
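    For reference, the typical sequence on the primary node (the full sample output appears in the command listing later in this guide):

    pcs resource disable wlm-cron
    systemctl status wlm-cron
    pcs resource show wlm-cron
    ps -ef | grep -i workloadmgr-cron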

    Backup old configuration data

    Take a backup of the conf files on all TVM nodes.
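    For example, on each TVM node:

    tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr
    cp tvault_backup.tar.gz /root/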

    Setup Python3.8 virtual environment

    Check if Python 3.8 virtual environment exists on the T4O nodes

    If the virtual environment does not exist, perform the steps below on the T4O nodes.
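    A condensed sketch of the check and build steps; the complete sequence is listed in the command reference later in this guide:

    ls -al /home/stack/myansible_3.8          # check whether the venv already exists
    yum -y groupinstall "Development Tools"
    yum -y install openssl-devel bzip2-devel libffi-devel xz-devel
    wget https://www.python.org/ftp/python/3.8.12/Python-3.8.12.tgz
    tar xvf Python-3.8.12.tgz && cd Python-3.8*/
    ./configure --enable-optimizations
    sudo make altinstall
    cd /home/stack/
    virtualenv -p /usr/local/bin/python3.8 myansible_3.8 --system-site-packages
    source /home/stack/myansible_3.8/bin/activate
    pip3 install --upgrade pip setuptools
    pip3 install jinja2 "ansible>=2.9.0" configobj pbr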

    Setup Python3.6 virtual environment

    Activate the Python3.6 virtual environment on all T4O nodes for wlm services upgrade
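    For example:

    source /home/stack/myansible/bin/activate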

    [T4O 4.0 to T4O 4.1 only] uninstall Ansible

    Ansible does not support upgrading from previous versions to the latest one (2.10.4) and therefore needs to be uninstalled first.
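    With the wlm virtual environment active:

    pip3 uninstall ansible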

    Upgrade pip package

    Run the following command on all TVM nodes to upgrade the pip package
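    For example (inside the activated wlm virtual environment):

    pip3 install --upgrade pip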

    Set pip package repository env variable
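    Point pip at the Trilio 4.1 package index, for example:

    export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/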

    Upgrade s3fuse/tvault-object-store

    Major Upgrade

    Run the following commands on all TVM nodes to upgrade s3fuse and its dependent packages.
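    For example:

    source /home/stack/myansible/bin/activate
    systemctl stop tvault-object-store
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --upgrade --no-cache-dir
    rm -rf /var/triliovault/*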

    Hotfix Upgrade

    Run the following commands on all TVM nodes to upgrade s3fuse packages only.

    Upgrade tvault-configurator

    After the upgrade, the password for the T4O configurator is reset to the default value 'password' for the user 'admin'. Reset the T4O configurator password after the upgrade.

    Make sure the correct virtual environment (myansible_3.8) has been activated.

    Major Upgrade

    Run the following command on all TVM nodes to upgrade tvault-configurator and its dependent packages.
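    For example:

    source /home/stack/myansible_3.8/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --upgrade --no-cache-dir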

    Hotfix Upgrade

    Run the following command on all TVM nodes to upgrade tvault-configurator packages only.

    During the update of the tvault-configurator the following error might be shown:

    This error can be ignored.

    Upgrade workloadmgr

    Major Upgrade

    Run the upgrade command on all TVM nodes to upgrade workloadmgr and its dependent packages.
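    For example:

    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --upgrade --no-cache-dir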

    Hotfix Upgrade

    Run the upgrade command on all TVM nodes to upgrade workloadmgr packages only.

    Upgrade workloadmgrclient

    Major Upgrade

    Run the upgrade command on all TVM nodes to upgrade workloadmgrclient and its dependent packages.
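    For example:

    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --upgrade --no-cache-dir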

    Hotfix Upgrade

    Run the upgrade command on all TVM nodes to upgrade workloadmgrclient packages only.

    Upgrade contegoclient

    Major Upgrade

    Run the upgrade command on all TVM nodes to upgrade contegoclient and its dependent packages.
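    For example:

    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --upgrade --no-cache-dir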

    Hotfix Upgrade

    Run the upgrade command on all TVM nodes to upgrade contegoclient packages only.

    Set oslo.messaging version

    Using the latest available oslo.messaging version can lead to stuck RPC and API calls.

    It is therefore required to pin the oslo.messaging version on the TVM.
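    Pin it to the tested release, as in the command listing later in this guide:

    source /home/stack/myansible/bin/activate
    pip3 install oslo.messaging==12.1.6 --no-deps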

    Post Upgrade Steps

    Restore the backed-up config files
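    For example, restoring the archive created before the upgrade:

    cd /root
    tar -xzvf tvault_backup.tar.gz -C /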

    [Major Upgrade 4.0 to 4.1 only] Delete wlm-scheduler pcs resource

    Delete the wlm-scheduler pcs resource, because in 4.1 it is no longer managed by pcs.
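    For example:

    pcs resource delete wlm-scheduler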

    Restart services

    Restart the following services on all nodes using the respective commands.

    A restart of tvault-object-store is required only if Trilio is configured with S3 backend storage.

    Enable the Global Job Scheduler. Restart the pcs resources only on the primary node.
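    The restart commands from the command listing later in this guide, in order:

    systemctl restart tvault-object-store
    systemctl restart wlm-api
    systemctl restart wlm-scheduler
    systemctl restart wlm-workloads
    systemctl restart tvault-config
    pcs resource enable wlm-cron
    pcs resource restart wlm-cron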

    Verify the status of the services

    tvault-object-store will run only if TVault is configured with S3 backend storage.
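    For example:

    systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded'
    pcs status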

    Additional check for wlm-cron on the primary node
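    For example:

    systemctl status wlm-cron
    ps -ef | grep [w]orkloadmgr-cron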

    The above command should show only 2 processes running; a sample is shown below:

    Check the mount point using “df -h” command

    [Upgrade to HF1 and higher only] Reconfigure the Trilio Appliance

    Trilio for Openstack 4.1 HF1 introduces several new config parameters, which will be automatically set upon reconfiguration.

    [RHOSP and Kolla only] Reconfigure the Trilio Appliance

    Trilio for Openstack 4.1 is changing the Trilio mount point as follows:

    RHOSP 13 & 16.0 & 16.1: /var/lib/nova/triliovault-mounts
    Kolla Ansible Ussuri: /var/trilio/triliovault-mounts

    Reconfiguring the Trilio Appliance will automatically handle this change.

    [RHOSP and Kolla only] Create the mount bind to the old Trilio Mountpoint

    Trilio for Openstack 4.1 is changing the Trilio mount point as follows:

    RHOSP 13 & 16.0 & 16.1: /var/lib/nova/triliovault-mounts
    Kolla Ansible Ussuri: /var/trilio/triliovault-mounts

    After reconfiguration of the Trilio Appliance, it is necessary to create a mount bind between the old and new mount points to provide full access to the old Trilio backups.

    For RHOSP:

    For Kolla:

    To make this change persistent, it is recommended to update /etc/fstab accordingly:

    For RHOSP:

    For Kolla:
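    Combined, the bind mounts and matching fstab entries look like this; use the pair that matches your distribution (RHOSP first, then Kolla):

    mount --bind /var/lib/nova/triliovault-mounts /var/triliovault-mounts
    echo "/var/lib/nova/triliovault-mounts /var/triliovault-mounts    none    bind    0 0" >> /etc/fstab

    mount --bind /var/trilio/triliovault-mounts /var/triliovault-mounts
    echo "/var/trilio/triliovault-mounts /var/triliovault-mounts    none    bind    0 0" >> /etc/fstab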

    [RHOSP and Kolla only] Verify nova UID/GID for nova user on the Appliance

    Red Hat OpenStack and Kolla Ansible Openstack use the nova UID/GID of 42436 inside their containers instead of 162:162, which is the standard in other Openstack environments.

    Please verify that the nova UID/GID on the Trilio Appliance is still 42436.

    If the UID/GID has changed back to 162:162, follow these steps to set it back to 42436:42436.

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that nova user and group ids have changed to '42436'

    Schedulers

    Disable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause

    Disables the scheduler of a given Workload

    Healthcheck of Trilio

    Trilio is composed of multiple services, which can be checked in case of any errors.

    Verify the Trilio Appliance

    Verify the services are up

    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to disable the Scheduler in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
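    An illustrative request in the curl style used elsewhere in this guide; angle-bracket values are placeholders:

    curl -X POST "https://<tvm_address>:8780/v1/<tenant_id>/workloads/<workload_id>/pause" \
         -H "X-Auth-Project-Id: <project-name>" \
         -H "User-Agent: python-workloadmgrclient" \
         -H "Accept: application/json" \
         -H "X-Auth-Token: <token>"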

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 11:52:56 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7

    Enable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume

    Enables the scheduler of a given Workload

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to enable the Scheduler in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Scheduler Trust Status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Scheduler trust for a given Workload

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to validate the Scheduler trust for

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
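    An illustrative request; angle-bracket values are placeholders:

    curl -X GET "https://<tvm_address>:8780/v1/<tenant_id>/trusts/validate/<workload_id>" \
         -H "X-Auth-Project-Id: <project-name>" \
         -H "User-Agent: python-workloadmgrclient" \
         -H "Accept: application/json" \
         -H "X-Auth-Token: <token>"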

    All of the following API commands require an authentication token for a user with the admin role in the authentication project.

    Global Job Scheduler status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler

    Requests the status of the Global Job Scheduler

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to disable the Scheduler in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
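    An illustrative request; angle-bracket values are placeholders. The response is a small JSON body such as {"global_job_scheduler": true}, as shown in the sample output later in this guide:

    curl -X GET "https://<tvm_address>:8780/v1/<tenant_id>/global_job_scheduler" \
         -H "X-Auth-Project-Id: <project-name>" \
         -H "User-Agent: python-workloadmgrclient" \
         -H "Accept: application/json" \
         -H "X-Auth-Token: <token>"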

    Disable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable

    Requests disabling the Global Job Scheduler

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to disable the Scheduler in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Enable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable

    Requests enabling the Global Job Scheduler

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to disable the Scheduler in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

  • No snapshot OR restore to be running.
  • Global job-scheduler should be disabled.

  • wlm-cron should be disabled and any lingering process should be killed.

    Trilio uses 4 main services on the Trilio Appliance:
    • wlm-api

    • wlm-scheduler

    • wlm-workloads

    • wlm-cron

    Those can be verified to be up and running using the systemctl status command.
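    For example:

    systemctl | grep wlm
    systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron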

    Check the Trilio pacemaker and nginx cluster

    The second component to check for the Trilio Appliance's health is the nginx and pacemaker cluster.
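    Both can be checked from any appliance node with:

    pcs status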

    Verify API connectivity of the Trilio Appliance

    Checking the availability of the Trilio API on the chosen endpoints is recommended.

    The following example curl command lists the available workload-types and verifies that the connection is available and working:
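    For instance (angle-bracket values are placeholders; a complete sample with a real token and its output appears in the command listing of this section):

    curl http://<tvm_address>:8780/v1/<tenant_id>/workload_types/detail -X GET \
         -H "X-Auth-Project-Id: admin" \
         -H "User-Agent: python-workloadmgrclient" \
         -H "Accept: application/json" \
         -H "X-Auth-Token: <token>"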

    Please check the API guide for more commands and how to generate the X-Auth-Token.

    Verify Trilio components on OpenStack

    On OpenStack Ansible

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    To check the dmapi service, go to the dmapi container residing on the controller nodes and run the command below:
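    For example (sample output is shown earlier in this section):

    lxc-attach -n <dmapi-container-name>
    systemctl status tvault-datamover-api.service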

    Datamover service (tvault-contego)

    The datamover service runs on each compute node. Log in to the compute node and run the command below:
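    For example:

    systemctl status tvault-contego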

    On Kolla Ansible OpenStack

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    Run the following command on “nova-api” nodes and make sure the “triliovault_datamover_api” container is in a started state.
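    For example:

    docker ps | grep triliovault_datamover_api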

    Datamover service (tvault-contego)

    Run the following command on "nova-compute" nodes and make sure the container is in a started state.
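    For example:

    docker ps | grep triliovault_datamover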

    Trilio Horizon integration

    Run the following command on horizon nodes and make sure the container is in a started state.
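    For example:

    docker ps | grep horizon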

    On Canonical OpenStack

    Run the following command on MAAS nodes and make sure all Trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, and trilio-wlm are in an active state.
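    For example:

    juju status | grep trilio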

    On Red Hat OpenStack and TripleO

    On controller node

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and that no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or are not listed by the following command, the deployment was not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container, which contains the latest OpenStack Horizon plus Trilio's Horizon plugin.
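    For example:

    # RHOSP 13
    docker ps | grep trilio-
    # RHOSP 16 onwards / TripleO
    podman ps | grep trilio-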

    On compute node

    Make sure the Trilio datamover container (shown below) is in a running state and that no other Trilio container is deployed on the compute nodes. If the containers are in a restarting state or are not listed by the following command, the deployment was not done correctly.
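    For example:

    podman ps | grep trilio-    # use docker ps on RHOSP 13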

    On overcloud

    Please check the dmapi endpoints on the overcloud node.
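    For example:

    openstack endpoint list | grep datamover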

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:06:01 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:31:49 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491
    
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2020-10-23T14:35:11.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "value":"871ca24f38454b14b867338cb0e9b46c",
          "description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-10-23T14:35:11.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
                "settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
                "settings_project_id":"c76b3355a164498aa95ddbc960adc238",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_domain_id":"default",
          "user":"ccddc7e7a015487fa02920f4d4979779",
          "tenant":"c76b3355a164498aa95ddbc960adc238"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:45:27 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b
    
    {"global_job_scheduler": true}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:49:29 GMT
    Content-Type: application/json
    Content-Length: 31
    Connection: keep-alive
    X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0
    
    {"global_job_scheduler": false}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:50:11 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc
    
    {"global_job_scheduler": true}
    pcs resource disable wlm-cron
    
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr 
    tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr
    cp tvault_backup.tar.gz /root/ 
    ls -al /home/stack/myansible_3.8
    yum-config-manager --disable bintray-rabbitmq-server
    yum-config-manager --disable mariadb
    yum -y groupinstall "Development Tools"
    yum -y install openssl-devel bzip2-devel libffi-devel xz-devel 
    wget https://www.python.org/ftp/python/3.8.12/Python-3.8.12.tgz 
    tar xvf Python-3.8.12.tgz
    cd Python-3.8*/
    ./configure --enable-optimizations
    sudo make altinstall
    # Create the Python3.8 virtual env
    cd /home/stack/
    virtualenv -p /usr/local/bin/python3.8 myansible_3.8 --system-site-packages
    source /home/stack/myansible_3.8/bin/activate
    pip3 install pip --upgrade
    pip3 install setuptools --upgrade
    pip3 install jinja2 "ansible>=2.9.0" configobj pbr
    source /home/stack/myansible/bin/activate
    pip3 uninstall ansible
    pip3 install --upgrade pip
    export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/
    source /home/stack/myansible/bin/activate 
    systemctl stop tvault-object-store
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --upgrade --no-cache-dir
    rm -rf /var/triliovault/*
    source /home/stack/myansible/bin/activate 
    systemctl stop tvault-object-store
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --upgrade --no-cache-dir --no-deps
    rm -rf /var/triliovault/*
    source /home/stack/myansible_3.8/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --upgrade --no-cache-dir
    source /home/stack/myansible_3.8/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --upgrade --no-cache-dir
    ERROR: Command errored out with exit status 1:
    command: /home/stack/myansible/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/setup.py'"'"'; __file__='"'"'/tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pdd9x77v
    cwd: /tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --upgrade --no-cache-dir
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --upgrade --no-cache-dir --no-deps
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --upgrade --no-cache-dir
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --upgrade --no-cache-dir --no-deps
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --upgrade --no-cache-dir
    source /home/stack/myansible/bin/activate
    pip3 install --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --upgrade --no-cache-dir --no-deps
    source /home/stack/myansible/bin/activate
    pip3 install oslo.messaging==12.1.6 --no-deps
    cd /root 
    tar -xzvf tvault_backup.tar.gz -C /
    pcs resource delete wlm-scheduler
    systemctl restart tvault-object-store
    systemctl restart wlm-api 
    systemctl restart wlm-scheduler
    systemctl restart wlm-workloads 
    systemctl restart tvault-config
    pcs resource enable wlm-cron
    pcs resource restart wlm-cron
    systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded'
    pcs status
    systemctl status wlm-cron
    ps -ef | grep [w]orkloadmgr-cron
    [root@tvm6 ~]# ps -ef | grep [w]orkloadmgr-cron
    nova      8841     1  2 Jul28 ?        00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    nova      8898  8841  0 Jul28 ?        00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    mount --bind /var/lib/nova/triliovault-mounts /var/triliovault-mounts
    mount --bind /var/trilio/triliovault-mounts /var/triliovault-mounts
    echo "/var/lib/nova/triliovault-mounts /var/triliovault-mounts    none    bind    0 0" >> /etc/fstab
    echo "/var/trilio/triliovault-mounts /var/triliovault-mounts	none bind	0 0" >> /etc/fstab
    [root@TVM1 ~]# id nova
    uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    systemctl | grep wlm
      wlm-api.service          loaded active running   workloadmanager api service
      wlm-cron.service         loaded active running   Cluster Controlled wlm-cron
      wlm-scheduler.service    loaded active running   Cluster Controlled wlm-scheduler
      wlm-workloads.service    loaded active running   workloadmanager workloads service
    systemctl status wlm-api
    ######
    ● wlm-api.service - workloadmanager api service
       Loaded: loaded (/etc/systemd/system/wlm-api.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:41:19 UTC; 2 months 21 days ago
     Main PID: 4688 (workloadmgr-api)
       CGroup: /system.slice/wlm-api.service
               ├─4688 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9342 (workloadmgr-sch)
       CGroup: /system.slice/wlm-scheduler.service
               └─9342 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:51:05 UTC; 2 months 21 days ago
     Main PID: 606 (workloadmgr-wor)
       CGroup: /system.slice/wlm-workloads.service
               ├─ 606 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-cron
    ######
    ● wlm-cron.service - Cluster Controlled wlm-cron
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-cron.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9209 (workloadmgr-cro)
       CGroup: /system.slice/wlm-cron.service
               ├─9209 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: TVM1 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
    Last updated: Mon Jan 24 13:42:01 2022
    Last change: Tue Nov  2 19:07:04 2021 by root via crm_resource on TVM2
    
    3 nodes configured
    9 resources configured
    
    Online: [ TVM1 TVM2 TVM3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started TVM2
     wlm-cron       (systemd:wlm-cron):     Started TVM2
     wlm-scheduler  (systemd:wlm-scheduler):        Started TVM2
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ TVM2 ]
         Stopped: [ TVM1 TVM3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    curl http://10.10.2.34:8780/v1/8e16700ae3614da4ba80a4e57d60cdb9/workload_types/detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-workloadmgrclient" -H "Accept: application/json" -H "X-Auth-Token: gAAAAABe40NVFEtJeePpk1F9QGGh1LiGnHJVLlgZx9t0HRrK9rC5vqKZJRkpAcW1oPH6Q9K9peuHiQrBHEs1-g75Na4xOEESR0LmQJUZP6n37fLfDL_D-hlnjHJZ68iNisIP1fkm9FGSyoyt6IqjO9E7_YVRCTCqNLJ67ZkqHuJh1CXwShvjvjw
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
lxc-attach -n <dmapi-container-name>  (go to dmapi container)
    root@controller-dmapi-container-08df1e06:~# systemctl status tvault-datamover-api.service
    ● tvault-datamover-api.service - TrilioData DataMover API service
         Loaded: loaded (/lib/systemd/system/tvault-datamover-api.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 11:53:39 UTC; 1 day 17h ago
       Main PID: 23888 (dmapi-api)
          Tasks: 289 (limit: 57729)
         Memory: 607.7M
         CGroup: /system.slice/tvault-datamover-api.service
                 ├─23888 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23893 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23894 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23895 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23896 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23897 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23898 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23899 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23900 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23901 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23902 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23903 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23904 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23905 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23906 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23907 /usr/bin/python3 /usr/bin/dmapi-api
                 └─23908 /usr/bin/python3 /usr/bin/dmapi-api
    
    Jan 12 11:53:39 controller-dmapi-container-08df1e06 systemd[1]: Started TrilioData DataMover API service.
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    
    root@compute:~# systemctl status tvault-contego
    ● tvault-contego.service - Tvault contego
         Loaded: loaded (/etc/systemd/system/tvault-contego.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-01-14 05:45:19 UTC; 2s ago
       Main PID: 1489651 (python3)
          Tasks: 19 (limit: 67404)
         Memory: 6.7G (max: 10.0G)
         CGroup: /system.slice/tvault-contego.service
                 ├─ 998543 /bin/qemu-nbd -c /dev/nbd45 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998772 /bin/qemu-nbd -c /dev/nbd73 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998931 /bin/qemu-nbd -c /dev/nbd100 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─ 999147 /bin/qemu-nbd -c /dev/nbd35 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─1371322 /bin/qemu-nbd -c /dev/nbd63 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─1371524 /bin/qemu-nbd -c /dev/nbd91 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 └─1489651 /openstack/venvs/nova-22.3.1/bin/python3 /usr/bin/tvault-contego --config-file=/etc/nova/nova.conf --config-file=/etc/tvault-contego/tvault-cont>
    
    Jan 14 05:45:19 compute systemd[1]: Started Tvault contego.
    Jan 14 05:45:20 compute sudo[1489653]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/nova-22.3.1/bin/nova-rootwrap /etc/nova/rootwrap.conf umou>
    Jan 14 05:45:20 compute sudo[1489653]: pam_unix(sudo:session): session opened for user root by (uid=0)
    Jan 14 05:45:21 compute python3[1489655]: umount: /var/triliovault-mounts/VHJpbGlvVmF1bHQ=: no mount point specified.
    Jan 14 05:45:21 compute sudo[1489653]: pam_unix(sudo:session): session closed for user root
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] CPU Control group m>
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] I/O Control Group m>
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    3f979c15cedc   trilio/centos-binary-trilio-datamover-api:4.2.50-victoria                     "dumb-init --single-…"   3 days ago    Up 3 days                         triliovault_datamover_api
    [root@compute1 ~]# docker ps | grep triliovault_datamover
    2f1ece820a59   trilio/centos-binary-trilio-datamover:4.2.50-victoria                        "dumb-init --single-…"   3 days ago    Up 3 days                        triliovault_datamover
    [root@controller ~]# docker ps | grep horizon
    4a004c786d47   trilio/centos-binary-trilio-horizon-plugin:4.2.50-victoria                    "dumb-init --single-…"   3 days ago    Up 3 days (unhealthy)             horizon
    root@jujumaas:~# juju status | grep trilio
    trilio-data-mover       4.2.51   active       3  trilio-data-mover       jujucharms    9  ubuntu
    trilio-dm-api           4.2.51   active       1  trilio-dm-api           jujucharms    7  ubuntu
    trilio-horizon-plugin   4.2.51   active       1  trilio-horizon-plugin   jujucharms    6  ubuntu
    trilio-wlm              4.2.51   active       1  trilio-wlm              jujucharms    9  ubuntu
      trilio-data-mover/8        active    idle            172.17.1.5                         Unit is ready
      trilio-data-mover/6        active    idle            172.17.1.6                         Unit is ready
      trilio-data-mover/7*       active    idle            172.17.1.7                         Unit is ready
      trilio-horizon-plugin/2*   active    idle            172.17.1.16                        Unit is ready
    trilio-dm-api/2*             active    idle   1/lxd/4  172.17.1.27     8784/tcp           Unit is ready
    trilio-wlm/2*                active    idle   7        172.17.1.28     8780/tcp           Unit is ready
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp16 onwards/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain1-controller-0 heat-admin]# podman ps | grep trilio-
    e3530d6f7bec  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:4.2.47-rhosp16.1           kolla_start           2 weeks ago   Up 2 weeks ago          trilio_dmapi
    f93f7019f934  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:4.2.47-rhosp16.1          kolla_start           2 weeks ago   Up 2 weeks ago          horizon
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    

    Trilio 4.1 HF1 Release Notes

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Enhancements

    Support for passwordless SMTP servers

Trilio for Openstack supports sending notification emails upon successful or failed backup/restore jobs. The required SMTP server configuration previously enforced the usage of a password for the SMTP user.

    A password is no longer necessary when the SMTP server doesn’t need it.

    Support for Multi-Attach Volumes

    Cinder supports a Volume Type, which allows attaching the same Volume to multiple instances simultaneously. Backups and Restore for this Volume Type failed. Trilio for Openstack is now providing base support for this Volume Type.

    Cinder Boot Volumes with Multi-Attach activated are not yet supported.

    This only allows the backup and restoration of Multi-Attach Volumes. Trilio will handle the Volume like any single attached Volume for now. For example, a multi-attach volume connected to 2 VMs will get backed up and restored twice.

    Increased the timeout for data transfer

    Trilio for Openstack is tracking the progress of backups and restore using a tracking file. If this file is not getting updated within a defined timeframe, the Trilio data transfer fails. This timeframe got extended from 10 minutes to 20 minutes.

    This value will become configurable in T4O 4.1 SP1

    Misleading and expected errors are no longer shown in logs by default

    Trilio for Openstack was logging many system error messages, which were actually expected and handled internally without impacting the actual functions of the solution.

    These error messages were misleading in the normal troubleshooting process. These error messages aren’t logged anymore by default. They can be reactivated using the debug mode for logging.

    Image upload timeout window has been increased and made configurable

    The upload of Trilio backups is limited in time to prevent stalling workloads with a stuck upload process.

This timeout window has been increased from 10 hours to 48 hours by default and is configurable in the workloadmgr config file.
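A minimal sketch of the corresponding entry in the workloadmgr config file, matching the 48-hour default described above (the option name and value also appear later in this document):

    [DEFAULT]
    max_wait_for_upload = 48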

    Restart wlm-workloads after setting this value.

    Grace time to retrigger a Snapshot in case of deactivated Global Job Scheduler

    When the Global Job Scheduler is deactivated, no backups are triggered. The Global Job Scheduler contains a grace time for missed Snapshots. All Snapshots that were supposed to be triggered within this grace time before activation of the Global Job Scheduler are retriggered.

    This grace period is now configurable in the workloadmgr config file.
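A sketch of the corresponding section in the workloadmgr config file, using the value listed later in this document (assumed to be in seconds, i.e. a 10-minute grace time):

    [global_job_scheduler]
    misfire_grace_time = 600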

    After setting this value, restart the wlm-cron service.

    Increased the amount of dmapi workers and made it configurable

In heavily used environments, the dmapi workers were identified as a potential bottleneck. The default number of workers used by the dmapi service has been increased to 16 and is configurable in the dmapi config file.
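A sketch of the corresponding entry in the dmapi config file, matching the default of 16 workers described above:

    [DEFAULT]
    dmapi_workers = 16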

    Restart the dmapi service after setting the configuration manually.

    The upgrade process of RHOSP and Kolla Ansible will automatically set this value.

    Added new settings in haproxy configuration for dmapi

    Response times in highly used environments might be slow, leading to the dmapi service timing out in the haproxy connection. The default values of haproxy are not always suitable in that case.

    The haproxy configuration for the dmapi service has been extended to the following values.
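A sketch of the extended haproxy backend settings for the dmapi service, matching the values listed later in this document:

    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000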

    Restart the haproxy service after setting the configuration manually.

    The upgrade process of RHOSP and Kolla Ansible will automatically set these values.

    Added retries for Neutron API calls

In some environments the Openstack Neutron service is so heavily used that API calls time out. Trilio backups and restores failed when any Neutron API call timed out.

    Trilio will now retry Neutron API calls three times before failing a backup or restore.

    Added retries and rescan for mounting temporary Volumes using Cinder Storages with multipathing activated

    It was observed in multipathing environments that sometimes backups failed due to errors with the temporary Cinder Volumes during the following actions:

    • Create Cinder Volume out of Cinder Snapshot

    • Mount Cinder Volume to Compute Node

    • Unmount Cinder Volume from Compute Node

    • Delete Cinder Volume

    During these operations in multipath environments, errors are now handled by rescanning the connected devices and retrying the internal commands.

    The amount of retries is configurable in the tvault-contego config file.
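A sketch of the corresponding entry in the tvault-contego config file, matching the value listed later in this document:

    [cinder]
    http_retries = 10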

    Restart the tvault-contego service after manually setting the value.

    The upgrade process of RHOSP and Kolla Ansible will automatically set these values.

    Enhanced logging of Trilio for Openstack GUI

    The Trilio for Openstack GUI uses the admin account to secure access to the features and functionalities located on the Trilio appliance. The following events are now getting logged by the Trilio Appliance:

    • Login attempts

    • Logout events

    • Password changes for the admin user

    Added text banner to Trilio for Openstack GUI login page

The Trilio for Openstack login page can now be extended with a text banner. This text banner is configurable on the Trilio appliance by editing the banner yaml file located under:

    /etc/tvault-config/banner.yaml

    The content of the file looks as follows:
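A sketch of /etc/tvault-config/banner.yaml with its default fields, as listed later in this document (header and body_text are left empty here and can be filled with the desired banner text):

    header:
    header_color: blue
    body_text_color: "#DC143C"
    body_text:
    header_font_size: 25px
    body_text_font_size: 22px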

    Restart tvault-config after changing the banner to activate it.

    Fixed Bugs and issues

    Local job scheduler stays disabled after workload creation with enabled job scheduler

An issue got fixed for rare occasions in which the status of the local job scheduler of a single workload was disabled, despite the workload being created with an enabled job scheduler.

    Global Job Scheduler is shown as active even when wlm-cron service is down

An issue got fixed for the Global Job Scheduler returning enabled or disabled even when the wlm-cron service is deactivated. The status returned in this scenario is now an error message showing the wlm-cron service status.

    Wrong documentation link in the Trilio Appliance

    The documentation link available inside the Trilio Appliance was still pointing to the old outdated documentation webpage. The link has been updated to point towards the correct documentation.

    Restore VMs into different Availability Zone failed

    An issue got fixed, which prevented the restoration of VMs into different Availability Zones in the case of the original Availability Zone no longer being available.

    Stopping workloadmgr service and all ongoing worker tasks

An issue got fixed which led to stale service jobs being left behind upon restart of the workloadmgr services.

    Mounting of LVM configured disks into the FRM failed

    An issue got fixed, which prevented the correct mounting and access of Volumes partitioned and configured by LVM.

    Upload of Backups failed intermittently using S3

An issue got fixed for a race condition between upload threads in the S3 fuse plugin, which led to backups failing during the upload phase.

    Multipathing not enabled by default in the Data-Mover container used in RHOSP and Kolla Ansible Openstack

An issue got fixed which led to multipathing not being enabled in the Data-Mover container used by RHOSP and Kolla Ansible.

    Upgrading to 4.1 HF1 will automatically activate multipathing where feasible.

    An inaccessible S3 mount point got created when the S3 endpoint is not available during deployment or configuration

An issue got fixed which led to the creation of a Trilio mount point even when the provided S3 backup target is not reachable during deployment or configuration.

    The deployment will still succeed, but the tvault-object-store service will be in a failed state.

    Trustee role not correctly inherited from user groups to users

    An issue got fixed, which prevented the detection of the Trilio trustee role for a user, who had this role inherited from a user group.

    The configurator always trying to use Keystone internal endpoint

    An issue got fixed, during which the chosen endpoint type did not get honored for Keystone and the configurator always reached out to the Keystone internal endpoint.

    The following value can be set in the api-paste ini file located under:

    /etc/workloadmgr/api-paste.ini
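A sketch of the setting, using the value listed later in this document (adjust the interface to the desired endpoint type):

    [filter:authtoken]
    interface = internal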

    Afterward the wlm-workloads service needs to be restarted.

    It is recommended to reconfigure the appliance to activate the fix

    Disk integrity check failing with a false positive

    An issue got identified, which leads to the disk integrity check failing, although there is no data loss.

    Snapshots with a failed disk integrity check are currently no longer failing and instead show a warning in the log files about the failed disk integrity check.

    A complete fix of the root cause is planned for 4.1 SP1.

    Workload creation and Backup creation failed due to Latin characters like á

An issue got identified in which Latin characters like á led to a Workload not being created or a backup not succeeding.

    This hotfix implements support of Latin characters for the following:

    • Calendar shown and used during workload creation for the job scheduler

    • Name and description of security groups

    Full support for Latin characters comes in 4.1 SP1

    Backups for multipath environments using the FC protocol failed

    An issue got fixed, which prevented successful backups in environments using multipathing with the FC storage protocol.

    Trilio 4.1 Release Notes

    Trilio release 4.1 introduces new features and capabilities including:

    • Openstack Ussuri support

    • New File Recovery Process

    • Increased Openstack independency

    • Installation Optimization for Kolla Ansible Openstack, Ansible Openstack, and Red Hat Openstack Platform

    • Support for External MySQL/MariaDB databases

    • Incremental Backup for nova booted instances

    • Support of Openstack User Groups

    • New Quota: Snapshots

    • S3 Support for Kolla Ansible Openstack

    • UI enhancement for selective Restore

    Trilio 4.1.94 is the GA release of Trilio 4.1

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Openstack Ussuri support

    Trilio 4.1 continues to enable Openstack versions and Distributions, allowing Trilio customers to stay up to date with Openstack releases.

    Trilio 4.1 introduces full support for Openstack Ussuri for Kolla Ansible Openstack, Ansible Openstack, and Canonical Openstack. In addition, Trilio 4.1 does of course support the active long-term releases from Red Hat and Canonical. Openstack users of those releases can continue to use the latest Trilio functionalities.

    New File Recovery Manager Process

Since Trilio was first released, the File Recovery Manager instance has shipped in tandem with Trilio. The File Recovery Manager instance helped customers easily and quickly fetch and restore files and folders directly from the backups.

Over the years, more and more customers requested to run the File Recovery Manager on their own images or on a specific Linux distribution. We looked into creating multiple versions of the File Recovery Manager, but this approach would not have provided the flexibility our customers need.

The result was a revamp of the File Recovery Manager, which now allows installation on any CentOS-based or Ubuntu-based instance.

    Increased Openstack independency

Trilio has had the goal of becoming an integrated part of Openstack since its first draft on a whiteboard. Back then, the Trilio services were defined as sub-services of Nova; Trilio tied deeply into the already existing nova services and became an integral part of them.

Over the years, the nova service changed and capabilities that were once available have been removed from nova. This, plus the lengthy qualification required for all Openstack Versions and Distributions, made it clear that a change was required.

    Trilio 4.1 is now a complete stand-alone service, which is communicating with nova and other Openstack services through APIs only. The goal of this independence is to speed up future qualification and requalification cycles to provide a broader support matrix again.

    Installation optimization for Kolla Ansible Openstack, Ansible Openstack, and Red Hat Openstack Platform

Trilio's integration into Openstack already starts with the installation process. This process required manual installation steps in the past, which were hard to automate and scale in bigger environments.

Trilio 4.1 therefore introduces Ansible Playbooks for Kolla-Ansible and Ansible Openstack, while the integration with Red Hat Director has been deepened. Further, the Trilio configurator now allows setting the backup target on the Trilio Appliance directly based on the Distribution.

    Support for external MySQL/MariaDB

The Trilio Appliance provides its own database, which suits the needs of 99% of Trilio's customers. Some customers have higher requirements, be it performance, security, or just the general design of their Openstack itself.

For these customers, and everyone else who wants to use it, Trilio 4.1 introduces the possibility to configure Trilio with an external database.

    Incremental Backups for nova booted instance root volumes

    Openstack provides the possibility to start an instance from a Cinder Volume or directly using the Glance Image and a nova volume.

    Trilio always provided incremental forever backups for all cinder volumes. Root-Volumes from nova-booted instances were always taken as a full backup of the Glance image and the actual VM root volume.

    Trilio 4.1 does introduce incremental backups for nova booted instance root volumes. This allows Trilio 4.1 to provide incremental backups to any type of root volume.

    Support for Openstack User Groups

Trilio uses the Openstack Keystone service to authenticate users and to verify that the right permissions are set. Openstack Keystone allows grouping users and setting permissions on the group.

    These Openstack User Groups are now fully supported.

    New Quota: Snapshots

Trilio 4.0 introduced the Quota functionality, which allows setting quotas for the number of workloads, the number of VMs, and the amount of storage used by a single tenant.

Trilio 4.1 extends this feature with a quota for the number of Trilio Snapshots that a Tenant is allowed to have.

    S3 Support for Kolla Ansible Openstack

S3 is becoming the standard for transferring data to and from storage solutions. Trilio introduced S3 support as early as Version 3.0 but had to exclude it for Kolla Ansible when Kolla Ansible was added to the Support Matrix.

    Trilio 4.1 is now closing that gap to other Openstack Distributions and provides full S3 support for Kolla Ansible Openstack Ussuri.

    UI enhancement for the Selective Restore

The Selective Restore is the most powerful and complex restore Trilio has to offer. Its UI needs to be easy to understand and help the user fulfill the task.

Several points have been identified to improve usability. The selective restore now allows selecting or deselecting all VMs at once. Further, the VMs are now provided in an easy-to-overview list, and the sub-controls can be expanded and collapsed as necessary.

    Known Issues

    This release contains the following known issues which are tracked for a future update.

    Snapshots getting stuck in uploading stage on RHOSP13 with ISCSI based Volumes

    Observation:

• Login to the iSCSI device is rejected for the Trilio service

• As a result, the Snapshot does not move forward until it times out

    Workloadmgr Quota feature does not support RHOSP13 & Canonical Queens Horizon

    The workloadmgr Quota feature is still fully supported through CLI.

    [Openstack Ansible] snapshot mount failed with error 'Permission denied'

    Observation:

• It has been observed that on non-Kolla, non-RHOSP setups, such as Openstack Ansible, the nova user id is not the same as the assumed default (162).

• Another observation is that the id assigned to nova was conflicting with the id of a system user on the TVM, which created a situation where Openstack had to be redeployed.

    Workaround:

    • Update permissions of /var/triliovault-mounts to 755
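A minimal sketch of the workaround, assuming the default mount path:

    chmod 755 /var/triliovault-mounts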

    [Volume Backup Exclude] Excluded Ceph Volume after restore not mountable or formattable

    Observation:

    • VM Volumes stored on Ceph are successfully excluded from backup if desired

    • Restore does create empty Ceph Volume

    • created empty Ceph Volume is not attachable or formattable

    Restored VMs have blank metadata config_drive attached

    Observation:

• For every restore, the metadata config_drive is set to a blank value

    • No impact on restored VMs known

    Workaround

    • delete metadata config_drive

    • or set desired value

    TVM reconfig fails when adding new Trilio VM to the cluster

    Observation:

    • TVault re-configuration while adding nodes to existing TVM cluster fails at "Configuring Trilio Cluster"

• The reason is that the previous mysql password was not working and mysql root access has to be reset.

    Workaround:

    • remove /root/.my.cnf file on already configured TVM and reconfigure it

    Database does not sync after Trilio cluster gets new nodes

    Observation:

• After TVault re-configuration following the addition of 2 more nodes to an existing TVM cluster ("import workloads" was not selected), the databases do not sync with the already existing TVM.

• It is expected that while adding the 2 new nodes, the database on node1 gets synced to the 2 new nodes and the existing workloads are available after the reconfiguration on the new 3-node TVM cluster.

    Workaround:

    • Run workload import from CLI

    [exclude_boot_disk] Data on boot disk gets backed up despite exclusion

    Observation:

    • VM was set with metadata exclude_boot_disk_from_backup set to true

• Restored instance showed that data was backed up and restored

    After reinitialize and import Openstack certificates are missing

    Observation:

    • Reinitialize does not keep the already uploaded Openstack Certificates used to communicate with Openstack.

    Workaround:

    • Upload Certificates again

    CLI import changes scheduler trust value to disabled

    Observation:

• When the import is used via CLI, the scheduler trust is changed from enabled to disabled.

    Workaround:

    • Configure/re-configure T4O with import option from UI after reinitialize.

    Unable to get node details after reinitializing the Trilio Appliance

    Observation:

• After reinitializing, neither the UI nor the CLI was showing Node information

    Workaround:

    • Restart wlm-workloads and wlm-cron services on Trilio nodes

    • systemctl restart wlm-workloads

    • systemctl restart wlm-cron

Snapshots fail with "object is not subscriptable" for many workload jobs at the exact same time

Observation:

    • Running more than 25 workloads at the exact same time leads to error

    • dmapi service is not responding

    • Snapshots fail with "object is not subscriptable"

    Workaround:

    Contact Trilio Support to implement a known workaround.

    [Kolla Ansible] dmapi container stuck in restarting if storage backend changes

    Observation:

• Just changing the backup target in the Kolla Ansible configuration files and redeploying leads to the dmapi container being stuck in restarting

    Workaround:

• Follow the guide to Switch Backup Target on Kolla-Ansible

    No operation is permitted in insecure way for ssl enabled Keystone URL

    Observation:

    • SSL enabled Openstack

• Backup and Restore jobs fail with a missing TLS CA certificate bundle error

    Workaround:

    • Configure the Trilio appliance with Openstack CA provided

    • OR Provide Openstack CA to /etc/workloadmgr/ca-chain.pem

    Upgrading on RHOSP

    0] Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • No Snapshot or Restore is running

    • Global job scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

    Deactivating the wlm-cron service

The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
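A sketch of those commands, run on a Trilio appliance node; the resource name and verification steps follow the patterns used elsewhere in this guide:

    pcs resource disable wlm-cron
    pcs status
    systemctl status wlm-cron
    ps -ef | grep [w]orkloadmgr-cron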

    1.] [On Undercloud node] Clone latest Trilio repository and upload Trilio puppet module

    All commands need to be run as user 'stack' on undercloud node

    The Trilio appliance connected to this installation needs to be of version 4.1 HF10

    1.1] Clone Trilio cfg scripts repository

Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.

Available RHOSP_RELEASE_DIRECTORY values are:

    rhosp13 rhosp16.1 rhosp16.2
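A sketch of the clone step; the branch name shown here is an assumption taken from the hotfix Gitbranch listed in the release notes, so adjust it to the branch matching your target hotfix version:

    cd /home/stack
    git clone -b hotfix-1-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>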

RHOSP 16.0 is not supported anymore as Red Hat has officially stopped supporting it. However, Trilio maintained it for some time and stopped the support from 4.1 HF11 onwards. The latest hotfix available for RHOSP 16.0 is 4.1 HF10. Reach out to the Support team for any help.

    1.2] If backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the puppet directory of the right release.

    2] Upload Trilio puppet module

    3] Update overcloud roles data file to include Trilio services

    Trilio has two services as explained below. You need to add these two services to your roles_data.yaml. If you do not have customized roles_data file, you can find your default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on undercloud.

    You need to find that role_data file and edit it to add the following Trilio services.

    i) Trilio Datamover Api Service:

    Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi

This service needs to be co-located with the database and keystone services. That said, you need to add this service to the same role as the keystone and database services.

Typically this service should be deployed on controller nodes where keystone and the database run. If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.

ii) Trilio Datamover Service: Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover. This service should be deployed on the role where the nova-compute service is running.

    If you are using RHOSP's pre-defined roles, you need to add our OS::TripleO::Services::TrilioDatamover service to Compute role.

If you have defined custom roles, then you need to identify the role name where the 'nova-compute' service is running and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.

    4] Prepare latest Trilio container images

    All commands need to be run as user 'stack'

Read the placeholder <HOTFIX-TAG-VERSION> as 4.1.94-hotfix-16 in the sections below.

Trilio containers are pushed to the 'RedHat Container Registry'. The registry URL is 'registry.connect.redhat.com'. The Trilio container URLs are as follows:

    4.1] available container images

    RHOSP 13

    RHOSP 16.1

    RHOSP 16.2

    There are three registry methods available in RedHat OpenStack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    4.2] Remote Registry

Please refer to the following overview to see which containers are available.

    Follow this section when 'Remote Registry' is used.

    For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    trilio_env.yaml will be available in __triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments

    Example

4.3] Local Registry

Please refer to this overview to see which containers are available.

    Follow this section when 'local registry' is used on the undercloud.

In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml.

    RHOSP13

    Verify the changes

    RHOSP16.1

    Verify the changes:

    RHOSP16.2

    Verify the changes

    The changes can be verified using the following commands.

4.4] Red Hat Satellite Server

Please refer to the following overview to see which containers are available.

    Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

    Populate the trilio_env.yaml with container urls.

    RHOSP 13

    RHOSP16.1

    RHOSP16.2

5] Verify Trilio environment details

    It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.

    Locations of the trilio_env.yaml:

    For more details about the trilio_env.yaml please check .

    6] Update Overcloud Trilio components

    Use the following heat environment file and roles data file in overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use correct Trilio endpoint map file as per available Keystone endpoint configuration

    To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:

    7] Verify deployment

    If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.

    7.1] On Controller node

    Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.

    7.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    7.3] On the node with Horizon service

    Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have latest OpenStack horizon + Trilio's horizon plugin.

    Workloads

    Definition

    A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed. But each VM can only be part of one Workload.

    List of Workloads

    Using Horizon

    To view all available workloads of a project inside Horizon do:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    The overview in Horizon lists all workloads with the following additional information:

    • Creation time

    • Workload Name

    • Workload description

    • Total amount of Snapshots inside this workload

    Using CLI

    • --all {True,False}➡️List all workloads of all projects (valid for admin user only)

    • --nfsshare <nfsshare>➡️List all workloads of nfsshare (valid for admin user only)
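A sketch of the listing call, assuming the standard workloadmgr CLI verb for this operation:

    workloadmgr workload-list
    workloadmgr workload-list --all True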

    Workload Create

    Using Horizon

    To create a workload inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Click "Create Workload"

    The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

    Using CLI

    • --display-name➡️Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --workload-type-id➡️

    Workload Overview

A workload contains a lot of information, which can be seen in the workload overview.

    Using Horizon

    To enter the workload overview inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to show the details on

    Details Tab

The Workload Details tab provides you with the most important general information about the workload:

    • Name

    • Description

    • Availability Zone

    • List of protected VMs including the information of qemu guest agent availability

The status of the qemu-guest-agent only shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.

    It is possible to navigate to the protected VM directly from the list of protected VMs.

    Snapshots Tab

    The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

    From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

Please refer to the Snapshot and Restore sections of the User Guide to learn more about those.

    Policy Tab

    The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:

    • Scheduler Enabled / Disabled

    • Start Date / Time

    • End Date / Time

    • RPO

    Filesearch Tab

The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need for a restore.

    Please refer to the File Search User Guide to learn more about this feature.

    Misc. Tab

The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

    • Creation time

    • last update time

    • Workload ID

    • Workload Type

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to show

    • --verbose➡️option to show additional information about the workload
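A sketch of the call, assuming the standard workloadmgr CLI verb for this operation:

    workloadmgr workload-show <workload_id> --verbose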

    Edit a Workload

    Workloads can be modified in all components to match changing needs.

    Editing a Workload will set the User, who edits the Workload, as the new owner.

    Using Horizon

    To edit a workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    Using CLI

    • --display-name ➡️ Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --instance <instance-id=instance-uuid>➡️

    Delete a Workload

    Once a workload is no longer needed it can be safely deleted.

    All Snapshots need to be deleted before the workload gets deleted. Please refer to the User Guide to learn how to delete Snapshots.

    Using Horizon

    To delete a workload do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to be deleted

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to delete

    • --database_only <True/False>➡️Keep True if want to delete from database only.(Default=False)
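A sketch of the call, assuming the standard workloadmgr CLI verb for this operation (the --database_only flag defaults to False):

    workloadmgr workload-delete <workload_id>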

    Unlock a Workload

    Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.

It is highly recommended to use this feature only as a last resort, in case backups/restores are stuck without failing or a restore is required while a backup is running.

    Using Horizon

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to unlock

    Using CLI

    • <workload_id> ➡️ ID of the workload to unlock
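A sketch of the call, assuming the standard workloadmgr CLI verb for this operation:

    workloadmgr workload-unlock <workload_id>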

    Reset a Workload

In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. In such cases it is possible to reset a Workload instead of recreating it.

    The Workload reset will:

    • Cancel all ongoing tasks

    • Delete all existing Openstack Trilio Snapshots from the protected VMs

    • recalculate the next Snapshot time

    • take a full backup at the next Snapshot

    Using Horizon

    To reset a Workload do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to reset

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to reset
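A sketch of the call, assuming the standard workloadmgr CLI verb for this operation:

    workloadmgr workload-reset <workload_id>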

    Workload Policies

Trilio’s tenant-driven backup service gives tenants control over backup policies. However, sometimes this may give tenants too much control, and cloud admins may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and run full backups at a 1 hr interval. If every tenant were to pursue this backup policy, it would put a severe strain on the cloud infrastructure. Instead, if the cloud admin defines predefined backup policies and each tenant is limited to those policies, then cloud administrators can exert better control over the backup service.

    Workload policy is similar to nova flavor where a tenant cannot create arbitrary instances. Instead, each tenant is only allowed to use the nova flavors published by the admin.

    List and showing available Workload policies

    Using Horizon

    To see all available Workload policies in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

The following information is shown in the policy tab for each available policy:

    • Creation time

    • name

    • description

    • status

    Using CLI

    • <policy_id>➡️ Id of the policy to show

    Create a policy

    Using Horizon

    To create a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    Using CLI

    • --policy-fields <key=key-name> ➡️ Specify following key value pairs for policy fields Specify option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter Number of incremental snapshots to take Full Backup between 1 to 999, '-1' for 'NEVER' and '0' for 'ALWAYS')For example --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep '--policy-fields retention_policy_value='30' --policy- fields fullbackup_interval='2'

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

    Edit a policy

    Using Horizon

    To edit a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    Using CLI

    • --display-name <display-name>➡️Name of the policy

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

    • --policy-fields <key=key-name>

    Assign/Remove a policy

    Using Horizon

    To assign or remove a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    Using CLI

    • --add_project <project_id> ➡️ ID of the project to assign policy to. Use multiple times to assign multiple projects.

    • --remove_project <project_id> ➡️ ID of the project to remove policy from. Use multiple times to remove multiple projects.

    • <policy_id>

    Delete a policy

    Using Horizon

    To delete a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    Using CLI

    • <policy_id> ➡️ID of the policy to be deleted

    Installing on Kolla Ussuri

    This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.

    1] Plan for Deployment

    Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.

    E-Mail Notification Settings

E-Mail Notification Settings are done through the settings API. Use the values from the following table to set up Email Notifications through the API.

    Setting name
    Settings Type
    Value type
    example

    Managing Trusts

    Openstack Administrators should never have the need to directly work with the trusts created.

    The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

    Configuring Trilio

Added external database-support
Added the Openstack distribution for storage (mount path)

The Trilio configuration process uses Ansible scripts. Ansible has, in the last few years, grown in popularity as a preferred configuration management tool, and Trilio uses Ansible playbooks extensively to configure the Trilio cluster. To troubleshoot Trilio configuration issues, the user should have a basic understanding of Ansible playbook output.

    Ansible modules are inherently idempotent and hence Trilio configuration can run any number of times to change or reconfigure Trilio cluster.

    Once the VM is booted, point your browser (Chrome or Firefox) to Trilio node IP address.

    This will bring you to the Trilio Dashboard, which contains the Trilio configurator.

    python package

    4.1.94

    contegoclient

    python package

    4.1.94

    dmapi

    deb package

    4.1.94

    python3-dmapi

    deb package

    4.1.94

    tvault-contego

    deb package

    4.1.94

    python3-tvault-contego

    deb package

    4.1.94

    tvault-horizon-plugin

    deb package

    4.1.94

    python3-tvault-horizon-plugin

    deb package

    4.1.94

    s3-fuse-plugin

    deb package

    4.1.94

    python3-s3-fuse-plugin

    deb package

    4.1.94

    workloadmgr

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94

    python3-dmapi

    rpm package

    4.1.94

    tvault-contego

    rpm package

    4.1.94

    python3-tvault-contego

    rpm package

    4.1.94

    tvault-horizon-plugin

    rpm package

    4.1.94

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94

    python3-s3fuse-plugin

    rpm package

    4.1.94

    s3fuse

    python package

    4.1.94

    tvault-configurator

    python package

    4.1.94

    workloadmgr

    python package

    4.1.94

    Gitbranch

    stable/4.2

    RHOSP13 containers

    4.1.94-rhosp13

    RHOSP16.0 containers

    4.1.94-rhosp16

    RHOSP16.1 containers

    4.1.94-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-ussuri

    import functionality
    Switch Backup Target on Kolla-Ansible

    workloadmgrclient

    python3-dmapi

    deb package

    4.1.94.3

    tvault-contego

    deb package

    4.1.94.3

    python3-tvault-contego

    deb package

    4.1.94.3

    tvault-horizon-plugin

    deb package

    4.1.94.3

    python3-tvault-horizon-plugin

    deb package

    4.1.94.3

    s3-fuse-plugin

    deb package

    4.1.94.3

    python3-s3-fuse-plugin

    deb package

    4.1.94.3

    workloadmgr

    deb package

    4.1.94.3

    workloadmgrclient

    deb package

    4.1.94

    dmapi

    rpm package

    4.1.94.3-4.1

    python3-dmapi

    rpm package

    4.1.94.3-4.1

    tvault-contego

    rpm package

    4.1.94.3-4.1

    python3-tvault-contego

    rpm package

    4.1.94.3-4.1

    tvault-horizon-plugin

    rpm package

    4.1.94.3-4.1

    python3-tvault-horizon plugin-el8

    rpm package

    4.1.94.3-4.1

    python-s3fuse-plugin-cent7

    rpm package

    4.1.94.3-4.1

    python3-s3fuse-plugin

    rpm package

    4.1.94.3-4.1

    workloadmgrclient

    rpm package

    4.1.94

    s3fuse

    python package

    4.1.94.3

    tvault-configurator

    python package

    4.1.94.3

    workloadmgr

    python package

    4.1.94.3

    workloadmgrclient

    python package

    4.1.94

    dmapi

    deb package

    Gitbranch

    hotfix-1-TVO/4.1

    RHOSP13 containers

    4.1.94-hotfix2-rhosp13

    RHOSP16.0 containers

    4.1.94-hotfix-2-rhosp16

    RHOSP16.1 containers

    4.1.94-hotfix-2-rhosp16.1

    Kolla Ansible Ussuri containers

    4.1.94-hotfix-2-ussuri

    4.1.94.3

    Instead of tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

• Instead of tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

• Instead of tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml


    Total amount of succeeded Snapshots

  • Total amount of failed Snapshots

  • Workload Type

  • Status of the Workload

  • Provide Workload Name and Workload Description on the first tab "Details"

  • Choose between Serial or Parallel workload on the first tab "Details"

  • Choose the Policy if available to use on the first tab "Details"

  • Choose the VMs to protect on the second Tab "Workload Members"

  • Decide for the schedule of the workload on the Tab "Schedule"

  • Provide the Retention policy on the Tab "Policy"

  • Choose the Full Backup Interval on the Tab "Policy"

  • If required check "Pause VM" on the Tab "Options"

  • Click create

  • Workload Type ID is required
  • --source-platform➡️Workload source platform is required. Supported platforms is 'openstack'

  • --instance➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID

  • --jobschedule➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'

  • --metadata➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • --policy-id <policy_id>➡️ID of the policy to assign to the workload

  • Click the workload name to enter the Workload overview

    Time till next Snapshot run

  • Retention Policy and Value

  • Full Backup Interval policy and value

  • Click the small arrow next to "Create Snapshot" to open the sub-menu

  • Click "Edit Workload"

  • Modify the workload as desired - All parameters except workload type can be changed

  • Click "Update"

  • Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID
  • --jobschedule <key=key-name>➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. If don't specify timezone, then by default it takes your local machine timezone 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'

  • --metadata <key=key-name>➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • --policy-id <policy_id>➡️ID of the policy to assign

  • <workload_id> ➡️ID of the workload to edit
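    As a hedged example based on the options above, a workload could be renamed and have its member list updated like this (the UUIDs and workload ID are placeholders; it is assumed here that the instances given form the desired member list, so include every VM that should stay protected):

    workloadmgr workload-modify --display-name "Database-Workload-v2" \
      --instance instance-id=<existing_instance_uuid> \
      --instance instance-id=<new_instance_uuid> \
      <workload_id>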

  • Click the small arrow next to "Create Snapshot" to open the sub-menu

  • Click "Delete Workload"

  • Confirm by clicking "Delete Workload" yet again

  • Click the small arrow next to "Create Snapshot" to open the sub-menu

  • Click "Unlock Workload"

  • Confirm by clicking "Unlock Workload" yet again

  • Click the small arrow next to "Create Snapshot" to open the sub-menu

  • Click "Reset Workload"

  • Confirm by clicking "Reset Workload" yet again

  • Snapshot
    Restore
    Snapshots

    Navigate to Policy

    set interval

  • set retention type

  • set retention value

  • Navigate to Policy

  • Click new policy

  • provide a policy name on the Details tab

  • provide a description on the Details tab

  • provide the RPO in the Policy tab

  • Choose the Snapshot Retention Type

  • provide the Retention value

  • Choose the Full Backup Interval

  • Click create

  • --metadata <key=keyname> ➡️ Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • <display_name> ➡️ the name the policy will get
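    A possible policy-create invocation, combining the arguments above with the --policy-fields option documented in the CLI usage further below (all values are illustrative):

    workloadmgr policy-create --policy-fields interval='1 hr' \
      --policy-fields retention_policy_type='Number of Snapshots to Keep' \
      --policy-fields retention_policy_value='30' \
      --policy-fields fullbackup_interval='2' \
      --display-description "Hourly policy, 30 snapshots retained" \
      "Hourly-Policy"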

  • Navigate to Policy

  • identify the policy to edit

  • click on "Edit policy" at the end of the line of the chosen policy

  • edit the policy as desired - all values can be changed

  • Click "Update"

  • --policy-fields <key=key-name> ➡️
    Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (Enter the number of incremental snapshots between Full Backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
  • --metadata <key=keyname> ➡️ Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

  • <policy_id> ➡️ the name the policy will get

  • Navigate to Policy

  • identify the policy to assign/remove

  • click on the small arrow at the end of the line of the chosen policy to open the submenu

  • click "Add/Remove Projects"

  • Choose projects to add or remove by using the plus/minus buttons

  • Click "Apply"

  • <policy_id> ➡️
    ID of the policy to be assigned or removed
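    A hedged example of assigning a policy to one project while removing it from another, using the policy-assign options shown in the CLI usage below (the project and policy IDs are placeholders):

    workloadmgr policy-assign --add_project <project_id_to_add> \
      --remove_project <project_id_to_remove> \
      <policy_id>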

    Navigate to Policy

  • identify the policy to assign/remove

  • click on the small arrow at the end of the line of the chosen policy to open the submenu

  • click "Delete Policy"

  • Confirm by clicking "Delete"

  • [DEFAULT] 
    max_wait_for_upload = 48
    [global_job_scheduler] 
    misfire_grace_time = 600 
    [DEFAULT] 
    dmapi_workers = 16 
    retries 5 
    timeout http-request 10m 
    timeout queue 10m 
    timeout connect 10m 
    timeout client 10m 
    timeout server 10m 
    timeout check 10m 
    balance roundrobin 
    maxconn 50000
    [cinder] 
    http_retries = 10
    header: 
    header_color: blue 
    body_text_color: "#DC143C" 
    body_text: 
    header_font_size: 25px 
    body_text_font_size: 22px
    [filter:authtoken] 
    interface = internal
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    cd /home/stack
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    Trilio Datamove container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    # For RHOSP13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
       
    # For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/scripts/
    ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp13
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.1
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/scripts/
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.2
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    (undercloud) [stack@undercloud redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1                   |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1                  |
    
    -----------------------------------------------------------------------------------------------------
    
    (undercloud) [stack@undercloud redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    RHOSP13: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/environments/trilio_env.yaml
    RHOSP16.1: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml
    RHOSP16.2: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/environments/trilio_env.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-REL-VERSION>-rhosp16.1        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-REL-VERSION>-rhosp16.1       kolla_start           5 days ago  Up 5 days ago         horizon
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-REL-VERSION>-rhosp16.1                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-REL-VERSION>-rhosp16.1       kolla_start           5 days ago  Up 5 days ago         horizon
    workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
    workloadmgr workload-create --instance <instance-id=instance-uuid>
                                [--display-name <display-name>]
                                [--display-description <display-description>]
                                [--workload-type-id <workload-type-id>]
                                [--source-platform <source-platform>]
                                [--jobschedule <key=key-name>]
                                [--metadata <key=key-name>]
                                [--policy-id <policy_id>]
    workloadmgr workload-show <workload_id> [--verbose <verbose>]
    usage: workloadmgr workload-modify [--display-name <display-name>]
                                       [--display-description <display-description>]
                                       [--instance <instance-id=instance-uuid>]
                                       [--jobschedule <key=key-name>]
                                       [--metadata <key=key-name>]
                                       [--policy-id <policy_id>]
                                       <workload_id>
    workloadmgr workload-delete [--database_only <True/False>] <workload_id>
    workloadmgr workload-unlock <workload_id>
    workloadmgr workload-reset <workload_id>
    workloadmgr policy-list
    workloadmgr policy-show <policy_id>
    workloadmgr policy-create --policy-fields <key=key-name>
                              [--display-description <display_description>]
                              [--metadata <key=key-name>]
                              <display_name>
    workloadmgr policy-update [--display-name <display-name>]
                              [--display-description <display-description>]
                              [--policy-fields <key=key-name>]
                              [--metadata <key=key-name>]
                              <policy_id>
    workloadmgr policy-assign [--add_project <project_id>]
                              [--remove_project <project_id>]
                              <policy_id>
    workloadmgr policy-delete <policy_id>

    Refer to the below-mentioned acceptable values for the placeholders in this document as per the Openstack environment: kolla_base_distro : ubuntu / centos triliovault_tag : 4.1.94-hotfix-13-ussuri / 4.1.94-hotfix-12-victoria

    1.1] Select backup target type

    Backup target storage is used to store the backup images taken by Trilio. The following details are needed for configuration:

    The following backup target types are supported by Trilio. Select one of them and get it ready before proceeding to the next step.

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key
    - Secret Key
    - Region
    - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key
    - Secret Key
    - Region
    - Endpoint URL (valid for S3 other than Amazon S3)
    - Bucket name

    2] Clone Trilio Deployment Scripts

    Clone the triliovault-cfg-scripts GitHub repository on the Kolla ansible server at '/root' or any other directory of your preference. Afterward, copy the Trilio Ansible role into the Kolla-ansible roles directory.

    3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts

    3.1] Add Trilio global variables to globals.yml

    3.2] Add Trilio passwords to kolla passwords.yaml

    Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The passwords are empty by default and must be set manually in /etc/kolla/passwords.yml.

    Edit /etc/kolla/passwords.yml, go to the end of the file and set trilio passwords.

    3.3] Append Trilio site.yml content to kolla ansible’s site.yml

    3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.

    4] Edit globals.yml to set Trilio parameters

    Edit /etc/kolla/globals.yml file to fill Trilio backup target and build details. You will find the Trilio related parameters at the end of globals.yml file. Details like Trilio build version, backup target type, backup target details, etc need to be filled out.

    Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments

    triliovault_tag

    <triliovault_tag>

    Container tags. Use ussuri tagged containers for Ussuri and victoria tagged containers for Victoria

    horizon_image_full

    Keep Default

    By default, the Trilio Horizon container will not get deployed.

    Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container.

    triliovault_docker_username

    triliodocker

    default docker user of Trilio (read permission only)
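    As an illustration only, the Trilio block at the end of /etc/kolla/globals.yml could end up looking similar to the following sketch; the exact variable set may differ per release, and the horizon_image_full value shown here simply reuses the container image URL pattern listed later in this document:

    triliovault_tag: "<triliovault_tag>"
    # Uncomment to deploy the Trilio Horizon plugin container instead of the stock Horizon container:
    # horizon_image_full: "docker.io/trilio/<kolla_base_distro>-binary-trilio-horizon-plugin:<triliovault_tag>"
    triliovault_docker_username: "triliodocker"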

    In the case of a different registry than docker hub, Trilio containers need to be pulled from docker.io and pushed to preferred registries.

    Following are the triliovault container image URLs. Replace kolla_base_distro and triliovault_tag variables with their values.

    5] Enable Trilio Snapshot mount feature

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

    Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

    For a default Kolla installation, the variable will look as follows afterward:

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

    After the change, the variable will look as follows for a default Kolla installation:

    In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append trilio mount /var/trilio:/var/trilio:shared to the list.

    After the changes the variable will look like the following:

    6] Pull Trilio container images

    Pull the Trilio container images from the Dockerhub based on the existing inventory file. In the example, the inventory file is named multinode.

    7] Deploy Trilio

    All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.

    This is just an example command. You need to use your cloud deploy command.

    8] Verify Trilio deployment

    Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.

    9] Troubleshooting Tips

    9.1 ] Check Trilio containers and their startup logs

    To see all TrilioVault containers running on a specific node use the docker ps command.

    To check the startup logs use the docker logs <container name> command.
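    For example (the container names shown are the typical ones and may differ slightly per release):

    docker ps | grep trilio
    docker logs triliovault_datamover_api
    docker logs triliovault_datamover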

    9.2] Trilio Horizon tabs are not visible in Openstack

    Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.

    Verify that the Trilio horizon container is installed and in a running state.

    9.3] Trilio Service logs

    • Trilio datamover api service logs on datamover api node

    • Trilio datamover service logs on datamover node

    10. Change the nova user id on the Trilio Nodes

    Note: This step needs to be done on Trilio Appliance node. Not on OpenStack node.

    Pre-requisite: You should have already launched Trilio appliance VM

    In Kolla OpenStack distribution, nova user id on nova-compute docker container is set to '42436'. The nova user id on the Trilio nodes needs to be set the same. Do the following steps on all Trilio nodes:

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that nova user and group id have changed to '42436'

    5. After this step, you can proceed to the 'Configuring Trilio' section.

    String

    [email protected]

    smtp_port | email_settings | Integer | 587

    smtp_server_name | email_settings | String | Mailserver_A

    smtp_server_username | email_settings | String | admin

    smtp_server_password | email_settings | String | password

    smtp_timeout | email_settings | Integer | 10

    smtp_email_enable | email_settings | Boolean | True

    Create Setting

    POST https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Creates a Trilio setting.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work with

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Setting create requires a Body in json format, to provide the requested information.
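    A request sketch using curl is shown below; the address, tenant ID, project, token, and JSON body file are placeholders, and the body content must follow the format described above:

    curl -k -X POST "https://<tvm_address>:8780/v1/<tenant_id>/settings" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: <token>" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @setting_body.json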

    Show Setting

    GET https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

    Shows all details of a specified setting

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the setting

    setting_name

    string

    Name of the setting to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Modify Setting

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Modifies the provided setting with the given details.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work with

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Setting modify requires a Body in json format, to provide the information about the values to modify.

    Delete Setting

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

    Deletes the specified setting.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the setting in

    setting_name

    string

    Name of the setting to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    smtp_default___recipient | email_settings | String | [email protected]

    smtp_default___sender | email_settings

    List Trusts

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts

    Provides the list of trusts for the given Tenant.

    Path Parameters

    Name
    Type
    Description

    tvm_name

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant / Project to fetch the trusts from

    Query Parameters

    Name
    Type
    Description

    is_cloud_admin

    boolean

    true/false

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
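    A corresponding request sketch (address, tenant ID, project, and token are placeholders to be replaced) could look like this:

    curl -k "https://<tvm_address>:8780/v1/<tenant_id>/trusts?is_cloud_admin=false" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: <token>" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"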

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:21:57 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3
    
    {
       "trust":[
          {
             "created_at":"2020-11-26T13:10:53.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
    
    

    Create Trust

    POST https://$(tvm_address):8780/v1/$(tenant_id)/trusts

    Creates a trust in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to create the Trust for

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format

    Show Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Shows all details of a specified trust

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Trust

    trust_id

    string

    ID of the Trust to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Delete Trust

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Deletes the specified trust.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Trust in

    trust_id

    string

    ID of the Trust to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Validate Scheduler Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Trust of a given Workload.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to validate the Trust of

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    The user is: admin
    The default password is: password

    After the very first login, you are requested to change the admin password.

    Unlike previous versions of Trilio, the current version only requires you to configure the cluster once and the Trilio dashboard provides cluster-wide management capability.

    Uploading the OpenStack certificate bundle

    OpenStack endpoints can be configured to use TLS. In such a configuration the Trilio appliance needs to trust the certificates provided by the OpenStack endpoints.

    To achieve this trust it is required to upload the OpenStack certificate bundle through the OS API certificate tab of the Trilio appliance Dashboard.

    OS API Certificate tab location on Trilio appliance dashboard

    The certificate bundle is located on the controller nodes of the OpenStack installation.

    The default paths for each distribution are as follows:

    The uploaded certificates can be verified on the Trilio appliance at the following location.

    Details needed for the Trilio Appliance

    Upon login into an unconfigured Trilio Appliance, the shown page is the configurator. The configurator requires some information about the Trilio Appliance, Openstack, and Backup Storage.

    Trilio Cluster information

    The Trilio Cluster needs to be integrated into an existing environment to be able to operate correctly. This block asks for information about the Trilio Cluster operating details.

    • Controller Nodes

      • This is the list of Trilio virtual appliance IP addresses along with their hostnames.

      • Format: comma-separated list with pairs combined through '='

      • Example: 172.20.4.151=tvault-104-1,172.20.4.152=tvault-104-2,172.20.4.153=tvault-104-3’

    The Trilio Cluster supports only 1-node and 3-node clusters.

    • Virtual IP Address

      • This is the Trilio cluster IP address which is mandatory

      • Format: IP/Subnet

      • Example: 172.20.4.150/24

    The Virtual IP is mandatory even for single-node clusters and has to be different from any IP given at the Controller Nodes.

    • Name Server

      • List of nameservers, primarily used to resolve OpenStack service endpoints.

      • Format: comma-separated list

      • example: 10.10.10.1,172.20.4.1

    If defining OpenStack endpoint hostnames in the /etc/hosts file on the VM is preferred over a DNS solution you may set the nameserver to 0.0.0.0, the default gateway.

    • Domain Search Order

      • The domain the Trilio Cluster will use.

      • Format: comma-separated list

      • example: trilio.io,trilio.demo

    • NTP Servers

      • NTP servers the Trilio Cluster will use

      • format: comma-separated list

      • example: 0.pool.ntp.org,10.10.10.10

    • Timezone

      • Timezone the Trilio Cluster will use internally

      • format: pre-populated list

      • example: UTC

    Openstack Credentials information

    The Trilio appliance integrates with one Openstack environment. This block asks for the information required to access and connect with the Openstack cluster.

    • Keystone URL

      • The Keystone endpoint used to fetch authentication for configuration

      • format: URL

      • example: https://keystone.trilio.io:5000/v3

    • Endpoint Type

      • Defines which endpoint type will be used to communicate with the Openstack endpoints

      • format: predefined list of radio buttons

      • example: Public

    When FQDNs are used for the Keystone endpoints it is necessary to configure at least one DNS server before the configuration.

    Otherwise, the validation of the Openstack Credentials will fail.

    • Domain ID

      • domain the provided user and tenant are located in

      • format: ID

      • example: default

    • Administrator

      • Username of an account with the domain admin role

      • format: String

      • example: admin

    • Password

      • password for the prior provided user

      • format: String

      • example: password

    Trilio requires domain admin role access. To provide domain admin role to a user, the following command can be used:

    openstack role add --domain <domain id> --user <username> admin

    The Trilio configurator verifies after every entry whether it is possible to log in to Openstack using the provided credentials.

    This verification will fail until all entries are set and correct.

    When the verification is successful it is possible to choose the Admin tenant, the Region, and the Trustee role without any error message shown.

    • Admin Tenant

      • The tenant to be used together with the provided user

      • format: a pre-populated list

      • example: admin

    • Region

      • Openstack Region the user and tenant are located in

      • format: a pre-populated list

      • example: RegionOne

    • Trustee Role

      • The Openstack role required to be able to use Trilio functionalities

      • format: a pre-populated list

      • example: _member_

    Backup Storage Configuration information

    This block requests the necessary information about the backup target that the Trilio installation will use to store and read backups.

    • Openstack Dist

      • RHOSP and Kolla Ansible require a special mount point to be used

      • format: predefined list

      • example: RHOSP

    • Backup Storage

      • Defines the Backup Storage protocol to use

      • format: predefined list of radio buttons

      • example: NFS

    Using the NFS protocol

    • NFS Export

      • The path under which the NFS Volumes to be used can be found

      • format: comma-separated list of NFS Volumes paths

      • example: 10.10.2.20:/upstream,10.10.5.100:/nfs2

    • NFS Options

      • NFS options used by the Trilio Cluster when mounting the NFS Exports

      • format: NFS options

      • example: nolock,soft,timeo=180,intr,lookupcache=none

    Please use the predefined NFS Options and only change them when it is known that changes are necessary.

    Trilio tests against the predefined NFS options.

    Using the S3 protocol

    • S3 Compatible

      • Switch between Amazon and other S3 compatible storage solutions

      • format: predefined list

      • example: Amazon S3

    • (S3 compatible) Endpoint URL

      • URL to be used to reach and access the provided S3 compatible storage

      • format: URL

      • example: objects.trilio.io

    • Access Key

      • Access Key necessary to login into the S3 storage

      • format: access key

      • example: SFHSAFHPFFSVVBSVBSZRF

    • Secret Key

      • Secret Key necessary to login into the S3 storage

      • format: secret key

      • example: bfAEURFGHsnvd3435BdfeF

    • Region

      • Configured Region for the S3 Bucket (keep the default for S3 compatible without Region)

      • format: String

      • example: us-east-1

    • Signature Version

      • S3 signature version to use for signing into the S3 storage

      • format: string

      • example: default

    • Bucket Name

      • Name of the bucket to be used as Backup target

      • format: string

      • example: Trilio-backup

    Using secured non-aws S3 storage

    When using a secured connection with a non-aws S3 storage like CEPH, you have to provide the certificate used for the connection.

    To enter this certificate, type the https:// based endpoint into the field Endpoint URL.

    Once you tab out of the field, the upload certificate button will be shown. See picture below.

    Accessing the upload certificate for secured connection

    Workload Import

    Check this box in case of reinitialization or reinstallation of the Trilio Appliance to import all matching Workloads located on the Backup Target.

    Workloads that are not assigned to an existing tenant will fail to import and need to be reassigned manually once the configuration is done.

    Advanced settings

    At the end of the configurator is the option to activate the advanced settings.

    Activating this option provides the possibility to configure the Keystone endpoints used for the Datamover API and Trilio.

    Setup Trilio and Datamover API endpoints.

    Trilio generates Keystone endpoints for 2 services. The Trilio Datamover API and the Trilio Workloadmanager.

    Modern Openstack installations have the endpoint types split over multiple networks. The advanced settings for the Datamover API endpoints and Trilio Workloadmanager endpoints allow configuring Trilio accordingly.

    Used IP addresses are added as additional VIPs to the Trilio cluster.

    In the case of FQDNs used for those endpoints, the Trilio configurator will resolve the FQDNs to learn the IPs, which are then set as VIPs.

    It is recommended to verify the datamover api settings against the ones configured during installation of the Trilio components.

    If these endpoints already exist in Keystone, the values are prefilled and cannot be changed. In case a change is required, delete the old Keystone endpoints first.

    Providing a URL with https activates the TLS-enabled configuration, which requires the upload of certificates and the connected private key.

    Set up an external database

    Trilio allows the use of an external MySQL or MariaDB database.

    This database needs to be prepared by creating the empty workloadmgr database, creating the workloadmgr user and setting the right permissions. An example command to create this database would be:
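    A minimal sketch of such a preparation, run with the MySQL/MariaDB client; the 'workloadmgr' user name, host pattern, and password are assumptions to be adapted to your environment:

    mysql -u root -p -e "CREATE DATABASE workloadmgr;
      CREATE USER 'workloadmgr'@'%' IDENTIFIED BY '<password>';
      GRANT ALL PRIVILEGES ON workloadmgr.* TO 'workloadmgr'@'%';
      FLUSH PRIVILEGES;"

    The connection string handed to the configurator would then typically follow the usual SQLAlchemy form, for example mysql+pymysql://workloadmgr:<password>@<database_host>/workloadmgr (format assumed here; adjust to your environment).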

    Provide the connection string to the Trilio configurator.

    This value can only be set upon an initial configuration of the Trilio solution.

    When the Cluster has been configured to use the internal database, then the connection string will not be shown in the next configuration attempt.

    In case of an external database, the connection string will be shown but cannot be edited.

    Define the Trilio service user password

    Trilio is using a service user that is located in the Openstack service project.

    The password for this service user will be generated randomly or can be defined in the advanced settings.

    Starting the configurator

    Once all entries have been set and all validations are error-free the configurator can be started.

    • Click Finish

    • Reconfirm in the pop-up that you want to start the configuration

    • Wait for the configurator to finish

    Some elements of the configurator take time. Even when it looks like the configurator is stuck, please wait till the configurator finishes. Should the configurator have not finished after 6h, please contact Trilio Support for help.

    The configurator is using Ansible and a few Trilio internal API calls. After each configuration block, or after the configurator has finished, it is possible to view the Ansible output.

    At the end of a successful configuration, the configurator forwards to the configured VIP.

    Upgrading on Kolla OpenStack

    Upgrading from Trilio 4.0 to Trilio 4.1

    Due to the new installation method of Trilio for Kolla OpenStack, it is required to reinstall the Trilio components running on the Kolla Openstack nodes when upgrading from Trilio 4.0.

    The Trilio appliance can be upgraded as documented.

    Upgrading from Trilio 4.1 to a higher version

    Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.

    Refer to the below-mentioned acceptable values for the placeholders in this document as per the Openstack environment: kolla_base_distro : ubuntu / centos triliovault_tag : 4.1.94-hotfix-13-ussuri / 4.1.94-hotfix-12-victoria

    Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • Either 4.1 GA OR any hotfix patch against 4.1 should be already deployed

    • No Snapshot OR Restore is running

    • Global job scheduler should be disabled

    • wlm-cron is disabled on the primary Trilio Appliance

    Deactivating the wlm-cron service

    The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.

    Clone latest configuration scripts

    Before the latest configuration script is loaded it is recommended to take a backup of the existing config scripts' folder & Trilio ansible roles. The following command can be used for this purpose:

    Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack. Available branches to upgrade T4O 4.1 are:

    Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.

    Append Trilio variables

    Clean old Trilio variables and append new Trilio variables

    This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.

    In case of no changes, this step can be skipped.

    This is required in case some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; these changes then need to be reflected in the /etc/kolla/globals.yml file.

    Clean old Trilio passwords and append new Trilio password variables

    This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.

    In case of no changes, this step can be skipped.

    This step is required, when some password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml needs to be updated.

    Append triliovault_site.yml content to kolla ansible's site.yml

    This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.

    In case of no changes, this step can be skipped.

    This is required because, in case some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml, they need to be updated in the /usr/local/share/kolla-ansible/ansible/site.yml file.

    Append triliovault_inventory.txt to the kolla-ansible inventory file

    This step is not always required. It is recommended to compare triliovault_inventory.txt with the Trilio entries in the /root/multinode file.

    In case of no changes, this step can be skipped.

    By default, the triliovault-datamover-api service gets installed on ‘control' hosts and the trilio-datamover service gets installed on 'compute’ hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.

    T4O group names are ‘triliovault-datamover-api’ and ‘triliovault-datamover’

    Edit globals.yml to set T4O parameters

    Edit the '/etc/kolla/globals.yml' file to fill in the triliovault backup target and build details. You will find the triliovault related parameters at the end of the globals.yml file. The user needs to fill in details like the triliovault build version, backup target type, backup target details, etc.

    Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments

    Enable T4O Snapshot mount feature

    This step is already part of the 4.1 GA installation procedure and should only be verified.

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

    Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

    For a default Kolla installation, the variable will look as follows afterward:

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

    After the change, the variable will look as follows for a default Kolla installation:

    In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the trilio mount /var/trilio:/var/trilio:shared to the list.

    After the changes, the variable will look like the following:

    Pull containers in case of private repository

    In case the user doesn't want to use the Docker Hub registry for triliovault containers during cloud deployment, the triliovault images can be pulled before starting the cloud deployment and pushed to other preferred registries.

    Following are the triliovault container image URLs. Replace the kolla_base_distro and triliovault_tag variables with their values.

    Pull T4O container images

    Run the below command from the directory with the multinode file to pull the required images.

    Run Kolla-Ansible upgrade command

    Run the below command from the directory with the multinode file to start the upgrade process.
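    Assuming the inventory file is /root/multinode, the invocation would typically be the standard kolla-ansible upgrade call sketched below; confirm it against the deploy/upgrade command you normally use for this cloud:

    kolla-ansible -i multinode upgrade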

    Verify Trilio deployment

    Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.

    Advance settings/configuration for Trilio services

    Customize HAproxy configuration parameters for Trilio datamover api service

    Following are the default haproxy conf parameters set against triliovault datamover api service.

    These values work best for the triliovault dmapi service. It is not recommended to change these parameter values. However, in some exceptional cases, if any of the above parameter values need to be changed, this can be done on the kolla-ansible server in the following file.

    After editing, run the kolla-ansible deploy command again to push these changes to the openstack cloud.

    After kolla-ansible deploy, to verify the changes, please check the following file, available on all controller/haproxy nodes.

    Backups-Admin Area

    Trilio provides Backup-as-a-Service, which allows Openstack Users to manage and control their backups themselves. This doesn't eradicate the need for a Backup Administrator, who has an overview of the complete Backup Solution.

    To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backups-Admin area in Horizon in addition to the API and CLI.

    Access the Backups-Admin area

    To access the Backups-Admin area follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin Tab.

    4. Navigate to Trilio page.

    The Backups-Admin area provides the following features.

    It is possible to reduce the shown information down to a single tenant, making it possible to see the exact impact the chosen Tenant has.

    Status overview

    The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:

    • Storage Usage (nfs only)

    • Number of protected VMs compared to number of existing VMs

    • Number of currently running Snapshots

    • Status of TVault Nodes

    The status of nodes is filled when the services are running and in good status.

    Workloads tab

    This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.

    The following information is shown:

    • User-ID that owns the Workload

    • Project that contains the Workload

    • Workload name

    • Workload Type

    Usage tab

    Administrators often need to figure out, where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps in these tasks by providing the following information:

    • Storage used by a Tenant

    • VMs protected by a Tenant

    It is possible to drill down to see the same information per workload and finally per protected VM.

    The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.

    Nodes tab

    This tab displays information about Trilio cluster nodes. The following information is shown:

    • Node name

    • Node ID

    • Trilio Version of the node

    • IP Address

    The Virtual IP is shown as its own node. It is typically shown directly below the currently active Controller Node.

    Data Movers tab (Trilio Data Mover Service)

    This tab displays information about the Trilio contego service. The following information is shown:

    • Service-Name

    • Compute Node the service is running on

    • Zone

    • Service Status from Openstack perspective (enabled/disabled)

    Storage tab

    This tab displays information about the backup target storage. It contains the following information:

    • Storage Name

    Clicking on the Storage name provides an overview of all workloads stored on that storage.

    • Capacity of the storage

    • Total utilization of the storage

    • Status of the storage

    • Statistic information

    Audit tab

    Audit logs provide the sequence of workload related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:

    • Time of the entry

    • What task has been done

    • Project the task has performed in

    • User that performed the task

    The Audit log can be searched for strings to find, for example, only entries done by a specific user.

    Additionally, the shown timeframe can be changed as necessary.

    License tab

    The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.

    A license validation is automatically done when opening the tab.

    The following information about an active license is shown:

    • Organization (License name)

    • License ID

    Purchase date - when the license was created

    • License Expiry Date

    Trilio will stop all activities once a license is no longer valid or expired.

    Policy tab

    The policy tab gives Administrators the possibility to work with workload policies.

    Please use the corresponding page in the Admin guide to learn more about how to create and use Workload Policies.

    Settings tab

    This tab manages all global settings for the whole cloud. Trilio has two types of settings:

    1. Email settings

    2. Job scheduler settings.

    Email Settings

    These settings will be used by Trilio to send email reports of snapshots and restores to users.

    Configuring the Email settings is a must-have to provide Email notification to Openstack users.

    The following information is required to configure the email settings:

    • SMTP Server

    • SMTP username

    • SMTP password

    • SMTP port

    A test email can be sent directly from the configuration page.

    To work with email settings through CLI use the following commands:

    To set an email setting for the first time or after deletion use:

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    To update an already set email setting through CLI use:

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    To show an already set email setting use:

    • --get_hidden➡️show hidden settings (True) or not (False) ➡️ Not required for email settings, use False if set

    • <setting_name>➡️name of the setting to show➡️ Take from the list below

    To delete a set email setting use:

    • <setting_name>➡️name of the setting to delete ➡️ Take from the list below

    Setting name | Value type | Example

    Disable/Enable Job Scheduler

    The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.

    To activate/deactivate the Global Job Scheduler through the Backups-Admin area:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin Tab.

    4. Navigate to Trilio page.

    The Global Job Scheduler can be controlled through CLI as well.

    To get the status of the Global Job Scheduler use:

    To deactivate the Global Job Scheduler use:

    To activate the Global Job Scheduler use:

    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    
    # For Centos and Ubuntu
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
    ## For Centos and Ubuntu
    ## Take backup of globals.yml
    cp /etc/kolla/globals.yml /opt/
    
    ## Append Trilio global variables to globals.yml 
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
    ## For Centos and Ubuntu
    ## Take backup of passwords.yml
    cp /etc/kolla/passwords.yml /opt/
    
    ## Append Trilio global variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    ## For Centos and Ubuntu
    ## Take backup of site.yml
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
    
    ## Append Trilio code to site.yml
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
For example:
If your inventory file path is '/root/multinode', then use the following command.
    
    cat ansible/triliovault_inventory.txt >> /root/multinode
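As an optional sanity check (assuming the inventory file is /root/multinode), confirm that the Trilio groups were appended only once:

# List the Trilio group entries that were appended to the inventory.
# Each group should appear only once; duplicates mean the append was executed twice.
grep -n 'triliovault' /root/multinode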
    docker.io/trilio/<kolla_base_distro>-binary-trilio-datamover-api:<triliovault_tag>
    docker.io/trilio/<kolla_base_distro>-binary-trilio-datamover:<triliovault_tag>
    docker.io/trilio/<kolla_base_distro>-binary-trilio-horizon-plugin:<triliovault_tag>
    
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "
    {% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    "
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "
    {% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode deploy
[root@controller ~]# docker ps | grep triliovault_datamover_api
f00781997bc3        trilio/centos-binary-trilio-datamover-api:<triliovault_tag>    "dumb-init --single-…"   2 minutes ago       Up 2 minutes                            triliovault_datamover_api

[root@compute ~]# docker ps | grep triliovault_datamover
84831db5d215        trilio/centos-binary-trilio-datamover:<triliovault_tag>     "dumb-init --single-…"   5 minutes ago       Up 4 minutes                            triliovault_datamover

[root@controller ~]# docker ps | grep horizon
f3647e0fff27        trilio/centos-binary-trilio-horizon-plugin:<triliovault_tag>   "dumb-init --single-…"   8 minutes ago       Up 8 minutes                            horizon
    docker ps -a | grep trilio
    docker logs trilio_datamover_api
    docker logs trilio_datamover
    docker ps | grep horizon
    /var/log/kolla/triliovault-datamover-api/dmapi.log
    /var/log/kolla/triliovault-datamover/tvault-contego.log
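Optionally, the shared /var/trilio mount configured through the volume lists above can be checked on a compute node. This is only a sketch; container and mount names may differ in your deployment:

# Confirm that the nova_compute container mounts /var/trilio
docker inspect nova_compute | grep -A 2 '"/var/trilio"'

# The backup target mount should also be visible on the compute host itself
mount | grep /var/trilio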
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:55:43 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.890855",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:01:27 GMT
    Content-Type: application/json
    Content-Length: 380
    Connection: keep-alive
    X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c
    
    {
       "setting":{
          "created_at":"2021-02-04T11:55:43.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"smtp_port",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":null,
          "value":"8080",
          "description":null,
          "category":null,
          "type":"email_settings",
          "public":false,
          "hidden":false,
          "status":"available",
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:05:59 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:49:17 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:43:36 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0
    
    {
       "trust":[
          {
             "created_at":"2021-01-21T11:43:36.140407",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "value":"1c981a15e7a54242ae54eee6f8d32e6a",
             "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
             "category":"identity",
             "type":"trust_id",
             "public":false,
             "hidden":1,
             "status":"available",
             "is_public":false,
             "is_hidden":true,
             "metadata":[
                
             ]
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:39:12 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4
    
    {
       "trust":{
          "created_at":"2020-11-26T13:15:29.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"703dfabb4c5942f7a1960736dd84f4d4",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-11-26T13:15:29.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"86aceea1-9121-43f9-b55c-f862052374ab",
                "settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:41:51 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2021-01-21T11:43:36.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"1c981a15e7a54242ae54eee6f8d32e6a",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2021-01-21T11:43:36.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d98d283a-b096-4a68-826a-36f99781787d",
                "settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_domain_id":"default",
          "user":"adfa32d7746a4341b27377d6f7c61adb",
          "tenant":"4dfe98a43bfa404785a812020066b4d6"
       }
    }
    {
       "trusts":{
          "role_name":"member",
          "is_cloud_trust":false
       }
    }
    RHOSP/TripleO: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    Kolla Ansible with CentOS: /etc/pki/tls/certs/ca-bundle.crt
    Kolla Ansible with Ubuntu:  /usr/local/share/ca-certificates/
    OpenStack Ansible (OSA) with Ubuntu in our lab: /etc/openstack_deploy/ssl/
OpenStack Ansible (OSA) with CentOS: /etc/openstack_deploy/ssl
    /etc/workloadmgr/ca-chain.pem
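The assembled chain file can be checked with standard openssl tooling before restarting any services. A minimal sketch, assuming the chain was written to /etc/workloadmgr/ca-chain.pem (the endpoint certificate path is a placeholder):

# Show subject and expiry date of the first certificate in the chain
openssl x509 -in /etc/workloadmgr/ca-chain.pem -noout -subject -enddate

# Optionally verify an endpoint certificate against the chain (path is an example)
openssl verify -CAfile /etc/workloadmgr/ca-chain.pem /tmp/endpoint-cert.pem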
    create database workloadmgr_auto;
    CREATE USER 'trilio'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON workloadmgr_auto.* TO 'trilio'@'10.10.10.67' IDENTIFIED BY 'password';
mysql://trilio:password@<database-host>/workloadmgr_auto?charset=utf8
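A quick way to confirm that the database, user and grants work is a test connection with the mysql client. This is only a sketch reusing the example credentials above; replace the host with your actual database server:

# Test the credentials against the freshly created database (example values)
mysql -h <database-host> -u trilio -p'password' -e "SHOW DATABASES LIKE 'workloadmgr_auto';"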

    triliovault_docker_password

    triliopassword

    password for default docker user of Trilio

    triliovault_docker_registry

    Default value: docker.io

Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to the chosen registry first.

    triliovault_backup_target

    • nfs

    • amazon_s3

    • ceph_s3

    nfs if the backup target is NFS

    amazon_s3 if the backup target is Amazon S3

    ceph_s3 if the backup target type is S3 but not amazon S3.

    triliovault_nfs_shares

    <NFS-IP/FQDN>:/<NFS path>

    NFS share path example: ‘192.168.122.101:/nfs/tvault’

    triliovault_nfs_options

    'nolock,soft,timeo=180, intr,lookupcache=none'

These parameters set the NFS mount options. Keep the default values unless a special requirement exists.

    triliovault_s3_access_key

    S3 Access Key

    Valid for amazon_s3 and ceph_s3

    triliovault_s3_secret_key

    S3 Secret Key

    Valid for amazon_s3 and ceph_s3

    triliovault_s3_region_name

    • Default value: us-east-1

    • S3 Region name

    Valid for amazon_s3 and ceph_s3

If the S3 storage does not have a region parameter, keep the default.

    triliovault_s3_bucket_name

    S3 Bucket name

    Valid for amazon_s3 and ceph_s3

    triliovault_s3_endpoint_url

    S3 Endpoint URL

    Valid for ceph_s3 only

    triliovault_s3_ssl_enabled

    • True

    • False

    Valid for ceph_s3 only

    Set true for SSL enabled S3 endpoint URL

    triliovault_s3_ssl_cert_file_name

    s3-cert.pem

Valid for ceph_s3 only with SSL enabled and self-signed certificates OR certificates issued by a private authority.

In this case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the Ansible server. Create this directory if it does not exist already.

    triliovault_copy_ceph_s3_ssl_cert

    • True

    • False

    Valid for ceph_s3 only

    Set to True when: SSL enabled with self-signed certificates or issued by a private authority.
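After appending triliovault_globals.yml, the Trilio variables are edited in /etc/kolla/globals.yml. As an illustrative sketch for an NFS backup target (values are examples only), the relevant entries can be reviewed like this:

# Review the Trilio variables that control the backup target (example for NFS)
grep -E 'triliovault_(tag|docker_registry|backup_target|nfs_shares|nfs_options)' /etc/kolla/globals.yml

# Expected example values:
#   triliovault_backup_target: "nfs"
#   triliovault_nfs_shares: "192.168.122.101:/nfs/tvault"
#   triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"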

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    "user_id":"cloud_admin",
    "value":"dbe2e160d4c44d7894836a6029644ea0",
    "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
    "category":"identity",
    "type":"trust_id",
    "public":false,
    "hidden":true,
    "status":"available",
    "metadata":[
    {
    "created_at":"2020-11-26T13:10:54.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"e9ec386e-79cf-4f6b-8201-093315648afe",
    "settings_name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
    "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
    "key":"role_name",
    "value":"admin"
    }
    ]
    }
    ]
    }

    User-Agent

    string

    python-workloadmgrclient

    Status of Contego Nodes

    Availability Zone

  • Amount of protected VMs

  • Performance information about the last 30 backups

    • How much data was backed up (green bars)

    • How long did the Backup take (red line)

  • Piechart showing amount of Full (Blue) Backups compared to Incremental (Red) Backups

  • Number of successful Backups

  • Number of failed Backups

  • Storage used by that Workload

  • Which Backup target is used

  • When is the next Snapshot run

  • What is the general interval of the Workload

  • Scheduler Status including a Switch to deactivate/activate the Workload

  • Active Controller Node (True/False)

  • Status of the Node

  • Version of the Service

  • General Status

  • last time the Status was updated

  • Percentage of the total storage that is used

  • Percentage of storage used by full backups

  • Amount of Full backups versus Incremental backups

  • Maintenance Expiry Date

  • License value

  • License Edition

  • License Version

  • License Type

  • Description of the License

  • Evaluation (True/False)

  • SMTP timeout

  • Sender email address

  • --type➡️settings type ➡️ set to email_settings
  • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

  • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

  • --metadata➡️sets metadata key=value pairs for the setting ➡️ Not required for email settings

  • <name>➡️name of the setting ➡️ Take from the list below

  • <value>➡️value of the setting ➡️ Take value type from the list below

  • --type➡️settings type ➡️ set to email_settings
  • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

  • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

  • --metadata➡️sets metadata key=value pairs for the setting ➡️ Not required for email settings

  • <name>➡️name of the setting ➡️ Take from the list below

  • <value>➡️value of the setting ➡️ Take value type from the list below

  • String

    Mailserver_A

    smtp_server_username

    String

    admin

    smtp_server_password

    String

    password

    smtp_timeout

    Integer

    10

    smtp_email_enable

    Boolean

    True

    Navigate to the Settings tab

  • Click "Disable/Enable Job Scheduler"

  • Check or Uncheck the box for "Job Scheduler Enabled"

  • Confirm by clicking on "Change"

  • smtp_default_recipient

    String

    [email protected]

smtp_default_sender

    String

    [email protected]

    smtp_port

    Integer

    587

    Workload Policies

    smtp_server_name

    workloadmgr setting-create [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-update [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-show [--get_hidden {True,False}] <setting_name>
    workloadmgr setting-delete <setting_name>
    workloadmgr get-global-job-scheduler
    workloadmgr disable-global-job-scheduler
    workloadmgr enable-global-job-scheduler
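For illustration, the commands above can be combined as follows to configure a mail server setting and to control the Global Job Scheduler. The values shown are examples only:

# Create the SMTP server setting (example value from the table above)
workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_server_name Mailserver_A

# Update the SMTP port afterwards
workloadmgr setting-update --type email_settings --is-public False --is-hidden False smtp_port 587

# Show the stored value
workloadmgr setting-show smtp_port

# Check and re-enable the Global Job Scheduler
workloadmgr get-global-job-scheduler
workloadmgr enable-global-job-scheduler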

    Access to the gemfury repository to fetch new packages

    triliopassword

triliovault_docker_registry

    Default: docker.io

If users want to use a different container registry for the triliovault containers, then the user can edit this value. In that case, the user first needs to manually pull the triliovault containers from docker.io and push them to the other registry.

triliovault_backup_target

    nfs

    amazon_s3

    ceph_s3

    'nfs': If the backup target is NFS

    'amazon_s3': If the backup target is Amazon S3

    'ceph_s3': If the backup target type is S3 but not amazon S3.

    dmapi_workers

    Default: 16

If the dmapi_workers field is not present in the config file, the default value will be equal to the number of cores present on the node.

triliovault_nfs_shares

    Only with nfs for triliovault_backup_target

    User needs to provide NFS share path, e.g.: 192.168.122.101:/opt/tvault

triliovault_nfs_options

    Default: nolock, soft, timeo=180, intr, lookupcache=none

    Only with nfs for triliovault_backup_target

    Keep default values if unclear

triliovault_s3_access_key

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 access key

triliovault_s3_secret_key

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 secret key

triliovault_s3_region_name

Default: us-east-1

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 region or keep default if no region required

triliovault_s3_bucket_name

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 bucket

triliovault_s3_endpoint_url

Only with ceph_s3 for triliovault_backup_target

    Provide S3 endpoint URL

triliovault_s3_ssl_enabled

    True

    False

    Only with ceph_s3 for triliovault_backup_target

    Set to true if endpoint is on HTTPS

triliovault_s3_ssl_cert_file_name

s3-cert.pem

    Only with ceph_s3 for triliovault_backup_target and

if SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed OR issued by a private authority. In that case, copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the Ansible server. Create this directory if it does not exist already.

triliovault_copy_ceph_s3_ssl_cert

    True

    False

    Set to true if:

    ceph_s3 for triliovault_backup_target and

    if SSL is enabled on S3 endpoint URL and SSL certificates are self-signed OR issued by a private authority

    triliovault_tag

    <triliovault_tag>

    Trilio Build Version

horizon_image_full

    commented out

Uncomment to install the Trilio Horizon container instead of the previously installed container.

triliovault_docker_username

    triliodocker

triliovault_docker_password

    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    mv triliovault-cfg-scripts triliovault-cfg-scripts_4.1old
    mv /usr/local/share/kolla-ansible/ansible/roles/triliovault /opt/triliovault_4.1old
    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
#Copy the backed-up original globals.yml (the one without triliovault variables) over the current globals.yml
    cp /opt/globals.yml /etc/kolla/globals.yml
    
    #Append Trilio global variables to globals.yml
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
#Take backup of the current passwords file
cp /etc/kolla/passwords.yml /opt/password-<CURRENT-RELEASE>.yml

#Reset the passwords file to the default one by reverting the backed-up original passwords.yml. This backup would have been taken during the previous install/upgrade.
cp /opt/passwords.yml /etc/kolla/passwords.yml
    
    #Append Trilio password variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    #File /etc/kolla/passwords.yml to be edited to set passwords.
    #To set the passwords, it's recommended to use the same passwords as done during previous T4O deployment, as present in the password file backup (/opt/password-<CURRENT-RELEASE>.yml). 
    #Any additional passwords (in triliovault_passwords.yml), should be set by the user in /etc/kolla/passwords.yml.
    #Take backup of current site.yml file
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/site-<CURRENT-RELEASE>.yml
    
    #Reset the site.yml to default one by reverting the backed-up original site.yml inside current site.yml. This backup would have been taken during previous install/upgrade.
    cp /opt/site.yml /usr/local/share/kolla-ansible/ansible/site.yml
    
    #Append Trilio code to site.yml
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
For example:
If your inventory file path is '/root/multinode', then use the following command.
#Clean up old T4O groups from /root/multinode and then append the latest triliovault inventory file
    cat ansible/triliovault_inventory.txt >> /root/multinode
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "
    {% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    
    "
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "
    {% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    docker.io/trilio/<kolla_base_distro>-binary-trilio-datamover-api:<triliovault_tag>
    docker.io/trilio/<kolla_base_distro>-binary-trilio-datamover:<triliovault_tag>
    docker.io/trilio/<kolla_base_distro>-binary-trilio-horizon-plugin:<triliovault_tag>
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode upgrade
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    f00781997bc3        trilio/centos-binary-trilio-datamover-api:<triliovault_tag>    "dumb-init --single-…"   2 minutes ago       Up 2 minutes                            triliovault_datamover_api
    
    [root@compute ~]# docker ps | grep triliovault_datamover
    4831db5d215        trilio/centos-binary-trilio-datamover:<triliovault_tag>    "dumb-init --single-…"   5 minutes ago       Up 4 minutes                            triliovault_datamover
    
    [root@controller ~]# docker ps | grep horizon
    f3647e0fff27        trilio/centos-binary-trilio-horizon-plugin:<triliovault_tag>  "dumb-init --single-…"   8 minutes ago       Up 8 minutes                            horizon
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /usr/local/share/kolla-ansible/ansible/roles/triliovault/defaults/main.yml
    /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    docker.io

    Workload Quotas

    List Quota Types

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types

    Lists all available Quota Types

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description
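A possible curl invocation of this call is sketched below, assuming $tvm_address, $tenant_id, $project_name and $token are set in the shell and -k is only used for self-signed certificates:

# List all available Quota Types (example invocation)
curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/projects_quota_types" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"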

    Show Quota Type

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>

    Requests the details of a Quota Type

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Create allowed Quota

    POST https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Creates an allowed Quota with the given parameters

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format
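A possible curl invocation is sketched below, using the documented body format with placeholder values and assuming $tvm_address, $tenant_id, $project_id, $project_name and $token are set in the shell:

# Create an allowed Quota for a project (placeholder values)
curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/$project_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas":[{"project_id":"<project_id>","quota_type_id":"<quota_type_id>","allowed_value":"<integer>","high_watermark":"<integer>"}]}'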

    List allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Lists all allowed Quotas for a given project.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Show allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>

    Shows details for a given allowed Quota

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Update allowed Quota

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>

    Updates an allowed Quota with the given parameters

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format
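A possible curl invocation is sketched below, using the documented body format and assuming the shell variables are set as in the previous examples:

# Update an existing allowed Quota (placeholder values)
curl -k -X PUT "https://$tvm_address:8780/v1/$tenant_id/update_allowed_quota/$allowed_quota_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas":{"project_id":"<project_id>","allowed_value":"20000","high_watermark":"18000"}}'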

    Delete allowed Quota

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>

    Deletes a given allowed Quota

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description
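A possible curl invocation is sketched below, assuming the shell variables are set as in the previous examples:

# Delete an allowed Quota by its ID
curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/$allowed_quota_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json"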

    Workload Import and Migration

    import Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads

    Provides the list of all importable workloads
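A possible curl invocation is sketched below, assuming $tvm_address, $tenant_id, $project_name and $token are set in the shell:

# List all importable workloads found on the backup target
curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workloads/get_list/import_workloads" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"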

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project to work in

    quota_type_id

    string

    ID of the Quota Type to show

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to create the allowed Quota in

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to list allowed Quotas from

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to show

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to update

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to delete

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:40:56 GMT
    Content-Type: application/json
    Content-Length: 1625
    Connection: keep-alive
    X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe
    
    {
       "quota_types":[
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
             "display_name":"Workloads",
             "display_description":"Total number of workload creation allowed per project",
             "status":"available"
          },
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"b7273a06-2e08-11ea-889c-7440bb00b67d",
             "display_name":"Snapshots",
             "display_description":"Total number of snapshot creation allowed per project",
             "status":"available"
          },
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"be323f58-2e08-11ea-889c-7440bb00b67d",
             "display_name":"VMs",
             "display_description":"Total number of VMs allowed per project",
             "status":"available"
          },
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"c61324d0-2e08-11ea-889c-7440bb00b67d",
             "display_name":"Volumes",
             "display_description":"Total number of volume attachments allowed per project",
             "status":"available"
          },
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
             "display_name":"Storage",
             "display_description":"Total storage (in Bytes) allowed per project",
             "status":"available"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:44:43 GMT
    Content-Type: application/json
    Content-Length: 342
    Connection: keep-alive
    X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b
    
    {
       "quota_type":{
          "created_at":"2020-10-19T10:05:52.000000",
          "updated_at":"2020-10-19T10:07:32.000000",
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "display_name":"Workloads",
          "display_description":"Total number of workload creation allowed per project",
          "status":"available"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:51:51 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927
    
    {
       "allowed_quotas":[
          {
             
          }
       ]
    }
    {
       "allowed_quotas":[
          {
             "project_id":"<project_id>",
             "quota_type_id":"<quota_type_id>",
             "allowed_value":"<integer>",
             "high_watermark":"<Integer>"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:01:39 GMT
    Content-Type: application/json
    Content-Length: 766
    Connection: keep-alive
    X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0
    
    {
       "allowed_quotas":[
          {
             "id":"262b117d-e406-4209-8964-004b19a8d422",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":5,
             "high_watermark":4,
             "version":"4.0.115",
             "quota_type_name":"Workloads"
          },
          {
             "id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":-1,
             "high_watermark":-1,
             "version":"4.0.115",
             "quota_type_name":"Storage"
          },
          {
             "id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":50,
             "high_watermark":25,
             "version":"4.0.115",
             "quota_type_name":"VMs"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:15:07 GMT
    Content-Type: application/json
    Content-Length: 268
    Connection: keep-alive
    X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7
    
    {
       "allowed_quotas":{
          "id":"262b117d-e406-4209-8964-004b19a8d422",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "allowed_value":5,
          "high_watermark":4,
          "version":"4.0.115",
          "quota_type_name":"Workloads"
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:24:04 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2
    
    {"allowed_quotas": [{}]}
    {
       "allowed_quotas":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "allowed_value":"20000",
          "high_watermark":"18000"
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:33:09 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    Query Parameters

    Name
    Type
    Description

    project_id

    string

    restricts the output to the given project

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:34:10 GMT
    Content-Type: application/json
    Content-Length: 7888
    Connection: keep-alive
    X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339
    
    {
       "workloads":[
          {
             "created_at":"2020-11-02T13:40:06.000000",
             "updated_at":"2020-11-09T09:53:30.000000",
             "id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
    

    orphaned Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads

    Provides the list of all orphaned workloads

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    Query Parameters

    Name
    Type
    Description

    migrate_cloud

    boolean

    True also shows Workloads from different clouds

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
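A possible curl invocation is sketched below, assuming the shell variables are set as in the previous examples; migrate_cloud=True also lists workloads from other clouds:

# List orphaned workloads, including those from other clouds
curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workloads/orphan_workloads?migrate_cloud=True" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"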

    Import Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads

    Imports all or the provided workloads

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format
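A possible curl invocation is sketched below, using the documented body format with a placeholder workload ID and assuming the shell variables are set as in the previous examples:

# Import the listed workload IDs (body format as documented for this call)
curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/import_workloads" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"workload_ids":["<workload_id>"],"upgrade":true}'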

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:42:01 GMT
    Content-Type: application/json
    Content-Length: 120143
    Connection: keep-alive
    X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c
    
    {
       "workloads":[
          {
             "created_at":"2019-04-24T14:09:20.000000",
             "updated_at":"2019-05-16T09:10:17.000000",
             "id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
             "user_id":"6ef8135faedc4259baac5871e09f0044",
             "project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "name":"comdirect_test",
             "description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
             "interval":null,
             "storage_usage":null,
             "instances":null,
             "metadata":[
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:54.000000",
                   "updated_at":null,
                   "value":"ca544215-1182-4a8f-bf81-910f5470887a",
                   "version":"3.2.46",
                   "key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
                   "deleted_at":null,
                   "id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"10.10.2.20:/upstream",
                   "version":"3.2.46",
                   "key":"backup_media_target",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"02dd0630-7118-485c-9e42-b01d23aa882c"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:51.000000",
                   "updated_at":null,
                   "value":"51693eca-8714-49be-b409-f1f1709db595",
                   "version":"3.2.46",
                   "key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
                   "deleted_at":null,
                   "id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
                   "version":"3.2.46",
                   "key":"hostnames",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"0",
                   "version":"3.2.46",
                   "key":"pause_at_snapshot",
                   "deleted_at":null,
                   "id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[]",
                   "version":"3.2.46",
                   "key":"preferredgroup",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:11:49.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"\"\\\"\\\"\"",
                   "version":"3.2.46",
                   "key":"topology",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"77e436c0-0921-4919-97f4-feb58fb19e06"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"121",
                   "version":"3.2.46",
                   "key":"workload_approx_backup_size",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
                }
             ],
             "jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
             "status":"locked",
             "error_msg":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                }
             ],
             "scheduler_trust":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 11:03:55 GMT
    Content-Type: application/json
    Content-Length: 100
    Connection: keep-alive
    X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
    
    {
       "workloads":{
          "imported_workloads":[
             "faa03-f69a-45d5-a6fc-ae0119c77974"        
          ],
          "failed_workloads":[
     
          ]
       }
    }
    {
       "workload_ids":[
          "<workload_id>"
       ],
       "upgrade":true
    }
    "name":"Workload_1",
    "description":"no-description",
    "interval":null,
    "storage_usage":null,
    "instances":null,
    "metadata":[
    {
    "created_at":"2020-11-09T09:57:23.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
    "value":"c8468146-8117-48a4-bfd7-49381938f636"
    },
    {
    "created_at":"2020-11-05T10:27:06.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
    "value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
    },
    {
    "created_at":"2020-11-09T09:37:20.000000",
    "updated_at":"2020-11-09T09:57:23.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"61615532-6165-45a2-91e2-fbad9eb0b284",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"b083bb70-e384-4107-b951-8e9e7bbac380",
    "value":"c8468146-8117-48a4-bfd7-49381938f636"
    },
    {
    "created_at":"2020-11-02T13:40:24.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"backup_media_target",
    "value":"10.10.2.20:/upstream"
    },
    {
    "created_at":"2020-11-05T10:27:14.000000",
    "updated_at":"2020-11-09T09:57:23.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"e33c1eea-c533-4945-864d-0da1fc002070",
    "value":"c8468146-8117-48a4-bfd7-49381938f636"
    },
    {
    "created_at":"2020-11-02T13:40:06.000000",
    "updated_at":"2020-11-02T14:10:30.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"hostnames",
    "value":"[\"cirros-1\", \"cirros-2\"]"
    },
    {
    "created_at":"2020-11-02T13:40:06.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"093a1221-edb6-4957-8923-cf271f7e43ce",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"pause_at_snapshot",
    "value":"0"
    },
    {
    "created_at":"2020-11-02T13:40:06.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"79baaba8-857e-410f-9d2a-8b14670c4722",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"policy_id",
    "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
    },
    {
    "created_at":"2020-11-02T13:40:06.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"preferredgroup",
    "value":"[]"
    },
    {
    "created_at":"2020-11-02T14:10:30.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"topology",
    "value":"\"\\\"\\\"\""
    },
    {
    "created_at":"2020-11-02T13:40:23.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
    "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
    "key":"workload_approx_backup_size",
    "value":"6"
    }
    ],
    "jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
    "status":"locked",
    "error_msg":null,
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
    }
    ],
    "scheduler_trust":null
    }
    ]
    }

    User-Agent

    string

    python-workloadmgrclient

    Trilio 4.1 HF13 Release

    Release Artifacts

    Artifacts

    Reference

1. Release Date: Oct 5, 2022

    1. Introduction

    This document provides information on TVO-4.1.HF13 Release.

    Important Info:

    To use this hotfix (4.1.HF13)

1. Customers (except Canonical OpenStack) running OpenStack Ussuri need to have an already deployed and working TVO-4.1 GA or any hotfix from TVO-4.1.HF1 through TVO-4.1.HF12.

2. Customers (except Canonical OpenStack) running OpenStack Victoria or TripleO Train need to follow the TVO-4.1 GA deployment process and directly upgrade to the 4.1.HF13 containers/packages. The high-level flow is below:

  1. Deploy the T4O-4.1 GA appliance.

The deploy/upgrade documentation provides the detailed steps to deploy/upgrade to the hotfix.

    2. Release Scope

    Current Hotfix release targets the following:

    1. Verification of Jira issues targeted for 4.1.HF13 release.

2. As part of the new process, the delivery is via packages; end users need to do a rolling upgrade on top of 4.1 GA or any hotfix from 4.1.HF1 through TVO-4.1.HF12.

    3. Tag References for Rolling Upgrades

    4. Resolved Issues

Issues logged by customers are documented in this section.

    5. Deliverables against 4.1.HF13

The following packages were changed or added in the current release:

    6. T4O Deployment Coverage

The following table gives an overview of the Trilio deployment tools covered with OpenStack:

    TVault Deployment Tool
    Covered ?
    Comments

    7. Backup Store Coverage

The following table gives an overview of the backup stores covered as part of the development and testing of the 4.1.HF13 release.

    Backup Storage
    Covered?

    8. Known Issues

    Summary
    Workaround/Comments (if any)

    Upgrade to 4.1.HF13 packages on the appliance.

  • Kolla & TripleO

    1. Deploy Trilio components via 4.1.HF13 containers/packages on Openstack Victoria/TripleO Train.

  • Openstack Ansible

1. Deploy Trilio components on OpenStack Victoria [this will deploy 4.1 GA packages].

    2. Upgrade TrilioVault packages to 4.1.HF13 on Openstack Victoria.

  • Configure the Trilio appliance.

  • Canonical users having Openstack Ussuri can either upgrade (on top of 4.1 GA) using Trilio upgrade documents OR do a fresh deployment using 4.1 Deployment documents.

  • Canonical users having Openstack Victoria can either upgrade (on top of 4.1.HF4) using Trilio upgrade documents OR do a fresh deployment using 4.1 Deployment documents.

  • 4.1.94-hotfix-16-rhosp16.1

    RHOSP16.1 Container tag against 4.1.HF13

    4

    4.1-RHOSP16.2-CONTAINER

    4.1.94-hotfix-16-rhosp16.2

    RHOSP16.2 Container tag against 4.1.HF13

    5

    4.1-KOLLA-CONTAINER

    4.1.94-hotfix-13-ussuri

    4.1.94-hotfix-12-victoria

    Kolla Container tag against 4.1.HF13

    6

    4.1-TRIPLEO-CONTAINER

    4.1.94-hotfix-12-tripleo

    TripleO Container tag against 4.1.HF13

    rpm

    4.1.94.3-4.1

    4

    python3-dmapi

    deb

    4.1.94.3

    5

    tvault-contego

    rpm

    4.1.94.10-4.1

    6

    tvault-contego

    deb

    4.1.94.10

    7

    python3-tvault-contego

    deb

    4.1.94.10

    8

    python3-tvault-contego

    rpm

    4.1.94.10-4.1

    9

    s3fuse

    python

    4.1.94.7

    10

    s3-fuse-plugin

    deb

    4.1.94.7

    11

    python3-s3-fuse-plugin

    deb

    4.1.94.7

    12

    python3-s3fuse-plugin

    rpm

    4.1.94.7-4.1

    13

    python-s3fuse-plugin-cent7

    rpm

    4.1.94.7-4.1

    python

    4.1.94.17

    4

    tvault-horizon-plugin

    rpm

    4.1.94.7-4.1

    5

    tvault-horizon-plugin

    deb

    4.1.94.7

    6

    python3-tvault-horizon-plugin

    deb

    4.1.94.7

    7

    python3-tvault-horizon-plugin-el8

    rpm

    4.1.94.7-4.1

    8

    RHOSP16.1 Containers

    Containers

    4.1.94-hotfix-16-rhosp16.1

    9

    RHOSP16.2 Containers

    Containers

    4.1.94-hotfix-16-rhosp16.2

    10

    RHOSP13 Containers

    Containers

    4.1.94-hotfix-16-rhosp13

    11

    Kolla Containers

    Containers

    4.1.94-hotfix-13-ussuri

    4.1.94-hotfix-12-victoria

    12

    TripleO Containers

    Containers

    4.1.94-hotfix-12-tripleo

    YES

    Used on Ubuntu based distro via all TVault Deployment methods

    4

    RPM Packages

    YES

    Used on RH based distro via all TVault Deployment methods

    5

    RH Director

    YES

    For RHOSP

    6

    TripleO

    YES

    For TripleO

    7

    Juju/Charms

    YES

    For Canonical Openstack

    Wasabi S3

    NO

    Contego package installation failing on HF5 OSA with S3.

Note: The respective steps have been added to the common T4O components upgrade document for the OSA distro.

Unmount the /var/triliovault-mounts path before the contego package upgrade.

    5

    In-place restore not working properly with multiattach volume

Select all the VMs' boot disks as well as the cinder multiattach disk in the in-place restore window. The restore will then work fine for all the VMs.

    6

    Snapshot mount only shows volume group/LVM for one VM when 2 or more VMs have volume group with same name

    NA

    7

    Snapshot Disk Integrity Check Disabled for 4.1.HF1 release.

    Impact

1. If any snapshot disk or the chain gets corrupted, T4O will identify it and log a warning message; however, the snapshot will not be marked as failed. A workload reset will also not happen.

    2. Restore of such snapshot will fail.

    None

    8

    Backup and restore should not break for instances with multi-attach volumes.

After an upgrade from 4.1 GA to 4.1.HF1, snapshots triggered just after the upgrade for workloads having multi-attach volumes will be of "mixed" type; after that, all snapshots will be of incremental type.

    9

    [FRM] Snapshot mount not working

Update the permissions of the NFS mount point to 755 on the NFS server and retry the snapshot mount operation, e.g.:

chmod 755 /mnt/tvault/tvm4

    10

[Intermittent] [RHOSP 16.1] [Horizon] After the overcloud deployment, the OpenStack UI is messed up.

    Login to the Horizon container and run the following commands:

    1. podman exec -it -u root horizon /bin/bash

    2. /usr/bin/python3 /usr/share/openstack-dashboard/manage.py collectstatic --clear --noinput

    3. /usr/bin/python3 /usr/share/openstack-dashboard/manage.py compress --force

    11

    [DR] Selective restore fails, If original image is deleted in canonical focal-victoria environment

    None

    2

    Debian URL

    deb [trusted=yes] https://apt.fury.io/triliodata-4-1/ /

    3

    RPM URL

    http://trilio:[email protected]:8283/triliovault-4.1/yum/

    4

    PIP URL

    https://pypi.fury.io/triliodata-4-1/

    Tag Reference in Upgrade Docs

    Value

    Comments

    1

    4.1-HOTFIX-LABEL

    hotfix-13-TVO/4.1

    Label against the Trilio repositories from where required code to be pulled for upgrades.

    2

    4.1-RHOSP13-CONTAINER

    4.1.94-hotfix-16-rhosp13

    RHOSP13 Container tag against 4.1.HF13

    3

    Summary

    1

    horizon logs getting dumped with errors

    2

    T4O 4.1 vulnerability reported by Fortinet

    3

    All the network ports of a project are deleted in case Restore Network Topology fails

    Package/Container Names

    Package Kind

    Package/Container Version/Tags

    1

    dmapi

    deb

    4.1.94.3

    2

    dmapi

    rpm

    4.1.94.3-4.1

    3

    Package/Container Names

    Package Kind

    Package/Container Version/Tags

    1

    workloadmgr

    deb

    4.1.95.22

    2

    workloadmgr

    python

    4.1.94.23

    3

    1

    Shell Script

    NO

    Scoped out since TVO-4.1

    2

    Ansible (Openstack native)

    YES

    For Kolla & Openstack ansible

    3

    1

    AWS S3

    NO

    2

    NFS

    YES

    3

    RH Ceph S3

    YES

    1

    2

Restore fails for SRIOV network

(Fixed in 4.1.HF7; documenting a single scenario)

If port_security_enabled=False on the network, the restore will pass and the user can attach a security group to the restored VM's network port after the restore is done.

    3

    [Intermittent] All API calls are getting stuck.

Note: The respective steps have been added to the common T4O upgrade document.

Set the oslo.messaging package version to 12.1.6 on all T4O nodes:

/home/stack/myansible/bin/pip install oslo.messaging==12.1.6

Then restart all the wlm services.

    4.1-RHOSP16.1-CONTAINER

    python3-dmapi

    tvault_configurator

    Debian Packages

    4

    4

    Workloads

    List Workloads

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Provides the list of all workloads for the given tenant/project id
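A minimal curl sketch of this call (the shell variables and the -k flag for self-signed certificates are assumptions for illustration; the headers mirror the ones documented below):

# List all workloads of the given tenant; optionally append query parameters
# such as ?nfs_share=<share> or ?all_workloads=True (admin role required)
curl -k -X GET "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/workloads" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"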

    Path Parameters

    Name
    Type
    Description

    Query Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Create Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Creates a workload in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body format

    Workload create requires a Body in json format, to provide the requested information.

    Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
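A sketch of the call with curl (the file name workload_create.json and the shell variables are placeholders; the file content follows the Body format shown further below):

# Create a workload from a JSON body stored in workload_create.json
curl -k -X POST "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/workloads" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d @workload_create.json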

    Show Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Shows all details of a specified workload

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Modify Workload

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Modifies a workload in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body format

    Workload modify requires a Body in json format, to provide the information about the values to modify.

    All values in the body are optional.

    Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.

    Delete Workload

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Deletes the specified Workload.
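A sketch with curl (placeholders as before; setting database_only=True would keep the workload data on the backup target, as documented below):

# Delete the workload including its data on the backup target
curl -k -X DELETE "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/workloads/$WORKLOAD_ID?database_only=False" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"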

    Path Parameters

    Name
    Type
    Description

    Query Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Unlock Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock

    Unlocks the specified Workload

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Reset Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset

    Resets the defined workload

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Snapshots

    List Snapshots

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots

    Lists all Snapshots.
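A sketch with curl (placeholders as before; all query parameters are optional and documented below):

# List snapshots, optionally filtered by workload and date range
curl -k -X GET "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/snapshots?workload_id=$WORKLOAD_ID&date_from=2020-11-01T00:00:00" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"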

    Restart the Horizon container : podman restart horizon

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to fetch the workloads from

    nfs_share

    string

    lists workloads located on a specific nfs-share

    all_workloads

    boolean

    admin role required - True lists workloads of all tenants/projects

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to create the workload in

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to show

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project where to find the workload in

    workload_id

    string

    ID of the Workload to modify

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to delete

    database_only

    boolean

    True leaves the Workload data on the Backup Target

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to unlock

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to reset

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 14:55:40 GMT
    Content-Type: application/json
    Content-Length: 3480
    Connection: keep-alive
    X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948
    
    {
       "workloads":[
          {
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
             "name":"worklaod1",
             "snapshots_info":"",
             "description":"no-description",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "status":"available",
             "created_at":"2020-10-26T12:07:01.000000",
             "updated_at":"2020-10-29T12:22:26.000000",
             "scheduler_trust":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
                }
             ]
          },
          {
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
             "name":"workload2",
             "snapshots_info":"",
             "description":"no-description",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "status":"available",
             "created_at":"2020-10-20T09:51:15.000000",
             "updated_at":"2020-10-29T10:03:33.000000",
             "scheduler_trust":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
                }
             ]
          }
       ]
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 15:42:02 GMT
    Content-Type: application/json
    Content-Length: 703
    Connection: keep-alive
    X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede
    
    {
       "workload":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "name":"API created",
          "snapshots_info":"",
          "description":"API description",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "status":"creating",
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:01.000000",
          "scheduler_trust":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ]
       }
    }
    retention_policy_type
    retention_policy_value
    interval
    {
       "workload":{
          "name":"<name of the Workload>",
          "description":"<description of workload>",
          "workload_type_id":"<ID of the chosen Workload Type",
          "source_platform":"openstack",
          "instances":[
             {
                "instance-id":"<Instance ID>"
             },
             {
                "instance-id":"<Instance ID>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>"
             "timezone":"<timezone>",
             "start_date":"<Date format: MM/DD/YYYY>"
             "end_date":"<Date format MM/DD/YYYY>",
             "start_time":"<Time format: HH:MM AM/PM>",
             "interval":"<Format: Integer hr",
             "enabled":"<True/False>"
          },
          "metadata":{
             <key>:<value>,
             "policy_id":"<policy_id>"
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:08:42 GMT
    Content-Type: application/json
    Content-Length: 1536
    Connection: keep-alive
    X-Compute-Request-Id: req-afb76abb-aa33-427e-8219-04fc2b91bce0
    
    {
       "workload":{
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:18.000000",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "availability_zone":"nova",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "name":"API created",
          "description":"API description",
          "interval":null,
          "storage_usage":{
             "usage":0,
             "full":{
                "snap_count":0,
                "usage":0
             },
             "incremental":{
                "snap_count":0,
                "usage":0
             }
          },
          "instances":[
             {
                "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                "name":"cirros-4",
                "metadata":{
                   
                }
             },
             {
                "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                "name":"cirros-3",
                "metadata":{
                   
                }
             }
          ],
          "metadata":{
             "hostnames":"[]",
             "meta":"data",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "preferredgroup":"[]",
             "workload_approx_backup_size":"6"
          },
          "jobschedule":{
             "retention_policy_type":"Number of Snapshots to Keep",
             "end_date":"15/27/2020",
             "start_time":"3:00 PM",
             "interval":"5",
             "enabled":false,
             "retention_policy_value":"10",
             "timezone":"UTC+2",
             "start_date":"10/27/2020",
             "fullbackup_interval":"-1",
             "appliance_timezone":"UTC",
             "global_jobscheduler":true
          },
          "status":"available",
          "error_msg":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ],
          "scheduler_trust":null
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:31:42 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137
    retention_policy_type
    retention_policy_value
    interval
    {
       "workload":{
          "name":"<name>",
          "description":"<description>"
          "instances":[
             {
                "instance-id":"<instance_id>"
             },
             {
                "instance-id":"<instance_id>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>",
             "timezone":"<timezone>",
             "start_time":"<HH:MM AM/PM>",
             "end_date":"<MM/DD/YYYY>",
             "interval":"<Integer hr>",
             "enabled":"<True/False>"
          },
          "metadata":{
             "meta":"data",
             "policy_id":"<policy_id>"
          },
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:31:00 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:41:55 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:52:30 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Projects to fetch the Snapshots from

    Query Parameters

    Name
    Type
    Description

    host

    string

    host name of the TVM that took the Snapshot

    workload_id

    string

    ID of the Workload to list the Snapshots off

    date_from

    string

    starting date of Snapshots to show

    \

    Format: YYYY-MM-DDTHH:MM:SS

    string

    ending date of Snapshots to show

    \

    Format: YYYY-MM-DDTHH:MM:SS

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Take Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    workload_id

    string

    ID of the Workload to take the Snapshot in

    Query Parameters

    Name
    Type
    Description

    full

    boolean

    True creates a full Snapshot

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    When creating a Snapshot it is possible to provide additional information

    This Body is completely optional
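A sketch with curl (placeholders as before; full=True forces a full snapshot and the JSON body is optional):

# Trigger a full snapshot with an optional name and description
curl -k -X POST "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/workloads/$WORKLOAD_ID?full=True" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"snapshot": {"is_scheduled": false, "name": "API taken", "description": "taken via curl"}}'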

    Show Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Shows the details of a specified Snapshot

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot from

    snapshot_id

    string

    ID of the Snapshot to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Delete Snapshot

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Deletes a specified Snapshot

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Cancel Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel

    Cancels the Snapshot process of a given Snapshot
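A sketch with curl (placeholders as before):

# Cancel a running snapshot
curl -k -X GET "https://$TVM_ADDRESS:8780/v1/$TENANT_ID/snapshots/$SNAPSHOT_ID/cancel" \
     -H "X-Auth-Project-Id: $PROJECT_NAME" \
     -H "X-Auth-Token: $TOKEN" \
     -H "Accept: application/json"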

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to cancel

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 12:58:38 GMT
    Content-Type: application/json
    Content-Length: 266
    Connection: keep-alive
    X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9
    
    {
       "snapshots":[
          {
             "id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
             "created_at":"2020-11-02T14:03:18.000000",
             "status":"available",
             "snapshot_type":"full",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "name":"snapshot",
             "description":"-",
             "host":"TVM1"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 13:58:38 GMT
    Content-Type: application/json
    Content-Length: 283
    Connection: keep-alive
    X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.694637",
          "status":"creating",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:07:18 GMT
    Content-Type: application/json
    Content-Length: 6609
    Connection: keep-alive
    X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.000000",
          "updated_at":"2020-11-04T14:06:03.000000",
          "finished_at":"2020-11-04T14:06:03.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.80",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:10:91",
                      "ip_address":"192.168.100.80",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
                      "restore_size":1073741824,
                      "vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             },
             {
                "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.176",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:4d:27",
                      "ip_address":"192.168.100.176",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
                      "restore_size":1073741824,
                      "vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             }
          ],
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":"TVM1",
          "size":44171264,
          "restore_size":2147483648,
          "uploaded_size":44171264,
          "progress_percent":100,
          "progress_msg":"Snapshot of workload is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":428,
          "pinned":false,
          "metadata":[
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"backup_media_target",
                "value":"10.10.2.20:/upstream"
             },
             {
                "created_at":"2020-11-04T13:58:37.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"cancel_requested",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"data_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"hostnames",
                "value":"[\"cirros-1\", \"cirros-2\"]"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"object_store_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"pause_at_snapshot",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"572a0b21-a415-498f-b7fa-6144d850ef56",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"policy_id",
                "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"preferredgroup",
                "value":"[]"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"topology",
                "value":"\"\\\"\\\"\""
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"33762790-8743-4e20-9f50-3505a00dbe76",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"workload_approx_backup_size",
                "value":"6"
             }
          ],
          "restores_info":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:18:36 GMT
    Content-Type: application/json
    Content-Length: 56
    Connection: keep-alive
    X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc
    
    {"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
    
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:26:44 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d
    {
       "snapshot":{
          "is_scheduled":<true/false>,
          "name":"<name>",
          "description":"<description>"
       }
    }

    all

    boolean

    admin role required - True lists all Snapshots of all Workloads

    User-Agent

    string

    python-workloadmgrclient

    Installing on RHOSP

    The Red Hat Openstack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation. Trilio integrates natively into the RHOSP Director and manual deployment methods are not supported for RHOSP.

    1. Prepare for deployment

    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and also associated configuration needs. The following backup target types are supported by Trilio:

    Backup Target Types
    Required Configuration

    1.2] Clone triliovault-cfg-scripts repository

The overcloud deploy command must already have been run successfully prior to this point and the overcloud should be available. Perform the following steps on the 'undercloud' node of an existing RHOSP environment:

    All commands need to be run as user 'stack' on undercloud node

RHOSP 16.0 is no longer supported as Red Hat has officially stopped supporting it. Trilio maintained it for some time and stopped support from 4.1.HF11 onwards. The latest hotfix available for RHOSP 16.0 is 4.1.HF10. Reach out to the Support team for any help.

    1. Ensure that the Trilio appliance connected to this installation is on the latest Hotfix version. Failure to ensure this may lead to your installation not working as expected.

      1. Refer to this doc :

    2. Run the following command to clone the triliovault-cfg-scripts github repository:


    1. If your backup target type is 'Ceph-based S3' with SSL, skip this step. Otherwise, access the Red Hat Director scripts according to the RHOSP version being used:

      • RHOSP 13 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp13/

      • RHOSP 16.1 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/

    From this point onwards in the documentation, only the following path will be used for examples: cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/

    2] Upload Trilio-Puppet module

    RHOSP 16.1

The following commands upload the Trilio puppet module to the overcloud as a deploy artifact. The actual application on the overcloud nodes only happens upon the next overcloud deployment.

    Step 1: -

    The output of the above command looks like the following.

    Trilio puppet module is uploaded to overcloud as a swift deploy artifact with heat resource name 'DeployArtifactURLs'.

    Step 2: - Check Trilio's Puppet module artifact file and ensure that it looks like the following:

    Step 3: -

• Firstly, check whether your overcloud deploy environment files already use deploy artifacts. To do this, search for the string DeployArtifactURLs in your environment files (only those passed to the overcloud deploy command with the -e option). If you find it in any of these environment files, then your deploy command is already using deploy artifacts.

• If your deploy command is already using a deploy artifact, you must merge all deploy artifact URLs into a single file. For example, if your artifact file path is /home/stack/templates/user-artifacts.yaml, then perform the following steps to merge both URLs into a single file, which is then passed to the overcloud deploy command with the -e option.

    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    All commands need to be run as user 'stack'

    3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database services. In the case of the pre-defined roles, these services run on the role Controller. In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

    Add the following line to the identified role:

    3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute. In the case of custom-defined roles, it is necessary to use the role that the nova-compute service is using.

    Add the following line to the identified role:

    4] Prepare Trilio container images

    All commands need to be run as user 'stack'

Trilio containers are pushed to the Red Hat Container Registry (registry URL: registry.connect.redhat.com). Container pull URLs are given below.

Please note that using the hotfix containers requires the Trilio Appliance to be upgraded to the desired hotfix level as well.

In the sections below, read <HOTFIX-TAG-VERSION> as 4.1.94-hotfix-16.

    RHOSP 13

    RHOSP 16.1

    RHOSP 16.2

    There are three registry methods available in the RedHat OpenStack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

In this method, container images get downloaded directly on the overcloud nodes during the overcloud deploy/update command execution. The user can set the remote registry to the Red Hat registry or any other private registry they want to use. The user needs to provide credentials for the registry in the containers-prepare-parameter.yaml file.

1. Make sure other OpenStack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.

2. Populate containers-prepare-parameter.yaml with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true', and the registry credentials. Trilio container images are published to the registry registry.connect.redhat.com. Credentials for the registry 'registry.redhat.io' will work for the registry.connect.redhat.com registry too.

    Note: This file - containers-prepare-parameter.yaml

    Redhat document for remote registry method:

Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the above Red Hat document.

3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.

    4. Populate the trilio_env.yaml with Trilio container image URLs:

    • Trilio Datamover container

    • Trilio Datamover API container

    • Trilio Horizon Plugin

    trilio_env.yaml will be available in

    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    In this case, it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that will pull the containers from 'registry.connect.redhat.com' and push them to the undercloud and update the trilio_env.yaml.

    RHOSP13

    Verify the changes

    RHOSP16.1

    Verify the changes:

    RHOSP16.2

    Verify the changes

    The changes can be verified using the following commands.

    4.3] Red Hat Satellite Server

    Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

Populate the trilio_env.yaml with the container URLs.

    RHOSP 13

    RHOSP16.1

    RHOSP16.2

    5] Provide environment details in trilio-env.yaml

Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure the Trilio components. Container image names have already been populated during the preparation of the container images. Still, it is recommended to verify the container URLs.

The following additional information is required:

    • Network for the datamover api

    • datamover password

    • Backup target type {nfs/s3}

    • In case of NFS

Use ceph_s3 for any non-AWS S3 backup targets.

    6. Advanced Settings/Configuration

6.1] Haproxy customized configuration for Trilio dmapi service

The existing default haproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the dmapi are observed or there are other known reasons to do so.

The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file.

    Trilio datamover haproxy default configuration from the above file looks as follows:

    The user can change the following configuration parameter values.

To change these default values, perform the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release information; valid values are rhosp13, rhosp16, rhosp16.1, rhosp16.2).

    For RHOSP13

    For RHOSP16.0

    For RHOSP16.1

    For RHOSP16.2

ii) Search for the following entries and edit them as required.

    iii) Save the changes.

    7] Deploy overcloud with trilio environment

    Use the following heat environment file and roles data file in overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use the correct Trilio endpoint map file as per available Keystone endpoint configuration

    To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
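A minimal sketch of such a command, assuming RHOSP 16.1 and the default repository paths used in this guide (the roles_data.yaml path is an assumption; keep every other -e option and template your existing overcloud deploy command already uses):

# Illustration only: add all environment files and options your cloud already uses
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml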

    8] Verify deployment

If the containers are in a restarting state or not listed by the following commands, then your deployment was not done correctly. Please recheck whether you have followed the complete documentation.

    8.1] On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not 'controller', check on the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    8.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
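A quick way to check sections 8.1 and 8.2 (assuming RHOSP 16.x, where containers run under podman; exact container names can vary, so the grep is kept broad):

# On controller nodes: the Trilio dmapi and horizon containers should be Up
podman ps | grep -iE 'trilio|horizon'
# On compute nodes: only the Trilio datamover container should show up
podman ps | grep -i trilio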

    8.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack horizon plus Trilio's horizon plugin.

    9] Additional Steps on Trilio Appliance

    9.1] Change the nova user id on the Trilio Nodes

In RHOSP, the nova user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes:

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

4. Verify that the nova user and group id have changed to '42436' (see the check below)
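A minimal check for step 4 (assuming the standard id utility is available on the Trilio nodes):

# Verify the nova user and group id on every Trilio node
id nova
# Expected output should contain: uid=42436(nova) gid=42436(nova)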

    10] Troubleshooting for overcloud deployment failures

    Trilio components will be deployed using puppet scripts.

If the overcloud deployment fails, run the following command to get the list of errors. The following document also provides valuable insights:
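For example, on RHOSP 16.x the failed resources of the last deployment can be listed with the generic TripleO/Heat commands below (not Trilio-specific):

# List the failures of the last overcloud deployment
openstack stack failures list --long overcloud
# Inspect failed heat resources directly
openstack stack resource list --nested-depth 5 overcloud | grep -i failed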

  • RHOSP 16.2 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/

  • If your backup target is Ceph S3 with SSL and your SSL certificates are self-signed or authorized by private CA, you must provide the CA chain certificate to validate the SSL requests. Otherwise, skip this step. To do this:

    1. Rename your CA chain cert file to s3-cert.pem.

2. Copy the renamed file into the following directory: triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_Directory>/puppet/trilio/files. If your overcloud deploy command uses any other deploy artifact through an environment file, then you must merge the Trilio deploy artifact URL and your URL into a single file.

    3. Then access the Red Hat Director scripts according to the version being used:

      • RHOSP 13 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/puppet/trilio/files/

      • RHOSP 16.1 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files/

  • list of NFS Shares

  • NFS options

  • In case of S3

    • S3 type {amazon_s3/ceph_s3}

    • S3 Access key

    • S3 Secret key

    • S3 Region name

    • S3 Bucket

    • S3 Endpoint URL

    • S3 Signature Version

    • S3 Auth Version

    • S3 SSL Enabled {true/false}

    • S3 SSL Cert

• Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

• Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

• Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml

  • NFS

    • NFS share path is required

    Amazon S3

    • S3 Access Key

    • Secret Key

    • Region

    • Bucket name

    Other S3 compatible storage, e.g. Ceph-based S3

    • S3 Access Key

    • Secret Key

    • Region

    • Bucket name

    • Endpoint URL (Valid for S3 other than Amazon S3)

    https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters
    Red Hat registry URLs.
    https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html

    Workload Policies

    List Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Requests the list of available Workload Policies

    cd /home/stack
    git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    ./upload_puppet_module.sh
    
    ./upload_puppet_module.sh
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates the following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml | grep http >> /home/stack/templates/user-artifacts.yaml
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/templates/user-artifacts.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_57ba596219d143c8b076e9fcc4139f3g/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=dc972b7ce75226c278ab3fa8237d31cc1f2115sc&temp_url_expires=3446738365'
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover' 
    Trilio Datamover container:
        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   
        registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            
        registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:
        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container: 
        registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:
        registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:        
        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:
        registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            
        registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    File name: containers-prepare-parameter.yaml
    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          namespace: registry.redhat.io/...
          ...
      ...
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          myuser: 'p@55w0rd!'
        registry.connect.redhat.com:
          myuser: 'p@55w0rd!'
      ContainerImageRegistryLogin: true
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_Directory>/
    vi trilio_env.yaml
    # For RHOSP13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
       
    # For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/scripts/
    ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp13
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.1
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/scripts/
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.2
    
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    (undercloud) [stack@undercloud redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1                   |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1                  |
    
    -----------------------------------------------------------------------------------------------------
    
    (undercloud) [stack@undercloud redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    resource_registry:
      OS::TripleO::Services::TrilioDatamover: ../services/trilio-datamover.yaml
      OS::TripleO::Services::TrilioDatamoverApi: ../services/trilio-datamover-api.yaml
      # NOTE: If there are addition customizations to the endpoint map (e.g. for
      # other integration), this will need to be regenerated.
      OS::TripleO::EndpointMap: endpoint_map.yaml
    
    parameter_defaults:
    
       ## Enable Trilio's quota functionality on horizon
       ExtraConfig:
         horizon::customization_module: 'dashboards.overrides'
    
       ## Define network map for trilio datamover api service
       ServiceNetMap:
           TrilioDatamoverApiNetwork: internal_api
    
       ## Trilio Datamover Password for keystone and database
       TrilioDatamoverPassword: "test1234"
    
       ## Trilio container pull urls
       DockerTrilioDatamoverImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## If you do not want Trilio's horizon plugin to replace your horizon container, just comment following line.
       ContainerHorizonImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## Backup target type nfs/s3, used to store snapshots taken by triliovault
       BackupTargetType: 'nfs'
    
       ## For backup target 'nfs'
       NfsShares: '192.168.122.101:/opt/tvault'
       NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
    
       ## For backup target 's3'
       ## S3 type: amazon_s3/ceph_s3
       S3Type: 'amazon_s3'
    
       ## S3 access key
       S3AccessKey: ''
    
       ## S3 secret key
       S3SecretKey: ''
    
       ## S3 region, if your s3 does not have any region, just keep the parameter as it is
       S3RegionName: ''
    
       ## S3 bucket name
       S3Bucket: ''
    
       ## S3 endpoint url, not required for Amazon S3, keep it as it is
       S3EndpointUrl: ''
    
       ## S3 signature version
       S3SignatureVersion: 'default'
    
       ## S3 Auth version
       S3AuthVersion: 'DEFAULT'
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint url then change it to 'True', otherwise keep it as 'False'
       S3SslEnabled: False
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint URL and SSL certificates are self signed, then
       ## user need to set this parameter value to: '/etc/tvault-contego/s3-cert.pem', otherwise keep it's value as empty string.
       S3SslCert: ''
    
       ## Don't edit following parameter
       EnablePackageInstall: True
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    listen trilio_datamover_api
      bind 172.25.0.107:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.0.107:8784 transparent
      balance roundrobin
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      maxconn 50000
      option httpchk
      option httplog
      retries 5
      timeout check 10m
      timeout client 10m
      timeout connect 10m
      timeout http-request 10m
      timeout queue 10m
      timeout server 10m
      server overcloud-controller-0.internalapi.localdomain 172.25.0.106:8784 check fall 5 inter 2000 rise 2
    
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/services/trilio-datamover-api.yaml
              tripleo::haproxy::trilio_datamover_api::options:
                 'retries': '5'
                 'maxconn': '50000'
                 'balance': 'roundrobin'
                 'timeout http-request': '10m'
                 'timeout queue': '10m'
                 'timeout connect': '10m'
                 'timeout client': '10m'
                 'timeout server': '10m'
                 'timeout check': '10m'
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-REL-VERSION>-rhosp16.1        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-REL-VERSION>-rhosp16.1       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-REL-VERSION>-rhosp16.1                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-REL-VERSION>-rhosp16.1       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    => If the trilio datamover api container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
     
    
    => If the trilio datamover container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tailf /var/log/containers/trilio-datamover/tvault-contego.log
    RHOSP 16.2 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/puppet/trilio/files/
    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of Tenant/Project

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Accept (string): application/json

    • User-Agent (string): python-workloadmgrclient
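    A minimal curl sketch of this call; the shell variables ($tvm_address, $tenant_id, $project_name, $token) are placeholders that must be set beforehand, and -k is only needed when the endpoint uses self-signed certificates:

    curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workload_policy" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"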

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 13:56:08 GMT
    Content-Type: application/json
    Content-Length: 1399
    Connection: keep-alive
    X-Compute-Request-Id: req-4618161e-64e4-489a-b8fc-f3cb21d94096
    
    {
       "policy_list":[
          {
             "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "created_at":"2020-10-26T12:52:22.000000",
             "updated_at":"2020-10-26T12:52:22.000000",
             "status":"available",
             "name":"Gold",
             "description":"",
             "metadata":[
                
    

    Show Policy

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Requests the details of a given policy

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of Tenant/Project

    • policy_id (string): ID of the Policy to show

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Accept (string): application/json

    • User-Agent (string): python-workloadmgrclient
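    A minimal curl sketch, reusing the placeholders from the previous example plus $policy_id:

    curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workload_policy/$policy_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"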

    list assigned Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>

    Requests the lists of Policies assigned to a Project.

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of Tenant/Project

    • project_id (string): ID of the Project to fetch assigned Policies from

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Accept (string): application/json

    • User-Agent (string): python-workloadmgrclient
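    A minimal curl sketch; $project_id is the Project whose assigned Policies are requested, the other placeholders are as in the previous examples:

    curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workload_policy/assigned/$project_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"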

    Create Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Creates a Policy with the given parameters

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of the Tenant/Project

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Content-Type (string): application/json

    • Accept (string): application/json

    Body Format
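    A minimal curl sketch, assuming the JSON body (following the Body Format documented for this call) has been saved to create_policy.json (a placeholder file name):

    curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/workload_policy" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @create_policy.json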

    Update Policy

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Updates a Policy with the given information

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of the Tenant/Project

    • policy_id (string): ID of the Policy to update

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Content-Type (string): application/json

    • Accept (string): application/json

    Body Format
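    A minimal curl sketch, assuming the JSON body (following the Body Format documented for this call) has been saved to update_policy.json (a placeholder file name):

    curl -k -X PUT "https://$tvm_address:8780/v1/$tenant_id/workload_policy/$policy_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @update_policy.json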

    Assign Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Assigns the Policy to the given Projects

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of the Tenant/Project

    • policy_id (string): ID of the Policy to assign

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Content-Type (string): application/json

    • Accept (string): application/json

    Body Format
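    A minimal curl sketch, assuming the JSON body with the add_projects/remove_projects lists has been saved to assign_policy.json (a placeholder file name):

    curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/workload_policy/$policy_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @assign_policy.json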

    Delete Policy

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Deletes a given Policy

    Path Parameters

    • tvm_address (string): IP or FQDN of Trilio service

    • tenant_id (string): ID of Tenant/Project

    • policy_id (string): ID of the Policy to delete

    Headers

    • X-Auth-Project-Id (string): Project to authenticate against

    • X-Auth-Token (string): Authentication token to use

    • Accept (string): application/json

    • User-Agent (string): python-workloadmgrclient
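    A minimal curl sketch, reusing the placeholders introduced above:

    curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/workload_policy/$policy_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"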

    Restores

    Definition

    A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.

    List of Restores

    Using Horizon

    To reach the list of Restores for a Snapshot follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Using CLI

    • --snapshot_id <snapshot_id> ➡️ ID of the Snapshot to show the restores of
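    A minimal example of the corresponding CLI call:

    workloadmgr restore-list --snapshot_id <snapshot_id>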

    Restores overview

    Using Horizon

    To reach the detailed Restore overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Details Tab

    The Restore Details Tab shows the most important information about the Restore.

    • Name

    • Description

    • Restore Type

    • Status

    The Restore Options are the restore.json provided to Trilio.

    • List of VMs restored

      • restored VM Name

      • restored VM Status

      • restored VM ID

    Misc Tab

    The Misc tab provides additional Metadata information.

    • Creation Time

    • Restore ID

    • Snapshot ID containing the Restore

    • Workload

    Using CLI

    • <restore_id> ➡️ ID of the restore to be shown

    • --output <output> ➡️ Option to get additional restore details. Specify --output metadata for restore metadata, or --output networks, --output subnets, --output routers, --output flavors for the respective details.
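    For example, to show a Restore together with its metadata:

    workloadmgr restore-show --output metadata <restore_id>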

    Delete a Restore

    Once a Restore is no longer needed, it can be safely deleted from a Workload.

    Deleting a Restore will only delete the Trilio information about this Restore. No OpenStack resources are deleted.

    Using Horizon

    There are 2 possibilities to delete a Restore.

    Possibility 1: Single Restore deletion through the submenu

    To delete a single Restore through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    Possibility 2: Multiple Restore deletion through a checkbox in Snapshot overview

    To delete one or more Restores through the Restore list do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Using CLI

    • <restore_id> ➡️ ID of the restore to be deleted
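    A minimal example of the corresponding CLI call:

    workloadmgr restore-delete <restore_id>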

    Cancel a Restore

    Ongoing Restores can be canceled.

    Using Horizon

    To cancel a Restore in Horizon follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    Using CLI

    • <restore_id> ➡️ ID of the restore to be canceled
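    A minimal example of the corresponding CLI call:

    workloadmgr restore-cancel <restore_id>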

    One Click Restore

    The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

    • be located in the same cluster in the same datacenter

    • use the same storage domain

    • connect to the same network

    • have the same flavor

    The user can't change any Metadata.

    The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one of these VMs still exists, the One Click Restore will fail.

    The One Click Restore will automatically update the Workload to protect the restored VMs.

    Using Horizon

    There are 2 possibilities to start a One Click Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
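    A minimal example of the corresponding CLI call, with the optional name and description supplied:

    workloadmgr snapshot-oneclick-restore --display-name <display-name> --display-description <display-description> <snapshot_id>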

    Selective Restore

    The Selective Restore is the most complex restore Trilio has to offer. It allows the user to adapt the restored VMs to their exact needs.

    With the selective restore the following things can be changed:

    • Which VMs are getting restored

    • Name of the restored VMs

    • Which networks to connect with

    • Which Storage domain to use

    The Selective Restore is always available and doesn't have any prerequisites.

    The Selective Restore will automatically update the Workload to protect the created instance in case the original instance no longer exists.

    Using Horizon

    There are 2 possibilities to start a Selective Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
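    A minimal example of the corresponding CLI call, assuming the restore parameters were written to a file named restore.json as described in the restore.json section below:

    workloadmgr snapshot-selective-restore --filename restore.json <snapshot_id>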

    Inplace Restore

    The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.

    It allows the user to restore only the data of a selected Volume, which is part of a backup.

    The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio verifies this using the saved Object-ID.

    The Inplace Restore will not create any new OpenStack resources. Please use one of the other restore options if new Volumes or VMs are required.

    Using Horizon

    There are 2 possibilities to start an Inplace Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
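    A minimal example of the corresponding CLI call, again assuming the restore parameters were written to a restore.json file:

    workloadmgr snapshot-inplace-restore --filename restore.json <snapshot_id>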

    required restore.json for CLI

    The workloadmgr client CLI uses a restore.json file to define the restore parameters for the selective and the inplace restore.

    An example restore.json for a selective restore is shown below. A detailed analysis and explanation is given afterwards.

    The restore.json requires a lot of information about the backed up resources. All required information can be gathered in the Snapshot overview.

    General required information

    Before the exact details of the restore are provided, it is necessary to provide the general metadata for the restore.

    • name➡️the name of the restore

    • description➡️the description of the restore

    • oneclickrestore <True/False>➡️

    Selective Restore required information

    The Selective Restore requires a lot of information to be able to execute the restore as desired.

    This information is divided into 3 components:

    • instances

    • restore_topology

    • networks_mapping

    Information required in instances

    This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.

    Even VMs that are not to be restored must be included in the restore.json to allow a clean execution of the restore.

    Each instance requires the following information

    • id ➡️ original id of the instance

    • include <True/False> ➡️ Set True when the instance shall be restored

    All further information are only required, when the instance is part of the restore.

    • name ➡️ new name of the instance

    • availability_zone ➡️ Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"

    • Nics ➡️

    To use the next free IP available in the network, set Nics to an empty list [ ]

    Using an empty list for Nics combined with the Network Topology Restore will automatically restore the original IP address of the instance.

    • vdisks ➡️ List of all Volumes that are part of the instance. Each Volume requires the following information:

      • id ➡️ Original ID of the Volume

      • new_volume_type

    The root disk needs to be at least as big as the root disk of the backed up instance was.

    The following example describes a single instance with all values.

    Information required in network topology restore or network mapping

    Do not mix network topology restore together with network mapping.

    To activate a network topology restore, set restore_topology to True.

    To activate network mapping, set restore_topology to False.

    When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:

    • networks ➡️ list of snapshot_network and target_network pairs

      • snapshot_network ➡️ the network backed up in the snapshot, contains the following:

    Full selective restore example

    Inplace Restore required information

    The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.

    Information required in instances

    • id ➡️ ID of the instance inside the Snapshot

    • restore_boot_disk ➡️ Set to True if the boot disk of that VM shall be restored.

    When the boot disk is at the same time a Cinder Disk, both values need to be set to true.

    • include ➡️ Set to True if at least one Volume from this instance shall be restored

    • vdisks ➡️ List of disks, that are connected to the instance. Each disk contains:

      • id

    Network mapping information required

    No network information is required, but the field has to exist with an empty value for the restore to work.

    Full Inplace restore example

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 14:18:42 GMT
    Content-Type: application/json
    Content-Length: 2160
    Connection: keep-alive
    X-Compute-Request-Id: req-0583fc35-0f80-4746-b280-c17b32cc4b25
    
    {
       "policy":{
          "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
          "created_at":"2020-10-26T12:52:22.000000",
          "updated_at":"2020-10-26T12:52:22.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"Gold",
          "description":"",
          "field_values":[
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"interval",
                "value":"5"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"79070c67-9021-4220-8a79-648ffeebc144",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-10-26T12:53:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"3e3f1b12-1b1f-452b-a9d2-b6e5fbf2ab18",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"4dfe98a43bfa404785a812020066b4d6",
                "policy_name":"Gold",
                "project_name":"admin"
             },
             {
                "created_at":"2020-10-29T15:39:13.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"Gold",
                "project_name":"robert"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:14:01 GMT
    Content-Type: application/json
    Content-Length: 338
    Connection: keep-alive
    X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2
    
    {
       "policies":[
          {
             "created_at":"2020-10-29T15:39:13.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "policy_name":"Gold",
             "project_name":"robert"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:24:03 GMT
    Content-Type: application/json
    Content-Length: 1413
    Connection: keep-alive
    X-Compute-Request-Id: req-05e05333-b967-4d4e-9c9b-561f1a7add5a
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"CLI created",
          "description":"CLI created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"4 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:32:13 GMT
    Content-Type: application/json
    Content-Length: 1515
    Connection: keep-alive
    X-Compute-Request-Id: req-9104cf1c-4025-48f5-be92-1a6b7117bf95
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:46:23 GMT
    Content-Type: application/json
    Content-Length: 2318
    Connection: keep-alive
    X-Compute-Request-Id: req-169a53e4-b1c9-4bd1-bf68-3416d177d868
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"4794ed95-d8d1-4572-93e8-cebd6d4df48f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"cbad43105e404c86a1cd07c48a737f9c",
                "policy_name":"API created",
                "project_name":"services"
             },
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"68f187a6-3526-4a35-8b2d-cb0e9f497dd8",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"API created",
                "project_name":"robert"
             }
          ]
       },
       "failed_ids":[
          
       ]
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:56:03 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    {
       "workload_policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"<String>",
          "display_description":"<String>",
          "metadata":{
             <key>:<value>
          }
       }
    }
    {
       "policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"String",
          "display_description":"String",
          "metadata":{
             <key>:<value>
          }
       }
    }
    {
       "policy":{
          "remove_projects":[
             "<project_id>"
          ],
          "add_projects":[
             "<project_id>",
          ]
       }
    }
    ],
    "field_values":[
    {
    "created_at":"2020-10-26T12:52:22.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
    "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
    "policy_field_name":"retention_policy_value",
    "value":"10"
    },
    {
    "created_at":"2020-10-26T12:52:22.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
    "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
    "policy_field_name":"interval",
    "value":"5"
    },
    {
    "created_at":"2020-10-26T12:52:22.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"79070c67-9021-4220-8a79-648ffeebc144",
    "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
    "policy_field_name":"retention_policy_type",
    "value":"Number of Snapshots to Keep"
    },
    {
    "created_at":"2020-10-26T12:52:22.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
    "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
    "policy_field_name":"fullbackup_interval",
    "value":"-1"
    }
    ]
    }
    ]
    }


    Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restores tab

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restores tab

  • Identify the restore to show

  • Click the restore name

  • Time taken

  • Size

  • Progress Message

  • Progress

  • Host

  • Restore Options

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restore tab

  • Click "Delete Restore" in the line of the restore in question

  • Confirm by clicking "Delete Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshots in the Snapshot list

  • Enter the Snapshot by clicking the Snapshot name

  • Navigate to the Restore tab

  • Check the checkbox for each Restore that shall be deleted

  • Click "Delete Restore" in the menu above

  • Confirm by clicking "Delete Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restore tab

  • Identify the ongoing Restore

  • Click "Cancel Restore" in the line of the restore in question

  • Confirm by clicking "Cancel Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click "One Click Restore" in the same line as the identified Snapshot

  • (Optional) Provide a name / description

  • Click "Create"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "One Click Restore"

  • (Optional) Provide a name / description

  • Click "Create"

  • Optional description for restore.

    Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  • Click on "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Optional description for restore.
  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in this file.

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  • Click on "Inplace Restore"

  • Configure the Inplace Restore as desired

  • Click "Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "Inplace Restore"

  • Configure the Inplace Restore as desired

  • Click "Restore"

  • Optional description for restore.
  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in this file.

  • Defines whether the restore is a One Click Restore. Setting this to True will override all other settings and a One Click Restore is started.
  • restore_type <oneclick/selective/inplace>➡️defines the restore that is intended

  • type openstack➡️defines that the restore is into an openstack cloud.

  • openstack ➡️ starts the exact definition of the restore

  • list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:
    • id ➡️ ID of the Neutron port to use

    • mac_address ➡️ Mac Address of the Neutron port

    • ip_address ➡️ IP Address of the Neutron port

    • network ➡️ network the port is assigned to. Contains the following information:

      • id ➡️ ID of the network the Neutron port is part of

      • subnet➡️

  • new_volume_type ➡️ The Volume Type to use for the restored Volume. Leave empty for Volume Type None
  • availability_zone ➡️ The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova

  • flavor➡️Defines the Flavor to use for the restored instance. Contains the following information:

    • ram➡️How much RAM the restored instance will have (in MB)

    • ephemeral➡️How big the ephemeral disk of the instance will be (in GB)

    • vcpus➡️How many vcpus the restored instance will have available

    • swap➡️How big the Swap of the restored instance will be (in MB). Leave empty for none.

    • disk➡️Size of the root disk the instance will boot with

    • id➡️ID of the flavor that matches the provided information

  • id ➡️ Original ID of the network backed up
  • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

    • id ➡️ Original ID of the subnet backed up

  • target_network ➡️ the existing network to map to, contains the following

    • id ➡️ ID of the network to map to

    • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

      • id ➡️ ID of the subnet to map to

  • id ➡️ Original ID of the Volume
  • restore_cinder_volume ➡️ set to true if the Volume shall be restored

  • Snapshot overview
    workloadmgr restore-list [--snapshot_id <snapshot_id>]
    workloadmgr restore-show [--output <output>] <restore_id>
    workloadmgr restore-delete <restore_id>
    workloadmgr restore-cancel <restore_id>
    workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
                                          [--display-description <display-description>]
                                          <snapshot_id>
    workloadmgr snapshot-selective-restore [--display-name <display-name>]
                                           [--display-description <display-description>]
                                           [--filename <filename>]
                                           <snapshot_id>
    workloadmgr snapshot-inplace-restore [--display-name <display-name>]
                                         [--display-description <display-description>]
                                         [--filename <filename>]
                                         <snapshot_id>
    {
        oneclickrestore: False,
        restore_type: selective, 
        type: openstack, 
        openstack: 
            {
                instances: 
                    [
                        {
                            include: True, 
                            id: 890888bc-a001-4b62-a25b-484b34ac6e7e,                        
                            name: cdcentOS-1, 
                            availability_zone:, 
                            nics: [], 
                            vdisks: 
                                [
                                    {
                                        id: 4cc2b474-1f1b-4054-a922-497ef5564624, 
                                        new_volume_type:, 
                                        availability_zone: nova
                                    }
                                ], 
                            flavor: 
                                {
                                    ram: 512, 
                                    ephemeral: 0, 
                                    vcpus: 1,
                                    swap:,
                                    disk: 1, 
                                    id: 1
                                }                         
                        }
                    ], 
                restore_topology: True, 
                networks_mapping: 
                    {
                        networks: []
                    }
            }
    }
    'instances':[
      {
         'name':'cdcentOS-1-selective',
         'availability_zone':'US-East',
         'nics':[
           {
              'mac_address':'fa:16:3e:00:bd:60',
              'ip_address':'192.168.0.100',
              'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
              'network':{
                 'subnet':{
                    'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                 },
                 'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
              }
           }
         ],
         'vdisks':[
           {
              'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
              'new_volume_type':'ceph',
              'availability_zone':'nova'
           }
         ],
         'flavor':{
            'ram':2048,
            'ephemeral':0,
            'vcpus':1,
            'swap':'',
            'disk':20,
            'id':'2'
         },
         'include':True,
         'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
      }
    ]
    restore_topology:True
    restore_topology:False
    {
       'oneclickrestore':False,
       'openstack':{
          'instances':[
             {
                'name':'cdcentOS-1-selective',
                'availability_zone':'US-East',
                'nics':[
                   {
                      'mac_address':'fa:16:3e:00:bd:60',
                      'ip_address':'192.168.0.100',
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
                      'network':{
                         'subnet':{
                            'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                         },
                         'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
                      }
                   }
                ],
                'vdisks':[
                   {
                      'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
                      'new_volume_type':'ceph',
                      'availability_zone':'nova'
                   }
                ],
                'flavor':{
                   'ram':2048,
                   'ephemeral':0,
                   'vcpus':1,
                   'swap':'',
                   'disk':20,
                   'id':'2'
                },
                'include':True,
                'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
             }
          ],
          'restore_topology':False,
          'networks_mapping':{
             'networks':[
                {
                   'snapshot_network':{
                      'subnet':{
                         'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
                      },
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
                   },
                   'target_network':{
                      'subnet':{
                         'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                      },
                      'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
                      'name':'internal'
                   }
                }
             ]
          }
       },
       'restore_type':'selective',
       'type':'openstack'
    }
    {
       'oneclickrestore':False,
       'restore_type':'inplace',
       'type':'openstack',   
       'openstack':{
          'instances':[
             {
                'restore_boot_disk':True,
                'include':True,
                'id':'ba8c27ab-06ed-4451-9922-d919171078de',
                'vdisks':[
                   {
                      'restore_cinder_volume':True,
                      'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6',
                   }
                ]
             }
          ]
       }
    }
    • subnet ➡️ the subnet the port is assigned to. Contains the following information:

      • id ➡️ ID of the subnet the Neutron port is part of

    Restores

    List Restores

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail

    Lists Restores with details

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to fetch the Restores from

    Query Parameters

    • snapshot_id (string) ➡️ ID of the Snapshot to fetch the Restores from

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to run the authentication against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient
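    For illustration, the endpoint can be called with curl as sketched below; the token is assumed to be a valid Keystone token stored in the TOKEN shell variable, and the address, tenant, and project placeholders need to be replaced with real values:

    curl -X GET "https://<tvm_address>:8780/v1/<tenant_id>/restores/detail" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $TOKEN" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"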

    Get Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Provides all details about the specified Restore

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to fetch the restore from

    • restore_id (string) ➡️ ID of the restore to show

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to run authentication against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    Delete Restore

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Deletes the specified Restore

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to fetch the Restore from

    • restore_id (string) ➡️ ID of the Restore to be deleted

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to run authentication against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    Cancel Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel

    Cancels an ongoing Restore

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of the Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to fetch the Restore from

    • restore_id (string) ➡️ ID of the Restore to cancel

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to authenticate against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    One Click Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to do the restore in

    • snapshot_id (string) ➡️ ID of the snapshot to restore

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to authenticate against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Content-Type (string) ➡️ application/json

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    Body Format

    The One-Click restore requires a body to provide all necessary information in json format.
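    As a sketch, the One Click restore can be triggered with curl as follows, assuming the JSON body (see the One Click request example further below) has been saved to a file named restore.json; the token variable and all other values are placeholders:

    curl -X POST "https://<tvm_address>:8780/v1/<tenant_id>/snapshots/<snapshot_id>" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $TOKEN" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @restore.json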

    Selective Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information.

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to do the restore in

    • snapshot_id (string) ➡️ ID of the snapshot to restore

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to authenticate against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Content-Type (string) ➡️ application/json

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    Body Format

    The Selective restore requires a body to provide all necessary information in json format.

    Inplace Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    Path Parameters

    • tvm_address (string) ➡️ IP or FQDN of Trilio service

    • tenant_id (string) ➡️ ID of the Tenant/Project to do the restore in

    • snapshot_id (string) ➡️ ID of the snapshot to restore

    Headers

    • X-Auth-Project-Id (string) ➡️ Project to authenticate against

    • X-Auth-Token (string) ➡️ Authentication token to use

    • Content-Type (string) ➡️ application/json

    • Accept (string) ➡️ application/json

    • User-Agent (string) ➡️ python-workloadmgrclient

    Body Format

    The Inplace restore requires a body to provide all necessary information in json format.

    Example runbook for Disaster Recovery using NFS

    This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.

    The chosen scenario is following an actively used Trilio customer environment.

    Scenario

    There are two Openstack clouds available, "Openstack Cloud A" and "Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations are writing their Backups to NFS targets. "Trilio A" is writing to "NFS A1" and "Trilio B" is writing to "NFS B1". The NFS Volumes used are getting synced against another NFS Volume on the other side. "NFS A1" is syncing with "NFS B2" and "NFS B1" is syncing with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.


    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 11:28:43 GMT
    Content-Type: application/json
    Content-Length: 4308
    Connection: keep-alive
    X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463
    
    {
       "restores":[
          {
             "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
             "created_at":"2020-11-05T10:17:40.000000",
             "updated_at":"2020-11-05T10:17:40.000000",
             "finished_at":"2020-11-05T10:27:20.000000",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "status":"available",
             "restore_type":"restore",
             "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
                }
             ],
             "name":"OneClick Restore",
             "description":"-",
             "host":"TVM2",
             "size":2147483648,
             "uploaded_size":2147483648,
             "progress_percent":100,
             "progress_msg":"Restore from snapshot is complete",
             "warning_msg":null,
             "error_msg":null,
             "time_taken":580,
             "restore_options":{
                "name":"OneClick Restore",
                "oneclickrestore":true,
                "restore_type":"oneclick",
                "openstack":{
                   "instances":[
                      {
                         "name":"cirros-2",
                         "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                         "availability_zone":"nova"
                      },
                      {
                         "name":"cirros-1",
                         "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                         "availability_zone":"nova"
                      }
                   ]
                },
                "type":"openstack",
                "description":"-"
             },
             "metadata":[
                {
                   "created_at":"2020-11-05T10:27:20.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"91ab2495-1903-4d75-982b-08a4e480835b",
                   "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
                   "key":"data_transfer_time",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-05T10:27:20.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"e0e01eec-24e0-4abd-9b8c-19993a320e9f",
                   "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
                   "key":"object_store_transfer_time",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-05T10:27:20.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"eb909267-ba9b-41d1-8861-a9ec22d6fd84",
                   "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
                   "key":"restore_user_selected_value",
                   "value":"Oneclick Restore"
                }
             ]
          },
          {
             "id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
             "created_at":"2020-11-04T14:37:31.000000",
             "updated_at":"2020-11-04T14:37:31.000000",
             "finished_at":"2020-11-04T14:45:27.000000",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "status":"error",
             "restore_type":"restore",
             "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
                }
             ],
             "name":"OneClick Restore",
             "description":"-",
             "host":"TVM2",
             "size":2147483648,
             "uploaded_size":2147483648,
             "progress_percent":100,
             "progress_msg":"",
             "warning_msg":null,
             "error_msg":"Failed restoring snapshot: Error creating instance e271bd6e-f53e-4ebc-875a-5787cc4dddf7",
             "time_taken":476,
             "restore_options":{
                "name":"OneClick Restore",
                "oneclickrestore":true,
                "restore_type":"oneclick",
                "openstack":{
                   "instances":[
                      {
                         "name":"cirros-2",
                         "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                         "availability_zone":"nova"
                      },
                      {
                         "name":"cirros-1",
                         "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                         "availability_zone":"nova"
                      }
                   ]
                },
                "type":"openstack",
                "description":"-"
             },
             "metadata":[
                {
                   "created_at":"2020-11-04T14:45:27.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"be6dc7e2-1be2-476b-9338-aed986be3b55",
                   "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
                   "key":"data_transfer_time",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-04T14:45:27.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"2e4330b7-6389-4e21-b31b-2503b5441c3e",
                   "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
                   "key":"object_store_transfer_time",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-04T14:45:27.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"561c806b-e38a-496c-a8de-dfe96cb3e956",
                   "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
                   "key":"restore_user_selected_value",
                   "value":"Oneclick Restore"
                }
             ]
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:04:45 GMT
    Content-Type: application/json
    Content-Length: 2639
    Connection: keep-alive
    X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f
    
    {
       "restore":{
          "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
          "created_at":"2020-11-05T10:17:40.000000",
          "updated_at":"2020-11-05T10:17:40.000000",
          "finished_at":"2020-11-05T10:27:20.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "snapshot_details":{
             "created_at":"2020-11-04T13:58:37.000000",
             "updated_at":"2020-11-05T10:27:22.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "snapshot_type":"full",
             "display_name":"API taken 2",
             "display_description":"API taken description 2",
             "size":44171264,
             "restore_size":2147483648,
             "uploaded_size":44171264,
             "progress_percent":100,
             "progress_msg":"Creating Instance: cirros-2",
             "warning_msg":null,
             "error_msg":null,
             "host":"TVM1",
             "finished_at":"2020-11-04T14:06:03.000000",
             "data_deleted":false,
             "pinned":false,
             "time_taken":428,
             "vault_storage_id":null,
             "status":"available"
          },
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                   "production":"1"
                }
             },
             {
                "id":"b083bb70-e384-4107-b951-8e9e7bbac380",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                   "production":"1"
                }
             }
          ],
          "networks":[
             
          ],
          "subnets":[
             
          ],
          "routers":[
             
          ],
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             }
          ],
          "name":"OneClick Restore",
          "description":"-",
          "host":"TVM2",
          "size":2147483648,
          "uploaded_size":2147483648,
          "progress_percent":100,
          "progress_msg":"Restore from snapshot is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":580,
          "restore_options":{
             "name":"OneClick Restore",
             "oneclickrestore":true,
             "restore_type":"oneclick",
             "openstack":{
                "instances":[
                   {
                      "name":"cirros-2",
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "availability_zone":"nova"
                   },
                   {
                      "name":"cirros-1",
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "availability_zone":"nova"
                   }
                ]
             },
             "type":"openstack",
             "description":"-"
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:21:07 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 15:13:30 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:30:56 GMT
    Content-Type: application/json
    Content-Length: 992
    Connection: keep-alive
    X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae
    
    {
       "restore":{
          "id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
          "created_at":"2020-11-05T14:30:56.048656",
          "updated_at":"2020-11-05T14:30:56.048656",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             }
          ],
          "name":"One Click Restore",
          "description":"One Click Restore",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{
                
             },
             "restore_type":"oneclick"
          },
          "metadata":[
             
          ]
       }
    }
    {
       "restore":{
          "options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{},
             "restore_type":"oneclick"
          },
          "name":"One Click Restore",
          "description":"One Click Restore"
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 09:53:31 GMT
    Content-Type: application/json
    Content-Length: 1713
    Connection: keep-alive
    X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7
    
    {
       "restore":{
          "id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
          "created_at":"2020-11-09T09:53:31.037588",
          "updated_at":"2020-11-09T09:53:31.037588",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             }
          ],
          "name":"API",
          "description":"API Created",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                "instances":[
                   {
                      "vdisks":[
                         {
                            "new_volume_type":"iscsi",
                            "id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                            "availability_zone":"nova"
                         }
                      ],
                      "name":"cirros-1-selective",
                      "availability_zone":"nova",
                      "nics":[
                         
                      ],
                      "flavor":{
                         "vcpus":1,
                         "disk":1,
                         "swap":"",
                         "ram":512,
                         "ephemeral":0,
                         "id":"1"
                      },
                      "include":true,
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070"
                   },
                   {
                      "include":false,
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
                   }
                ],
                "restore_topology":false,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                            "name":"internal"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          },
          "metadata":[
             
          ]
       }
    }
    {
       "restore":{
        "name":"<restore name>",
        "description":"<restore description>",
    	  "options":{
             "openstack":{
                "instances":[
                   {
                      "name":"<new name of instance>",
                      "include":<true/false>,
                      "id":"<original id of instance to be restored>"
    				  "availability_zone":"<availability zone>",
    				  "vdisks":[
                         {
                            "id":"<original ID of Volume>",
                            "new_volume_type":"<new volume type>",
                            "availability_zone":"<Volume availability zone>"
                         }
                      ],
                      "nics":[
                         {
                             'mac_address':'<mac address of the pre-created port>',
                             'ip_address':'<IP of the pre-created port>',
                             'id':'<ID of the pre-created port>',
                             'network':{
                                'subnet':{
                                   'id':'<ID of the subnet of the pre-created port>'
                                },
                              'id':'<ID of the network of the pre-created port>'
                              }
                          }
                       ],
                      "flavor":{
                         "vcpus":<Integer>,
                         "disk":<Integer>,
                         "swap":<Integer>,
                         "ram":<Integer>,
                         "ephemeral":<Integer>,
                         "id":<Integer>
                      }
                   }
                ],
                "restore_topology":<true/false>,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"<ID of the original Subnet ID>"
                            },
                            "id":"<ID of the original Network ID>"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"<ID of the target Subnet ID>"
                            },
                            "id":"<ID of the target Network ID>",
                            "name":"<name of the target network>"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          }
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 12:53:03 GMT
    Content-Type: application/json
    Content-Length: 1341
    Connection: keep-alive
    X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743
    
    {
       "restore":{
          "id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
          "created_at":"2020-11-09T12:53:02.726757",
          "updated_at":"2020-11-09T12:53:02.726757",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             }
          ],
          "name":"API",
          "description":"API description",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
                         }
                      ]
                   },
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
                         }
                      ]
                   }
                ]
             }
          },
          "metadata":[
             
          ]
       }
    }
    {
       "restore":{
          "name":"<restore-name>",
          "description":"<restore-description>",
          "options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":<Boolean>,
                      "include":<Boolean>,
                      "id":"<ID of the instance the volumes are attached to>",
                      "vdisks":[
                         {
                            "restore_cinder_volume":<boolean>,
                            "id":"<ID of the Volume to restore>"
                         }
                      ]
                   }
                ]
             }
          }
       }
    }
    Disaster Recovery Scenario

    This scenario will cover the Disaster Recovery of a single Workload and of a complete Cloud. All processes are done by the Openstack administrator.

    Prerequisites for the Disaster Recovery process

    This runbook will assume that the following is true:

    • "Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license

    • "Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs

    • "Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side

    • Access to a user with the admin role permissions on domain level

    • One of the Openstack clouds is down/lost

    For ease of writing, this runbook will assume from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".

    In the case of shared Tenant networks, the following additional requirement applies beyond the floating IP: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.

    Disaster Recovery of a single Workload

    In this scenario a Disaster Recovery can be done for a single Workload while both clouds are still active. To do so, the following high-level process needs to be followed:

    1. Copy the Workload directories to the configured NFS Volume

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    4. Restore the Workload

    5. Clean up

    Copy the Workload directories to the configured NFS Volume

    This process only shows how to get a Workload from "Openstack Cloud A" to "Openstack Cloud B". The vice versa process is similar.

    As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".

    Mount "NFS B2" Volume to a Trilio VM

    It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.

    Identify the Workload on the "NFS B2" Volume

    Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:

    In case the Workload ID is not known, the available metadata inside the Workload directories can be used to identify the correct Workload.

    Copy the Workload

    The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.

    Make the Mount-Paths available

    Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character in the NFS-Share path differs between the provided NFS-Share paths, a completely different hash value is generated.

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.

    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

    In the scenario of this runbook the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

    To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.

    Reassign the workload

    Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.

    Add admin-user to required domains and projects

    To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

    Discover orphaned Workloads from NFS-Storage of Target Cloud

    Each Trilio installation maintains a database of workloads that are known to that Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload that is accessible on the NFS-Share but not assigned to any existing project in the Cloud the Trilio installation is protecting.

    List available projects on Target Cloud in the Target Domain

    The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.

    List available users on the Target Cloud in the Target Project that have the right backup trustee role

    To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

    Reassign the workload to the target project

    Now that all information has been gathered, the workload can be reassigned to the target project.

    Verify the workload is available at the desired target_project

    After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.

    Restore the workload

    The reassigned workload can be restored using Horizon following the procedure described here.

    This runbook will continue on the CLI only path.

    Prepare the selective restore by getting the snapshot information

    To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

    List all Snapshots of the workload to restore to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot
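    A sketch of the CLI calls typically used for this step; the workload and snapshot IDs are placeholders, and depending on the installed workloadmgr client version additional output options may be needed to show the network and disk details:

    # workloadmgr snapshot-list --workload_id <workload_id>
    # workloadmgr snapshot-show <snapshot_id>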

    Prepare the selective restore by creating the restore.json file

    The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

    Run the selective restore

    To do the actual restore use the following command:
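    A typical invocation looks like the following sketch; the snapshot ID is a placeholder and restore.json is the file created in the previous step:

    # workloadmgr snapshot-selective-restore --display-name "DR restore" --filename restore.json <snapshot_id>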

    Verify the restore

    To verify the success of the restore from a Trilio perspective the restore status is checked.
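    For example, the restores of the used snapshot can be listed and the resulting restore inspected; both IDs are placeholders:

    # workloadmgr restore-list --snapshot_id <snapshot_id>
    # workloadmgr restore-show <restore_id>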

    Clean up

    After the Disaster Recovery Process has been successfully completed it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.

    Delete the workload

    Delete the workload that got restored.
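    A sketch of the corresponding CLI call, assuming the workload-delete command of the installed workloadmgr client is used; the workload ID is a placeholder:

    # workloadmgr workload-delete <workload_id>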

    Remove the database entry

    The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.

    To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.

    Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.

    This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
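    A sketch of reverting the role assignments that were granted for the Disaster Recovery process; user, domain, project, and role names are placeholders and mirror the openstack role add commands used earlier:

    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>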

    Disaster Recovery of a complete cloud

    This Scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery the following high-level process needs to be followed:

    1. Reconfigure the Target Trilio installation

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    4. Restore the Workload

    5. Reconfigure the Target Trilio installation back to the original one

    6. Clean up

    Reconfigure the Target Trilio installation

    Before the Disaster Recovery Process can start, it is necessary to make the backups that are to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.

    During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    Add NFS B2 to the Trilio Appliance Cluster

    To add NFS B2 to the Trilio Appliance cluster, Trilio can either be fully reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that comma-separated list. A space is not necessary, but can be added.

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service
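    A minimal sketch of the whole sequence on one Trilio Appliance. The configuration file path and the vault_storage_nfs_export option name are assumptions and should be verified against the installed release; the NFS paths are placeholders:

    # vi /etc/workloadmgr/workloadmgr.conf           <<< path assumed; adjust to the installation
    vault_storage_nfs_export = <NFS B1-IP>:/<B1-path>,<NFS B2-IP>:/<B2-path>
    # systemctl restart wlm-workloads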

    Add NFS B2 to the Trilio Datamovers

    Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

    To add NFS B2 to the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that comma-separated list. A space is not necessary, but can be added.

    Write and close the tvault-contego.conf

    Restart the tvault-contego service
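    The same pattern applies on every compute node for the Datamover. Again, the file location and the vault_storage_nfs_export option name are assumptions to be checked against the deployed tvault-contego.conf; the NFS paths are placeholders:

    # vi /etc/tvault-contego/tvault-contego.conf     <<< path assumed; adjust to the installation
    vault_storage_nfs_export = <NFS B1-IP>:/<B1-path>,<NFS B2-IP>:/<B2-path>
    # systemctl restart tvault-contego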

    Make the Mount-Paths available

    Trilio backups are using qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character in the NFS-Share path differs between the provided NFS-Share paths, a completely different hash value is generated.

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.

    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

    In the scenario of this runbook the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

    To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.

    Reassign the workload

    Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.

    Add admin-user to required domains and projects

    To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

    Discover orphaned Workloads from NFS-Storage of Target Cloud

    Each Trilio installation maintains a database of workloads that are known to that Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload that is accessible on the NFS-Share but not assigned to any existing project in the Cloud the Trilio installation is protecting.

    List available projects on Target Cloud in the Target Domain

    The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.

    List available users on the Target Cloud in the Target Project that have the right backup trustee role

    To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

    Reassign the workload to the target project

    Now that all information has been gathered, the workload can be reassigned to the target project.

    Verify the workload is available at the desired target_project

    After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.

    Restore the workload

    The reassigned workload can be restored using Horizon following the procedure described here.

    This runbook will continue on the CLI only path.

    Prepare the selective restore by getting the snapshot information

    To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

    List all Snapshots of the workload to restore to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot

    Prepare the selective restore by creating the restore.json file

    The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

    Run the selective restore

    To do the actual restore use the following command:

    Verify the restore

    To verify the success of the restore from a Trilio perspective the restore status is checked.

    Reconfigure the Target Trilio installation back to the original one

    After the Disaster Recovery Process has finished it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.

    During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    Delete NFS B2 from the Trilio Appliance Cluster

    To remove NFS B2 from the Trilio Appliance cluster, Trilio can either be fully reconfigured to use only the original NFS Volume, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

    Delete NFS B2 from the comma-separated list

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service

    Delete NFS B2 from the Trilio Datamovers

    Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

    To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

    Delete NFS B2 from the comma-separated list.

    Write and close the tvault-contego.conf

    Restart the tvault-contego service

    Clean up

    After the Disaster Recovery Process has been successfully completed and the Trilio installation reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.

    Remove the database entry

    The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.

    To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.

    Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.

    This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.

    # mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt
    workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    /…/workload_<id>/workload_db <<< Contains User ID and Project ID of Workload owner
    /…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected be the Workload
    # cp -R /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
    cluster_size: 65536
    
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
    # echo -n 10.10.2.20:/upstream_source | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    
    # echo -n 10.20.3.22:/upstream_target | base64
    MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
    # mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    # mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    # vi /etc/fstab
    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl    none    bind    0 0
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
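When several orphaned workloads have to move to the same target project and user, the reassignment command can simply be repeated once per workload ID. A minimal sketch follows, reusing the example project, user, and workload IDs from the listings above; substitute your own values:

    # Reassign a list of workloads to one target project/user by calling the
    # documented command once per workload ID. The IDs below are the example
    # values from the listings above and must be replaced with your own.
    TARGET_PROJECT_ID="4f2a91274ce9491481db795dcb10b04f"
    TARGET_USER_ID="3e05cac47338425d827193ba374749cc"

    for WORKLOAD_ID in \
        6639525d-736a-40c5-8133-5caaddaaa8e9 \
        904e72f7-27bb-4235-9b31-13a636eb9c95; do
        workloadmgr workload-reassign-workloads \
            --new_tenant_id "${TARGET_PROJECT_ID}" \
            --user_id "${TARGET_USER_ID}" \
            --workload_ids "${WORKLOAD_ID}" \
            --migrate_cloud True
    done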
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
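For illustration only, here is the template filled in with the instance, volume, and flavor values reported by the snapshot-show outputs for snapshot 7e39e544-537d-4417-853d-11463e7396f9 earlier in this section, set up as a network topology restore (empty nics and networks lists, restore_topology True). The flavor ID ('1') and the new volume type ('ceph') are hypothetical placeholders that must exist in the target cloud. Save the content as restore.json and pass it to the selective restore command shown above.

    {
       u'description':u'Selective restore of Test-Linux-1',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'Test-Linux-1 restore',
       u'openstack':{
          u'instances':[
             {
                u'name':u'Test-Linux-1-restored',
                u'availability_zone':u'nova',
                u'nics':[],
                u'vdisks':[
                   {
                      u'id':u'0027b140-a427-46cb-9ccf-7895c7624493',
                      u'new_volume_type':u'ceph',
                      u'availability_zone':u'nova'
                   }
                ],
                u'flavor':{
                   u'ram':512,
                   u'ephemeral':0,
                   u'vcpus':1,
                   u'swap':u'0',
                   u'disk':1,
                   u'id':u'1'
                },
                u'include':True,
                u'id':u'38b620f1-24ae-41d7-b0ab-85ffc2d7958b'
             }
          ],
          u'restore_topology':True,
          u'networks_mapping':{
             u'networks':[]
          }
       }
    }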
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr workload-delete <workload_id>
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    
    # vi /etc/workloadmgr/workloadmgr.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>,<NFS-IP/NFS-FQDN>:/<VOL-2-Path>
    # systemctl restart wlm-workloads
    # vi /etc/tvault-contego/tvault-contego.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    # systemctl restart tvault-contego
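For instance, with two purely illustrative exports 10.20.3.22:/NFS_B1 and 10.20.3.22:/NFS_B2 standing in for the placeholders above, the multi-export line would read:

    vault_storage_nfs_export = 10.20.3.22:/NFS_B1,10.20.3.22:/NFS_B2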
    # qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
    cluster_size: 65536
    
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
    # echo -n 10.10.2.20:/NFS_A1 | base64
    MTAuMTAuMi4yMDovTkZTX0Ex
    
    # echo -n 10.20.3.22:/NFS_B2 | base64
    MTAuMjAuMy4yMjovTkZTX0Iy
    # mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovTkZTX0Ex
    # mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovTkZTX0Iy/ /var/triliovault-mounts/MTAuMTAuMi4yMDovTkZTX0Ex
    # vi /etc/fstab
    /var/triliovault-mounts/MTAuMjAuMy4yMjovTkZTX0Iy/	/var/triliovault-mounts/MTAuMTAuMi4yMDovTkZTX0Ex	none	bind	0 0
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
    # vi /etc/workloadmgr/workloadmgr.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    # systemctl restart wlm-workloads
    # vi /etc/tvault-contego/tvault-contego.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>
    # systemctl restart tvault-contego
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
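Optionally, as a final sanity check that is not part of the documented procedure, list the remaining role assignments of the admin user used for the migration to confirm that the temporary roles have been removed; the placeholders follow the commands above:

    # Optional verification that the temporary role assignments were removed
    openstack role assignment list --user <my_admin_user> --user-domain <admin_domain> --names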