Trilio for OpenStack Architecture

Backup-as-a-Service

Trilio is a Data Protection service providing Backup-as-a-Service.

Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery functions for tenant workloads. Trilio is very similar to other OpenStack services, including nova, cinder, and glance, and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.

Main Components

Trilio has four main software components:

  1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.

  2. Trilio API is a python module that is installed on all OpenStack controller nodes where the nova-api service is running.

  3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

  4. Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

Service Endpoints

Trilio is both a provider and a consumer in the OpenStack ecosystem. It uses other OpenStack services such as nova, cinder, glance, neutron, and keystone, and provides its own service to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either the public or internal URLs of services. Likewise, Trilio provides its own public, internal, and admin URLs.

Network Topology

This figure represents a typical network topology. Trilio exposes its public URL endpoint on the public network, and Trilio virtual appliances and data movers typically use either the internal network or a dedicated backup network for storing and retrieving backup images from the backup store.

About Trilio for OpenStack

Trilio, by TrilioData, is a native OpenStack service that provides policy-based comprehensive backup and recovery for OpenStack workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments, including NFS and AWS S3 compatible storage. With Trilio and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection and integrity.

With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload, consisting of compute resources, network configurations and storage data, as one unit. It also takes incremental backups that capture only the changes made since the last backup. Incremental snapshots save time and storage space, as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as below:

[Figures: Trilio architecture overview, service endpoints overview, example network topology]

  • Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume, and the incremental backups only include the blocks of data changed since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.

  • Faster and reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just the click of a button.

  • Easy migration of workloads between clouds. Trilio captures all the details of your application, hence our migration includes your entire application stack without leaving anything to guesswork.

  • Lower Total Cost of Ownership through policy and automation. Our tenant-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

Uninstalling from Canonical OpenStack

Warning:

Trilio does not yet provide the JuJu Charms to deploy Trilio 4.1 in Canonical Openstack. At the time of release the JuJu Charms are not yet updated to Trilio 4.1. We will update this page once the Charms are available.

Reinitialize Trilio

The Trilio Appliance can be reinitialized, which will delete all workload related values from the Trilio database.

To reinitialize the Trilio Appliance do:

• Log in to the Trilio Dashboard

• Click on "admin" in the upper right corner to open the submenu

• Choose "Reinitialize"

• Verify that you want to reinitialize the Trilio Appliance

    Upgrade OpenStack

    Reset the Trilio GUI password

In case the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.

    To reset the password to its default do the following:

    [root@TVM1 ~]# source /home/stack/myansible/bin/activate
    (myansible) [root@TVM1 ~]# cd /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator
    (myansible) [root@TVM1 tvault_configurator]# python recreate_conf.py
    (myansible) [root@TVM1 tvault_configurator]# systemctl restart tvault-config

    The dashboard login will be reset to:

    Username: admin
    Password: password

    Clean up Trilio database

    The Trilio database is following an older OpenStack database schema. This schema is not forgetting anything and is instead setting the deleted flag for any data that is no longer valid/deleted.

    This leads to ever-growing databases, which can lead to performance degradation.

    To counter this a database clean-up and optimization tool has been made available.

    Set Trilio GUI login banner

To configure the banner shown upon accessing the Trilio Appliance GUI, do the following:

1. Log in to the Trilio Appliance console

2. Edit the banner.yaml at /etc/tvault-config/banner.yaml

3. Restart the tvault-config service

The content of the banner.yaml looks as follows and can be edited as required:
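The keys below match the banner.yaml fragment used by the appliance; the values are illustrative placeholders and can be replaced as needed:

```yaml
header: "Trilio Appliance"          # banner headline text (placeholder)
header_color: blue
body_text: "Authorized use only."   # banner body text (placeholder)
body_text_color: "#DC143C"
header_font_size: 25px
body_text_font_size: 22px
```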

Rebasing existing workloads

The Trilio solution uses qcow2 backing files to provide full synthetic backups.

Especially when an NFS backup target is used, there are scenarios in which this backing file needs to be updated to a new mount path.

To make this process easier and streamlined, Trilio provides the following rebase tool.

Installing Trilio Components

Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:

1. Install the Trilio dm-api service on the control plane

2. Install the Trilio datamover service on the compute plane

Preparing the installation

It is recommended to consider the following elements prior to the installation of Trilio for Openstack.

Tenant Quotas

Trilio uses Cinder snapshots for calculating full and incremental backups. For full backups, Trilio creates Cinder snapshots for all the volumes in the backup job. It then leaves these Cinder snapshots behind for calculating the incremental backup image during the next backup. During an incremental backup operation it creates new Cinder snapshots, calculates the changed blocks between the new snapshots and the old snapshots that were left behind during the full/previous backups, then deletes the old snapshots but leaves the newly created snapshots behind.

It is therefore important that each tenant using Trilio backup functionality has sufficient Cinder snapshot quota to accommodate these additional snapshots. The guideline is to add 2 snapshots to the volume snapshot quota of a tenant for every volume that is added to backups. You may also increase the volume quota for the tenant by the same amount, because Trilio briefly creates a volume from a snapshot to read data from the snapshot for backup purposes.

During a restore process, Trilio creates additional instances and Cinder volumes. To accommodate restore operations, a tenant should have sufficient quota for Nova instances and Cinder volumes. Otherwise restore operations will result in failures.
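Following this guideline, the required quota headroom can be sized per tenant. A minimal sketch; the volume count and project name are placeholders, and the `openstack quota set` command is shown for reference only:

```shell
# Number of volumes included in this tenant's backup jobs (placeholder value)
VOLUMES_IN_BACKUPS=25
# Guideline: +2 snapshots and +1 temporary volume per backed-up volume
EXTRA_SNAPSHOTS=$((2 * VOLUMES_IN_BACKUPS))
EXTRA_VOLUMES=$((VOLUMES_IN_BACKUPS))
echo "raise snapshot quota by ${EXTRA_SNAPSHOTS}, volume quota by ${EXTRA_VOLUMES}"
# Apply as absolute values on top of the current quota, e.g.:
# openstack quota set --snapshots <current+50> --volumes <current+25> <project>
```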

Trilio Appliance Dashboard

The Trilio Appliance Dashboard gives an overview of the running services and their status inside the cluster. The dashboard can be accessed using the virtual IP.

Note:

If service status panels on the dashboard page are not visible, then access the virtual IP on port 3001 (https://<T4O-VIP>:3001/) and accept the SSL exception, and then refresh the dashboard page.

Uninstall Trilio

The uninstallation of Trilio depends on the Openstack Distribution it is installed in. The high-level process is the same for all Distributions:

1. Uninstall the Horizon Plugin or the Trilio Horizon container

2. Uninstall the datamover-api container

Using the workloadmgr CLI tool on the Trilio Appliance

To use the workloadmgr CLI tool on the Trilio appliance it is only necessary to activate the virtual environment of the workloadmgr.

Note:

An rc-file to authenticate against Openstack is required.
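A minimal session sketch; the virtual environment path matches the one used elsewhere in this guide, and the rc-file name and `workload-list` subcommand are examples:

```shell
# Path of the workloadmgr virtual environment on the appliance (from this guide)
WLM_VENV=/home/stack/myansible
# Activate it when present (the guard keeps the snippet safe on other hosts)
[ -f "$WLM_VENV/bin/activate" ] && . "$WLM_VENV/bin/activate"
# With an Openstack rc-file loaded, the CLI can then be used, e.g.:
# . overcloudrc
# workloadmgr workload-list
```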

Download Trilio logs

It is possible to download the Trilio logs directly through the Trilio web gui.

To download logs through the Trilio web gui:

3. Install the Trilio Horizon plugin into the Horizon service

How these steps look in detail depends on the Openstack distribution Trilio is installed in. Each supported Openstack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from beginning to end.

AWS S3 eventual consistency

The AWS S3 object consistency model includes:

1. Read-after-write

2. Read-after-update

3. Read-after-delete

Each of them describes how an object reaches its consistent state after it is created, updated or deleted. None of them provides strong consistency, and there is a lag time for an object to reach its consistent state. Though Trilio employs mechanisms to work around the limitations of the eventual consistency of AWS S3, when an object reaches its consistent state is not deterministic. There is no official statement from AWS on how long it takes for an object to reach its consistent state. However, read-after-write reaches consistency in a shorter time than the other IO patterns, and our solution is designed to maximize the read-after-write IO pattern. The time in which an object reaches eventual consistency also depends on the AWS region. For example, the aws-standard region does not have a strong consistency model compared to us-east or us-west. We suggest using these regions when creating S3 buckets for Trilio. Though the read-after-update IO pattern is hard to avoid completely, we employ ample delays in accessing objects to accommodate larger durations for objects to reach their consistent state. However, on rare occasions backups may still fail and need to be restarted.

Trilio Cluster

Trilio can be deployed as a single node or a three node cluster. It is highly recommended to deploy Trilio as a three node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP for the cluster, which is required for both single node and three node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and for registering the Trilio service endpoint in the keystone service catalog.

3. Uninstall the datamover

4. Delete the Trilio Appliance Cluster

  • Log in to the Trilio web gui
  • Go to "Logs"

  • Choose the log to be downloaded

    • Each log for every Trilio Appliance can be downloaded separately

    • or a zip of all logfiles can be created and downloaded

Note:

This will download the current log files. Already rotated logs need to be downloaded through SSH from the Trilio appliance directly. All logs, including rotated old logs, can be found at:

/var/log/workloadmgr/

Change the Trilio GUI password

To change the Trilio GUI password do:

• Log in to the Trilio Dashboard

• Click on "admin" in the upper right corner to open the submenu

• Choose "Reset Password"

• Set the new Trilio password

Advanced Ceph configurations

Ceph is the most common open source solution to provide block storage through OpenStack Cinder.

Ceph is a very flexible solution. The possibilities of Ceph require additional steps to the Trilio solution.


    Additions for multiple CEPH configurations

    It is possible to configure Cinder to have multiple configurations and keyrings for CEPH.

    In this case, the Trilio Datamover file needs to be extended with the CEPH information.

    For Trilio to be able to work in such an environment it is required to put copies of each of these configurations and keyrings into a separate directory, which is then made known to the Trilio Datamover inside a [ceph] block in the tvault-contego.conf.

    A tvault-contego.conf file with the extended [ceph] block would look like this.

    [DEFAULT]
    
    vault_storage_type = nfs
    vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
    vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none
    
    
    vault_data_directory_old = /var/triliovault
    vault_data_directory = /var/trilio/triliovault-mounts
    log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
    debug = False
    verbose = True
    max_uploads_pending = 3
    max_commit_pending = 3
    
    dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//
    
    [dmapi_database]
    connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi
    
    
    [libvirt]
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = volumes
    
    [ceph]
    keyring_ext = .volumes.keyring
    ceph_dir = /etc/ceph/directory1/,/etc/ceph/directory2/
    
    [contego_sys_admin]
    helper_command = sudo /usr/bin/privsep-helper
    
    
    [conductor]
    use_local = True
    
    [oslo_messaging_rabbit]
    ssl = false
    
    [cinder]
    http_retries = 10
    

For each Trilio Appliance the dashboard shows the status of the following Trilio services:

    • wlm-workloads

    • wlm-scheduler

    • wlm-api

    • wlm-cron

Note:

The wlm-cron service runs on only one Trilio appliance at all times. It is not an error that it is shown as inactive on the other nodes.

To give administrators an overview of the HA status, the dashboard also shows the service status for:

    • Pacemaker

    • RabbitMQ

    • MySQL Galera Cluster

TrilioVault Upgrade Upon RHOSP Cloud Upgrade

For example: RHOSP16.1 to RHOSP16.2 upgrade

1] Finish the upgrade process for the RHOSP cloud

Without any changes to TrilioVault, the customer needs to complete the RHOSP upgrade process first.

2] Upgrade TrilioVault to match the new RHOSP release

2.1] Identify the correct TrilioVault release for the new RHOSP release

Check if the current TrilioVault release deployed on the RHOSP cloud supports the new RHOSP release. If it does not, identify the next TrilioVault release which supports this new RHOSP release.

2.2] Install TrilioVault using the devops code compatible with the new RHOSP release

Install the TrilioVault release identified in step 2.1. Refer to the TrilioVault installation guide for RHOSP and get the correct deployment document for the TrilioVault release from the Trilio support team.

Follow all the steps.

2.3] Done

E-Mail Notifications

Definition

Trilio can notify users via E-Mail upon the completion of backup and restore jobs.

The E-Mail will be sent to the owner of the Workload.

Requirements to activate E-Mail Notifications

To use the E-Mail notifications, two requirements need to be met.

Warning:

Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.

User E-Mail assigned

As the E-Mail is sent to the owner of the Workload, the Openstack user who created the workload requires an E-Mail address to be associated with their user.

Trilio E-Mail Server configured

Trilio needs to know which E-Mail server to use to send the E-Mail notifications. Backup Administrators can configure this in the "Backup Admin" area.

Activate/Deactivate the E-Mail Notifications

E-Mail notifications are activated tenant-wide. To activate the E-Mail notification feature for a tenant follow these steps:

1. Login to Horizon

2. Navigate to the Backups

3. Navigate to Settings

Example E-Mails

The following screenshots show example E-Mails sent by Trilio.

    Migrating encrypted Workloads

Same cloud - different owner

Migration within the same cloud to a different owner:

Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B
Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B

    Steps used:

    1. Create a secret for Project A in Domain A via User A.

    2. Create encrypted workload in Project A in Domain A via User A. Take snapshot.

    3. Reassign workload to new owner

4. Load the rc file of User A and provide read-only rights through an ACL to the new owner:

      openstack acl user add --user <userB_id> <secret_href> --insecure

Different cloud

Migration between clouds:

Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B

    Steps used:

    1. Create a secret for Project A in Domain A via User A.

    2. Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.

    3. Reassign workload to Cloud B - Domain B — Project B — User B

Reconfigure the Trilio Cluster

The Trilio appliance can be reconfigured at any time to adjust the Trilio cluster to any changes in the Openstack environment or the general backup solution.

To reconfigure the Trilio Cluster go to the "Configure" page. The configure page shows the current configuration of the Trilio cluster.

    The configuration page also gives access to the ansible playbooks of the last successful configuration.

    To start the reconfiguration of the Trilio Cluster click "Reconfigure" at the end of the table.

    Follow the Configuring Trilio guide afterwards.

Note:

Once the Trilio configurator has started, it needs to run through successfully to continue to use Trilio.

The cluster will not roll back to its last working state in case of any errors.

Note:

When the reconfiguration is required to switch to an external database, it is necessary to reinitialize the Trilio appliance and configure it from scratch.

Requirements

Trilio has four main software components:

1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.

2. Trilio API is a python module that is an extension to the nova api service. This module is installed on all OpenStack controller nodes.

Additions for multiple Ceph users

It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types, or to have the nova boot volumes and cinder block volumes controlled by different users.

In the case of multiple Ceph users, it is required to adapt the keyring extension in the [ceph] block of the tvault-contego.conf.

The following example will try all files with the extension .keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task:
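A sketch of such a [ceph] block; the keyring_ext option appears in the tvault-contego.conf example earlier in this guide, and the generic .keyring value is the assumption for this multi-user scenario:

```ini
[ceph]
# Try every file ending in .keyring inside /etc/ceph
keyring_ext = .keyring
```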

Switch NFS Backing file

Trilio uses a base64 hash for the mount point of NFS backup targets. This hash makes sure that multiple NFS shares can be used with the same Trilio installation.

This base64 hash is part of the Trilio incremental backups as an absolute path of the backing files. This requires the usage of mount bind during a DR scenario or a quick migration scenario.

In the case that there is time for a thorough migration, there is another option: change the backing file and make the Trilio backups available on a different NFS share by updating the backing file to the new NFS share mount point.
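The mount point can be derived from the NFS export string. A minimal sketch, using the example share from the tvault-contego.conf shown in this guide (the base path matches vault_data_directory):

```shell
NFS_SHARE="192.168.1.34:/mnt/tvault/tvm5"
# The mount directory name is the base64 encoding of the export string
HASH=$(printf '%s' "$NFS_SHARE" | base64)
echo "/var/trilio/triliovault-mounts/${HASH}"
```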


    Frequently Asked Questions

    Frequently Asked Questions about Trilio for OpenStack

1. Can Trilio for OpenStack restore instance UUIDs?

Answer: NO

Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an In-place Restore, where we only recover the data without creating new instances.

    When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.

Apply the Trilio license

After the Trilio VM has been configured and all components are installed, the license can be applied.

The license can be applied either through the admin tab in Horizon or through the CLI.

Apply the license through Horizon

To apply the license through Horizon follow these steps:

Upgrade Trilio

Starting with Trilio for Openstack 4.0, Trilio for Openstack allows in-place upgrades.

The following versions can be upgraded to each other:

Old version → New version

General Troubleshooting Tips

Troubleshooting inside a complex environment like Openstack can be very time-consuming. The following tips will help to speed up the troubleshooting process and identify root causes.

What is happening where

Openstack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where an error is happening, allowing more focused troubleshooting.

Disaster Recovery

Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.

As long as the Trilio Workloads exist on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.

    Disaster Recovery Process

Managing Trusts

Trilio is using the Keystone trust mechanism, which enables the Trilio service user to act in the name of another Openstack user.

This system is used during all backup and restore features.

Warning:

Openstack Administrators should never have the need to directly work with the trusts created.

3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

4. Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

System requirements Trilio Appliance

Warning:

The Trilio Appliance is not supported as an instance inside Openstack.

The Trilio Appliance gets delivered as a qcow2 image, which gets attached to a virtual machine.

Trilio supports KVM-based hypervisors on x86 architectures with the following properties:

Software: Supported version
libvirt: 2.0.0 and above
QEMU: 2.0.0 and above
qemu-img: 2.6.0 and above

The recommended size of the VM for the Trilio Appliance is:

Note:

When running Trilio in production, a 3-node cluster of the Trilio appliance is recommended for high availability and load balancing.

Resource: Value
vCPU: 8
RAM: 24 GB

    The qcow2 image itself defines the 40GB disk size of the VM.

Note:

In the rare case of the Trilio Appliance database or log files getting larger than the 40GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the Trilio Appliance.

Software Requirements

In addition to the Trilio Appliance, Trilio contains components which are installed directly into Openstack itself.

Each Openstack distribution comes with a set of supported operating systems. Please check the support matrix to see which Openstack Distribution is supported with which Operating System.

Additionally, it is necessary to have the nfs-common packages installed on the compute nodes in case the NFS protocol is used for the backup target.

OpenStack Barbican for encryption

Since Trilio for OpenStack 4.2, T4O is capable of providing encrypted backups.

The Trilio for OpenStack solution leverages the OpenStack Barbican service to provide encryption capabilities for its backups.

To be precise, T4O uses secrets provided by Barbican to encrypt and decrypt backups. Barbican is therefore required to utilize this feature.

Trilio cluster

The Trilio Cluster is the controller of Trilio. It receives all Workload related requests from the users.

Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.

During a backup process

During a backup process the Trilio cluster is also responsible for gathering the metadata about the backed-up VMs and networks from the Openstack environment. It sends API calls towards the Openstack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Cluster writes it as json files on the Backup Target.

The Trilio cluster also sends the Cinder Snapshot command.

During a restore process

During a restore process the Trilio cluster reads the VM metadata from its database and uses the metadata to create the shell for the restore. It sends API calls to the Openstack environment to create the necessary resources.

dmapi

The dmapi service is the connector between the Trilio cluster and the datamover running on the compute nodes.

The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova api database, requesting the compute host of a provided VM.

Once the compute host has been identified, the dmapi forwards the command from the Trilio Cluster to the datamover running on the identified compute host.

datamover

The datamover is the Trilio service running on the compute nodes.

Each datamover is responsible for the VMs running on top of its compute node. A datamover can not work with VMs running on a different compute node.

The datamover controls the freeze and thaw of VMs as well as the actual movement of the data.

Everything on the Backup Target is happening as user nova

Trilio is reading and writing on the Backup Target as nova:nova.

The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise backups or restores may fail with permission or file-not-found issues.

Alternative ways to achieve the goal are possible, as long as all required nodes can fully write and read as nova:nova on the Backup Target.

It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
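A quick check sketch, to be run on the appliance and every compute node; the mount path matches vault_data_directory from this guide, and the guards keep the snippet safe on hosts without the nova user:

```shell
# Compare this output across all nodes; uid and gid must be identical everywhere
NOVA_IDS=$(id nova 2>/dev/null || echo "nova user not present on this host")
echo "$NOVA_IDS"
# Verify that nova can actually write on the mounted backup target:
# sudo -u nova touch /var/trilio/triliovault-mounts/.rw_check \
#   && sudo -u nova rm /var/trilio/triliovault-mounts/.rw_check
```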

Input/Output Error with Cohesity NFS

If an Input/Output error is observed on Cohesity NFS, increase the timeo and retrans parameter values in your NFS options.
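For example, based on the vault_storage_nfs_options line used elsewhere in this guide; the timeo and retrans values below are illustrative starting points, not Cohesity-validated settings:

```ini
vault_storage_nfs_options = nolock,soft,timeo=600,retrans=5,intr,lookupcache=none
```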

Timeout receiving packet error in multipath iscsi environment

Log in to all datamover containers and add uxsock_timeout with a value of 60000 (which equals 60 sec) inside /etc/multipath.conf. Then restart the datamover container.
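The setting goes into the defaults section of /etc/multipath.conf; a minimal sketch, leaving any other existing settings in the file untouched:

```ini
defaults {
    # 60000 ms = 60 s timeout for the multipathd unix socket
    uxsock_timeout 60000
}
```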

Trilio Trustee Role

Trilio uses RBAC to allow the usage of Trilio features to users.

This trustee role is absolutely required and can not be overwritten using the admin role.

It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during the creation of Workloads, backups or restores.

Openstack Quotas

Trilio creates Cinder Snapshots and temporary Cinder Volumes. The Openstack Quotas need to allow that.

Every disk that is backed up requires one temporary Cinder Volume.

Every Cinder Volume that is backed up requires two Cinder Snapshots. The second Cinder Snapshot is temporary, used to calculate the incremental.

Trilio Configurator

Once Trilio is configured, use the virtual IP to access its dashboard. If service status panels on the dashboard page are not visible, then access the virtual IP on port 3001 (https://<T4O-VIP>:3001/) and accept the SSL exception, and then refresh the dashboard page.

4. Check/Uncheck the box for "Enable Email Alerts"

[Screenshots: notification E-Mail for a successful Snapshot, a failed Snapshot, and a successful Restore]

4.1 GA (4.1.94) → 4.1 latest Hotfix

4.1 Hotfix → 4.1 latest Hotfix

4.1 GA (4.1.94) → 4.2 GA (4.2.64)

4.1 Hotfix → 4.2 GA (4.2.64)

4.2 GA (4.2.64) → 4.3.0

4.2 Hotfix → 4.3.0

The upgrade process consists of upgrading the Trilio appliance and the Openstack components and is dependent on the underlying operating system.

    circle-exclamation

    The Upgrade of Trilio for Canonical Openstack is managed through the charms.

    hashtag
    Enable mount-bind for NFS

In T4O 4.2 and later releases, the calculation of the mount point has changed. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2 and later releases. Please follow this documentation to set up the mount bind for Canonical OpenStack.

Supported upgrade paths:

    • 4.0 GA (4.0.92) ➡️ 4.0 SP1 (4.0.115)

    • 4.0 GA (4.0.92) ➡️ 4.1 latest hotfix

    • 4.0 GA (4.0.92) ➡️ 4.2 GA (4.2.64)

    Load RC file of User B.
  • Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.

  • Create token via “openstack token issue --insecure”

  • Add the migrated workload's metadata to the new secret (provide the issued token as the X-Auth-Token header and the workload ID in the metadata, as shown below)

  • [DEFAULT]
    
    vault_storage_type = nfs
    vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
    vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none
    
    
    vault_data_directory_old = /var/triliovault
    vault_data_directory = /var/trilio/triliovault-mounts
    log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
    debug = False
    verbose = True
    max_uploads_pending = 3
    max_commit_pending = 3
    
    dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//
    
    [dmapi_database]
    connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi
    
    
    [libvirt]
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = volumes
    
    [ceph]
    keyring_ext = .keyring
    ceph_dir = /etc/ceph/
    
    [contego_sys_admin]
    helper_command = sudo /usr/bin/privsep-helper
    
    
    [conductor]
    use_local = True
    
    [oslo_messaging_rabbit]
    ssl = false
    
    [cinder]
    http_retries = 10

    Login to Horizon using admin user.

  • Click on Admin Tab.

  • Navigate to Backups-Admin

  • Navigate to Trilio

  • Navigate to License

  • Click "Update License"

  • Click "Choose File"

  • choose license-file on client system

  • click "Apply"

  • hashtag
    Apply license through CLI

    • <license_file> ➡️ path to the license file

    Install and Configure Trilio for the target cloud
  • Verify required mount-paths and create if necessary

  • Reassign Workloads

  • Notify users of Workloads being available

  • circle-info

This procedure is designed to be applicable to all Openstack installations using Trilio. It is to be used as a starting point to develop the exact Disaster Recovery process of a specific environment.

    circle-info

In case the workloads shall be restored instead of notifying the users, it is necessary to have a user in each Project that has the necessary privileges to restore.

    hashtag
    Mount-paths

Trilio incremental Snapshots use a backing file that points to the prior backup, which makes every Trilio incremental backup a synthetic full backup.

    Trilio is using qcow2 backing files for this feature:

As can be seen in the example, the backing file is an absolute path. It is therefore necessary that this path exists, so the backing files can be accessed.

Trilio uses the base64 algorithm for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The hash value is calculated from the provided NFS path.

When the path of the backing file is not available on the Trilio appliance and Compute nodes, restores of incremental backups will fail.

    The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.

Running the mount --bind command makes the necessary path available until the next reboot. If access to the path is required beyond a reboot, it is necessary to add an entry to /etc/fstab.

    The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

    Trusts can only be worked with via CLI

    hashtag
    List all trusts

    hashtag
    Show a trust

    • <trust_id> ➡️ ID of the trust to show

    hashtag
    Create a trust

    • <role_name> ➡️ Name of the role that the trust is created for

    • --is_cloud_trust {True,False} ➡️ Set to True if creating the cloud admin trust. When creating the cloud trust, use the same user and tenant that were used to configure Trilio and keep the admin role.

    hashtag
    Delete a trust

    • <trust_id> ➡️ ID of the trust to be deleted

    Openstack Keystone Trust systemarrow-up-right
    curl -i -X PUT \
       -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
       -H "Content-Type:application/json" \
       -d \
    '{
      "metadata": {
          "workload_id": "c13243a3-74c8-4f23-b3ac-771460d76130",
          "workload_name": "workload-c13243a3-74c8-4f23-b3ac-771460d76130"
        }
    }' \
     'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'
     
     
    curl -i -X GET \
       -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
     'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'
    workloadmgr license-create <license_file>
    qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 21M
    cluster_size: 65536
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    # echo -n 10.10.2.20:/upstream | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW0=
    #mount --bind <mount-path1> <mount-path2>
    #vi /etc/fstab
    <mount-path1> <mount-path2>	none bind	0 0
    workloadmgr trust-list
    workloadmgr trust-show <trust_id>
    workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>
    workloadmgr trust-delete <trust_id>
    Backing file change script

Trilio provides a shell script for changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.

    hashtag
    Downloading the shell script

    The Shell script is publicly available at:

    hashtag
    Pre-Requisites

    The following requirements need to be met before the change of the backing file can be attempted.

    • The Trilio Appliance has been reconfigured with the new NFS Share

      • Please check here for reconfiguring the Trilio Appliance

    • The Openstack environment has been reconfigured with the new NFS Share

      • Please check for Red Hat Openstack Platform

      • Please check for Canonical Openstack

      • Please check for Kolla Ansible Openstack

      • Please check for Ansible Openstack

    • The workloads are available on the new NFS Share

    • The workloads are owned by nova:nova user

    hashtag
    Usage

The shell script changes one workload at a time.

    triangle-exclamation

The shell script has to be run as the nova user; otherwise the owner of the backup files will change and the backup can no longer be used by Trilio.

    Run the following command:

    with

    • /var/triliovault-mounts/<base64>/ being the new NFS mount path

    • workload_<workload_id> being the workload to rebase

    hashtag
    Logging of the procedure

The shell script generates a log file at the following location:

The log file does not get overwritten when the script is run multiple times. Each run of the script appends to the existing log file.

    By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.

    hashtag
    2. Can Trilio for OpenStack restore MAC addresses?

    Answer: YES

    Trilio can restore the VMs MAC address, however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.

    In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.

    When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.

    By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.

    Example of Selective Restore with original MAC (and IP address):

1. In this example, we have taken a Trilio backup of a VM called prod-1.

2. The VM is deleted and we perform a Selective Restore of a VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu:

3. Trilio then restores the VM with the original MAC address:

4. If you left the option as "Choose next available IP address", it will assign a new MAC to the VM instead, as Neutron maps all MAC addresses to IP addresses on the Subnet - so logically a new IP will result in a new MAC address.

    Set network accessibility of Trilio GUI

By default, the Trilio GUI is available on all NICs on port 443.

To limit this to only one IP, the following steps need to be applied.

    hashtag
    Network Setup

The Trilio Appliance provides 4 VIPs by default.

    • A general VIP which can be used for everything

    • A public VIP for the public endpoint

    • An internal VIP for the internal endpoint

    • An admin VIP for the admin endpoint

If an additional VIP is required to restrict access to the Trilio Dashboard, the new VIP needs to be created as a new resource inside the PCS cluster.

    hashtag
    Nginx setup

When the new dashboard_ip has been created or decided, the next step is to set up the proxy forwarding inside Nginx, which will make the Trilio GUI available through port 8000.

    circle-exclamation

All of the following steps need to be done on all Trilio appliances of the cluster.

    1. Create new conf file at /etc/nginx/conf.d/tvault-dashboard.conf. Replace variables dashboard_ip and virtual_ip as configured or decided.

    2. edit /etc/nginx/nginx.conf and uncomment line #include /etc/nginx/conf.d/*.conf;

    hashtag
    Limit the access of the Dashboard

    The configured dashboard_ip will always end on the nginx service on port 8000 and will then be forwarded to the local dashboard service on port 443.

    This configuration limits the required access to the local dashboard service to the Trilio appliance cluster itself. All other connections on port 443 can be dropped.

    The following commands will set the required iptable rules.

    hashtag
    Verify the accessibility as required

At this point, the Trilio GUI is only reachable on the dashboard_ip on port 8000. Accessing the Trilio GUI through any other IP or on port 443 is not allowed.

    Schedulers

    hashtag
    Definition

    Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.

    A schedule is defined by:

    Please ask your Trilio Customer Success Manager or Engineer.
    This page will be updated once the script is publicly available.
    ./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
    /tmp/backing_file_update.log

    check nginx syntax: nginx -t

  • reload nginx conf: nginx -s reload

  • Verify that the new cluster resource is visible using the pcs resource command and by accessing the dashboard_ip.

  • pcs resource create dashboard_ip ocf:heartbeat:IPaddr2 ip=<new_vip> cidr_netmask=<netmask> nic=<new_nw_interface> op monitor interval=30s
    pcs constraint colocation add dashboard_ip virtual_ip
    server {
        listen <dashboard_ip>:8000 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:443;
        }
    }
    server {
        listen <dashboard_ip>:3001 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:3001;
        }
    }
    
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 443 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP
    iptables -A INPUT -p tcp --dport 443 -j DROP
    https://<dashboard_ip>:8000
    Status (Enabled/Disabled)
  • Start Day/Time

  • End Day

  • Hrs between 2 snapshots

  • hashtag
    Disable a schedule

    hashtag
    Using Horizon

    To disable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

    8. Uncheck "Enabled"

    9. Click "Update"

    hashtag
    Using CLI

    • --workloadid <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler shall be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
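A hedged example invocation, assuming the disable-scheduler subcommand of the workloadmgr CLI used throughout this guide; the workload IDs are placeholders:

```shell
workloadmgr disable-scheduler --workloadids <workloadid> --workloadids <workloadid>
```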

    hashtag
    Enable a schedule

    hashtag
    Using Horizon

To enable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

8. Check "Enabled"

    9. Click "Update"

    hashtag
    Using CLI

    • --workloadid <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler shall be enabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
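A hedged example invocation, assuming the enable-scheduler subcommand of the workloadmgr CLI used throughout this guide; the workload IDs are placeholders:

```shell
workloadmgr enable-scheduler --workloadids <workloadid> --workloadids <workloadid>
```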

    hashtag
    Modify a schedule

    To modify a schedule the workload itself needs to be modified.

    Please follow this procedure to modify the workload.

    hashtag
    Verify the scheduler trust is working

    Trilio is using the Openstack Keystone Trust systemarrow-up-right which enables the Trilio service user to act in the name of another Openstack user.

    This system is used during all backup and restore features.

    hashtag
    Using Horizon

As a trust is bound to a specific user, the Trilio Horizon plugin shows the status of the scheduler trust for each Workload on the Workload list page.

    Screenshot of an Workload with established scheduler trust

    hashtag
    Using CLI

    • <workload_id> ➡️ ID of the workload to validate

    Enabling T4O 4.1 or older backups when using NFS backup target

    Trilio for OpenStack is generating a base64 hash value for every NFS backup target connected to the T4O solution. This enables T4O to mount multiple NFS backup targets to the same T4O installation.

    The mountpoints are generated utilizing a hash value inside the mountpoint, providing a unique mount for every NFS backup target.

    This mountpoint is then used inside the incremental backups to point to the qcow2 backing files. The backing file path is required as a full path and can not be set as a relative path. This is a limitation of the qcow2 format.

    T4O 4.2 has changed how the hash value gets calculated. T4O 4.1 and prior calculated the hash value out of the complete NFS path provided as shown in the example below.

    T4O 4.2 is now only considering the NFS directory part for the hash value as shown below.
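Using the NFS share 10.10.2.20:/upstream from the earlier backing-file example, the two calculations can be compared in a minimal sketch:

```shell
# T4O 4.1 and prior: base64 over the complete NFS path
echo -n 10.10.2.20:/upstream | base64
# MTAuMTAuMi4yMDovdXBzdHJlYW0=

# T4O 4.2 onwards: base64 over the directory part only
echo -n /upstream | base64
# L3Vwc3RyZWFt
```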

It is therefore necessary to make older backups taken by T4O 4.1 or prior available for T4O 4.2 to restore.

    This can be done by one of two methods:

    1. Rebase all backups and change the backing file path

    2. Make the old mount point available again and point it to the new one using mount bind

    hashtag
    Rebase all backups

    circle-info

This method takes a significant amount of time, depending on the number of backups that need to be rebased. It is therefore recommended to determine the required time through a test workload.

Trilio provides a script that takes care of the rebase procedure. This script can be downloaded from the following location.

Copy and use the script from the Trilio appliance as the nova user.

    circle-exclamation

    The nova user is required, as this user owns the backup files, and using any other user will change the ownership and lead to unrestorable backups.

The script requires the complete path to the workload directory on the backup target.

    This needs to be repeated until all workloads have been rebased.
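A hypothetical loop to repeat the rebase for every workload on the new mount path; run it as the nova user, with <base64> standing for the new mountpoint hash and the script name taken from the earlier example:

```shell
for wl in /var/triliovault-mounts/<base64>/workload_*; do
    ./backing_file_update.sh "$wl"
done
```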

    hashtag
    Use mount bind with the old mount point

    circle-info

    This method is a temporary solution, which is to be kept in place until all workloads have gone through a complete retention cycle.

    Once all Workloads only contain backups created by T4O 4.2 it is no longer required to keep the mount bind active.

This method generates a second mount point based on the old hash value calculation and then mounts the new mount point to the old one. With this mount bind in place, both mount points are available and point to the same backup target.

    To use this method the following information needs to be available:

    • old mount path

    • new mount path

    Examples of how to calculate the base64 hash value are shown below.

    Old mountpoint hash value:

    New mountpoint hash value:
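As an illustration, assuming the NFS share 10.10.2.20:/upstream and the appliance base directory /var/triliovault-mounts (RHOSP, TripleO and Kolla Ansible use /var/trilio/triliovault-mounts instead), both mount paths can be derived like this:

```shell
# Old (4.1 and prior) hash: base64 of the complete NFS export string
old_hash=$(echo -n 10.10.2.20:/upstream | base64)
# New (4.2 onwards) hash: base64 of the directory part only
new_hash=$(echo -n /upstream | base64)

old_path="/var/triliovault-mounts/${old_hash}"
new_path="/var/triliovault-mounts/${new_hash}"

echo "$old_path"   # /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
echo "$new_path"   # /var/triliovault-mounts/L3Vwc3RyZWFt
```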

    The mount bind needs to be done for all Trilio appliances and Datamover services.

    hashtag
    Trilio Appliance

    To enable the mount bind on the Trilio appliance follow these steps:

1. Create the old mountpoint directory mkdir -p <old_mount_path>

    2. run mount --bind command mount --bind <new_mount_path> <old_mount_path>

    3. set permissions for mountpoint chmod 777 <old_mount_path>

    circle-info

    It is recommended to use df -h to identify the current mountpoint as RHOSP, TripleO and Kolla Ansible OpenStack are using a different path than Ansible OpenStack or Canonical OpenStack.

    An example is given below.
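An illustrative df -h excerpt (share name and sizes are examples only); on RHOSP, TripleO and Kolla Ansible OpenStack the mountpoint lives under /var/trilio/triliovault-mounts/, while Ansible and Canonical OpenStack use /var/triliovault-mounts/:

```
Filesystem            Size  Used Avail Use% Mounted on
10.10.2.20:/upstream  500G  100G  400G  20% /var/trilio/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
```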

    hashtag
    RHOSP/TripleO Datamover

    The following steps need to be done on the overcloud compute node. They don't need to be done inside any container.

To enable the mount bind for the RHOSP/TripleO Datamover follow these steps:

1. Create the old mountpoint directory mkdir -p <old_mount_path>

    2. run mount --bind command mount --bind <new_mount_path> <old_mount_path>

    3. set permissions for mountpoint chmod 777 <old_mount_path>

    hashtag
    Kolla Ansible OpenStack Datamover

The following steps need to be done on the compute node. They don't need to be done inside any container.

To enable the mount bind for the Kolla Ansible OpenStack Datamover follow these steps:

1. Create the old mountpoint directory mkdir -p <old_mount_path>

    2. run mount --bind command mount --bind <new_mount_path> <old_mount_path>

    3. set permissions for mountpoint chmod 777 <old_mount_path>

    hashtag
    Ansible OpenStack Datamover

The following steps need to be done on the compute node. They don't need to be done inside any container.

To enable the mount bind for the Ansible OpenStack Datamover follow these steps:

1. Create the old mountpoint directory mkdir -p <old_mount_path>

    2. run mount --bind command mount --bind <new_mount_path> <old_mount_path>

    3. set permissions for mountpoint chmod 777 <old_mount_path>

    hashtag
    Canonical Openstack WLM & Datamover containers

In Canonical OpenStack, the creation of the mountpoint and the mount bind are done through Juju using the following commands.

    To create the mountpoint, if it doesn't already exist:

    To create the mount bind

    Uninstalling from Ansible OpenStack

    hashtag
    Uninstall Trilio Services

    The Trilio Ansible OpenStack playbook can be run to uninstall the Trilio services.

    hashtag
    Destroy Trilio Datamover API container

    To cleanly remove the Trilio Datamover API container run the following Ansible playbook.

    hashtag
    Clean openstack_user_config.yml

    Remove the tvault-dmapi_hosts and tvault_compute_hosts entries from /etc/openstack_deploy/openstack_user_config.yml

    hashtag
    Remove Trilio haproxy settings in user_variables.yml

    Remove Trilio Datamover API settings from /etc/openstack_deploy/user_variables.yml

    hashtag
    Remove Trilio Datamover API inventory file

    hashtag
    Remove Trilio Datamover API service endpoints

    hashtag
    Delete Trilio Datamover API database and user

    • Go inside galera container.

    • Login as root user in mysql database engine.

    • Drop dmapi database.
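A hedged sketch of these steps; the container name and credentials depend on your deployment, and the dmapi user name is an assumption based on the dmapi database connection string used elsewhere in this guide:

```shell
# From inside the galera container, as the mysql root user
mysql -u root -p -e "DROP DATABASE dmapi; DROP USER 'dmapi'@'%';"
```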

    hashtag
    Remove dmapi rabbitmq user from rabbitmq container

    • Go inside rabbitmq container.

    • Delete dmapi user.

    • Delete dmapi vhost.
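A hedged sketch of these steps; the exact vhost name is deployment-specific and shown here as a placeholder:

```shell
# From inside the rabbitmq container
rabbitmqctl delete_user dmapi
rabbitmqctl delete_vhost <dmapi_vhost>
```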

    hashtag
    Clean haproxy

    Remove /etc/haproxy/conf.d/datamover_service file.

    Remove HAproxy configuration entry from /etc/haproxy/haproxy.cfg file.

    Restart the HAproxy service.

    hashtag
    Remove certificates from Compute nodes

    hashtag
    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrlioVault VM disk from KVM Host storage
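The steps above can be sketched with virsh; the VM name and disk path are placeholders:

```shell
virsh list --all                       # list all VMs on the KVM node
virsh destroy <trilio_vm_name>         # stop the Trilio VM
virsh undefine <trilio_vm_name>        # remove the VM definition
rm -f <path_to_trilio_vm_disk>.qcow2   # delete the VM disk from storage
```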

    Trilio network considerations

Trilio integrates natively with Openstack and communicates entirely through APIs using the Openstack Endpoints. Trilio also generates its own Openstack endpoints. In addition, the Trilio appliance and the compute nodes write to and read from the backup target. These points affect the network planning for the Trilio installation.

    hashtag
    Existing endpoints in Openstack

    Openstack knows 3 types of endpoints:

    • Public Endpoints

    • Internal Endpoints

    • Admin Endpoints

    Each of these endpoint types is designed for a specific purpose. Public endpoints are meant to be used by the Openstack end-users to work with Openstack. Internal endpoints are meant to be used by the Openstack services to communicate with each other. Admin endpoints are meant to be used by Openstack administrators.

Of those 3 endpoint types, only the admin endpoint sometimes contains APIs which are not available on any other endpoint type.

    To learn more about Openstack endpoints please visit the official Openstack documentation.

    hashtag
    Openstack endpoints required by Trilio

Trilio communicates with all services of Openstack on a defined endpoint type. Which endpoint type Trilio uses to communicate with Openstack is decided during the configuration of the Trilio appliance.

    triangle-exclamation

    There is one exception: The Trilio Appliance always requires access to the Keystone admin endpoint.

    The following network requirement can be identified this way:

    • Trilio appliance needs access to the Keystone admin endpoint on the admin endpoint network

    • Trilio appliance needs access to all endpoints of one type

    hashtag
    Recommendation: Provide access to all Openstack Endpoint types

    Trilio is recommending providing full access to all Openstack endpoints to the Trilio appliance to follow the Openstack standards and best practices.

    Trilio is generating its own endpoints as well. These endpoints are pointing towards the Trilio Appliance directly. This means that using those endpoints will not send the API calls towards the Openstack Controller nodes first, but directly to the Trilio VM.

    Following the Openstack standards and best practices, it is therefore recommended to put the Trilio endpoints on the same networks as the already existing Openstack endpoints. This allows to extend the purpose of each endpoint type to the Trilio service:

    • The public endpoint to be used by Openstack users when using Trilio CLI or API

    • The internal endpoint to communicate with the Openstack services

    • The admin endpoint to use the required admin only APIs of Keystone

    hashtag
    Backup target access required by Trilio

    The Trilio solution is using backup target storage to securely place the backup data. Trilio is dividing its backup data into two parts:

    1. Metadata

    2. Volume Disk Data

The first type of data is generated by the Trilio appliance through communication with the Openstack Endpoints. All metadata that is stored together with a backup is written by the Trilio Appliance to the backup target in JSON format.

The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the Volume Data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is responsible for the VMs running on its compute node.

The network requirements are therefore:

    • The Trilio appliance needs access to the backup target

    • Every compute node needs access to the backup target

    hashtag
    Example of a typical Trilio network integration

    Most Trilio customers are following the Openstack standards and best practices to have the public, internal, and admin endpoints on separate networks. They also typically don't have any network yet, which can access the desired backup target.

    The starting network configuration typically looks like this:

Following the Openstack standards and Trilio's recommendation, the Trilio Appliance is placed on all those 3 networks. Furthermore, the Trilio Appliance and the Compute nodes require access to the backup target, here achieved by adding a 4th network.

    The resulting network configuration would look like this:

It is of course possible to combine networks as necessary. As long as the required network access is available, Trilio will work.

    hashtag
    Other examples of Trilio network integrations

    Each Openstack installation is different and so is the network configuration. There are endless possibilities of how to configure the Openstack network and how to implement the Trilio appliance into this network. The following three examples have been seen in production:

    The first example is from a manufacturing company, which wanted to split the networks by function and decided to put the Trilio backup target on the internal network as the backup and recovery function was identified as an Openstack internal solution. This example looks complex but integrates Trilio just as recommended.

The second example is from a financial institute that wanted to be sure that the Openstack Users have no direct uncontrolled network access to the Openstack infrastructure. Following this example requires additional work, as the internal HA-Proxy needs to be configured to correctly translate the API calls towards the Trilio appliance.

    The third example is from a service company that was forced to treat Trilio as an external 3rd party solution, as we require a virtual machine running outside of Openstack. This kind of network configuration requires good planning on the Trilio endpoints and firewall rules.

    Multi-IP NFS Backup target mapping file configuration

    hashtag
    Introduction

    Filename and location:

This file is used only when the user wants to configure a multiple-IP/endpoint-based NFS share as a backup target for Trilio. In all other cases, like single-IP NFS or S3, this file is not used; follow the regular install documentation.

If the user is using a multiple-IP/endpoint-based NFS share as a backup target for triliovault, then triliovault mounts exactly one IP/endpoint of a given NFS share on a given compute node. Users can distribute the NFS share IPs/endpoints across compute nodes.

The 'triliovault_nfs_map_input.yml' file is a tool that users can use to distribute/load balance NFS share endpoints across the compute nodes in a given cloud.

    circle-info

Note: Mounting two IPs/endpoints of the same NFS share on a single compute node is an invalid scenario and not required, because on the backend the data is stored in the same place.

    hashtag
    Examples

    hashtag
    Using hostnames

    Here, the User has ‘one’ NFS share exposed with three IP addresses. 192.168.1.34, 192.168.1.35, 192.168.1.33 Share directory path is: /var/share1

    So, this NFS share supports the following full paths that clients can mount:

    There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern

    The remaining 2 node hostnames do not follow any format/pattern.

    Now the mapping file will look like this

    hashtag
    Using IPs

    Compute node IP range used here: 172.30.3.11-40 and 172.30.4.40, 172.30.4.50 Total of 32 compute nodes

    circle-info

    Other complex examples are available on github at

    hashtag
    Getting the correct compute hostnames/IPs

    hashtag
    RHOSP or TripleO

    Use the following command to get compute hostnames. Check the ‘Name' column. Use these exact hostnames in 'triliovault_nfs_map_input.yml' file.

    In the following command output, ‘overcloudtrain1-novacompute-0' and ‘overcloudtrain1-novacompute-1' are correct hostnames.

    circle-info

    Run this command on undercloud by sourcing 'stackrc'.
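A hedged example of such a command, run on the undercloud after sourcing 'stackrc'; the column selection is optional:

```shell
source stackrc
openstack server list -c Name -c Networks
```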

    hashtag
    Kolla-Ansible OpenStack

    Compute hostnames/IPs should match in kolla-ansible inventory file and triliovault_nfs_map_input.yml.

If IP addresses are used in the kolla-ansible inventory file then the same IP addresses should be used in the ‘triliovault_nfs_map_input.yml' file too. If the kolla-ansible deploy command looks like the following,

    kolla-ansible -i multinode deploy

    then the inventory file is 'multinode'. Generally, it is available at '/root/multinode'.
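    To pull the compute entries out of such an inventory for cross-checking against 'triliovault_nfs_map_input.yml', a small sed one-liner can help. The inventory fragment below is a made-up example, not a real '/root/multinode':

    ```shell
    # Hypothetical kolla-ansible inventory fragment with a [compute] group.
    cat > /tmp/multinode_example <<'EOF'
    [control]
    172.30.3.1

    [compute]
    172.30.3.11
    172.30.3.12
    EOF

    # List the members of the [compute] group; these names/IPs must match the
    # entries used in triliovault_nfs_map_input.yml.
    sed -n '/^\[compute\]/,/^\[/{/^\[/d;/^$/d;p}' /tmp/multinode_example
    ```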

    hashtag
    OpenStack Ansible

    Compute hostnames/IPs should match between the OpenStack Ansible inventory file and triliovault_nfs_map_input.yml.

    If IP addresses have been used in the OpenStack Ansible inventory file, then the same IP addresses should be used in the 'triliovault_nfs_map_input.yml' file too.

    Generally, the inventory file is available at /etc/openstack_deploy/openstack_user_config.yml

    The OpenStack Ansible deploy command looks like the following:

    openstack-ansible os-tvault-install.yml

    OpenStack Ansible automatically picks up the values that the user has set in the file openstack_user_config.yml.
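    As a sketch (hypothetical hostnames and IPs, not taken from a real deployment), the compute entries in openstack_user_config.yml typically look like the fragment below, and these are the values that must line up with 'triliovault_nfs_map_input.yml':

    ```yaml
    # Hypothetical fragment of /etc/openstack_deploy/openstack_user_config.yml.
    # The hostnames/IPs listed here must match the entries used in
    # triliovault_nfs_map_input.yml.
    compute_hosts:
      compute1:
        ip: 172.26.0.7
      compute2:
        ip: 172.26.0.8
    ```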

    File Search

    hashtag
    Definition

    The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

    hashtag
    Navigating to the file search tab in Horizon

    The file search tab is part of every workload overview. To reach it follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Configuring and starting a file search Horizon

    A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

    To run a file search the following elements need to be decided and configured

    hashtag
    Choose the VM the file search shall run against

    Under VM Name/ID, choose the VM that the search will run against. The drop-down menu provides a list of all VMs that are part of any Snapshot in the Workload.

    circle-info

    VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

    hashtag
    Set the File Path

    The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

    circle-exclamation

    The File Path has to start with a '/'

    circle-info

    Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'

    circle-info

    The file search does not descend into deeper directories; it always searches only the directory provided in the File Path.

    Example File Path for all files inside /etc: /etc/*
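    The non-recursive behavior can be illustrated with ordinary shell globbing, which follows the same one-level rule; the paths below are scratch files created only for the demonstration, not real system files:

    ```shell
    # Build a scratch tree mimicking /etc with one nested directory.
    mkdir -p /tmp/fs_demo/etc/ssh
    touch /tmp/fs_demo/etc/hosts /tmp/fs_demo/etc/ssh/sshd_config

    # A pattern like '/etc/*' matches only direct children of /etc:
    # 'hosts' and the 'ssh' directory itself, but not 'ssh/sshd_config'.
    ( cd /tmp/fs_demo && ls -d etc/* )
    ```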

    hashtag
    Define the Snapshots to search in

    "Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

    There are 3 possibilities for pre-filtering:

    1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

    2. Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

    3. Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.

    After the pre-filtering is done, all matching Snapshots are automatically preselected. Uncheck any Snapshot that shall not be searched.

    circle-info

    When no Snapshot is chosen the file search will not start.

    hashtag
    Start the File Search and retrieve the results in Horizon

    To start a File Search the following elements need to be set:

    • A VM to search in has to be chosen

    • A valid File Path provided

    • At least one Snapshot to search in selected

    Once those have been set click "Search" to start the file search.

    triangle-exclamation

    Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.

    After a short time the results will be presented in a tabular format, grouped by Snapshots and the Volumes inside each Snapshot.

    For each found file or folder the following information is provided:

    • POSIX permissions

    • Number of links pointing to the file or folder

    • User ID who owns the file or folder

    Once the Snapshot of interest has been identified, it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.

    hashtag
    Doing a CLI File Search

    • <vm_id> ➡️ ID of the VM to be searched

    • <file_path> ➡️ Path of the file to search for

    Shutdown/Restart the Trilio cluster

    To gracefully shutdown/restart the Trilio cluster the following steps are recommended.

    hashtag
    Verify no snapshots or restores are running

    It is recommended to verify that no snapshots or restores are running on the Trilio Cluster.

    triangle-exclamation

    Stopping or restarting the Trilio cluster will cancel all actively running backup or restore jobs. These jobs will be marked as errored after the system has come up again.

    This can be verified using the following two commands:

    hashtag
    Identify the master node for the VIP(s) and wlm-cron service

    The Trilio cluster uses the pacemaker service to set the VIP(s) of the cluster and to control the active node for the wlm-cron service. The identified node will be the last to shut down in case the whole cluster gets shut down.

    This can be checked using the following command:

    In the following example, the master node is tvm1.

    hashtag
    Shutdown/Restart of a single node in the cluster

    A single node in the cluster can be shut down or restarted without issues. All services will come up, and the RabbitMQ and Galera services will rejoin the remaining cluster.

    When the master node gets shut down or restarted, the VIP(s) and the wlm-cron service will switch to one of the remaining cluster nodes.

    hashtag
    Stop the services on the node

    To speed up the shutdown/restart process it is recommended to stop the Trilio services, the RabbitMQ service, and the MariaDB service on the node.

    circle-info

    The wlm-cron service and the VIP(s) are not stopped when only the master node gets rebooted or shut down. Pacemaker will automatically move the wlm-cron service and the VIP(s) to one of the remaining nodes.

    hashtag
    Shutdown/Restart the node

    After the services have been stopped the node can be restarted or shut down using standard Linux commands.

    hashtag
    Restarting the complete cluster node by node

    Restarting the whole cluster node by node follows the same procedure as restarting a single node, with the difference that each restarted node needs to be fully started again before the next node can be restarted.

    hashtag
    Shutdown/Restart the complete cluster as a whole

    When the complete cluster needs to be stopped and restarted at the same time, the following procedure needs to be completed.

    The procedure on a high level is:

    • Shutdown the two slave nodes

    • Shutdown the master node

    • Start the master node

    hashtag
    Shutdown the two slave nodes

    Before shutting down the two slave nodes it is recommended to stop the running Trilio services, the RabbitMQ server, and MariaDB on the nodes.

    Afterward, the nodes can be shut down.

    hashtag
    Shutdown the master node

    Before shutting down the master node it is recommended to stop the running Trilio services, the RabbitMQ server, MariaDB, and the wlm-cron and VIP(s) resources in Pacemaker.

    Afterward, the node can be shut down.

    hashtag
    Start the master node

    The first server that is booted will become the master node. It is highly recommended to boot the old master node first.

    circle-info

    Not booting the old master node first can lead to data loss when the Galera cluster is restarted.

    hashtag
    Enable the Galera cluster

    Log in to the freshly started master node and run the following command. This will restart the Galera cluster with this node as master.

    hashtag
    Start the slave nodes

    After the master node has been booted and the Galera cluster started, the remaining nodes can be started and will automatically rejoin the Trilio cluster.

    Change Certificates used by Trilio

    The following Trilio services provide certificates for secured access to the Trilio solution.

    Service
    Port used
    Description

    TVault-Config

    443

    Webservice providing the TrilioVault Dashboard

    Nginx (wlm-api)

    8780

    provides the VIP for wlm-api service

    hashtag
    Changing the certificate of TVault-Config and Nginx for Grafana Service

    The TVault-Config service and the Nginx resource for the Grafana Dashboard use the same certificate.

    The certificate used is a symlink to a host-specific certificate. By default, each Trilio VM has its own self-signed certificate, which gets recreated every time the TVault-Config service is restarted.

    To change the certificate for TVault-Config and Nginx (Grafana) to a customer-chosen certificate, it is required to deactivate the recreation of the certificates upon service restart.

    circle-info

    Trilio is planning to change this behavior to make it easier for customers to change the certificate in the future.

    1. Login into the Trilio VM via SSH

    2. Edit the following file: /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/tvault_config_bottle.py

    3. Look for create_ssl_certificates() in the main function

    The resulting main function will look like this:

    Afterward, the certificates can be replaced manually by overwriting the files.

    Once the certificates have been replaced by the desired ones restart the TVault-Config service and the Nginx pcs resource.

    hashtag
    Changing the certificate used by Nginx for wlm-api service

    The certificate provided by the Nginx for the wlm-api service is set during configuration when HTTPS endpoints are configured for the Trilio appliance. This certificate is provided to the end-user or Openstack every time an API call to the Trilio solution is sent.

    The certificate and its related private key can be changed through the OS API certificate tab.

    This tab contains the section Upload Server certificate | Private key. Use this section to update the wlm-api certificate as required.

    circle-info

    The certificate and its private key can also be changed through reconfiguration.

    Workload Import & Migration

    Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:

    • OpenStack User - The OpenStack User-ID is assigned to a Workload

    • OpenStack Project - The OpenStack Project-ID is assigned to a Workload

    Spinning up the Trilio VM

    Learn about spinning up the Trilio VM

    circle-exclamation

    For Canonical Openstack it is not necessary to spin up the Trilio VM. However, Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).

    The Trilio Appliance is delivered as a qcow2 image and runs as a VM on top of a KVM hypervisor.

    circle-info
    workloadmgr disable-scheduler --workloadids <workloadid>
    workloadmgr enable-scheduler --workloadids <workloadid>
    workloadmgr scheduler-trust-validate <workload_id>
    # echo -n 10.10.2.20:/Trilio_Backup | base64
    MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    # echo -n /Trilio_Backup | base64
    L1RyaWxpb19CYWNrdXA=
    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-tvault-install.yml --tags "tvault-all-uninstall"
    triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml
    Typical Openstack Network configuration before Trilio gets installed
    Typical Openstack network configuration with Trilio installed
    The split them all network example
    The no trust network example
    Trilio as third party component network example
    Drop dmapi user
    triliovault-cfg-scripts/common/examples-multi-ip-nfs-map at master · trilioData/triliovault-cfg-scriptsarrow-up-right
    Identify the workload a file search shall be done in
  • Click the workload name to enter the Workload overview

  • Click File Search to enter the file search tab

  • Group ID assigned to the file or folder
  • Actual size in Bytes of the file or folder

  • Time of creation

  • Time of last modification

  • Time of last access

  • Full path to the found file or folder

  • --snapshotids <snapshotid> ➡️ Search only in specified snapshot ids snapshot-id: include the instance with this UUID
  • --end_filter <end_filter> ➡️Displays last snapshots, example , last 10 snapshots, default 0 means displays all snapshots

  • --start_filter <start_filter> ➡️Displays snapshots starting from , example , snapshot starting from 5, default 0 means starts from first snapshot

  • --date_from <date_from> ➡️ From date in format 'YYYY-MM-DDTHH:MM:SS' eg 2016-10-10T00:00:00, If time isn't specified then it takes 00:00 by default

  • --date_to <date_to> ➡️ To date in format 'YYYY-MM-DDTHH:MM:SS'(defult is current day),Specify HH:MM:SS to get snapshots within same day inclusive/exclusive results for date_from and date_to

  • Enable the Galera cluster
  • Start the two slave nodes

  • Comment out create_ssl_certificates()

  • Repeat for all nodes of the Trilio cluster

  • Nginx (Grafana)

    3001

    VIP for the dashboard of the Grafana service running on the TrilioVault VM

    Upload Server certificate | Private key block
    OpenStack Cloud - The Trilio Service User-ID is assigned to a Workload
    circle-info

    OpenStack Users can update the User ownership of a Workload by modifying the Workload.

    This ownership ensures that only the owners of a Workload are able to work with it.

    OpenStack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.

    hashtag
    Import workloads

    circle-info

    The import performance has been improved in the Trilio for OpenStack 4.3 release

    As part of this design improvement, the import of data happens in parallel through optimal utilization of the available resources on the appliance. Additionally, the import of network and security group related tables does not happen during the import operation; it happens later, during the following operations:

    • Snapshot show in UI : When the user clicks the snapshot hyperlink in the Horizon UI.

    • Snapshot show in CLI : When the user triggers the snapshot-show command in the CLI.

    • Any restore : When the user triggers any of the restores, either from the UI or the CLI.

    Note: The above import of network & security group related data will happen only once, during the first run of any of the above operations.

    Workload import allows a user to import Workloads directly from the Backup Target into the Trilio database.

    circle-exclamation

    The Workload import is designed to import Workloads, which are owned by the OpenStack Cloud.

    It will not import or list any Workloads that are owned by a different cloud.

    To get a list of importable Workloads use the following CLI command:

    • --project_id <project_id> ➡️ List workloads belonging to the given project only.

    To import Workloads into the Trilio database use the following CLI command:

    • --workloadids <workloadid> ➡️ Specify workload ids to import only specified workloads. Repeat option for multiple workloads.

    For every workload-importworkloads command triggered, the system will generate a jobid, which will be displayed after successful workload-importworkloads command execution.

    The respective import process can then be tracked using this jobid with a CLI command provided starting with the T4O-4.3 release. The user can add all workloads to be imported in a single workload-importworkloads command, or trigger multiple such commands. For every command triggered, the system will generate one unique jobid.

    CLI details below:

    • --jobid <jobId> ➡️ The ID returned by workload-importworkloads CLI command.

    hashtag
    Orphaned Workloads

    The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any workload that is located on the Backup Target Storage but not known to the Trilio installation is considered orphaned.

    A further distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that are migrated from a different cloud.

    The following CLI command provides the list of orphaned workloads:

    • --migrate_cloud {True,False} ➡️ Set to True if you want to list workloads from other clouds as well. Default is False.

    • --generate_yaml {True,False} ➡️ Set to True to generate an output file in YAML format, which can further be used as input for the workload reassign API.

    circle-info

    Running this command against a Backup Target with many Workloads can take some time. Trilio reads the complete storage and verifies every found Workload against the Workloads known in the database.

    hashtag
    Reassigning Workloads

    OpenStack administrators are able to reassign a Workload to a new owner. This includes the possibility to migrate a Workload from one cloud to another or between projects.

    triangle-exclamation

    Reassigning a workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.

    Use the following CLI command to reassign a Workload:

    • --old_tenant_ids <old_tenant_id>➡️ Specify old tenant ids from which workloads need to reassign to new tenant. Specify multiple times to choose Workloads from multiple tenants.

    • --new_tenant_id <new_tenant_id> ➡️ Specify new tenant id to which workloads need to reassign from old tenant. Only one target tenant can be specified.

    • --workload_ids <workload_id>➡️ Specify workload_ids which need to be reassigned to the new tenant. If not provided, all the workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.

    • --user_id <user_id>➡️ Specify the user id to which workloads need to be reassigned from the old tenant. Only one target user can be specified.

    • --migrate_cloud {True,False}➡️ Set to True to reassign workloads from other clouds as well. Default is False.

    • --map_file➡️ Provide the file path (relative or absolute), including the file name, of the reassign map file. Provide a list of old workloads mapped to new tenants. The format for this file is YAML.

    A sample mapping file with explanations is shown below:

    This guide shows the tested way to spin up the Trilio Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards.

    hashtag
    Creating the cloud-init image

    The Trilio appliance utilizes cloud-init to provide the initial network and user configuration.

    Cloud-init reads its information either from a metadata server or from a provided CD image. Trilio utilizes the CD image.

    hashtag
    Needed tools

    To create the cloud-init image it is required to have genisoimage available.

    hashtag
    Providing the Metadata

    Cloud-init uses two files for its metadata.

    The first file is called meta-data and contains the information about the network configuration. Below is an example of this file.

    triangle-exclamation

    Keep the hostname localhost. The hostname gets changed through the configuration step. Changing the hostname will lead to the tvault-config service not starting properly, blocking further configuration.

    circle-exclamation

    The instance-id has to match the VM name in virsh

    The second file is called user-data and contains small scripts and information to set up, for example, the user passwords. Below is an example of this file.

    hashtag
    Creating the image file

    circle-info

    Both files, meta-data and user-data, are needed to create a working cloud-init image, even when one of them is empty.

    The image is created using genisoimage, following this general command:

    genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data>

    An example of this command is shown below.

    hashtag
    Spinning up the Trilio appliance

    circle-info

    The Trilio Appliance qcow2 image can be downloaded from the Trilio customer portalarrow-up-right. Please contact your Trilio sales or technical lead to get access to the portal.

    After the cloud-init image has been created, the Trilio appliance can be spun up on the desired KVM server.

    Extract the Trilio QCOW2 tar file using the following command:

    See below an example command showing how to spin up the Trilio appliance using virsh and the created ISO image.

    circle-info

    It is of course possible to spin up the Trilio appliance without a cloud-init ISO image. It will then spin up with default values.

    hashtag
    Uninstalling cloud-init after first boot

    Once the Trilio appliance is up and running with its initial configuration, it is recommended to uninstall cloud-init.

    circle-info

    If cloud-init is not uninstalled, it will rerun the network configuration upon every boot, setting the network configuration back to DHCP if no metadata is provided.

    To uninstall cloud-init, follow the example below.

    hashtag
    [RHOSP, TripleO and Kolla only] Verify nova UID/GID for nova user on the Appliance

    Red Hat OpenStack, TripleO, and Kolla Ansible OpenStack use the nova UID/GID of 42436 inside their containers, instead of 162:162 which is the standard in other OpenStack environments.

    Please verify that the nova UID/GID on the Trilio Appliance is still 42436.

    In case the UID/GID has been changed back to 162:162, follow these steps to set it back to 42436:42436.

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that the nova user and group id have changed to '42436'

    hashtag
    Updating the appliance to the latest minor version

    It is recommended to directly update the Trilio appliance to the latest version.

    To do so follow the minor update guide provided here:

    • Online update Trilio Appliance

    • Offline update Trilio Appliance

    cd /opt/openstack-ansible/playbooks
    openstack-ansible lxc-containers-destroy.yml --limit "DMAPI CONTAINER_NAME"
    #tvault-dmapi
    tvault-dmapi_hosts:
      infra-1:
        ip: 172.26.0.3
      infra-2:
        ip: 172.26.0.4
        
    #tvault-datamover
    tvault_compute_hosts:
      infra-1:
        ip: 172.26.0.7
      infra-2:
        ip: 172.26.0.8
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    rm /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
     source cloudadmin.rc
     openstack endpoint delete "internal datamover service endpoint_id"
     openstack endpoint delete "public datamover service endpoint_id"
     openstack endpoint delete "admin datamover service endpoint_id"
    lxc-attach -n "GALERA CONTAINER NAME"
    mysql -u root -p "root password"
    DROP DATABASE dmapi;
    DROP USER dmapi;
    lxc-attach -n "RABBITMQ CONTAINER NAME"
    rabbitmqctl delete_user dmapi
    rabbitmqctl delete_vhost /dmapi
    rm  /etc/haproxy/conf.d/datamover_service
    frontend datamover_service-front-1
        bind ussuriubuntu.triliodata.demo:8784 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        option httplog
        option forwardfor except 127.0.0.0/8
        reqadd X-Forwarded-Proto:\ https
        mode http
        default_backend datamover_service-back
    
    frontend datamover_service-front-2
        bind 172.26.1.2:8784
        option httplog
        option forwardfor except 127.0.0.0/8
        mode http
        default_backend datamover_service-back
    
    
    backend datamover_service-back
        mode http
        balance leastconn
        stick store-request src
        stick-table type ip size 256k expire 30m
        option forwardfor
        option httplog
        option httpchk GET / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck
    
    
        server controller_dmapi_container-bf17d5b3 172.26.1.75:8784 check port 8784 inter 12000 rise 1 fall 1
    systemctl restart haproxy
    rm -rf /opt/config-certs/rabbitmq
    rm -rf /opt/config-certs/s3
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    192.168.1.33:/var/share1 
    192.168.1.34:/var/share1 
    192.168.1.35:/var/share1
    prod-compute-1.trilio.demo 
    prod-compute-2.trilio.demo 
    prod-compute-3.trilio.demo 
    . 
    . 
    . 
    prod-compute-30.trilio.demo
    compute_bare.trilio.demo 
    compute_virtual
    multi_ip_nfs_shares: 
     - "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo'] 
       "192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual'] 
       "192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo'] 
    
    single_ip_nfs_shares: []
    multi_ip_nfs_shares: 
     - "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40'] 
       "192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50'] 
       "192.168.1.33:/var/share1": ['172.30.3.[31:40]'] 
    
    single_ip_nfs_shares: []
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    workloadmgr filepath-search [--snapshotids <snapshotid>]
                                [--end_filter <end_filter>]
                                [--start_filter <start_filter>]
                                [--date_from <date_from>]
                                [--date_to <date_to>]
                                <vm_id> <file_path>
    workloadmgr snapshot-list --all=True
    workloadmgr restore-list
    pcs status
    pcs status
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: tvm3 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Thu Aug 26 12:10:32 2021
    Last change: Thu Aug 26 08:02:51 2021 by root via crm_resource on tvm1
    
    3 nodes configured
    8 resource instances configured
    
    Online: [ tvm1 tvm2 tvm3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started tvm1
     wlm-cron       (systemd:wlm-cron):     Started tvm1
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ tvm1 ]
         Stopped: [ tvm2 tvm3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    reboot
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    pcs resource stop wlm-cron
    pcs resource stop lb_nginx-clone
    shutdown
    galera_new_cluster
    [root@TVM1 ssl]# cd /etc/tvault/ssl/
    [root@TVM1 ssl]# ls -lisa server*
     577678 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.crt -> TVM1.crt
     577672 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.key -> TVM1.key
    1178820 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.pem -> TVM1.pem
    def main():
        # configure the networking
        #create_ssl_certificates()
    
        http_thread = Thread(target=main_http)
        http_thread.daemon = True  # thread dies with the program
        http_thread.start()
    
        bottle.debug(True)
        srv = SSLWSGIRefServer(host='::', port=443)
        bottle.run(server=srv, app=app, quiet=False, reloader=False)
    [root@TVM1 ~]# systemctl restart tvault-config
    [root@TVM1 ~]# pcs resource restart lb_nginx-clone
    lb_nginx-clone successfully restarted
    workloadmgr workload-get-importworkloads-list [--project_id <project_id>]
    workloadmgr workload-importworkloads [--workloadids <workloadid>]
    workloadmgr workload-get-importworkloads-progres --jobid <jobId>
    workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
                                                     [--generate_yaml {True,False}]
    workloadmgr workload-reassign-workloads
                                            [--old_tenant_ids <old_tenant_id>]
                                            [--new_tenant_id <new_tenant_id>]
                                            [--workload_ids <workload_id>]
                                            [--user_id <user_id>]
                                            [--migrate_cloud {True,False}]
                                            [--map_file <map_file>]
    reassign_mappings:
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    #For RHEL and centos
    yum install genisoimage
    #For Ubuntu 
    apt-get install genisoimage
    [root@kvm]# cat meta-data
    instance-id: triliovault
    network-interfaces: |
       auto eth0
       iface eth0 inet static
       address 158.69.170.20
       netmask 255.255.255.0
       gateway 158.69.170.30
    
       dns-nameservers 11.11.0.51
    local-hostname: localhost
    [root@kvm]# cat user-data
    #cloud-config
    chpasswd:
      list: |
        root:password1
        stack:password2
      expire: False
    genisoimage  -output tvault-firstboot-config.iso -volid cidata -joliet -rock user-data meta-data
    tar Jxvf TrilioVault_file.tar.xz
    virt-install -n triliovault-vm  --memory 24576 --vcpus 8 \
    --os-type linux \
    --disk tvault-appliance-os-3.0.154.qcow2,device=disk,bus=virtio,size=40 \
    --network bridge=virbr0,model=virtio \
    --network bridge=virbr1,model=virtio \
    --graphics none \
    --import \
    --disk path=tvault-firstboot-config.iso,device=cdrom
    sudo yum remove cloud-init
    [root@TVM1 ~]# id nova
    uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
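A quick way to check this on the appliance is a small shell test (a sketch; it only reads the local passwd database and reports the result):

```shell
# Verify that the nova user carries the UID expected on RHOSP/TripleO/Kolla (42436).
if [ "$(id -u nova 2>/dev/null)" = "42436" ]; then
    echo "nova UID OK"
else
    echo "nova UID mismatch or nova user missing"
fi
```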

    Installing on Canonical OpenStack

    Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.

    Those JuJu Charms are publicly available as Open Source Charms.

    circle-exclamation

    Trilio provides the JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack from the Yoga release onwards only. The JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack up to the Wallaby release are developed and maintained by Canonical.

    circle-check

    Canonical OpenStack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.

    The documentation of the charms can be found here:

    hashtag
    Juju charms for OpenStack Yoga release onwards

    hashtag
    Juju charms for other supported OpenStack releases up to Wallaby

    Prerequisite

    Have a Canonical OpenStack base setup deployed for a required release like Jammy Zed/Yoga, Focal Yoga/Wallaby/Victoria/Ussuri, or Bionic Ussuri/Queens.

    Steps to install the Trilio charms

    1. Export the OpenStack base bundle

    2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.

    Some sample Trilio overlay bundles can be found in the Sample Trilio overlay bundles section below.

    circle-info

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    triangle-exclamation

    Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).

    3. If file search functionality is required, provision any additional node(s) that will be required for deploying the Trilio Workload manager (trilio-wlm) as a VM instead of lxd container(s).

    4. Commission the additional node from MAAS UI.

    5. Do a dry run to check if the Trilio bundle is working

    6. Do the deployment

    7. Wait till all the Trilio units are deployed successfully. Check the status via the juju status command.

    8. Once the deployment is complete, perform the below operations:

    1. Create cloud admin trust

    2. Add license

    Note: Reach out to the Trilio support team for the license file.

    For multipath enabled environments, perform the following actions

    1. log into each nova compute node

    2. add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf

    3. restart tvault-contego service

    Sample Trilio overlay bundles

    circle-info

    For bionic-queens, the openstack-origin parameter value for the trilio-dm-api charm must be cloud:bionic-train

    circle-info

    For the AWS S3 storage backend, we need to use `` as S3 end-point URL.

    A few Sample overlay bundles for different OpenStack versions can be found .

    Uninstalling from RHOSP

    hashtag
    Clean Trilio Datamover API service

    The following steps need to be run on all nodes that have the Trilio Datamover API service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamoverApi.

    Once the role that runs the Trilio Datamover API service has been identified, the following commands will clean the nodes of the service.

    circle-exclamation

    Run all commands as root or user with sudo permissions.

    Stop trilio_dmapi container.

    Remove trilio_dmapi container.

    Clean Trilio Datamover API service conf directory.

    Clean Trilio Datamover API service log directory.

    hashtag
    Clean Trilio Datamover Service

    The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.

    Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes of the service.

    circle-exclamation

    Run all commands as root or user with sudo permissions.

    Stop trilio_datamover container.

    Remove trilio_datamover container.

    Unmount Trilio Backup Target on compute host.

    Clean Trilio Datamover service conf directory.

    Clean log directory of Trilio Datamover service.

    hashtag
    Clean Trilio haproxy resources

    The following steps need to be run on all nodes that have the haproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.

    Once the role that runs the haproxy service has been identified, the following commands will clean the nodes of all Trilio resources.

    circle-exclamation

    Run all commands as root or user with sudo permissions.

    Edit the following file inside the haproxy container and remove all Trilio entries.

    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

    An example of these entries is given below.

    Restart the haproxy container once all edits have been done.

    hashtag
    Clean Trilio Keystone resources

    Trilio registers services and users in Keystone. Those need to be unregistered and deleted.

    hashtag
    Clean Trilio database resources

    Trilio creates a database for the dmapi service. This database needs to be cleaned.

    Log into the database cluster

    Run the following SQL statements to clean the database.

    hashtag
    Revert overcloud deploy command

    Remove the following entries from roles_data.yaml used in the overcloud deploy command.

    • OS::TripleO::Services::TrilioDatamoverApi

    • OS::TripleO::Services::TrilioDatamover

    circle-info

    If the overcloud deploy command used prior to the deployment of Trilio is still available, it can be used directly.

    Follow these steps to clean the overcloud deploy command from all Trilio entries.

    1. Remove trilio_env.yaml entry

    2. Remove the Trilio endpoint map file. Replace it with the original map file if one existed.

    hashtag
    Revert back to original RHOSP Horizon container

    Run the cleaned overcloud deploy command.

    hashtag
    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrilioVault VM disk from the KVM host storage

    Upgrade Trilio Appliance

    circle-info

    The Trilio appliance of T4O 4.2 is running a different Kernel than the Trilio appliance for T4O 4.1 or older.

    When upgrading from T4O 4.1 or older it is recommended to replace the Trilio appliance entirely and not do an in-place upgrade. This document provides both online/offline upgrades of Trilio Appliance from any of the older T4O 4.2-based releases to the latest T4O-4.3 release.

    hashtag
    Generic Prerequisites

    circle-info

    The prerequisites should already be fulfilled from upgrading the Trilio components on the Controller and Compute nodes.

    • Please ensure to complete the upgrade of all the Trilio components on the OpenStack controller & compute nodes before starting the rolling upgrade of Trilio.

    • The mentioned Gemfury repository should be accessible from the Trilio VM.

    • Either 4.2 GA or any hotfix patch against 4.2 should already be deployed before performing the upgrades mentioned in the current document.

    hashtag
    Deactivating the wlm-cron service

    The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.

    hashtag
    Steps to upgrade Trilio having Internet access

    1. Download the hf_upgrade.sh script on all T4O nodes where the upgrade is to be done.

    You can check the usage of this script .

    2. Run the following command to download and install the upgraded packages

    3. Run the following command to enable the wlm-cron service

    hashtag
    Steps to upgrade Trilio having No Internet access

    Here the packages need to be downloaded on a separate host which has internet access. The script ./hf_upgrade.sh can be used with the below-mentioned option to download the required packages.

    1. Download the hf_upgrade.sh script and use the script to download the upgraded packages. You can check the usage of this script .

    2. Copy the downloaded 4.3-offlinePkgs.tar.gz package and hf_upgrade.sh script to all the Trilio nodes.

    hashtag
    Post upgrade steps

    1. Verify all wlm services on all Trilio nodes

    2. Make sure all wlm services are up and running with Python version 3.8.x

    3. Check the mount point using the "df -h" command
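Step 3 can be scripted as a one-liner that lists the triliovault mount(s) or prints a notice when nothing is mounted (a sketch; it assumes the default triliovault mount path naming):

```shell
# List triliovault mounts, or report that none are present.
df -h | grep -i triliovault || echo "no triliovault mount found"
```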

    Additional check for wlm-cron on the primary node

    hashtag
    [RHOSP, TripleO and Kolla only] Verify nova UID/GID for nova user on the Appliance

    Red Hat OpenStack, TripleO, and Kolla Ansible OpenStack use the nova UID/GID of 42436 inside their containers instead of 162:162, which is the standard in other OpenStack environments.

    Please verify that the nova UID/GID on the Trilio Appliance is still 42436. To do so, follow the Update nova UID/GID on the appliance document provided here:

    Workload Encryption with Barbican

    Learn about encrypting Trilio workloads with Barbican

    hashtag
    Introduction

    Integrating Trilio for OpenStack 4.2 with Barbican facilitates encryption support for the qcow2-data segment of Trilio backups. However, the JSON files housing the backed-up OpenStack metadata remain unencrypted.

    This capability necessitates the presence of the OpenStack Barbican service. Absence of the Barbican service will result in the omission of encryption options for Workloads within the Horizon interface.

    Trilio for OpenStack (T4O) 4.2 exclusively retrieves secrets from Barbican, without generating, modifying, or deleting any secrets within the Barbican platform.

    Barbican secrets are indispensable for executing backups or restorations in encryption-enabled Workloads. The onus lies on the OpenStack project user to supply these secrets and guarantee their accurate availability.

    In order to employ encrypted Workloads, the Trilio trustee role must be capable of interacting with the Barbican service to access and retrieve secrets from Barbican. By default, only the 'admin' and 'creator' roles are endowed with these permissions.

    Encryption availability for a Workload is confined to the Workload creation stage. Post-creation, the encryption status of a Workload is irreversible; it cannot be altered or toggled.

    circle-info

    Note : When leveraging OpenStack Barbican for protecting encrypted volumes and offering encrypted backups, it's essential that the Trustee Role is assigned as 'Creator' or a role that possesses equivalent permissions to the Creator role.

    This is crucial because only the Creator role has the authority to create, read, and delete secrets within Barbican. The generation of encryption-enabled workloads would be unsuccessful if the Trustee Role does not possess the permissions associated with the 'Creator' role.

    hashtag
    The following secret configurations are supported for AES-256

    | Mode(s) | Content Types | Payload Content Encoding | Secret Type | Payload/Secret File |
    | --- | --- | --- | --- | --- |
    | cbc, ctr, xts | text/plain | None | opaque | plaintext |
    | cbc, xts | text/plain | None | passphrase | plaintext |
    | ctr, cbc, xts | application/octet-stream | base64 | symmetric keys | encoded with base64 |

    hashtag
    By default Barbican will generate secrets of the following type:

    • Algorithm: AES-256 (All supported types utilize this algorithm)

    • Mode: cbc

    • content type: application/octet-stream

    hashtag
    Prerequisite

    For encrypted workloads, Barbican should be enabled on OpenStack; then, while configuring T4O-4.2, the trustee role should be kept as creator.

    Additionally, every user who will be interacting with TrilioVault for any operation should also have the creator role assigned.

    hashtag
    1. Secret creation

    A secret can be created via the OpenStack CLI by following the below steps

    1. Source the rc file of the user who is going to create the encrypted workload.

    2. Create any supported type of secret and fetch the secret UUID using OpenStack secret cli.

      1. Refer to this guide for more information about managing Barbican secrets.

    ()[root@overcloudtrain1-controller-0 /]# openstack secret order create --name secret2 --algorithm aes --mode ctr --bit-length 256 --payload-content-type=application/octet-stream key
    +----------------+-------------------------------------------------------------------------------------------+
    | Field          | Value                                                                                     |
    +----------------+-------------------------------------------------------------------------------------------+
    | Order href     | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d |
    | Type           | Key                                                                                       |
    | Container href | N/A                                                                                       |
    | Secret href    | None                                                                                      |
    | Created        | None                                                                                      |
    | Status         | None                                                                                      |
    | Error code     | None                                                                                      |
    | Error message  | None                                                                                      |
    +----------------+-------------------------------------------------------------------------------------------+

    b. View the details of the order to identify the location of the generated key, shown here as the Secret href value:

    ()[root@overcloudtrain1-controller-0 /]# openstack secret order get https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d
    +----------------+--------------------------------------------------------------------------------------------+
    | Field          | Value                                                                                      |
    +----------------+--------------------------------------------------------------------------------------------+
    | Order href     | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d  |
    | Type           | Key                                                                                        |
    | Container href | N/A                                                                                        |
    | Secret href    | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c |
    | Created        | 2023-07-12T11:38:17+00:00                                                                  |
    | Status         | ACTIVE                                                                                     |
    | Error code     | None                                                                                       |
    | Error message  | None                                                                                       |
    +----------------+--------------------------------------------------------------------------------------------+

    c. Fetch the secret UUID via the below command (use the Secret href)

    ()[root@overcloudtrain1-controller-0 /]# openstack secret get https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c
    +---------------+--------------------------------------------------------------------------------------------+
    | Field         | Value                                                                                      |
    +---------------+--------------------------------------------------------------------------------------------+
    | Secret href   | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c |
    | Name          | secret2                                                                                    |
    | Created       | 2023-07-12T11:38:17+00:00                                                                  |
    | Status        | ACTIVE                                                                                     |
    | Content types | {'default': 'application/octet-stream'}                                                    |
    | Algorithm     | aes                                                                                        |
    | Bit length    | 256                                                                                        |
    | Secret type   | symmetric                                                                                  |
    | Mode          | ctr                                                                                        |
    | Expiration    | None                                                                                       |
    +---------------+--------------------------------------------------------------------------------------------+

    Note down the last value from the Secret href URL, which is the UUID (8-4-4-4-12 format).

    For example, the secret UUID for the Secret href URL https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c will be f2c96fe2-6ae7-4985-b98c-e571ba05403c

    This UUID will be used further for creating the encrypted workload.
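Since the UUID is simply the last path segment of the Secret href, it can be extracted in the shell; a small sketch using the example href from above:

```shell
# Extract the secret UUID from the Secret href returned by Barbican.
SECRET_HREF="https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c"
SECRET_UUID="${SECRET_HREF##*/}"   # strip everything up to the last '/'
echo "$SECRET_UUID"                # f2c96fe2-6ae7-4985-b98c-e571ba05403c
```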

    hashtag
    2. Workload creation

    1. Log in to the OpenStack Horizon dashboard with the user who created the secret and go to Project->Backup->Workloads

    2. Click on Create Workload.

    3. Select the checkbox for Enable Encryption

    4. Follow the usual procedure for the further tabs (Workload member, Schedule, policy & Option) and click on Create.

    5. The Workload will be created and the value of the Encryption field will be True.

    hashtag
    Encrypted workload migration

    For migration of encrypted workloads, please follow the Migrating Encrypted Workloads guide:

    hashtag
    Upgrade from older release to 4.2

    For upgrades from an older release to 4.2, please follow the Upgrade Trilio guide:

    For Trilio configuration, please follow the Configuring Trilio guide:

    solutions/openstack/CleanWlmDatabase/cleanAndOptimizeWLMDB at master · trilioData/solutions (GitHub)
    solutions/openstack/backing-file-update at master · trilioData/solutions (GitHub)

    File Search

    hashtag
    Start File Search

    POST https://$(tvm_address):8780/v1/$(tenant_id)/search

    Starts a File Search with the given parameters

    Important log files

    hashtag
    On the Trilio Nodes

    The Trilio Cluster contains multiple log files.

    The main log is workloadmgr-workloads.log, which contains all logs about ongoing and past Trilio backup and restore tasks. It can be found at:

    /var/log/workloadmgr/workloadmgr-workloads.log

    ./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
    # echo -n 10.10.2.20:/Trilio_Backup | base64
    MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    # echo -n /Trilio_Backup | base64
    L1RyaWxpb19CYWNrdXA=
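The mount directory name is just the base64 encoding of the NFS export string appended to the mounts base path; a quick sketch using the example share from above:

```shell
# Derive the triliovault mount directory for an NFS export (example share from this guide).
SHARE="10.10.2.20:/Trilio_Backup"
B64="$(echo -n "$SHARE" | base64)"
echo "/var/triliovault-mounts/$B64"
# /var/triliovault-mounts/MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
```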
    mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/lib/nova/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/trilio/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    juju exec [-m <model>] --application trilio-data-mover "sudo -u nova mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    juju exec [-m <model>] --application trilio-wlm "sudo -u nova mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    juju exec [-m <model>] --application trilio-data-mover "sudo mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    juju exec [-m <model>] --application trilio-wlm "sudo mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    The next important log is the workloadmgr-api.log, which contains all logs about API calls received by the Trilio Cluster. It can be found at:

    /var/log/workloadmgr/workloadmgr-api.log

    The log for the third service is the workloadmgr-scheduler.log, which contains all logs about the internal job scheduling between Trilio nodes in the Trilio Cluster.

    /var/log/workloadmgr/workloadmgr-scheduler.log

    The last service running on the Trilio Nodes is the wlm-cron service, which controls the scheduled automated backups.

    /var/log/workloadmgr/workloadmgr-workloads.log

    When using S3 as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.

    /var/log/workloadmgr/s3vaultfuse.py.log
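When triaging an issue, the log files listed above can be scanned in a single pass. A minimal sketch that counts ERROR lines in each documented log path and skips files not present on the node:

```shell
# Count ERROR lines in each workloadmgr log; skip files not present on this node.
for f in /var/log/workloadmgr/workloadmgr-workloads.log \
         /var/log/workloadmgr/workloadmgr-api.log \
         /var/log/workloadmgr/workloadmgr-scheduler.log \
         /var/log/workloadmgr/s3vaultfuse.py.log; do
    if [ -f "$f" ]; then
        printf '%s: %s ERROR lines\n' "$f" "$(grep -c ERROR "$f")"
    else
        printf '%s: not present on this node\n' "$f"
    fi
done
```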

    circle-info

    On Canonical OpenStack these logs are located inside the workloadmgr container.

    hashtag
    Trilio Datamover service logs on RHOSP

    hashtag
    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/containers/trilio-datamover-api/dmapi.log

    hashtag
    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/containers/trilio-datamover/tvault-contego.log

    If S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/containers/trilio-datamover/tvault-object-store.log

    hashtag
    Trilio Datamover service logs on Kolla Openstack

    hashtag
    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/kolla/trilio-datamover-api/dmapi.log

    hashtag
    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/kolla/triliovault-datamover/tvault-contego.log

    If S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/kolla/trilio-datamover/tvault-object-store.log

    hashtag
    Trilio Datamover service logs on Ansible Openstack

    hashtag
    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running. Log into the dmapi container using lxc-attach command (example below).

    lxc-attach -n controller_dmapi_container-a11984bf

    The log file is then located under:

    /var/log/dmapi/dmapi.log

    hashtag
    Datamover log

    The log for the Trilio Datamover service is typically located on the compute nodes and the logs can be found here:

    /var/log/tvault-contego/tvault-contego.log

    If S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/tvault-object-store/tvault-object-store.log


  • payload-content-encoding: base64

  • secret type: opaque

  • payload: plaintext

    Below is an example of symmetric secret type creation and fetching of the secret UUID.

    a. Generate a new 256-bit key using order create

    Provide the UUID noted in the above steps in the Secret UUID text box



    Please ensure the following points before starting the upgrade process:

    • No snapshot or restore should be running.

    • The Global Job Scheduler should be disabled.

    • wlm-cron should be disabled and any lingering processes should be killed.

    Now access the appliance VMs and install the copied offline packages on each of the Trilio appliances by running the below-mentioned command:
  • Once the installation is done, run the following command to enable the wlm-cron service

  • Grafana dashboard shows the correct wlm service status on all T4O nodes.

  • Enable Global Job Scheduler

    # For RHOSP13
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    docker stop trilio_dmapi
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    podman stop trilio_dmapi
    # For RHOSP13
    docker rm trilio_dmapi
    docker rm trilio_datamover_api_init_log
    docker rm trilio_datamover_api_db_sync
    
    # For RHOSP16 onwards
    podman rm trilio_dmapi
    podman rm trilio_datamover_api_init_log
    podman rm trilio_datamover_api_db_sync
    
    ## If present, remove below container as well
    podman rm container-puppet-triliodmapi
    rm -rf /var/lib/config-data/puppet-generated/triliodmapi
    rm /var/lib/config-data/puppet-generated/triliodmapi.md5sum
    rm -rf /var/lib/config-data/triliodmapi*
    rm -rf /var/log/containers/trilio-datamover-api/
    # For RHOSP13
    docker stop trilio_datamover
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_datamover.service
    systemctl stop tripleo_trilio_datamover.service
    podman stop trilio_datamover
    # For RHOSP13
    docker rm trilio_datamover
    
    # For RHOSP16 onwards
    podman rm trilio_datamover
    
    ## If present, remove below container as well
    podman rm container-puppet-triliodmapi
    ## Following steps applicable for all supported RHOSP releases.
    
    # Check triliovault backup target mount point
    mount | grep trilio
    
    # Unmount it
    -- If it's NFS	(COPY UUID_DIR from your compute host using above command)
    umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
    
    -- If it's S3
    umount /var/lib/nova/triliovault-mounts
    
    # Verify that it's unmounted		
    mount | grep trilio
    	
    df -h  | grep trilio
    
    # Remove mount point directory after verifying that backup target unmounted successfully.
    # Otherwise actual data from backup target may get cleaned.	
    
    rm -rf /var/lib/nova/triliovault-mounts
    rm -rf /var/lib/config-data/puppet-generated/triliodm/
    rm /var/lib/config-data/puppet-generated/triliodm.md5sum
    rm -rf /var/lib/config-data/triliodm*
    rm -rf /var/log/containers/trilio-datamover/
    listen trilio_datamover_api
      bind 172.25.3.60:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.3.60:8784 transparent
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      option httpchk
      option httplog
      server overcloud-controller-0.internalapi.localdomain 172.25.3.59:8784 check fall 5 inter 2000 rise 2
    # For RHOSP13
    docker restart haproxy-bundle-docker-0
    
    # For RHOSP16 onwards
    podman restart haproxy-bundle-podman-0
    openstack service delete dmapi
    openstack user delete dmapi
    ## On RHOSP13, run following command on node where database service runs
    docker exec -ti -u root galera-bundle-docker-0 mysql -u root
    
    ## On RHOSP16
    podman exec -it galera-bundle-podman-0 mysql -u root
    ## Clean database
    DROP DATABASE dmapi;
    
    ## Clean dmapi user
    => List 'dmapi' user accounts
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    +-------+-------------+
    | user  | host        |
    +-------+-------------+
    | dmapi | 172.25.2.10 |
    | dmapi | 172.25.2.8  |
    +-------+-------------+
    2 rows in set (0.00 sec)
    
    => Delete those user accounts
    MariaDB [mysql]> DROP USER 'dmapi'@'172.25.2.10';
    Query OK, 0 rows affected (0.82 sec)
    
    MariaDB [mysql]> DROP USER 'dmapi'@'172.25.2.8';
    Query OK, 0 rows affected (0.05 sec)
    
    => Verify that dmapi user got cleaned
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    Empty set (0.00 sec)
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    pcs resource enable wlm-cron
    
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    cd /opt/ 
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/4.3.2/TVOAppliance/hf_upgrade.sh
    chmod +x hf_upgrade.sh
    ./hf_upgrade.sh --all  
    OR 
    ./hf_upgrade.sh -a
    
    pcs resource enable wlm-cron
    
    cd /<tvo_packages_download_path>/ 
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/4.3.2/TVOAppliance/hf_upgrade.sh
    chmod +x hf_upgrade.sh
    
    # Download the upgraded packages
    ./hf_upgrade.sh --downloadonly
    
    scp <tvo_packages_download_path>/* root@<TrilioVault_node_IP>:/<path_to_upgrade_package>/
    systemctl status tvault-config wlm-workloads wlm-api wlm-scheduler
    pcs status (on primary node)
    systemctl status wlm-cron (on primary node)
    systemctl status tvault-object-store (only if Trilio configured with S3 backend storage)
    ps -ef | grep workloadmgr-cron | grep -v grep
    # Above command should show only 2 processes running; sample below
    
    [root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep
    nova      8841     1  2 Jul28 ?        00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    nova      8898  8841  0 Jul28 ?        00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    cd <path_to_upgrade_package>/
    ./hf_upgrade.sh --installonly
    

    Charm names                                 | Channel     | Supported releases
    trilio-charmers-trilio-wlm-focal            | latest/edge | Focal (Ubuntu 20.04)
    trilio-charmers-trilio-dm-api-focal         | latest/edge | Focal (Ubuntu 20.04)
    trilio-charmers-trilio-data-mover-focal     | latest/edge | Focal (Ubuntu 20.04)
    trilio-charmers-trilio-horizon-plugin-focal | latest/edge | Focal (Ubuntu 20.04)

    Charm names                                 | Channel     | Supported releases
    trilio-charmers-trilio-wlm-jammy            | latest/edge | Jammy (Ubuntu 22.04)
    trilio-charmers-trilio-dm-api-jammy         | latest/edge | Jammy (Ubuntu 22.04)
    trilio-charmers-trilio-data-mover-jammy     | latest/edge | Jammy (Ubuntu 22.04)
    trilio-charmers-trilio-horizon-plugin-jammy | latest/edge | Jammy (Ubuntu 22.04)

    Charm names           | Channel    | Supported releases
    trilio-wlm            | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
    trilio-data-mover     | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
    trilio-dm-api         | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)
    trilio-horizon-plugin | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)

    hashtag
    Path Parameters
    Name        | Type   | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id   | string | ID of the Tenant/Project to run the search in

    hashtag
    Headers

    Name              | Type   | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token      | string | Authentication token to use
    Content-Type      | string | application/json
    Accept            | string | application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:23:25 GMT
    Content-Type: application/json
    Content-Length: 244
    Connection: keep-alive
    X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.698534",
          "updated_at":null,
          "id":14,
          "deleted_at":null,
          "status":"executing",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":null,
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
       }
    }

    hashtag
    Body format

    hashtag
    Get File Search Results

    GET https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>

    Requests the status and results of the given file search

    hashtag
    Path Parameters

    Name        | Type   | Description
    tvm_address | string | IP or FQDN of Trilio Service
    tenant_id   | string | ID of the Tenant/Project to run the search in
    search_id   | string | ID of the File Search to get

    hashtag
    Headers

    Name              | Type   | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token      | string | Authentication token to use
    Accept            | string | application/json
    User-Agent        | string | python-workloadmgrclient
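    Combining the path parameters and headers above, the results request can be issued with curl. A minimal sketch, in which the helper only builds the URL and every value (address, tenant ID, search ID, token) is an illustrative placeholder:

```shell
# Build the file-search results URL from its path parameters.
file_search_url() {
    printf 'https://%s:8780/v1/%s/search/%s' "$1" "$2" "$3"
}

url=$(file_search_url 192.168.1.40 c70d9a1a400a4e5e9bd6b0a46d02e849 14)
echo "$url"

# Query it with the headers documented above (token value is a placeholder):
# curl -sk "$url" \
#      -H "X-Auth-Project-Id: admin" \
#      -H "X-Auth-Token: $OS_TOKEN" \
#      -H "Accept: application/json" \
#      -H "User-Agent: python-workloadmgrclient"
```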

    Workload Policies

    Trilio’s tenant-driven backup service gives tenants control over their backup policies. Sometimes that is too much control, and cloud admins may want to limit which policies tenants are allowed to use. For example, an overzealous tenant may take a full backup every hour; if every tenant pursued such a policy, it would put severe strain on the cloud infrastructure. If the cloud admin instead defines a set of backup policies and each tenant is limited to those, cloud administrators can exert better control over the backup service.

    A workload policy is similar to a nova flavor: just as a tenant cannot create instances with arbitrary resources but must use the nova flavors published by the admin, a tenant can only use the workload policies published by the admin.

    hashtag
    Listing and showing available Workload policies

    hashtag
    Using Horizon

    To see all available Workload policies in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    The following information is shown in the policy tab for each available policy:

    • Creation time

    • name

    • description

    hashtag
    Using CLI

    • <policy_id> ➡️ ID of the policy to show

    hashtag
    Create a policy

    hashtag
    Using Horizon

    To create a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    hashtag
    Using CLI

    • --policy-fields <key=value> ➡️ Key-value pairs defining the policy. Specify the option multiple times to include multiple keys:
      'interval' : e.g. '1 hr'
      'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'
      'retention_policy_value' : e.g. '30'
      'fullbackup_interval' : number of incremental snapshots between full backups ('1' to '999'; '-1' for 'NEVER', '0' for 'ALWAYS')
      For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

    hashtag
    Edit a policy

    hashtag
    Using Horizon

    To edit a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    hashtag
    Using CLI

    • --display-name <display-name>➡️Name of the policy

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

    hashtag
    Assign/Remove a policy

    hashtag
    Using Horizon

    To assign or remove a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    hashtag
    Using CLI

    • --add_project <project_id> ➡️ ID of the project to assign policy to. Use multiple times to assign multiple projects.

    • --remove_project <project_id> ➡️ ID of the project to remove policy from. Use multiple times to remove multiple projects.

    hashtag
    Delete a policy

    hashtag
    Using Horizon

    To delete a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    hashtag
    Using CLI

    • <policy_id> ➡️ID of the policy to be deleted

    Uninstalling from Kolla OpenStack

    hashtag
    Clean triliovault_datamover_api container

    The container needs to be cleaned on all nodes where the triliovault_datamover_api container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the triliovault_datamover_api container:

    Stop the triliovault_datamover_api container.

    Remove the triliovault_datamover_api container.

    Clean /etc/kolla/triliovault-datamover-api directory.

    Clean log directory of triliovault_datamover_api container.

    hashtag
    Clean triliovault_datamover container

    The container needs to be cleaned on all nodes where the triliovault_datamover container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the triliovault_datamover container:

    Stop the triliovault_datamover container.

    Remove the triliovault_datamover container.

    Clean /etc/kolla/triliovault-datamover directory.

    Clean log directory of triliovault_datamover container.

    hashtag
    Clean haproxy of Trilio Datamover API

    The Trilio Datamover API entries need to be cleaned on all haproxy nodes. The Kolla Openstack inventory file helps to identify the nodes with the service.

    Following steps need to be done to clean the haproxy container:

    hashtag
    Clean Kolla Ansible deployment procedure

    Delete all Trilio related entries from:

    circle-info

    To cross-verify the uninstallation, undo all steps done in 'append Kolla Ansible yml files' and 'clone Trilio Ansible role'.

    Trilio entries can be found in:

    • /usr/local/share/kolla-ansible/ansible/roles/ ➡️ There is a role triliovault

    • /etc/kolla/globals.yml➡️ Trilio entries had been appended at the end of the file

    hashtag
    Revert to original Horizon container

    Run deploy command to replace the Trilio Horizon container with original Kolla Ansible Horizon container.

    hashtag
    Clean Keystone resources

    Trilio created a dmapi service together with a dmapi user. Both need to be removed.

    hashtag
    Clean Trilio database resources

    Trilio Datamover API service has its own database in the Openstack database.

    Login to the Openstack database as the root user or a user with similar privileges.

    Delete dmapi database and user.

    hashtag
    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the Trilio VM disks from the KVM host storage

    Workload Quotas

    Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.

    The following Quotas can be set:

    • Number of Workloads a Project is allowed to have

    • Number of Snapshots a Project is allowed to have

    Switch Backup Target on Kolla-ansible

    hashtag
    Unmount old mount point

    The first step is to remove the datamover container and to unmount the old mounts. This ensures that the new datamover container with the new backup target does not get any interference from the old backup target.

    hashtag

    juju export-bundle --filename openstack_base_file.yaml
    juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>
    juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    Juju 3.x
    juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    juju attach-resource trilio-wlm license=<Path to trilio license file>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-license
    Juju 3.x
    juju run --wait trilio-wlm/leader create-license
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:24:28 GMT
    Content-Type: application/json
    Content-Length: 819
    Connection: keep-alive
    X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.000000",
          "updated_at":"2020-11-09T13:23:48.000000",
          "id":14,
          "deleted_at":null,
          "status":"completed",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":"[
                          {
                             "ed4f29e8-7544-4e1c-af8a-a76031211926":[
                                {
                                   "/dev/vda1":[
                                      "/etc/hostname",
                                      "/etc/hosts"
                                   ],
                                   "/etc/hostname":{
                                      "dev":"2049",
                                      "ino":"32",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"1",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455255",
                                      "mtime":"1603455255",
                                      "ctime":"1603455255"
                                   },
                                   "/etc/hosts":{
                                      "dev":"2049",
                                      "ino":"127",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"37",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455257",
                                      "mtime":"1431011050",
                                      "ctime":"1431017172"
                                   }
                                }
                             ]
                          }
                      ]",
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
       }
    }
    {
       "file_search":{
          "start":<Integer>,
          "end":<Integer>,
          "filepath":"<Reg-Ex String>",
          "date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "snapshot_ids":[
             "<Snapshot-ID>"
          ],
          "vm_id":"<VM-ID>"
       }
    }
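    The request body above can be assembled and syntax-checked before it is sent. A minimal sketch (all values, including the snapshot placeholder, are illustrative) that validates the JSON with python3:

```shell
# Write an example file-search body and verify that it is well-formed JSON.
cat > /tmp/file_search_body.json <<'EOF'
{
   "file_search":{
      "filepath":"/etc/h*",
      "snapshot_ids":["<Snapshot-ID>"],
      "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
   }
}
EOF
python3 -m json.tool /tmp/file_search_body.json >/dev/null && echo "body is valid JSON"
```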


    Navigate to Trilio
  • Navigate to Policy

  • status
  • set interval

  • set retention type

  • set retention value

  • Navigate to Trilio
  • Navigate to Policy

  • Click new policy

  • provide a policy name on the Details tab

  • provide a description on the Details tab

  • provide the RPO in the Policy tab

  • Choose the Snapshot Retention Type

  • provide the Retention value

  • Choose the Full Backup Interval

  • Click create

  • --metadata <key=value> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.

  • <display_name> ➡️ the name the policy will get

  • Navigate to Trilio
  • Navigate to Policy

  • identify the policy to edit

  • click on "Edit policy" at the end of the line of the chosen policy

  • edit the policy as desired - all values can be changed

  • Click "Update"

  • --policy-fields <key=value> ➡️ Key-value pairs defining the policy. Specify the option multiple times to include multiple keys:
    'interval' : e.g. '1 hr'
    'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots'
    'retention_policy_value' : e.g. '30'
    'fullbackup_interval' : number of incremental snapshots between full backups ('1' to '999'; '-1' for 'NEVER', '0' for 'ALWAYS')
    For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'
  • --metadata <key=value> ➡️ Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys.

  • <policy_id> ➡️ ID of the policy to update

  • Navigate to Trilio
  • Navigate to Policy

  • identify the policy to assign/remove

  • click on the small arrow at the end of the line of the chosen policy to open the submenu

  • click "Add/Remove Projects"

  • Choose projects to add or remove by using the plus/minus buttons

  • Click "Apply"

  • <policy_id> ➡️ ID of the policy to be assigned or removed
    Navigate to Trilio
  • Navigate to Policy

  • identify the policy to assign/remove

  • click on the small arrow at the end of the line of the chosen policy to open the submenu

  • click "Delete Policy"

  • Confirm by clicking "Delete"

  • /etc/kolla/passwords.yml➡️ Trilio entries had been appended at the end of the file
  • /usr/local/share/kolla-ansible/ansible/site.yml ➡️ Trilio entries had been appended at the end of the file

  • /root/multinode ➡️ Trilio entries had been appended at the end of this example inventory file

  • append Kolla Ansible yml files
    clone Trilio Ansible role
    Update globals.yml

    Edit the globals.yml file to contain the new backup target.

    Follow the installation documentation to learn about the globals.yml Trilio variables.

    hashtag
    Deploy Trilio components with new backup target

    hashtag
    Verify the successful change in backup target

    hashtag
    Reconfigure the Trilio Appliance

    Follow the documentation to reconfigure the Trilio appliance with the new backup target.

    workloadmgr policy-list
    workloadmgr policy-show <policy_id>
    workloadmgr policy-create --policy-fields <key=key-name>
                              [--display-description <display_description>]
                              [--metadata <key=key-name>]
                              <display_name>
    workloadmgr policy-update [--display-name <display-name>]
                              [--display-description <display-description>]
                              [--policy-fields <key=key-name>]
                              [--metadata <key=key-name>]
                              <policy_id>
    workloadmgr policy-assign [--add_project <project_id>]
                              [--remove_project <project_id>]
                              <policy_id>
    workloadmgr policy-delete <policy_id>
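    As a concrete example, the policy-create flags documented above combine into a single invocation. A sketch that only prints the command (the policy name and all values are illustrative); run the printed line against a live workloadmgr client to actually create the policy:

```shell
# Assemble a complete policy-create command from the documented flags.
policy_create_cmd() {
    printf '%s' "workloadmgr policy-create"
    printf ' %s' \
        "--policy-fields interval='1 hr'" \
        "--policy-fields retention_policy_type='Number of Snapshots to Keep'" \
        "--policy-fields retention_policy_value='30'" \
        "--policy-fields fullbackup_interval='2'" \
        "--display-description 'Hourly backups, keep 30 snapshots'" \
        "hourly-gold"
    printf '\n'
}
policy_create_cmd
```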
    docker stop triliovault_datamover_api
    docker rm triliovault_datamover_api
    rm -rf /etc/kolla/triliovault-datamover-api
    rm -rf /var/log/kolla/triliovault-datamover-api/
    docker stop triliovault_datamover
    docker rm triliovault_datamover
    rm -rf /etc/kolla/triliovault-datamover
    rm -rf /var/log/kolla/triliovault-datamover/
    rm /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    docker restart haproxy
    kolla-ansible -i multinode deploy
    openstack service delete dmapi
    openstack user delete dmapi
    mysql -u root -p
    DROP DATABASE dmapi;
    DROP USER dmapi;
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    #check current mount point
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #Stop triliovault_datamover
    [root@compute ~]# docker stop triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #Delete triliovault_datamover
    [root@compute ~]# docker rm triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #unmount mount point
    [root@compute ~]# umount /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #check mount point is unmounted successfully 
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   26M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    [root@compute ~]#
    
    #Delete mounted dir from compute node
    [root@compute trilio]# rm -rf /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    root@controller:~# kolla-ansible -i multinode deploy
    ##Check that all Containers are up and running
    #Controller node
    root@controller:~# docker ps -a | grep trilio
    583b8d42ab42        trilio/ubuntu-binary-trilio-datamover-api:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               openstack-nova-api-triliodata-plugin
    3be25d3819ac        trilio/ubuntu-binary-trilio-horizon-plugin:4.1.36-ussuri   "dumb-init --single-…"   4 days ago          Up 4 days                               horizon
    
    #Compute node
    root@compute:~# docker ps -a | grep trilio
    bf52face23fb        trilio/ubuntu-binary-trilio-datamover:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               trilio-datamover
    
    ## Verify the backup target has been changed successfully
    # In case of switch to NFS
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #In case of switch to S3
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   34M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts
    
    ##Reverify in the triliovault_datamover containers
    [root@compute ~]# docker exec -it triliovault_datamover bash
    (triliovault-datamover)[nova@compute /]$ df -h
    Filesystem           Size  Used Avail Use% Mounted on
    overlay              280G   12G  269G   5% /
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    /dev/mapper/cl-root  280G   12G  269G   5% /etc/iscsi
    tmpfs                6.3G     0  6.3G   0% /var/triliovault/tmpfs
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts

    Number of VMs a Project is allowed to protect

  • Amount of Storage a Project is allowed to use on the Backup Target

  • hashtag
    Work with Workload Quotas via Horizon

    circle-exclamation

    The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.

    Workload Quotas are managed like any other Project Quotas.

    1. Login into Horizon as user with admin role

    2. Navigate to Identity

    3. Navigate to Projects

    4. Identify the Project to modify or show the quotas on

    5. Use the small arrow next to "Manage Members" to open the submenu

    6. Choose "Modify Quotas"

    7. Navigate to "Workload Manager"

    8. Edit Quotas as desired

    9. Click "Save"

    Screenshot of Horizon integration for Workload Manager Quotas

    hashtag
    Work with Workload Quotas via CLI

    hashtag
    List available Quota Types

    Trilio is providing several different Quotas. The following command allows listing those.

    triangle-exclamation

    Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quota that a Tenant has to comply with.

    hashtag
    Show Quota Type Details

    The following command will show the details of a provided Quota Type.

    • <quota_type_id> ➡️ID of the Quota Type to show

    hashtag
    Create a Quota

    The following command will create a Quota for a given project and set the provided value.

    • <quota_type_id> ➡️ID of the Quota Type to be created

    • <allowed_value>➡️ Value to set for this Quota Type

    • <high_watermark>➡️ Value to set for High Watermark warnings

    • <project_id>➡️ Project to assign the quota to

    circle-info

    The high watermark is automatically set to 80% of the allowed value when set via Horizon.
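    The same default can be reproduced with integer arithmetic; a quick sketch (an allowed value of 50 is just an example):

```shell
# Horizon's default: high watermark = 80% of the allowed quota value.
allowed=50
high_watermark=$(( allowed * 80 / 100 ))
echo "high watermark: ${high_watermark}"
```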

    circle-info

    A created Quota will generate an allowed_quota_object with its own ID. This ID is needed when continuing to work with the created Quota.

    hashtag
    List allowed Quotas

    The following command lists all Trilio Quotas set for a given project.

    • <project_id>➡️ Project to list the Quotas from

    hashtag
    Show allowed Quota

    The following command shows the details about a provided allowed Quota.

    • <allowed_quota_id> ➡️ID of the allowed Quota to show.

    hashtag
    Update allowed Quota

    The following command shows how to update the value of an already existing allowed Quota.

    • <allowed_value>➡️ Value to set for this Quota Type

    • <high_watermark>➡️ Value to set for High Watermark warnings

    • <project_id>➡️ Project to assign the quota to

    • <allowed_quota_id> ➡️ID of the allowed Quota to update

    hashtag
    Delete allowed Quota

    The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.

    • <allowed_quota_id> ➡️ID of the allowed Quota to delete


    Snapshots

    hashtag
    Definition

    A Snapshot is a single Trilio backup of a workload, including all data and metadata. It contains the information of all VMs that are protected by the workload.

    hashtag
    List of Snapshots

    hashtag
    Using Horizon

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    The List of Snapshots for the chosen Workload contains the following additional information:

    • Creation Time

    • Name of the Snapshot

    • Description of the Snapshot

    hashtag
    Using CLI

    • --workload_id <workload_id> ➡️ Filter results by workload_id

    • --tvault_node <host> ➡️ List all the snapshot operations scheduled on a tvault node (Default=None)

    hashtag
    Creating a Snapshot

    Snapshots are created automatically by the Trilio scheduler. If necessary, or when the scheduler is deactivated, it is possible to create a Snapshot on demand.

    hashtag
    Using Horizon

    There are 2 possibilities to create a snapshot on demand.

    hashtag
    Possibility 1: From the Workloads overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Possibility 2: From the Workload Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Using CLI

    • <workload_id>➡️ID of the workload to snapshot.

    • --full➡️ Specify if a full snapshot is required.

    hashtag
    Snapshot overview

    Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

    hashtag
    Using Horizon

    To reach the Snapshot Overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Details Tab

    The Snapshot Details Tab shows the most important information about the Snapshot.

    • Snapshot Name / Description

    • Snapshot Type

    • Time Taken

    hashtag
    Restores Tab

    The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

    circle-info

    Please refer to the User Guide to learn more about Restores.

    hashtag
    Misc. Tab

    The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

    • Creation Time

    • Last Update time

    • Snapshot ID

    hashtag
    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be shown

    • --output <output> ➡️ Option to get additional snapshot details. Specify --output metadata for snapshot metadata, --output networks for snapshot vms networks, or --output disks for snapshot vms disks

    hashtag
    Delete Snapshots

    Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

    circle-info

    The retention policy will automatically delete the oldest Snapshots according to the configured policy.

    circle-info

    You have to delete all Snapshots to be able to delete a Workload.

    circle-info

    Deleting a Trilio Snapshot will not delete any Openstack Cinder Snapshots. Those need to be deleted separately if desired.

    hashtag
    Using Horizon

    There are 2 possibilities to delete a Snapshot.

    hashtag
    Possibility 1: Single Snapshot deletion through the submenu

    To delete a single Snapshot through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

    To delete one or more Snapshots through the Snapshot overview do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be deleted

    hashtag
    Snapshot Cancel

    Ongoing Snapshots can be canceled.

    circle-info

    Canceled Snapshots will be treated like errored Snapshots

    hashtag
    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    hashtag
    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be canceled

    Upgrading on Canonical OpenStack

    Learn about upgrading Trilio on Canonical OpenStack

    Trilio supports the upgrade of charms and Trilio components from older releases (4.0 onwards) to the latest release. More information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found herearrow-up-right.

    The following charms exist:

    • trilio-wlmarrow-up-right ➡️ Installs and manages Trilio Controller services.

    • trilio-dm-apiarrow-up-right ➡️ Installs and manages the Trilio Datamover API service.

    • trilio-data-moverarrow-up-right ➡️ Installs and manages the Trilio Datamover service.

    • trilio-horizon-pluginarrow-up-right ➡️ Installs and manages the Trilio Horizon Plugin.

    The documentation of the charms can be found here:

    triangle-exclamation

    The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.

    It is recommended to run each of these steps in dry-run first.

    When any other packages but Trilio packages are getting updated, stop the upgrade procedure and contact your Trilio customer success manager.

    hashtag
    Upgrading the Trilio Charms

More detailed information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found here.

    hashtag
    Upgrading the Trilio Charms up to OpenStack Wallaby release

Follow the steps mentioned in this document to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.

    hashtag
    Upgrading the Trilio Charms to OpenStack Yoga release onwards

    Follow the steps mentioned below to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.

    General Prerequisites

1. No Snapshot or Restore operations should be running during this process.

2. The Global Job Scheduler should be disabled.

    hashtag
    1. Upgrade juju charms

    juju [-m <model>] upgrade-charm trilio-wlm --switch trilio-charmers-trilio-wlm-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-horizon-plugin --switch trilio-charmers-trilio-horizon-plugin-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-dm-api --switch trilio-charmers-trilio-dm-api-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-data-mover --switch trilio-charmers-trilio-data-mover-focal --channel latest/edge

    hashtag
2. Wait until all Trilio units are active/idle, then restart all the Trilio services

    juju [-m <model>] exec --application trilio-dm-api "sudo systemctl restart tvault-datamover-api"

    juju [-m <model>] exec --application trilio-data-mover "sudo systemctl restart tvault-contego"

    juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl restart apache2"

    hashtag
    3. Select Single Node or HA steps as appropriate to restart services

    hashtag
    For setup with single node wlm unit

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron"

    hashtag
    For setups with 3 node HA enabled wlm

    juju [-m <model>] exec --application trilio-wlm "systemctl restart wlm-api wlm-scheduler wlm-workloads"

    juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource restart res_trilio_wlm_wlm_cron"

    hashtag
    4. Ensure that wlm-cron is only running on a single Unit in the output of this command

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"

    hashtag
    If wlm-cron is running in more than one unit, or nowhere at all, this can be fixed by following the below steps:

juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource stop res_trilio_wlm_wlm_cron"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl stop wlm-cron"

    juju [-m <model>] exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"

    hashtag
No workloadmgr-cron process should be running. If one is still running somewhere, log in to that unit and stop the service manually with 'sudo systemctl stop wlm-cron'

juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource start res_trilio_wlm_wlm_cron"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"

    circle-exclamation

    wlm-cron service should only be running in a single Juju Unit and not running in the other two.

    hashtag
    5. Check upgraded juju charm & trilio services

    Check the trilio unit's status in juju status [-m <model>] | grep trilio output.

    All the trilio units will be reporting the new juju charm version.

trilio-data-mover      4.2.64.26  active  3  trilio-charmers-trilio-data-mover-jammy      latest/edge  4  no  Unit is ready
    trilio-dm-api          4.2.64.2   active  1  trilio-charmers-trilio-dm-api-jammy          latest/edge  2  no  Unit is ready
    trilio-horizon-plugin  4.2.64.5   active  1  trilio-charmers-trilio-horizon-plugin-jammy  latest/edge  4  no  Unit is ready
    trilio-wlm             4.2.64.20  active  1  trilio-charmers-trilio-wlm-jammy             latest/edge  3  no  Unit is ready

    hashtag
    6. Check that all Trilio services are running

    juju [-m <model>] exec --application trilio-dm-api "sudo systemctl status tvault-datamover-api"

    juju [-m <model>] exec --application trilio-data-mover "sudo systemctl status tvault-contego"

    juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl status apache2"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron"

    hashtag
Next, move on to Updating the Trilio packages.

    hashtag
    Upgrading the Trilio packages

Trilio releases hotfixes, which require updating the packages inside the containers. These hotfixes do not change the charms themselves and therefore cannot be installed through a Juju charm upgrade.

    hashtag
    Generic Prerequisites

    1. Trilio packages can be upgraded after deployment. Trilio supports upgrade to the latest 4.2 releases from as old as the Trilio 4.0 release.

2. No Snapshot or Restore operations should be running during this process.

    3. Global job scheduler should be disabled.

    1. Stop the wlm-cron service

    1. Ensure that no stale wlm-cron processes exist

If any stale processes are found, they should be stopped manually.

    hashtag
    Upgrade package on trilio units

    The deployed Trilio version is controlled by the triliovault-pkg-source charm configuration option.

    The gemfury repository should be accessible from all Trilio units. For each trilio charm, it should be pointing to the following Gemfury repository:

This can be checked via the juju [-m <model>] config trilio-wlm triliovault-pkg-source command output.

    The preferred, recommended, and tested method to update the packages is through the Juju command line.

Run the below commands from the MAAS node

    hashtag
    On Ubuntu Bionic environments

    hashtag
    On other (not Ubuntu Bionic) environments

    hashtag
Check the trilio unit's status in the juju status [-m <model>] | grep trilio output. All the Trilio units will be running the new packages.

    hashtag
    Update the DB schema

    Run the below command to update the schema

Check the schema head with the below command. It should point to the latest schema head.

    hashtag
    Enable mount-bind for NFS

T4O 4.2 changes how the mount point is calculated. Setting up a mount bind is required to make backups taken with T4O 4.1 or older available.

Please refer to this document for detailed steps to set up the mount bind.

    hashtag
    Post-Upgrade steps

1. If the trilio-wlm nodes are HA enabled:

      1. Make sure the wlm-cron services are down after the package upgrade. Run the following command for the same:

    2. Unset the cluster maintenance mode.

    3. Make sure the wlm-cron service is up and running on exactly one node.

    4. Set the Global Job Scheduler back to its original state.

    hashtag
    Troubleshooting

If any trilio unit gets into an error state with the message:

    hook failed: "update-status"

    One of the reasons could be that the package installation did not finish correctly. One way to verify this is by logging into that unit and following the below steps:

    Restart Trilio Services

In complex environments it is sometimes necessary to restart a single service or the complete solution. Restarting the complete node a service is running on is rarely possible, or even the ideal solution.

This page describes the services run by Trilio and how to restart them.

    hashtag
    Trilio Appliance Services

    The Trilio Appliance is the controller of Trilio. Most services on the Appliance are running in a High Availability mode on a 3-node cluster.

    hashtag
    wlm-api

The wlm-api service receives the API calls made against the Trilio Appliance. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-api service run on each Trilio node:

    hashtag
    wlm-scheduler

The wlm-scheduler service takes job requests and identifies which Trilio node should handle each request. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-scheduler service run on each Trilio node:

    hashtag
    wlm-workloads

    The wlm-workloads service is the task worker of Trilio executing all jobs given to the Trilio node. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-workloads service run on each Trilio node:

    hashtag
    wlm-cron

The wlm-cron service is responsible for starting scheduled Backups according to the configuration of Tenant Workloads. It is running in active-passive mode and is controlled by the pacemaker cluster.

To restart the wlm-cron service run on the Trilio node with the VIP assigned:

    hashtag
    VIP resources

    The Trilio appliance is running 1 to 4 virtual IPs on the Trilio cluster. These are controlled by the pacemaker cluster and provided through NGINX.

To restart these resources, the pacemaker NGINX resource is restarted:

    hashtag
    RabbitMQ

    The Trilio cluster is using RabbitMQ as messaging service. It is running in active-active mode on all nodes of the Trilio cluster.

RabbitMQ is a complex system in itself. This guide will only provide the basic commands to restart a node and check the health of the cluster afterward. For complete documentation on how to restart RabbitMQ, please follow the official RabbitMQ documentation.

    To restart a RabbitMQ node run on each Trilio node:

    circle-info

    It is recommended to wait for the node to rejoin and sync with the cluster before restarting another RabbitMQ node.

    circle-exclamation

When the complete cluster is stopped and restarted, it is important to keep the order of nodes in mind. The last node to be stopped needs to be the first node to be started.
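The ordering rule can be expressed as a one-liner — the start order is simply the reverse of the stop order (a sketch using the node names from this guide's examples):

```python
# Nodes in the order they were stopped; the last one stopped
# (TVM3 here) must be the first one started again.
stop_order = ["TVM1", "TVM2", "TVM3"]
start_order = list(reversed(stop_order))
print(start_order)  # ['TVM3', 'TVM2', 'TVM1']
```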

    hashtag
    Galera Cluster (MariaDB)

    The Galera Cluster is managing the Trilio MariaDB database. It is running in active-active mode on all nodes of the Trilio cluster.

Galera Cluster is a complex system in itself. This guide will only provide the basic commands to restart a node and check the health of the cluster afterward. For complete documentation on how to restart Galera clusters, please follow the official Galera documentation.

    When restarting Galera two different use-cases need to be considered:

    • Restarting a single node

    • Restarting the whole cluster

    hashtag
    Restarting a single node

    A single node can be restarted without any issues. It will automatically rejoin the cluster and sync against the remaining nodes.

    The following commands will gracefully stop and restart the mysqld service.

    circle-exclamation

After a restart, the cluster will start the syncing process. Do not restart node after node to achieve a complete cluster restart.

    Check the cluster health after the restart.

    hashtag
    Restarting the complete cluster

Restarting a complete cluster requires some additional steps, as the Galera cluster is effectively destroyed once all nodes have been shut down. It needs to be rebuilt afterwards.

    First gracefully shutdown the Galera cluster on all nodes:

    The second step is to identify the Galera node with the latest dataset. This can be achieved by reading the grastate.dat file on the Trilio nodes.

    circle-info

    When this documentation is followed the last mysqld service that got shut down will be the one with the latest dataset.

The value to check is seqno.

    The node with the highest seqno is the node that contains the latest data. This node will also contain safe_to_bootstrap: 1 to indicate that the Galera cluster can be rebuilt from this node.
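To compare nodes, the relevant fields can be pulled out of grastate.dat with a small helper — a sketch only; the function name is made up, and the file path is the usual MariaDB default:

```shell
# Print the seqno and safe_to_bootstrap fields of a grastate.dat file,
# e.g. parse_grastate /var/lib/mysql/grastate.dat
parse_grastate() {
  awk -F': *' '/^seqno:/             {print "seqno=" $2}
               /^safe_to_bootstrap:/ {print "safe_to_bootstrap=" $2}' "$1"
}
```

Run it on every node and bootstrap from the node with the highest seqno (and safe_to_bootstrap: 1).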

On the identified node the new cluster is generated with the following command:

    circle-exclamation

    Running galera_new_cluster on the wrong node will lead to data loss as this command will set the node the command is issued on as the first node of the cluster. All nodes which join afterward will sync against the data of this first node.

After the command has been issued, the mysqld service is running on this node. Now the other nodes can be restarted one by one. The started nodes will automatically rejoin the cluster and sync against the master node. Once a synced status has been reached, each node is a primary node in the cluster.

    Check the Cluster health after all services are up again.

    hashtag
    Verify Health of the Galera Cluster

    Verify the cluster health by running the following commands inside each Trilio MariaDB. The values returned from these statements have to be the same for each node.

    hashtag
    Canonical workloadmgr container services

Canonical OpenStack is not using the Trilio Appliance. In Canonical environments the Trilio controller is part of the Juju deployment as the workloadmgr container.

    To restart the services inside this container the following commands are to be issued.

    hashtag
    Single Node deployment

    hashtag
    HA deployment

    On all nodes:

    On a single node:

    hashtag
    Trilio dmapi service

The Trilio dmapi service is running on the OpenStack controller nodes. Depending on the OpenStack distribution Trilio is installed on, different commands are issued to restart the dmapi service.

    hashtag
    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    hashtag
    RHOSP16

RHOSP16 is running the Trilio services as podman containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    hashtag
    Canonical

Canonical is running the Trilio services in Juju controlled LXD containers. The dmapi service can be restarted by issuing the following command from the MAAS node.

    hashtag
    Kolla-Ansible Openstack

    Kolla-Ansible Openstack is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.

    hashtag
    Ansible Openstack

Ansible OpenStack is running the Trilio services in LXC containers. The dmapi service can be restarted by issuing the following commands on the host running the dmapi service.

    hashtag
    Trilio datamover service (tvault-contego)

The Trilio datamover service is running on the OpenStack compute nodes. Depending on the OpenStack distribution Trilio is installed on, different commands are issued to restart the datamover service.

    hashtag
    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.

    hashtag
    RHOSP16

RHOSP16 is running the Trilio services as podman containers. The datamover service can be restarted by issuing the following command on the compute node.

    hashtag
    Canonical

Canonical is running the Trilio services in Juju controlled LXD containers. The datamover service can be restarted by issuing the following command from the MAAS node.

    hashtag
    Kolla-Ansible Openstack

Kolla-Ansible OpenStack is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.

    hashtag
    Ansible Openstack

Ansible OpenStack is running the Trilio datamover service directly on the compute node. The datamover service can be restarted by issuing the following command on the compute node.

    Snapshot Mount

    hashtag
    Mount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount

    Mounts a Snapshot to the provided File Recovery Manager

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body Format
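The mount call can be sketched in Python; only the request is assembled here (nothing is sent), and all parameter values passed in by the caller are placeholders:

```python
import json

def build_mount_request(tvm_address, tenant_id, snapshot_id,
                        mount_vm_id, token, project_id):
    """Assemble URL, headers, and JSON body for the snapshot mount call."""
    url = (f"https://{tvm_address}:8780/v1/{tenant_id}"
           f"/snapshots/{snapshot_id}/mount")
    headers = {
        "X-Auth-Project-Id": project_id,
        "X-Auth-Token": token,
        "Content-Type": "application/json",
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    }
    # Body as documented: the File Recovery Manager VM receiving the mount.
    body = json.dumps({"mount": {"mount_vm_id": mount_vm_id, "options": {}}})
    return url, headers, body
```

The returned triple can then be handed to any HTTP client, e.g. requests.post(url, headers=headers, data=body).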

    hashtag
    List of mounted Snapshots in Tenant

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a Tenant

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description
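Note that in the sample response for this call, mounturl is itself a JSON-encoded list inside a string, so it takes a second decoding pass. A sketch using the sample values from this guide (the helper name is made up):

```python
import json

# Sample response body as shown in this guide.
response_body = '''
{
   "mounted_snapshots":[
      {
         "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
         "snapshot_name":"snapshot",
         "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
         "mounturl":"[\\"http://192.168.100.87\\"]",
         "status":"mounted"
      }
   ]
}
'''

def mounted_urls(body):
    # First pass decodes the response, second pass decodes each mounturl string.
    data = json.loads(body)
    urls = []
    for snap in data["mounted_snapshots"]:
        urls.extend(json.loads(snap["mounturl"]))
    return urls

print(mounted_urls(response_body))  # ['http://192.168.100.87']
```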

    hashtag
    List of mounted Snapshots in Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a specified Workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Dismount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount

Unmounts a Snapshot from the provided File Recovery Manager

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body Format

    solutions/openstack/backing-file-update/backing_file_update.sh at master · trilioData/solutionsGitHubchevron-right
    Rebase script for T4O backups

    T4O 4.3.2

    hashtag
    Release Versions

    hashtag
    Packages

    Compatibility Matrix

    Learn about Trilio Support for OpenStack Distributions

    circle-info

    The CentOS community has moved over to the CentOS stream.

The support for CentOS8 ended on December 31st, 2021. The official announcement can be found here.

    CentOS7 is still supported and maintained until June 30th, 2024.

    workloadmgr project-quota-type-list
    workloadmgr project-quota-type-show <quota_type_id>
    workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
                                             --allowed-value allowed_value 
                                             --high-watermark high_watermark 
                                             --project-id project_id
    workloadmgr project-allowed-quota-list <project_id>
    workloadmgr project-allowed-quota-show <allowed_quota_id>
    workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
                                             [--high-watermark <high_watermark>]
                                             [--project-id <project_id>]
                                             <allowed_quota_id>
    workloadmgr project-allowed-quota-delete <allowed_quota_id>
    Identify the workload to show the details on
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

• Total number of Restores from this Snapshot
    • Total number of succeeded Restores

    • Total number of failed Restores

  • Snapshot Type

  • Snapshot Size

  • Snapshot Status

• --date_from <date_from>➡️ From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is taken by default
  • --date_to <date_to>➡️ To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day, with inclusive/exclusive results for date_from and date_to

• --all {True,False} ➡️ List all snapshots of all the projects (valid for admin user only)
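The timestamp format these flags expect can be checked with datetime.strptime — a small sketch, not part of the CLI itself:

```python
from datetime import datetime

# 'YYYY-MM-DDTHH:MM:SS' as expected by --date_from / --date_to
SNAPSHOT_TS_FORMAT = "%Y-%m-%dT%H:%M:%S"

def parse_snapshot_ts(value):
    """Parse a timestamp flag value; raises ValueError on bad input."""
    return datetime.strptime(value, SNAPSHOT_TS_FORMAT)

print(parse_snapshot_ts("2016-10-10T00:00:00"))  # 2016-10-10 00:00:00
```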

  • Identify the workload that shall create a Snapshot
  • Click "Create Snapshot"

  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • Identify the workload that shall create a Snapshot
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Click "Create Snapshot"

  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • --display-name <display-name>➡️Optional snapshot name. (Default=None)
  • --display-description <display-description>➡️Optional snapshot description. (Default=None)

  • Identify the workload that contains the Snapshot to show
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Size
  • Which VMs are part of the Snapshot

  • for each VM in the Snapshot

    • Instance Info - Name & Status

    • Security Group(s) - Name & Type

    • Flavor - vCPUs, Disk & RAM

    • Networks - IP, Networkname & Mac Address

    • Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size

    • Misc - Original ID of the VM

  • Workload ID of the Workload containing the Snapshot
    Identify the workload that contains the Snapshot to delete
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  • Click "Delete Snapshot"

  • Confirm by clicking "Delete"

  • Identify the workload that contains the Snapshot to show
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshots in the Snapshot list

  • Check the checkbox for each Snapshot that shall be deleted

  • Click "Delete Snapshots"

  • Confirm by clicking "Delete"

  • Identify the workload that contains the Snapshot to cancel
  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click "Cancel" on the same line as the identified Snapshot

  • Confirm by clicking "Cancel"

  • Restores

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project the Snapshot is located in

    snapshot_id

    string

    ID of the Snapshot to mount

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant to search for mounted Snapshots

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgr

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant to search for mounted Snapshots

    workload_id

    string

    ID of the Workload to search for mounted Snapshots

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgr

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project the Snapshot is located in

    snapshot_id

    string

    ID of the Snapshot to dismount

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:29:03 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e
     workloadmgr snapshot-list [--workload_id <workload_id>]
                               [--tvault_node <host>]
                               [--date_from <date_from>]
                               [--date_to <date_to>]
                               [--all {True,False}]
    workloadmgr workload-snapshot [--full] [--display-name <display-name>]
                                  [--display-description <display-description>]
                                  <workload_id>
    workloadmgr snapshot-show [--output <output>] <snapshot_id>
    workloadmgr snapshot-delete <snapshot_id>
    workloadmgr snapshot-cancel <snapshot_id>
    systemctl restart wlm-api
    systemctl restart wlm-scheduler
    systemctl restart wlm-workloads
    pcs resource restart wlm-cron
    pcs resource restart lb_nginx-clone
    [root@TVM1 ~]# rabbitmqctl stop
    Stopping and halting node rabbit@TVM1 ...
    [root@TVM1 ~]# rabbitmq-server -detached
    Warning: PID file not written; -detached was passed.
    [root@TVM1 ~]# rabbitmqctl cluster_status
    Cluster status of node rabbit@TVM1 ...
    [{nodes,[{disc,[rabbit@TVM1,rabbit@TVM2,rabbit@TVM3]}]},
     {running_nodes,[rabbit@TVM2,rabbit@TVM3,rabbit@TVM1]},
     {cluster_name,<<"rabbit@TVM1">>},
     {partitions,[{rabbit@TVM2,[rabbit@TVM1,rabbit@TVM3]},
                  {rabbit@TVM3,[rabbit@TVM1,rabbit@TVM2]}]},
     {alarms,[{rabbit@TVM2,[]},{rabbit@TVM3,[]},{rabbit@TVM1,[]}]}]
    systemctl stop mysqld
    systemctl start mysqld
    systemctl stop mysqld
    cat /var/lib/mysql/grastate.dat
    
    # GALERA saved state
    version: 2.1
    uuid:    353e129f-11f2-11eb-b3f7-76f39b7b455d
    seqno:   213576545367
    safe_to_bootstrap: 1
    galera_new_cluster
    systemctl start mysqld
    MariaDB [(none)]> show status like 'wsrep_incoming_addresses';
    +--------------------------+-------------------------------------------------+
    | Variable_name            | Value                                           |
    +--------------------------+-------------------------------------------------+
    | wsrep_incoming_addresses | 10.10.2.13:3306,10.10.2.14:3306,10.10.2.12:3306 |
    +--------------------------+-------------------------------------------------+
    1 row in set (0.01 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_size';
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 3     |
    +--------------------+-------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_state_uuid';
    +--------------------------+--------------------------------------+
    | Variable_name            | Value                                |
    +--------------------------+--------------------------------------+
    | wsrep_cluster_state_uuid | 353e129f-11f2-11eb-b3f7-76f39b7b455d |
    +--------------------------+--------------------------------------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_local_state_comment';
    +---------------------------+--------+
    | Variable_name             | Value  |
    +---------------------------+--------+
    | wsrep_local_state_comment | Synced |
    +---------------------------+--------+
    1 row in set (0.01 sec)
    juju ssh <workloadmgr unit name>/<unit-number>
    Systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron
    juju ssh <workloadmgr unit name>/<unit-number>
    Systemctl restart wlm-api wlm-scheduler wlm-workloads
    juju ssh <workloadmgr unit name>/<unit-number>
    crm_resource --restart -r res_trilio_wlm_wlm_cron
    docker restart trilio_dmapi
    podman restart trilio_dmapi
    juju ssh <trilio-dm-api unit name>/<unit-number>
    sudo systemctl restart tvault-datamover-api
    docker restart triliovault_datamover_api
    lxc-stop -n <dmapi container name>
    lxc-start -n <dmapi container name>
    docker restart trilio_datamover
    podman restart trilio_datamover
    juju ssh <trilio-data-mover unit name>/<unit-number>
    sudo systemctl restart tvault-contego
    docker restart triliovault_datamover
    service tvault-contego restart
    {
       "mount":{
          "mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
          "options":{
             
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 16:03:49 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4
    {
       "mount": 
          {
              "options": null
          }
    }
wlm-cron should be disabled (the following sets of commands are to be run on the MAAS node).
  • If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode (this command will fail for a single node deployment).

    TrilioVault data protection — charm-guide 0.0.1.dev818 documentationdocs.openstack.orgchevron-right
    hashtag
    Deliverables against T4O-4.3.2

    Package/Container Names

    Package Kind

    Package Versions

    contego

    deb

    4.2.64

    s3-fuse-plugin

    deb

    4.3.1.2

    python3-s3-fuse-plugin

    deb

    4.3.1.2

    tvault-contego

    deb

    4.3.1.3

    hashtag
    Containers and Gitbranch

    Name

    Tag

    Gitbranch

    4.3.2

    RHOSP13 containers

    4.3.2-rhosp13

    RHOSP16.1 containers

    4.3.2-rhosp16.1

    RHOSP16.2 containers

    4.3.2-rhosp16.2

    RHOSP17.0 containers

    4.3.2-rhosp17.0

    Kolla Ansible Victoria containers

    4.3.2-victoria

    hashtag
    Changelog

    • Issues reported by customers.

    hashtag
    Fixed Bugs and issues

    1. Snapshot listing (workload open) on UI doesn't happen if snapshot count is > 1000.

    2. FileManager caches old snapshot details.

    3. Custom horizon integration - Canonical.

    4. Restore failing with error Failed restoring snapshot: RetryError[<Future at 0x7fc607693c18 state=finished raised StorageFailure>.

    5. Duplicate security groups get created post restore of a VM.

    6. kolla ansible yoga 4.3.0 deployment fails with error "Unsupported parameters for (kolla_container_facts) module: container_engine. Supported parameters include: name, api_version".

    hashtag
    Known issues

1. Import of multiple workloads does not get distributed evenly across all nodes

    Observation : When the import of multiple workloads is triggered, it is expected that the system evenly divides the load across all available nodes in a round-robin fashion. However, currently the import runs on any one of the available nodes.

    2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive, but only the first time

    Observation : For workloads with a large number of VMs and a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout), and hence this issue might be observed.

    Comments:

    1. If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.

    2. The users are advised to wait for a couple of minutes before they recheck the snapshot details.

    3. Once the details of the snapshot are visible, the restore operation can be carried out.

    3. Import of ALL workloads without specifying any workload ID is NOT recommended

    Observation : If a user runs the workload import command without any workload ID, expecting that all eligible workloads will get imported, the command execution takes longer than expected.

    Comments:

    1. As long as the import command has not returned, it is expected to be running; if successful, it will return the job ID, otherwise it will throw an error message.

    2. The execution time may vary based on the number of workloads present in the backend target.

    3. It is recommended to run this command with specific workload IDs.

    4. To import ALL workloads at once, all workload IDs must be provided as parameters to the import CLI command. The procedure is described below.

    4. Deleting snapshots show status as available in Horizon UI

    Observation : A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.

    Workaround:

    Wait for some time until all the delete operations complete. Eventually all the snapshots will be deleted successfully.

    circle-info

    Kolla Ansible environments running on CentOS8 are receiving continuous limited support. This means that future updates from Trilio for Kolla Ansible environments on CentOS8 will use the latest available CentOS8 base containers and only the Trilio for OpenStack code gets updated. When the Kolla Ansible community provides CentOS Stream based containers, Trilio will provide CentOS Stream based containers as well.

    circle-info

    Ansible OpenStack reached its End of Support from Trilio starting 2nd March 2023.

    hashtag
    Trilio for OpenStack Compatibility Matrix

    Trilio Release   RHOSP                  Canonical                                                    Kolla                          TripleO
    4.3.X            17.0, 16.2, 16.1, 13   Zed, Yoga, Wallaby, Victoria, Ussuri, Train, Stein, Queens   Zed, Yoga, Wallaby, Victoria   Train

    NFS & S3 Support:

    All versions of Trilio for OpenStack support NFSv3 and S3 as backup targets.

    Barbican Support:

    Supported in RHOSP 16.1, 16.2, 17.0, Canonical Ussuri, Victoria, Wallaby, Yoga, Zed, Kolla Victoria, Wallaby, Yoga.

    Not Supported in RHOSP 13, Canonical Queens, Stein, Train, TripleO Train.

    Supported OS:

    RHEL7, RHEL8, RHEL9, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 20.04 source image, Ubuntu 22.04, CentOS7, CentOS Stream 8, CentOS Linux 8, Ubuntu 20.04 binary image, CentOS Stream 8 binary image, CentOS Stream 8 source image.

    Deployment:

    Deploy using each distribution's corresponding deployment toolset, including Red Hat Director, JuJu Charms, Ansible, and Director.

    hashtag
    Compatibility Matrix Detailed View

    Distribution/Version   Trilio 4.3.X   OS      Barbican Support   NFS Support   S3 Support          Deployment

    RHOSP 13               Yes            RHEL7   Not supported      NFSv3         AWS S3 compatible   Red Hat Director

    Install workloadmgr CLI client

    hashtag
    About the workloadmgr CLI client

    The workloadmgr CLI client is provided as rpm and deb packages.

    It has been tested against the following operating systems:

    • CentOS7, CentOS8

    • Ubuntu 18.04, Ubuntu 20.04

    Installing the workloadmgr client will automatically install all required OpenStack clients as well.

    Furthermore, the installation of the workloadmgr client integrates it into the global openstack python client, if available.

    circle-info

    The required connection strings and package names can be found on the Trilio Dashboard under the Downloads tab.

    hashtag
    Install workloadmgr client rpm package on CentOS7/8

    The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.

    hashtag
    Preparing the workloadmgr client installation

    The following steps need to be done to prepare the installation of the workloadmgr client:

    1. Add required repositories

      1. epel-release

      2. for CentOS7: centos-release-openstack-stein

    circle-info

    These repositories are required to fulfill the following dependencies:

    On CentOS7 Python2: python-pbr,python-prettytable,python2-requests,python2-simplejson,python2-six,pytz,PyYAML,python2-openstackclient

    On CentOS8 Python3: python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient

    hashtag
    Installing the workloadmgr client

    There are two ways in which the workloadmgr client packages can be installed.

    hashtag
    Download from the Trilio Appliance and install directly

    The Trilio appliance ships with the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For CentOS7: wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

    For CentOS8: wget http://<TVM-IP>:8085/yum-repo/queens/python3-workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

    circle-info

    To identify the Trilio Version and Trilio release, log in to the Trilio Dashboard and check the upper left corner.

    The yum package manager is used to install the workloadmgr client package:

    yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

    An example installation can be found below:

    hashtag
    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

    Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo and enter the following details into the repository file:

    Install the workloadmgr client issuing the following command:

    For CentOS7: yum install workloadmgrclient
    For CentOS8: yum install python3-workloadmgrclient-el8

    An example installation can be found below:

    hashtag
    Install workloadmgr client deb packages on Ubuntu

    The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.

    hashtag
    Preparing the workloadmgr client installation

    There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.

    hashtag
    Installing the Workloadmgr client

    There are two ways in which the workloadmgr client packages can be installed.

    hashtag
    Download from the Trilio Appliance and install directly

    The Trilio appliance ships with the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For Python2: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python-workloadmgrclient_<Trilio-Version>_all.deb

    For Python3: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb

    circle-info

    To identify the Trilio Version and Trilio release, log in to the Trilio Dashboard and check the upper left corner.

    The apt package manager is used to install the workloadmgr client package:

    For Python2: apt-get install ./python-workloadmgrclient_<Trilio-Version>_all.deb -y
    For Python3: apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y

    An example installation can be found below:

    hashtag
    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

    Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list and enter the following details into the repository file:

    Run apt update to make the new repository available.

    The apt package manager is used to install the workloadmgr client package:

    For Python2: apt-get install python-workloadmgrclient
    For Python3: apt-get install python3-workloadmgrclient

    An example installation can be seen below:

    Healthcheck of Trilio

    Trilio is composed of multiple services, which can be checked in case of any errors.

    hashtag
    Verify the Trilio Appliance

    hashtag

    Post Installation Health-Check

After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be taken to verify that the Trilio installation is healthy.

    hashtag
    Verify the Trilio Appliance

    hashtag

    Snapshot Mount

    hashtag
    Definition

    Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot, follow these steps.

    hashtag

    juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=true"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
    juju exec [-m <model>] --application trilio-wlm "sudo ps -ef | grep [w]orkloadmgr-cron"
    deb [trusted=yes] https://apt.fury.io/trilio-4-3/ /
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-horizon-plugin python-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-contego s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient python3-s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-horizon-plugin python3-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-contego python3-s3-fuse-plugin'
    trilio-data-mover      <package version>  active      3  trilio-data-mover      jujucharms    8  ubuntu
    trilio-dm-api          <package version>  active      1  trilio-dm-api          jujucharms    5  ubuntu
    trilio-horizon-plugin  <package version>  active      1  trilio-horizon-plugin  jujucharms    4  ubuntu
    trilio-wlm             <package version>  active      3  trilio-wlm             jujucharms    7  ubuntu
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini upgrade heads"
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini current"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
    juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=false"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl status wlm-cron"
    Log in to the trilio unit and run "sudo dpkg --configure -a".
    It will ask for user input; hit enter and log out from the unit.
    From the MAAS node run the command "juju resolve <trilio unit name>"
    	i. Before proceeding with an upgrade OR reinitialize, fetch the list of ALL workload IDs which are NOT in error OR deleted state from the database.
    		Query : select id from workloads where status not in ('deleted','error')
    	ii. Use the IDs from this list to create the import CLI command parameters.
    		Sample : --workloadids <wl_id1> --workloadids <wl_id2> …. etc. Shell command below to do the same.
    		wlIdList.txt to have all workload IDs; one ID per line.
    	iii. awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n'
    	iv. Append the output of the above command to the import command.
    		workloadmgr workload-importworkloads <Command output>
    		Eg : workloadmgr workload-importworkloads --workloadids ff24945f-7bef-498d-98eb-d727ec85bc7b --workloadids a15948b4-942c-47e2-85c5-06cad697010f
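    The parameter-building steps above can be exercised locally before running the actual import; the two workload IDs below are just the sample IDs from the example:

    ```shell
    # Build the --workloadids parameter string from a file containing one
    # workload ID per line (step iii above). The IDs are sample values.
    cat > wlIdList.txt <<'EOF'
    ff24945f-7bef-498d-98eb-d727ec85bc7b
    a15948b4-942c-47e2-85c5-06cad697010f
    EOF

    # Convert the ID list into repeated --workloadids flags on one line.
    PARAMS=$(awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n')
    echo "$PARAMS"

    # The resulting string is then appended to the import command, e.g.:
    #   workloadmgr workload-importworkloads $PARAMS
    ```

    This keeps the ID list in a file, so the same list can be reviewed and reused if the import has to be rerun.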

    python3-tvault-contego              deb      4.3.1.3
    dmapi                               deb      4.3.1
    contegoclient                       deb      4.3.1
    python3-dmapi                       deb      4.3.1
    python3-contegoclient               deb      4.3.1
    tvault-horizon-plugin               deb      4.3.2.2
    python3-tvault-horizon-plugin       deb      4.3.2.2
    python-workloadmgrclient            deb      4.3.6
    python3-workloadmgrclient           deb      4.3.6
    workloadmgr                         deb      4.3.8.4
    python3-namedatomiclock             deb      1.1.3
    tvault-contego                      python   4.3.1.3
    s3fuse                              python   4.3.2.2
    dmapi                               python   4.3.2
    contegoclient                       python   4.3.2
    tvault-horizon-plugin               python   4.3.3.2
    tvault_configurator                 python   4.3.7
    workloadmgrclient                   python   4.3.7
    workloadmgr                         python   4.3.9.4
    trilio-fusepy                       rpm      3.0.1-1
    python3-trilio-fusepy               rpm      3.0.1-1
    python3-trilio-fusepy-el9           rpm      3.0.1-1
    puppet-triliovault                  rpm      4.2.64-4.2
    python3-s3fuse-plugin               rpm      4.3.1.2-4.3
    python3-s3fuse-plugin-el9           rpm      4.3.1.2-4.3
    python-s3fuse-plugin-cent7          rpm      4.3.1.2-4.3
    tvault-contego                      rpm      4.3.1.3-4.3
    python3-tvault-contego              rpm      4.3.1.3-4.3
    python3-tvault-contego-el9          rpm      4.3.1.3-4.3
    dmapi                               rpm      4.3.1-4.3
    contegoclient                       rpm      4.3.1-4.3
    python3-dmapi                       rpm      4.3.1-4.3
    python3-dmapi-el9                   rpm      4.3.1-4.3
    python3-contegoclient-el8           rpm      4.3.1-4.3
    python3-contegoclient-el9           rpm      4.3.1-4.3
    tvault-horizon-plugin               rpm      4.3.2.2-4.3
    python3-tvault-horizon-plugin-el8   rpm      4.3.2.2-4.3
    python3-tvault-horizon-plugin-el9   rpm      4.3.2.2-4.3
    workloadmgrclient                   rpm      4.3.6-4.3
    python3-workloadmgrclient-el8       rpm      4.3.6-4.3
    python3-workloadmgrclient-el9       rpm      4.3.6-4.3

    Kolla Ansible Wallaby containers    4.3.2-wallaby
    Kolla Yoga Containers               4.3.2-yoga
    Kolla Zed Containers                4.3.2-zed
    TripleO Containers                  4.3.2-tripleo

    RHOSP 16.1             Yes            RHEL8                           Supported          NFSv3         AWS S3 compatible   Red Hat Director
    RHOSP 16.2             Yes            RHEL8                           Supported          NFSv3         AWS S3 compatible   Red Hat Director
    RHOSP 17.0             Yes            RHEL9                           Supported          NFSv3         AWS S3 compatible   Red Hat Director
    Canonical Queens       Yes            Ubuntu 18.04                    Not supported      NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Stein        Yes            Ubuntu 18.04                    Not supported      NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Train        Yes            Ubuntu 18.04                    Not supported      NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Ussuri       Yes            Ubuntu 18.04/20.04              Supported          NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Victoria     Yes            Ubuntu 20.04                    Supported          NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Wallaby      Yes            Ubuntu 20.04                    Supported          NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Yoga         Yes            Ubuntu 20.04/22.04              Supported          NFSv3         AWS S3 compatible   JuJu Charms
    Canonical Zed          Yes            Ubuntu 22.04                    Supported          NFSv3         AWS S3 compatible   JuJu Charms
    Kolla Victoria         Yes            Ubuntu 20.04, CentOS Linux 8    Supported          NFSv3         AWS S3 compatible   Ansible
    Kolla Wallaby          Yes            Ubuntu 20.04, CentOS Stream 8   Supported          NFSv3         AWS S3 compatible   Ansible
    Kolla Yoga             Yes            Ubuntu 20.04, CentOS Stream 8   Supported          NFSv3         AWS S3 compatible   Ansible
    Kolla Zed              Yes            Ubuntu 22.04, Rocky9            Supported          NFSv3         AWS S3 compatible   Ansible
    TripleO Train          Yes            CentOS7                         Not Supported      NFSv3         AWS S3 compatible   Director


    for CentOS8: centos-release-openstack-train

  • install base packages

    1. yum -y install epel-release

    2. for CentOS7: yum -y install centos-release-openstack-stein

    3. for CentOS8: yum -y install centos-release-openstack-train

  • Verify the services are up

    Trilio is using 4 main services on the Trilio Appliance:

    • wlm-api

    • wlm-scheduler

    • wlm-workloads

    • wlm-cron

    Those can be verified to be up and running using the systemctl status command.
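    As a minimal status loop, assuming the four services are registered with systemd under exactly these unit names:

    ```shell
    # Query each Trilio appliance service; prints "active" for a healthy
    # service, or "unknown" when the unit cannot be queried.
    SERVICES="wlm-api wlm-scheduler wlm-workloads wlm-cron"
    for svc in $SERVICES; do
        state=$(systemctl is-active "$svc" 2>/dev/null)
        echo "$svc: ${state:-unknown}"
    done
    ```

    `systemctl status <unit>` gives the same information with full log context for any service that does not report active.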

    hashtag
    Check the Trilio pacemaker and nginx cluster

    The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.

    hashtag
    Verify API connectivity of the Trilio Appliance

    Checking the availability of the Trilio API on the chosen endpoints is recommended.

    The following example curl command lists the available workload-types and verifies that the connection is available and working:

    circle-info

    Please check the API guide for more commands and how to generate the X-Auth-Token.
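    A sketch of such a check is below; the endpoint layout and port 8780 are assumptions to verify against your Keystone service catalog, and the IP and tenant ID are placeholders:

    ```shell
    # Compose the workload-types listing URL; TVM_IP and TENANT_ID are
    # placeholders, and the path layout is an assumption to verify
    # against the wlm endpoint registered in Keystone.
    TVM_IP="<TVM-IP>"
    TENANT_ID="<tenant-id>"
    URL="http://${TVM_IP}:8780/v1/${TENANT_ID}/workload_types/detail"
    echo "$URL"

    # With a valid token from Keystone, the check itself would be:
    #   curl -s -H "X-Auth-Token: $TOKEN" "$URL"
    ```

    A 200 response with a JSON list of workload types confirms the API endpoint is reachable and authentication works.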

    hashtag
    Verify Trilio components on OpenStack

    hashtag
    On OpenStack Ansible

    hashtag
    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    To check the dmapi service, go to the dmapi container residing on the controller nodes and run the command below:

    hashtag
    Datamover service (tvault-contego)

    The datamover service runs on each compute node. Log in to a compute node and run the command below:
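    A sketch of such a check, assuming the datamover runs as the systemd unit tvault-contego (the usual default on OpenStack Ansible deployments):

    ```shell
    # Report the datamover service state on a compute node; prints
    # "unknown" when the unit cannot be queried on this host.
    state=$(systemctl is-active tvault-contego 2>/dev/null)
    echo "tvault-contego: ${state:-unknown}"
    ```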

    hashtag
    On Kolla Ansible OpenStack

    hashtag
    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.

    hashtag
    Datamover service (tvault-contego)

    Run the following command on "nova-compute" nodes and make sure the container is in a started state.

    hashtag
    Trilio Horizon integration

    Run the following command on horizon nodes and make sure the container is in a started state.

    hashtag
    On Canonical OpenStack

    Run the following command on the MAAS nodes and make sure all Trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in an active state.

    hashtag
    On Red Hat OpenStack and TripleO

    hashtag
    On controller node

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    hashtag
    On compute node

    Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.

    hashtag
    On overcloud

    Please check dmapi endpoints on overcloud node.


    [root@controller ~]# wget http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    --2021-03-08 15:36:37--  http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    Connecting to 10.10.2.15:8085... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 155976 (152K) [application/x-rpm]
    Saving to: ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’
    
    100%[======================================>] 1,55,976    --.-K/s   in 0.001s
    
    2021-03-08 15:36:37 (125 MB/s) - ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’ saved [155976/155976]
    
    [root@controller ~]# yum install workloadmgrclient-4.0.115-4.0.noarch.rpm
    Loaded plugins: fastestmirror
    Examining workloadmgrclient-4.0.115-4.0.noarch.rpm: workloadmgrclient-4.0.115-4.0.noarch
    Marking workloadmgrclient-4.0.115-4.0.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.115-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package         Arch   Version     Repository                             Size
    ================================================================================
    Installing:
     workloadmgrclient
                     noarch 4.0.115-4.0 /workloadmgrclient-4.0.115-4.0.noarch 700 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package
    
    Total size: 700 k
    Installed size: 700 k
    Is this ok [y/d/N]: y
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : workloadmgrclient-4.0.115-4.0.noarch                         1/1
      Verifying  : workloadmgrclient-4.0.115-4.0.noarch                         1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.115-4.0
    
    Complete!
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-<Trilio-Release>/yum/
    enabled=1
    gpgcheck=0
    [root@controller ~]# cat /etc/yum.repos.d/trilio.repo
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-4.0/yum/
    enabled=1
    gpgcheck=0
    
    [root@controller ~]# yum install workloadmgrclient
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * base: centos-canada.vdssunucu.com.tr
     * centos-ceph-nautilus: mirror.its.dal.ca
     * centos-nfs-ganesha28: centos.mirror.colo-serv.net
     * centos-openstack-train: centos-canada.vdssunucu.com.tr
     * centos-qemu-ev: centos-canada.vdssunucu.com.tr
     * extras: centos-canada.vdssunucu.com.tr
     * updates: centos-canada.vdssunucu.com.tr
    base                                                | 3.6 kB  00:00:00
    centos-ceph-nautilus                                | 3.0 kB  00:00:00
    centos-nfs-ganesha28                                | 3.0 kB  00:00:00
    centos-openstack-train                              | 3.0 kB  00:00:00
    centos-qemu-ev                                      | 3.0 kB  00:00:00
    extras                                              | 2.9 kB  00:00:00
    trilio                                              | 2.9 kB  00:00:00
    updates                                             | 2.9 kB  00:00:00
    (1/3): extras/7/x86_64/primary_db                   | 225 kB  00:00:00
    (2/3): centos-openstack-train/7/x86_64/primary_db   | 1.1 MB  00:00:00
    (3/3): updates/7/x86_64/primary_db                  | 5.7 MB  00:00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.116-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ===========================================================================================================================================================================================================================================================================================================================================================================================================================================
     Package                                                                                                         Arch                                                                                                 Version                                                                                                   Repository                                                                                            Size
    ===========================================================================================================================================================================================================================================================================================================================================================================================================================================
    Installing:
     workloadmgrclient                                                                                               noarch                                                                                               4.0.116-4.0                                                                                               trilio                                                                                               152 k
    
    Transaction Summary
    ===========================================================================================================================================================================================================================================================================================================================================================================================================================================
    Install  1 Package
    
    Total download size: 152 k
    Installed size: 700 k
    Is this ok [y/d/N]: y
    Downloading packages:
    workloadmgrclient-4.0.116-4.0.noarch.rpm                                                                                                                                                                                                                                                                                                                                                                            | 152 kB  00:00:00
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : workloadmgrclient-4.0.116-4.0.noarch                                                                                                                                                                                                                                                                                                                                                                                    1/1
      Verifying  : workloadmgrclient-4.0.116-4.0.noarch                                                                                                                                                                                                                                                                                                                                                                                    1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.116-4.0
    
    Complete!
    root@ubuntu:~# curl -Og6 http://10.10.2.15:8085/deb-repo/deb-repo/python3-workloadmgrclient_4.0.115_all.deb
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  116k  100  116k    0     0   899k      0 --:--:-- --:--:-- --:--:--  982k
    
    root@ubuntu:~# apt-get install ./python3-workloadmgrclient_4.0.115_all.deb -y
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'python3-workloadmgrclient' instead of './python3-workloadmgrclient_4.0.115_all.deb'
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...
# /etc/apt/sources.list.d/fury.list entry (replace <Trilio-Version> with your release, e.g. 4-0):
deb [trusted=yes] https://apt.fury.io/triliodata-<Trilio-Version>/ /
    root@ubuntu:~# cat /etc/apt/sources.list.d/fury.list
    deb [trusted=yes] https://apt.fury.io/triliodata-4-0/ /
    
    root@ubuntu:~# apt update
    Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
    Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
    Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
    Ign:5 https://apt.fury.io/triliodata-4-0  InRelease
    Ign:6 https://apt.fury.io/triliodata-4-0  Release
    Ign:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Get:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Fetched 84.0 kB in 12s (6930 B/s)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    All packages are up to date.
    
    root@ubuntu:~# apt-get install python3-workloadmgrclient
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...
    systemctl | grep wlm
      wlm-api.service          loaded active running   workloadmanager api service
      wlm-cron.service         loaded active running   Cluster Controlled wlm-cron
      wlm-scheduler.service    loaded active running   Cluster Controlled wlm-scheduler
      wlm-workloads.service    loaded active running   workloadmanager workloads service
    systemctl status wlm-api
    ######
    ● wlm-api.service - workloadmanager api service
       Loaded: loaded (/etc/systemd/system/wlm-api.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:41:19 UTC; 2 months 21 days ago
     Main PID: 4688 (workloadmgr-api)
       CGroup: /system.slice/wlm-api.service
               ├─4688 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9342 (workloadmgr-sch)
       CGroup: /system.slice/wlm-scheduler.service
               └─9342 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:51:05 UTC; 2 months 21 days ago
     Main PID: 606 (workloadmgr-wor)
       CGroup: /system.slice/wlm-workloads.service
               ├─ 606 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-cron
    ######
    ● wlm-cron.service - Cluster Controlled wlm-cron
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-cron.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9209 (workloadmgr-cro)
       CGroup: /system.slice/wlm-cron.service
               ├─9209 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: TVM1 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
    Last updated: Mon Jan 24 13:42:01 2022
    Last change: Tue Nov  2 19:07:04 2021 by root via crm_resource on TVM2
    
    3 nodes configured
    9 resources configured
    
    Online: [ TVM1 TVM2 TVM3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started TVM2
     wlm-cron       (systemd:wlm-cron):     Started TVM2
     wlm-scheduler  (systemd:wlm-scheduler):        Started TVM2
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ TVM2 ]
         Stopped: [ TVM1 TVM3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    curl http://10.10.2.34:8780/v1/8e16700ae3614da4ba80a4e57d60cdb9/workload_types/detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-workloadmgrclient" -H "Accept: application/json" -H "X-Auth-Token: gAAAAABe40NVFEtJeePpk1F9QGGh1LiGnHJVLlgZx9t0HRrK9rC5vqKZJRkpAcW1oPH6Q9K9peuHiQrBHEs1-g75Na4xOEESR0LmQJUZP6n37fLfDL_D-hlnjHJZ68iNisIP1fkm9FGSyoyt6IqjO9E7_YVRCTCqNLJ67ZkqHuJh1CXwShvjvjw
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
lxc-attach -n <dmapi-container-name>  (go to dmapi container)
    root@controller-dmapi-container-08df1e06:~# systemctl status tvault-datamover-api.service
    ● tvault-datamover-api.service - TrilioData DataMover API service
         Loaded: loaded (/lib/systemd/system/tvault-datamover-api.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 11:53:39 UTC; 1 day 17h ago
       Main PID: 23888 (dmapi-api)
          Tasks: 289 (limit: 57729)
         Memory: 607.7M
         CGroup: /system.slice/tvault-datamover-api.service
                 ├─23888 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23893 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23894 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23895 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23896 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23897 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23898 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23899 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23900 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23901 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23902 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23903 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23904 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23905 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23906 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23907 /usr/bin/python3 /usr/bin/dmapi-api
                 └─23908 /usr/bin/python3 /usr/bin/dmapi-api
    
    Jan 12 11:53:39 controller-dmapi-container-08df1e06 systemd[1]: Started TrilioData DataMover API service.
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    
    root@compute:~# systemctl status tvault-contego
    ● tvault-contego.service - Tvault contego
         Loaded: loaded (/etc/systemd/system/tvault-contego.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-01-14 05:45:19 UTC; 2s ago
       Main PID: 1489651 (python3)
          Tasks: 19 (limit: 67404)
         Memory: 6.7G (max: 10.0G)
         CGroup: /system.slice/tvault-contego.service
                 ├─ 998543 /bin/qemu-nbd -c /dev/nbd45 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998772 /bin/qemu-nbd -c /dev/nbd73 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998931 /bin/qemu-nbd -c /dev/nbd100 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─ 999147 /bin/qemu-nbd -c /dev/nbd35 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─1371322 /bin/qemu-nbd -c /dev/nbd63 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─1371524 /bin/qemu-nbd -c /dev/nbd91 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 └─1489651 /openstack/venvs/nova-22.3.1/bin/python3 /usr/bin/tvault-contego --config-file=/etc/nova/nova.conf --config-file=/etc/tvault-contego/tvault-cont>
    
    Jan 14 05:45:19 compute systemd[1]: Started Tvault contego.
    Jan 14 05:45:20 compute sudo[1489653]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/nova-22.3.1/bin/nova-rootwrap /etc/nova/rootwrap.conf umou>
    Jan 14 05:45:20 compute sudo[1489653]: pam_unix(sudo:session): session opened for user root by (uid=0)
    Jan 14 05:45:21 compute python3[1489655]: umount: /var/triliovault-mounts/VHJpbGlvVmF1bHQ=: no mount point specified.
    Jan 14 05:45:21 compute sudo[1489653]: pam_unix(sudo:session): session closed for user root
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] CPU Control group m>
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] I/O Control Group m>
    lines 1-22/22 (END)
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    3f979c15cedc   trilio/centos-binary-trilio-datamover-api:4.2.50-victoria                     "dumb-init --single-…"   3 days ago    Up 3 days                         triliovault_datamover_api
    [root@compute1 ~]# docker ps | grep triliovault_datamover
    2f1ece820a59   trilio/centos-binary-trilio-datamover:4.2.50-victoria                        "dumb-init --single-…"   3 days ago    Up 3 days                        triliovault_datamover
    [root@controller ~]# docker ps | grep horizon
    4a004c786d47   trilio/centos-binary-trilio-horizon-plugin:4.2.50-victoria                    "dumb-init --single-…"   3 days ago    Up 3 days (unhealthy)             horizon
    root@jujumaas:~# juju status | grep trilio
    trilio-data-mover       4.2.51   active       3  trilio-data-mover       jujucharms    9  ubuntu
    trilio-dm-api           4.2.51   active       1  trilio-dm-api           jujucharms    7  ubuntu
    trilio-horizon-plugin   4.2.51   active       1  trilio-horizon-plugin   jujucharms    6  ubuntu
    trilio-wlm              4.2.51   active       1  trilio-wlm              jujucharms    9  ubuntu
      trilio-data-mover/8        active    idle            172.17.1.5                         Unit is ready
      trilio-data-mover/6        active    idle            172.17.1.6                         Unit is ready
      trilio-data-mover/7*       active    idle            172.17.1.7                         Unit is ready
      trilio-horizon-plugin/2*   active    idle            172.17.1.16                        Unit is ready
    trilio-dm-api/2*             active    idle   1/lxd/4  172.17.1.27     8784/tcp           Unit is ready
    trilio-wlm/2*                active    idle   7        172.17.1.28     8780/tcp           Unit is ready
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp16 onwards/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain1-controller-0 heat-admin]# podman ps | grep trilio-
    e3530d6f7bec  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:4.2.47-rhosp16.1           kolla_start           2 weeks ago   Up 2 weeks ago          trilio_dmapi
    f93f7019f934  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:4.2.47-rhosp16.1          kolla_start           2 weeks ago   Up 2 weeks ago          horizon
    On rhosp13 OS:	docker ps | grep trilio-
On other (rhosp16 onwards/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    
    Supported File Recovery Manager Image

    | Cloud Image Name | Version        | Supported |
    | ---------------- | -------------- | --------- |
    | Ubuntu           | Bionic (18.04) | ✔️        |
    | Ubuntu           | Focal (20.04)  | ✔️        |
    | Centos           | Centos8        | ✔️        |
    | Centos           |                |           |

    hashtag
    Create a File Recovery Manager Instance

    circle-check

    It is recommended to apply these steps once to the chosen cloud image and then upload the modified cloud image to Glance.

    • Create an Openstack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.

    • Spin up an instance from that image. It is recommended to have at least 8GB RAM for the mount operation; bigger Snapshots can require more RAM.
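The metadata parameter referenced later in this guide is tvault_recovery_manager=yes. A hedged sketch of creating and tagging such an image with the standard OpenStack CLI follows; the image name and file name are placeholders:

```shell
# Placeholder names; upload the prepared cloud image with the Trilio property set
openstack image create --disk-format qcow2 --container-format bare \
  --property tvault_recovery_manager=yes \
  --file ./frm-cloudimg.qcow2 frm-image

# Or tag an already existing Glance image
openstack image set --property tvault_recovery_manager=yes frm-image
```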

    hashtag
    Steps to apply on CentOS and RHEL cloud-images

    • Install and activate qemu-guest-agent

    • Edit /etc/sysconfig/qemu-ga and remove the following from BLACKLIST_RPC section

    • Disable SELINUX in /etc/sysconfig/selinux

    • Install python3 and lvm2

    • Reboot the Instance
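The steps above can be outlined as follows. This is a minimal sketch assuming default CentOS/RHEL repositories; the exact BLACKLIST_RPC entries to remove depend on the qemu-guest-agent version, so that edit is indicated only as a comment:

```shell
# Install and activate qemu-guest-agent
sudo yum install -y qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

# Manually edit /etc/sysconfig/qemu-ga and remove the required entries
# from the BLACKLIST_RPC section (guest-file-* RPCs must not stay blacklisted)

# Disable SELinux
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

# Install python3 and lvm2, then reboot
sudo yum install -y python3 lvm2
sudo reboot
```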

    hashtag
    Steps to apply on Ubuntu cloud-images

    • Install and activate qemu-guest-agent

    • Verify the loaded path of qemu-guest-agent

    hashtag
    Loaded path init.d (Ubuntu 18.04)

    Follow this path when systemctl returns the following loaded path

    Edit /etc/init.d/qemu-guest-agent and add the Freeze-Hook file path to the daemon args

    hashtag
    Loaded path systemd (Ubuntu 20.04)

    Follow this path when systemctl returns the following loaded path

    Edit qemu-guest-agent systemd file

    Add the following lines

    hashtag
    Finalize the FRM on Ubuntu

    • Restart qemu-guest-agent service

    • Install Python3

    • Reboot the VM
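A minimal sketch of these finalization steps on Ubuntu:

```shell
# Restart the agent so the changed daemon args take effect
sudo systemctl restart qemu-guest-agent

# Install Python3, then reboot the VM
sudo apt-get update && sudo apt-get install -y python3
sudo reboot
```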

    hashtag
    Mounting a Snapshot

    Mounting a Snapshot to a File Recovery Manager provides read access to all data located in the mounted Snapshot.

    triangle-exclamation

    It is possible to run the mounting process against any Openstack instance; the instance will be rebooted during this process.

    Always mount Snapshots to File Recovery Manager instances only.

    circle-info

    To be able to successfully mount Windows (NTFS) Snapshots, NTFS filesystem support is required on the File Recovery Manager instance.

    circle-exclamation

    Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.

    hashtag
    Using Horizon

    There are two ways to mount a Snapshot in Horizon.

    hashtag
    Through the Snapshot list

    To mount a Snapshot through the Snapshot list follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

    9. Click "Mount Snapshot"

    10. Choose the File Recovery Manager instance to mount to

    11. Confirm by clicking "Mount"

    circle-exclamation

    If all instances of the project are listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    hashtag
    Through the File Search results

    To mount a Snapshot through the File Search results follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the File Search tab

    7. Identify the Snapshot to be mounted

    8. Click "Mount Snapshot" for the chosen Snapshot

    9. Choose the File Recovery Manager instance to mount to

    10. Confirm by clicking "Mount"

    circle-exclamation

    If all instances of the project are listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    hashtag
    Using CLI

    • <snapshot_id> ➡️ ID of the Snapshot to be mounted

    • <mount_vm_id> ➡️ ID of the File Recovery Manager instance to mount the Snapshot to.
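A sketch of the mount command, assuming the workloadmgr CLI syntax of this release:

```shell
# Mount the snapshot to the chosen File Recovery Manager instance
workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
```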

    hashtag
    Accessing the File Recovery Manager

    The File Recovery Manager is a normal Linux based Openstack instance.

    It can be accessed via SSH or SSH based tools like FileZilla or WinSCP.

    circle-info

    SSH login is often disabled by default in cloud-images. Enable SSH login if necessary.

    The mounted Snapshot can be found at the following path:

    /home/ubuntu/tvault-mounts/mounts/

    Each VM in the Snapshot has its own directory using the VM_ID as the identifier.

    hashtag
    Identifying mounted Snapshots

    Sometimes a Snapshot stays mounted for a long time, and it becomes necessary to identify which Snapshots are mounted.

    hashtag
    Using Horizon

    There are two ways to identify mounted Snapshots inside Horizon.

    hashtag
    From the File Recovery Manager instance Metadata

    1. Login to Horizon

    2. Navigate to Compute

    3. Navigate to Instances

    4. Identify the File Recovery Manager Instance

    5. Click on the Name of the File Recovery Manager Instance to bring up its details

    6. On the Overview tab look for Metadata

    7. Identify the value for mounted_snapshot_url

    The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.

    circle-info

    This value only gets updated when a new Snapshot is mounted.

    hashtag
    From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Search for the Snapshot that has the option "Unmount Snapshot"

    hashtag
    Using CLI

    • --workloadid <workloadid> ➡️ Restrict the list to snapshots in the provided workload
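A sketch of the list command, assuming the workloadmgr CLI syntax of this release:

```shell
# List mounted snapshots, optionally restricted to one workload
workloadmgr snapshot-mounted-list --workloadid <workloadid>
```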

    hashtag
    Unmounting a Snapshot

    Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.

    circle-info

    Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows Trilio retention policy to purge the former mounted Snapshot.

    circle-exclamation

    Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.

    hashtag
    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Search for the Snapshot that has the option "Unmount Snapshot"

    8. Click "Unmount Snapshot"

    hashtag
    Using the CLI

    • <snapshot_id> ➡️ ID of the snapshot to unmount.
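A sketch of the unmount command, assuming the workloadmgr CLI syntax of this release:

```shell
# Unmount the snapshot so the retention policy can purge it again
workloadmgr snapshot-dismount <snapshot_id>
```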

    T4O 4.3.1

    hashtag
    Release Versions

    hashtag
    Packages

    hashtag
    Deliverables against T4O-4.3.1

    hashtag
    Containers and Gitbranch

    hashtag
    Changelog

    • Issues reported by customers.

    hashtag
    Fixed Bugs and issues

    1. Backup and Selective restore failure with quobyte cinder volume.

    2. Trilio takes user time and not the dashboard time.

    3. Documentation Change : Steps to be followed from Trilio end while minor upgrade from RHOSP 16.x to 16.y.

    hashtag
    Known issues

    1. Import of multiple workloads don't get segregated across all nodes evenly

    Observation : When import of multiple workloads is triggered, it is expected that the system evenly divides the load on all available nodes in a round-robin fashion. However, currently it runs the import on any one of the available nodes.

    2. After clicking on snapshot or any restore against Imported WL, UI become unresponsive for first time only

    Observation : For workloads with a large number of VMs and a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout), and hence this issue might be observed.

    Comments:

    1. If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.

    2. The users are advised to wait for a couple of minutes before they recheck the snapshot details.

    3. Once the details of the snapshot are visible, the restore operation can be carried out.

    3. Import of ALL workloads without specifying any workload id NOT recommended

    Observation : If a user runs the workload import command without any workload id, expecting that all eligible workloads will get imported, the command execution takes longer than expected.

    Comments:

    1. As long as the import command has not returned, it is expected to be running; if successful, it will return the job ID; if not, it will throw an error message.

    2. The execution time may vary based on the number of workloads present in the backend target.

    3. It is recommended to run this command with specific workload IDs.

    4. Deleting snapshots show status as available in horizon UI

    Observation : A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.

    Workaround:

    Wait for some time for all the delete operations to complete. Eventually all the snapshots will be deleted successfully.

    Upgrading on Ansible OpenStack

    hashtag
    Upgrading from Trilio 4.1 to a higher version

    Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.

    hashtag
    Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • No Snapshot or Restore is running

    • The Global-Job-Scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

    hashtag
    Deactivating the wlm-cron service

    The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
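Based on the commands used for the same task elsewhere in this guide, the wlm-cron service can be disabled and verified like this (run on the primary Trilio node):

```shell
# Disable the wlm-cron pacemaker resource
pcs resource disable wlm-cron

# Verify that the service has been completely shut down
systemctl status wlm-cron
pcs resource show wlm-cron
```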

    hashtag
    Update the repositories

    hashtag
    Deb-based (Ubuntu)

    Add the Gemfury repository on each of the DMAPI containers, Horizon containers & Compute nodes.

    Create a file /etc/apt/sources.list.d/fury.list and add the below line to it.

    The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
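A hedged example of such a verification; the grep pattern only matches the package names used in this guide and may need adjustment for your deployment:

```shell
# Refresh the package index after adding the Gemfury repository
apt-get update

# List the Trilio packages visible from the repository
apt list --all-versions 2>/dev/null | grep -E 'dmapi|tvault-contego|workloadmgr'
```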

    hashtag
    RPM-based (CentOS)

    Add Trilio repo on each of the DMAPI containers, Horizon containers & Compute nodes.

    Modify the file /etc/yum.repos.d/trilio.repo and add the below line in it.

    The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.

    hashtag
    Upgrade tvault-datamover-api package

    The following steps represent the best practice procedure to upgrade the DMAPI service.

    1. Login to DMAPI container

    2. Take a backup of the DMAPI configuration in /etc/dmapi/

    3. Use apt list --upgradeable to identify the package used for the dmapi service

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    hashtag
    Deb-based (Ubuntu)
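A sketch of the upgrade sequence, assuming python3 packages are used; the package and service names shown are placeholders to be confirmed with apt list --upgradeable:

```shell
# Back up the DMAPI configuration
cp -a /etc/dmapi /etc/dmapi.bak

# Identify the dmapi package, then upgrade it (placeholder package name)
apt list --upgradeable 2>/dev/null | grep dmapi
apt-get install --only-upgrade <dmapi-package>

# Restart the DMAPI service (service name may differ per deployment)
systemctl restart <dmapi-service>
```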

    hashtag
    RPM-based (CentOS)

    hashtag
    Upgrade Horizon plugin

    The following steps represent the best practice procedure to update the Horizon plugin.

    1. Login to Horizon Container

    2. Use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient and tvault-horizon-plugin

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    hashtag
    Deb-based (Ubuntu)

    hashtag
    RPM-based (CentOS)

    hashtag
    Upgrade the tvault-contego service

    The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.

    1. Login into the compute node

    2. Take a backup of the config files at /etc/tvault-contego/ and /etc/tvault-object-store (if S3)

    3. Unmount storage mount path

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    hashtag
    NFS as Storage Backend

    • Take a backup of the config files

    • Check the mount path of the NFS storage using the command df -h and unmount the path using umount command. e.g.

    • Upgrade the Trilio packages:

    Deb-based (Ubuntu):

    RPM-based (CentOS):

    • Restore the config files, restart the service and verify the mount point
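The unmount step above can be sketched as follows; the mount path shown is the example from the log excerpts earlier in this guide and will differ per backup target:

```shell
# Locate the Trilio NFS mount and unmount it before the upgrade
df -h | grep triliovault-mounts
umount /var/triliovault-mounts/VHJpbGlvVmF1bHQ=
```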

    hashtag
    S3 as Storage Backend

    • Take a backup of the config files

    • Check the mount path of the S3 storage using the command df -h and unmount the path using umount command.

    • Upgrade the Trilio packages

    Deb-based (Ubuntu):

    RPM-based (CentOS):

    • Restore the config files, restart the service and verify the mount point

    hashtag
    Advance settings/configuration

    hashtag
    Customize HAproxy cfg parameters for Trilio datamover api service

    Following are the haproxy cfg parameters recommended for optimal performance of the dmapi service. File location on the controller: /etc/haproxy/haproxy.cfg

    circle-info

    If values were already updated during any of the previous releases, further steps can be skipped.

    hashtag
    Parameters timeout client, timeout server, and balance for DMAPI service

    Remove the below content, if present, from the file /etc/openstack_deploy/user_variables.yml on the ansible host.

    Add the below lines at end of the file /etc/openstack_deploy/user_variables.yml on the ansible host.

    Update Haproxy configuration using the below command on the ansible host.
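Assuming a standard openstack-ansible installation under /opt/openstack-ansible, the haproxy update is typically run as:

```shell
# Re-run the haproxy playbook so the new user_variables.yml values take effect
cd /opt/openstack-ansible/playbooks
openstack-ansible haproxy-install.yml
```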

    hashtag
    Enable mount-bind for NFS

    T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2

    Please follow for detailed steps to set up mount bind.
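The mount-point directory name ends in the base64 encoding of the share string, as can be seen in the log excerpts earlier in this guide (/var/triliovault-mounts/VHJpbGlvVmF1bHQ=). This can be reproduced when planning the mount-bind:

```shell
# Encode the share string to get the trailing mount-directory component
echo -n "TrilioVault" | base64
# VHJpbGlvVmF1bHQ=
```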

    Installing on TripleO Train

    hashtag
    1. Prepare for deployment

    hashtag
    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    The following backup target types are supported by Trilio

    a) NFS

    Need NFS share path

    b) Amazon S3

    • S3 Access Key

    • Secret Key

    • Region

    • Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    • S3 Access Key

    • Secret Key

    • Region

    • Endpoint URL (Valid for S3 other than Amazon S3)

    • Bucket name

    hashtag
    1.2] Clone triliovault-cfg-scripts repository

    The following steps are to be done on 'undercloud' node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.

    circle-exclamation

    All commands need to be run as user 'stack' on undercloud node

    circle-exclamation

    TripleO CentOS8 is not supported anymore as CentOS Linux 8 reached End of Life on December 31st, 2021.

    The following command clones the triliovault-cfg-scripts github repository.

    circle-exclamation

    Please note that the Trilio Appliance needs to get updated to the latest HF as well.

    hashtag
    1.3] If the backup target type is 'Ceph based S3' with SSL:

    If your backup target is ceph S3 with SSL and SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename his ca chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files

    hashtag
    2] Upload Trilio puppet module

    hashtag
    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

    circle-info

    In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    circle-exclamation

    All commands need to be run as user 'stack'

    hashtag
    3.1] Add Trilio Datamover Api Service to role data file

    This service needs to share the same role as the keystone and database service. With the pre-defined roles, these services run on the role Controller. In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

    Add the following line to the identified role:

    hashtag
    3.2] Add Trilio Datamover Service to role data file

    This service needs to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the role Compute. In the case of custom-defined roles, it is necessary to use the same role where the nova-compute service runs.

    Add the following line to the identified role:

    hashtag
    3.3] Add Trilio Horizon Service to role data file

    This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:

    hashtag
    4] Prepare Trilio container images

    circle-exclamation

    All commands need to be run as user 'stack'

    circle-info

    Wherever <HOTFIX-TAG-VERSION> appears in the sections below, read it as 4.3.2

    Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.

    hashtag
    CentOS7

    There are two registry methods available in TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    hashtag
    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

    For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    circle-info

    trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments

    hashtag
    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    Run the following script. It pulls the triliovault containers and updates the triliovault environment file with their URLs.

    The changes can be verified using the following commands.

    hashtag
    5] Configure multi-IP NFS

    circle-info

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP to access the NFS Volume per datamover instead of globally.

    On Undercloud node, change the directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/IP map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml' file.

    circle-info

    Run this command on undercloud by sourcing 'stackrc'.
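A minimal example, with stackrc assumed at its default location in the stack user's home directory:

```shell
# Source the undercloud credentials, then list the overcloud nodes;
# use the exact values from the 'Name' column in triliovault_nfs_map_input.yml
source ~/stackrc
openstack server list -c Name -c Networks
```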

    Edit the input map file and fill in all the details. Refer to for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyYAML on the undercloud node only

    circle-exclamation

    If pip isn't available please install pip on the undercloud.
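A hedged one-liner for the PyYAML update; run it only on the undercloud node:

```shell
# Upgrade PyYAML via pip (install pip first if it is missing)
pip3 install --upgrade PyYAML
```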

    Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

    Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

    vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in trilio_env.yaml file and that NFS is used as the backup target.

    hashtag
    6] Fill in Trilio environment details

    Fill in the Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml; the triliovault environment file is self-explanatory. Fill in details of the backup target, verify image URLs, and complete the other details.

    circle-info

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    hashtag
    7] Install Trilio on Overcloud

    Use the following heat environment file and roles data file in overcloud deploy command

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains overcloud roles data with Trilio roles added.

    3. Use the correct trilio endpoint map file as per your keystone endpoint configuration. - Instead of tls-endpoints-public-dns.yaml, use ‘environments/trilio_env_tls_endpoints_public_dns.yaml’ - Instead of tls-endpoints-public-ip.yaml

    Deploy command with triliovault environment file looks like the following.

    circle-info

    Post deployment, for multipath enabled environments, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf. Restart the datamover container.

    hashtag
    8] Verify the deployment

    triangle-exclamation

    If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.

    hashtag
    8.1] On the Controller node

    Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    hashtag
    8.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    hashtag
    8.3] On the node with Horizon service

    Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    hashtag
    10] Troubleshooting for overcloud deployment failures

    Trilio components will be deployed using puppet scripts.

    In case the overcloud deployment fails, run the following command to get the list of errors. The following document also provides valuable insights:

    Upgrading on TripleO Train [CentOS7]

    hashtag
    1. Generic Pre-requisites

    1. Please ensure following points before starting the upgrade process:

      1. Either 4.1 GA or any hotfix patch against 4.1 should already be deployed before performing the upgrades described in this document.

      2. No snapshot OR restore to be running.

      3. Global job scheduler should be disabled.

      4. wlm-cron should be disabled (on primary T4O node).

        1. pcs resource disable wlm-cron

        2. Check : systemctl status wlm-cron OR pcs resource show wlm-cron

    hashtag
    2. [On Undercloud node] Clone triliovault repo and upload trilio puppet module

    circle-info

    Run all the commands with 'stack' user

    hashtag
    2.1 Clone the latest configuration scripts

    hashtag
    2.2 Backup target is “Ceph based S3” with SSL

    If the backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/puppet/trilio/files'.

    hashtag
    2.3 Upload triliovault puppet module

    hashtag
    3. Prepare Trilio container images

    In this step, we are going to pull triliovault container images to the user’s registry.

    Trilio containers are pushed to ‘Dockerhub'. The registry URL is ‘docker.io’. Following are the triliovault container pull URLs.

    circle-info

    Wherever <HOTFIX-TAG-VERSION> appears in the sections below, read it as 4.3.2

    hashtag
    3.1 Trilio container URLs

    Trilio container URLs for TripleO Train CentOS7:

    There are two registry methods available in the TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    Identify which method you are using. Below, both methods to pull and configure Trilio's container images for overcloud deployment are explained.

    hashtag
    3.2 Remote Registry

    If you are using the 'Remote Registry' method follow this section. You don't need to pull anything. You just need to populate the following container URLs in trilio env yaml.

    • Populate 'environments/trilio_env.yaml' file with triliovault container urls. Changes look like the following.

    hashtag
    3.3 Registry on Undercloud

    If you are using 'local registry' on undercloud, follow this section.

    • Run the following script. It pulls the triliovault containers and updates the triliovault environment file with their URLs.

    The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in ‘environments/trilio_env.yaml’. Verify the changes using the following command.

    hashtag
    4. Configure multi-IP NFS

    circle-info

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP to access the NFS Volume per datamover instead of globally.

    On Undercloud node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    Get the compute hostnames from the following command. Check ‘Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml' file.

    circle-info

    Run this command on undercloud by sourcing 'stackrc'.

    Edit input map file and fill all the details. Refer to the for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    circle-exclamation

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

    Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

    vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in trilio_env.yaml file and that nfs is used as backup target.

    hashtag
    5. Fill in triliovault environment details

    • Refer to the old 'trilio_env.yaml' file (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) and update the new trilio_env.yaml file accordingly.

    • Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'; the triliovault environment file is self-explanatory. Fill in the backup target details, and verify image URLs and other details.

    circle-info

For Cohesity NFS, use the following NFS options: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    hashtag
    6. roles_data.yaml file changes

The Trilio 4.3 release introduces a new, separate triliovault service for the Trilio Horizon plugin. Users need to add the following service to the roles_data.yaml file; it must be co-located with the OpenStack Horizon service.

    OS::TripleO::Services::TrilioHorizon
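In roles_data.yaml, the line above is appended to the service list of the role that runs Horizon (typically the Controller role). A trimmed sketch, assuming the standard roles_data layout:

```yaml
# Only the relevant entries are shown
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Horizon
    - OS::TripleO::Services::TrilioHorizon
```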

    hashtag
    7. Install Trilio on Overcloud

    Use the following heat environment file and roles data file in overcloud deploy command

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains the overcloud roles data with the Trilio roles added. This file does not need to be changed; you can use the old roles_data.yaml file.

    3. Use the correct trilio endpoint map file as per your keystone endpoint configuration. - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml' - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml' - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'

The deploy command with the triliovault environment files looks like the following.
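A sketch of the invocation, assuming the default tripleo-train layout used in this guide and the DNS endpoint map file; your existing overcloud environment files must of course also be passed. The command is assembled as a string (not executed here) so the shape of the '-e' and '-r' options is visible:

```shell
# Sketch only: adjust paths and endpoint map file to your environment
SCRIPTS=/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train
deploy_cmd="openstack overcloud deploy --templates \
  -e $SCRIPTS/environments/trilio_env.yaml \
  -e $SCRIPTS/environments/trilio_nfs_map.yaml \
  -e $SCRIPTS/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r $SCRIPTS/roles_data.yaml"
echo "$deploy_cmd"
```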

    hashtag
    8. Steps to verify correct deployment

    hashtag
    8.1 On overcloud controller node(s)

Make sure the Trilio dmapi and horizon containers (shown below) are in a running state, and that no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly and you need to revisit the above steps.

    hashtag
    8.2 On overcloud compute node(s)

Make sure the Trilio datamover container (shown below) is in a running state, and that no other Trilio container is deployed on the compute nodes. If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly and you need to revisit the above steps.

    hashtag
    8.3 On OpenStack node where OpenStack Horizon Service is running

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus Trilio's Horizon plugin.

    hashtag
    9. Troubleshooting if any failures

    Trilio components are deployed using puppet scripts.

    If the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights:

    hashtag
    10. Known Issues/Limitations

    hashtag
    10.1 Overcloud deploy fails with the following error. Valid for Train CentOS7 only.

    circle-info

    This is not a Trilio issue. It’s a TripleO issue that is not fixed in Train Centos7. This is fixed in higher versions of TripleO.

The undercloud job's puppet task certmonger_certificate[haproxy-external-cert] fails with 'Unrecognized parameter or wrong value type'.

    Workaround:

Apply the fix from the upstream PR directly on the setup; it is not merged in Train CentOS7. The fix is needed on the controller and compute nodes in /usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb

    hashtag
    11. Enable mount-bind for NFS

    Note: The below-mentioned steps are required only if the target backend is NFS.

    Please refer to the documentation for detailed steps to set up the mount bind.

    T4O 4.3.0 (GA)

    hashtag
    Release Versions

    hashtag
    Packages

    Installing on Ansible Openstack

    circle-info

    Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.

    hashtag
    Change the nova user id on the Trilio Nodes

    Configuring Trilio

    Learn about configuring Trilio for OpenStack

    The configuration process used by Trilio for OpenStack heavily utilizes Ansible scripts. In recent years, Ansible has emerged as a leading tool for configuration management, due to which Trilio makes extensive use of Ansible playbooks to effectively configure the Trilio cluster. To address any potential Trilio configuration issues, it's crucial for users to have a fundamental understanding of Ansible playbook output.

    Given the inherent repeatability of Ansible modules, the Trilio configuration can be run as many times as needed to alter or reconfigure the Trilio cluster.

    Upon booting the VM, direct your browser (preferably Chrome or Firefox) to the Trilio node's IP address. This will take you to the Trilio Dashboard, which houses the Trilio configurator.

    The user is: admin
    The default password is: password

    circle-info

    Workloads

    hashtag
    Definition

A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.

    circle-exclamation

    openstack image create \
    --file <File Manager Image Path> \
    --container-format bare \
    --disk-format qcow2 \
    --public \
    --property hw_qemu_guest_agent=yes \
    --property tvault_recovery_manager=yes \
    --property hw_disk_bus=virtio \
    tvault-file-manager
    guest-file-read
    guest-file-write
    guest-file-open
    guest-file-close
    SELINUX=disabled
    yum install python3 lvm2
    apt-get update
    apt-get install qemu-guest-agent
    systemctl enable qemu-guest-agent
    Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)
    DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"
    Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)
    systemctl edit qemu-guest-agent
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook
    systemctl restart qemu-guest-agent
    apt-get install python3
    workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
    workloadmgr snapshot-mounted-list [--workloadid <workloadid>]
    workloadmgr snapshot-dismount <snapshot_id>

    | OS Family | Version | Supported |
    | --- | --- | --- |
    | CentOS | CentOS 8 Stream | ✔️ |
    | RHEL | RHEL7 | ✔️ |
    | RHEL | RHEL8 | ✔️ |
    | RHEL | RHEL9 | ✔️ |

    Do a File Search

    | Package/Container Names | Package Kind | Package Versions |
    | --- | --- | --- |
    | dmapi | deb | 4.3.1 |
    | contegoclient | deb | 4.3.1 |
    | python3-dmapi | deb | 4.3.1 |
    | python3-contegoclient | deb | 4.3.1 |
    | tvault-horizon-plugin | deb | 4.3.2.1 |
    | python3-tvault-horizon-plugin | deb | 4.3.2.1 |
    | python-workloadmgrclient | deb | 4.3.6 |
    | python3-workloadmgrclient | deb | 4.3.6 |
    | workloadmgr | deb | 4.3.8.2 |
    | tvault-contego | python | 4.3.1.2 |
    | s3fuse | python | 4.3.2.2 |
    | dmapi | python | 4.3.2 |
    | contegoclient | python | 4.3.2 |
    | tvault-horizon-plugin | python | 4.3.3.1 |
    | workloadmgrclient | python | 4.3.7 |
    | tvault_configurator | python | 4.3.7 |
    | workloadmgr | python | 4.3.9.2 |
    | trilio-fusepy | rpm | 3.0.1-1 |
    | python3-trilio-fusepy | rpm | 3.0.1-1 |
    | python3-trilio-fusepy-el9 | rpm | 3.0.1-1 |
    | puppet-triliovault | rpm | 4.2.64-4.2 |
    | tvault-contego | rpm | 4.3.1.2-4.3 |
    | python3-s3fuse-plugin | rpm | 4.3.1.2-4.3 |
    | python3-tvault-contego | rpm | 4.3.1.2-4.3 |
    | python3-s3fuse-plugin-el9 | rpm | 4.3.1.2-4.3 |
    | python3-tvault-contego-el9 | rpm | 4.3.1.2-4.3 |
    | python-s3fuse-plugin-cent7 | rpm | 4.3.1.2-4.3 |
    | dmapi | rpm | 4.3.1-4.3 |
    | contegoclient | rpm | 4.3.1-4.3 |
    | python3-dmapi | rpm | 4.3.1-4.3 |
    | python3-dmapi-el9 | rpm | 4.3.1-4.3 |
    | python3-contegoclient-el8 | rpm | 4.3.1-4.3 |
    | python3-contegoclient-el9 | rpm | 4.3.1-4.3 |
    | tvault-horizon-plugin | rpm | 4.3.2.1-4.3 |
    | python3-tvault-horizon-plugin-el8 | rpm | 4.3.2.1-4.3 |
    | python3-tvault-horizon-plugin-el9 | rpm | 4.3.2.1-4.3 |
    | workloadmgrclient | rpm | 4.3.6-4.3 |
    | python3-workloadmgrclient-el8 | rpm | 4.3.6-4.3 |
    | python3-workloadmgrclient-el9 | rpm | 4.3.6-4.3 |

    | Name | Tag |
    | --- | --- |
    | Kolla Ansible Victoria containers | 4.3.1-victoria |
    | Kolla Ansible Wallaby containers | 4.3.1-wallaby |
    | Kolla Yoga Containers | 4.3.1-yoga |
    | Kolla Zed Containers | 4.3.1-zed |
    | TripleO Containers | 4.3.1-tripleo |

  • Canonical/Juju - python-os-brick_5.2.2-0ubuntu1.3 will completely disrupt Trilio snapshots if multipath is used.

  • Restores failing with the error: "security group rule does not exist".

  • When VMs of a workload are deleted, snapshots fail (with a very misleading message).

  • "Timeout uploading data" when backing up a VM with a 23TB volume.

  • To import ALL workloads at once, all workload IDs must be provided as parameters to the import CLI command. The procedure is described below.

    | Package/Container Names | Package Kind | Package Versions |
    | --- | --- | --- |
    | s3-fuse-plugin | deb | 4.3.1.2 |
    | tvault-contego | deb | 4.3.1.2 |
    | python3-s3-fuse-plugin | deb | 4.3.1.2 |
    | python3-tvault-contego | deb | 4.3.1.2 |
    | Name | Tag | Gitbranch |
    | --- | --- | --- |
    | RHOSP13 containers | 4.3.1-rhosp13 | 4.3.1 |
    | RHOSP16.1 containers | 4.3.1-rhosp16.1 | 4.3.1 |
    | RHOSP16.2 containers | 4.3.1-rhosp16.2 | 4.3.1 |
    | RHOSP17.0 containers | 4.3.1-rhosp17.0 | 4.3.1 |

    Access to the Gemfury repository to fetch new packages

    1. Note: For a single IP-based NFS share as a backup target, refer to this rolling upgrade on the Ansible Openstack document. Users with multiple IP-based NFS shares need to follow the Ansible Openstack installation document.

    Update the DMAPI package

  • restore the backed-up config files into /etc/dmapi/

  • Restart the DMAPI service

  • Check the status of the DMAPI service

  • Install the tvault-horizon-plugin package in the required python version
  • Install the workloadmgrclient package

  • Install the contegoclient

  • Restart the Horizon webserver

  • Check the installed version of the workloadmgrclient

  • Upgrade the tvault-contego package in the required python version

  • (S3 only) upgrade the s3-fuse-plugin package

  • Restore the config files

  • (S3 only) Restart the tvault-object-store service

  • Restart the tvault-contego service

  • Check the status of the service(s)

    https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
    The 'nova' user ID and Group ID on the Trilio nodes need to be the same as on the compute node(s). Trilio by default uses the nova user ID (UID) and Group ID (GID) 162:162. Ansible OpenStack does not always use the 'nova' user ID 162 on the compute nodes. Perform the following steps on all Trilio nodes in case the nova UID & GID are not in sync with the compute node(s):
    1. Download the shell script that will change the user-id

    2. Assign executable permissions

    3. Edit script to use the correct nova id

    4. Execute the script

    5. Verify that 'nova' user and group id has changed to the desired value
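As a minimal sketch of what step 5 should show, the 'nova' entry in /etc/passwd carries the UID and GID in the third and fourth colon-separated fields. The sample line below is illustrative; on a real node you would parse the output of `grep '^nova:' /etc/passwd` instead:

```shell
# Illustrative /etc/passwd entry for the nova user
line="nova:x:162:162::/var/lib/nova:/bin/bash"
uid=$(echo "$line" | cut -d: -f3)
gid=$(echo "$line" | cut -d: -f4)
echo "$uid:$gid"   # expect 162:162 once the script has run
```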

    hashtag
    Prepare deployment host

    Clone triliovault-cfg-scripts from github repository on Ansible Host.

    Available values for <branch>: hotfix-4-TVO/4.2

    Copy Ansible roles and vars to required places.

    circle-info

    In case of installing on OSA Victoria or OSA Wallaby, edit OPENSTACK_DIST in the file /etc/openstack_/user_tvault_vars.yml to victoria or wallaby respectively

    Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.

    Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml

    Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml

    Add the following content to the created file.

    Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.

    Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml

    Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.

    circle-info

    Note:

    1. From 4.2HF4 onwards, the default prefilled value, i.e. 4.2.64, will be used for TVAULT_PACKAGE_VERSION.

    2. In case of more than one nova virtual environment: if the user wants to install the tvault-contego service in a specific nova virtual environment on the compute node(s), they need to uncomment the var nova_virtual_env and then set its value.

    3. In case more than one horizon plugin is configured on openstack, the user can specify under which horizon virtual environment to install the Trilio Horizon Plugin by setting the horizon_virtual_env parameter. The default value of horizon_virtual_env is '/openstack/venvs/horizon*'

    4. NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    hashtag
    Configure Multi-IP NFS

    circle-info

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    A new parameter was added to the /etc/openstack_deploy/user_tvault_vars.yml file for Multi-IP NFS

    Change the Directory

    Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.

    Please take a look at this page to learn about the format of the file.

    Update pyyaml on the Openstack Ansible server node only

    Execute the generate_nfs_map.py file to create a one-to-one mapping of compute nodes and NFS shares.

    The result will be in the file 'triliovault_nfs_map_output.yml' in the current directory.

    Validate the output map file: open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are mapped with all the necessary NFS shares.

    Append the content of triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml

    hashtag
    Deploy Trilio components

    Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.

    If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:

    hashtag
    Verify the Trilio deployment

    Verify that the triliovault datamover api service was deployed and started correctly. Run the below commands on the controller node(s).

    Verify that the triliovault datamover service was deployed and started correctly on the compute node(s). Run the following command on the compute node(s).

    Verify that the triliovault horizon plugin, contegoclient, and workloadmgrclient are installed in the Horizon container.

    Run the following command in the Horizon container.

    Verify the haproxy settings on the controller node using the below commands.

    hashtag
    Update to the latest hotfix

    After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.

    To update the environment follow this procedure.

    Using encrypted Workload will lead to longer backup times. The following timings have been seen in Trilio labs:

    Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB

    1. For unencrypted WL : 62 min

    2. For encrypted WL : 82 min

    Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB

    1. For unencrypted WL : 10 min

    2. For encrypted WL : 18 min
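From the LVM example above (~108 GB in 62 min unencrypted vs 82 min encrypted), the effective throughput and the encryption overhead can be estimated; the figures below are simply computed from those lab numbers:

```shell
# Effective throughput, rounded to two decimals
python3 -c 'print(round(108/62, 2), "GB/min unencrypted")'
python3 -c 'print(round(108/82, 2), "GB/min encrypted")'
# Relative overhead of encryption on the snapshot duration
python3 -c 'print(round(82/62 - 1, 2))'   # ≈ 0.32, i.e. ~32% longer
```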

    hashtag
    List of Workloads

    hashtag
    Using Horizon

    To view all available workloads of a project inside Horizon do:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    The overview in Horizon lists all workloads with the following additional information:

    • Creation time

    • Workload Name

    • Workload description

    • Total amount of Snapshots inside this workload

      • Total amount of succeeded Snapshots

      • Total amount of failed Snapshots

    • Workload Type

    • Status of the Workload

    hashtag
    Using CLI

    • --all {True,False}➡️List all workloads of all projects (valid for admin user only)

    • --nfsshare <nfsshare>➡️List all workloads of nfsshare (valid for admin user only)

    hashtag
    Workload Create

    circle-info

    The encryption options of the workload creation process are only available when the Barbican service is installed and available.

    hashtag
    Using Horizon

    To create a workload inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Click "Create Workload"

    5. Provide Workload Name and Workload Description on the first tab "Details"

    6. Choose between Serial or Parallel workload on the first tab "Details"

    7. Choose the Policy if available to use on the first tab "Details"

    8. Choose if the Workload is encrypted on the first tab "Details"

    9. Provide the secret UUID if Workload is encrypted on the first tab "Details"

    10. Choose the VMs to protect on the second Tab "Workload Members"

    11. Decide for the schedule of the workload on the Tab "Schedule"

    12. Provide the Retention policy on the Tab "Policy"

    13. Choose the Full Backup Interval on the Tab "Policy"

    14. If required check "Pause VM" on the Tab "Options"

    15. Click create

    The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

    hashtag
    Using CLI

    • --display-name➡️Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --workload-type-id➡️Workload Type ID is required

    • --source-platform➡️Workload source platform is required. The supported platform is 'openstack'

    • --instance➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID

    • --jobschedule➡️Specify following key value pairs for jobschedule Specify option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'

    • --metadata➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

    • --policy-id <policy_id>➡️ID of the policy to assign to the workload

    • --encryption <True/False> ➡️Enable/Disable encryption for this workload

    • --secret-uuid <secret_uuid> ➡️UUID of the Barbican secret to be used for the workload
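To make the multi-key options above concrete, here is how an invocation composes. The workload name and schedule values are examples only, and the command string is printed rather than executed:

```shell
# Each --jobschedule flag carries one key=value pair; repeat the flag per key.
# In a real shell, multi-word values need quotes, as shown for the interval.
set -- --display-name db-workload \
       --source-platform openstack \
       --jobschedule interval="24 hr" \
       --jobschedule snapshots_to_retain=7
cmd="workloadmgr workload-create $*"
echo "$cmd"
```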

    hashtag
    Workload Overview

A workload contains a lot of information, which can be seen in the workload overview.

    hashtag
    Using Horizon

    To enter the workload overview inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to show the details on

    5. Click the workload name to enter the Workload overview

    hashtag
    Details Tab

The Workload Details tab provides the most important general information about the workload:

    • Name

    • Description

    • Availability Zone

    • List of protected VMs including the information of qemu guest agent availability

    circle-exclamation

The status of the qemu-guest-agent only shows whether the necessary Openstack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is actually installed and configured on the VM.

    circle-info

    It is possible to navigate to the protected VM directly from the list of protected VMs.

    hashtag
    Snapshots Tab

    The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

    From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

    circle-info

    Please refer to the Snapshot and Restore User Guide to learn more about those.

    hashtag
    Policy Tab

    The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:

    • Scheduler Enabled / Disabled

    • Start Date / Time

    • End Date / Time

    • RPO

    • Time till next Snapshot run

    • Retention Policy and Value

    • Full Backup Interval policy and value

    hashtag
    Filesearch Tab

The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders on Snapshots without the need for a restore.

    circle-info

    Please refer to the File Search User Guide to learn more about this feature.

    hashtag
    Misc. Tab

The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

    • Creation time

    • last update time

    • Workload ID

    • Workload Type

    hashtag
    Using CLI

    • <workload_id> ➡️ ID/name of the workload to show

    • --verbose➡️option to show additional information about the workload

    hashtag
    Edit a Workload

    Workloads can be modified in all components to match changing needs.

    circle-info

    Editing a Workload will set the User, who edits the Workload, as the new owner.

    hashtag
    Using Horizon

    To edit a workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Modify the workload as desired - All parameters except workload type can be changed

    8. Click "Update"

    hashtag
    Using CLI

    • --display-name ➡️ Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --instance <instance-id=instance-uuid>➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID

    • --jobschedule <key=key-name>➡️Specify the following key value pairs for jobschedule. Specify the option multiple times to include multiple keys. If no timezone is specified, the local machine timezone is used by default. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'

    • --metadata <key=key-name>➡️Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

    • --policy-id <policy_id>➡️ID of the policy to assign

    • <workload_id> ➡️ID of the workload to edit

    hashtag
    Delete a Workload

    Once a workload is no longer needed it can be safely deleted.

    circle-info

    All Snapshots need to be deleted before the workload gets deleted. Please refer to the Snapshots User Guide to learn how to delete Snapshots.

    hashtag
    Using Horizon

    To delete a workload do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to be deleted

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Delete Workload"

    7. Confirm by clicking "Delete Workload" yet again

    hashtag
    Using CLI

    • <workload_id> ➡️ ID/name of the workload to delete

    • --database_only <True/False>➡️Set to True to delete the workload from the database only. (Default=False)

    hashtag
    Unlock a Workload

    Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.

    circle-info

It is highly recommended to use this feature only as a last resort, in case backups/restores are stuck without failing or a restore is required while a backup is running.

    hashtag
    Using Horizon

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to unlock

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Unlock Workload"

    7. Confirm by clicking "Unlock Workload" yet again

    hashtag
    Using CLI

    • <workload_id> ➡️ ID of the workload to unlock

    hashtag
    Reset a Workload

In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases, it is possible to reset it.

    The Workload reset will:

    • Cancel all ongoing tasks

    • Delete all existing Openstack Trilio Snapshots from the protected VMs

    • recalculate the next Snapshot time

    • take a full backup at the next Snapshot

    hashtag
    Using Horizon

    To reset a Workload do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to reset

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Reset Workload"

    7. Confirm by clicking "Reset Workload" yet again

    hashtag
    Using CLI

    • <workload_id> ➡️ ID/name of the workload to reset

    	i. Before proceeding with the upgrade OR reinitialization, fetch the list of ALL workload IDs which are NOT in 'error' OR 'deleted' state from the database.
    		Query : select id from workloads where status not in ('deleted','error')
    	ii. Use the IDs from this list to create the import CLI command parameters.
    		Sample : --workloadids <wl_id1> --workloadids <wl_id2> etc. The shell command below does the same.
    		wlIdList.txt must contain all workload IDs, one ID per line.
    	iii. awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n'
    	iv. Append the output of the above command to the import command.
    		workloadmgr workload-importworkloads <Command output>
    		Eg : workloadmgr workload-importworkloads --workloadids ff24945f-7bef-498d-98eb-d727ec85bc7b --workloadids a15948b4-942c-47e2-85c5-06cad697010f
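A runnable illustration of steps ii-iv, using two made-up IDs in place of real workload IDs:

```shell
# Build a sample wlIdList.txt (one workload ID per line; IDs are invented here)
printf 'wl-id-0001\nwl-id-0002\n' > wlIdList.txt
# Step iii: turn the list into repeated --workloadids parameters on one line
args=$(awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n')
# Step iv: append the parameters to the import command
echo "workloadmgr workload-importworkloads$args"
```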
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    deb [trusted=yes] https://apt.fury.io/triliodata-4-2/ /
    apt-get update
    apt list --upgradable
    [triliovault-4-2]
    name=triliovault-4-2
    baseurl=http://trilio:[email protected]:8283/triliodata-4-2/yum/
    gpgcheck=0
    enabled=1
    yum repolist
    yum check-update
    lxc-ls #grep container name for dmapi service
    lxc-attach -n <dmapi container name>
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    apt list --upgradable
    apt install python3-dmapi --upgrade
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-ls #grep container name for dmapi service
    lxc-attach -n <dmapi container name>
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    yum list installed | grep dmapi  ##use dnf if yum not available
    yum check-update python3-dmapi   ##use dnf if yum not available
    yum upgrade python3-dmapi        ##use dnf if yum not available
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-attach -n controller_horizon_container-ead7cc60
    apt list --upgradable
    apt install python3-tvault-horizon-plugin --upgrade
    apt install python3-workloadmgrclient --upgrade
    apt install python3-contegoclient --upgrade
    systemctl restart apache2
    workloadmgr --version
    lxc-attach -n controller_horizon_container-ead7cc60
    yum list installed | grep trilio ##use dnf if yum not available
    yum upgrade python3-contegoclient-el8 python3-tvault-horizon-plugin-el8 python3-workloadmgrclient-el8  ##use dnf if yum not available
    systemctl restart httpd
    workloadmgr --version
    tar -czvf contego_config.tar.gz /etc/tvault-contego/ 
    [root@compute ~]# df -h
    df: /var/trilio/triliovault-mounts: Transport endpoint is not connected
    Filesystem                     Size  Used Avail Use% Mounted on
    devtmpfs                        28G     0   28G   0% /dev
    tmpfs                           28G     0   28G   0% /dev/shm
    tmpfs                           28G  928K   28G   1% /run
    tmpfs                           28G     0   28G   0% /sys/fs/cgroup
    /dev/mapper/cl_centos8-root     70G   13G   58G  19% /
    /dev/vda1                     1014M  231M  784M  23% /boot
    tmpfs                          5.5G     0  5.5G   0% /run/user/0
    172.25.0.10:/mnt/tvault/42436  2.5T  1.8T  674G  74% /var/triliovault-mounts/MTcyLjI1LjAuMTA6L21udC90dmF1bHQvNDI0MzY=
    [root@compute ~]# umount /var/triliovault-mounts/MTcyLjI1LjAuMTA6L21udC90dmF1bHQvNDI0MzY=
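The odd-looking directory name in the df output is deterministic: Trilio names the mount point after the base64 encoding of the NFS share string, so the path of a stale mount can be derived directly from the share:

```shell
# The directory under /var/triliovault-mounts is the base64 encoding of the
# NFS share string (share value taken from the df output above).
share="172.25.0.10:/mnt/tvault/42436"
printf '/var/triliovault-mounts/%s\n' "$(printf '%s' "$share" | base64 -w0)"
# → /var/triliovault-mounts/MTcyLjI1LjAuMTA6L21udC90dmF1bHQvNDI0MzY=
```

This is useful when the mount is in the "Transport endpoint is not connected" state and you need the exact path to pass to `umount`.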
    
    apt install python3-tvault-contego --upgrade
    apt install python3-s3-fuse-plugin --upgrade
    
    yum upgrade python3-tvault-contego   ##use dnf if yum not available
    yum upgrade python3-s3fuse-plugin   ##use dnf if yum not available
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-contego
    systemctl status tvault-contego
    #To check if backend storage got mounted successfully
    df -h
    tar -czvf contego_config.tar.gz /etc/tvault-contego/ /etc/tvault-object-store/
    e.g. 
    root@compute:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev             28G     0   28G   0% /dev
    tmpfs           5.5G  1.4M  5.5G   1% /run
    /dev/vda3       124G   16G  102G  13% /
    tmpfs            28G   20K   28G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs            28G     0   28G   0% /sys/fs/cgroup
    /dev/vda1       456M  297M  126M  71% /boot
    tmpfs           6.3G     0  6.3G   0% /var/triliovault/tmpfs
    Trilio        -     -  0.0K    - /var/triliovault-mounts
    tmpfs           5.5G     0  5.5G   0% /run/user/0
    [root@compute ~]# umount /var/triliovault-mounts
    apt install python3-tvault-contego --upgrade
    apt install python3-s3-fuse-plugin --upgrade
    
    yum upgrade python3-tvault-contego   ##use dnf if yum not available
    yum upgrade python3-s3fuse-plugin   ##use dnf if yum not available
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-contego
    systemctl status tvault-contego
    #To check if backend storage got mounted successfully
    df -h
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cd /opt/openstack-ansible/playbooks
    openstack-ansible haproxy-install.yml
    cd /home/stack
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of the above command looks like the following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates the following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    'OS::TripleO::Services::TrilioHorizon'
Trilio Datamover container:       docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <HOTFIX-TAG-VERSION>-tripleo <container_tool_available_on_undercloud>
    
    Options OS_platform: [centos7]
    Options container_tool_available_on_undercloud: [docker, podman]
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as shown in the following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep '<HOTFIX-TAG-VERSION>-tripleo' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo                  |
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
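Before running the generator, a quick import check (a hypothetical one-liner, not part of the official procedure) confirms the installed PyYAML can parse share-style values, which contain a colon but must stay plain scalars:

```shell
# Verify PyYAML imports and parses a share-style value as a plain string.
# The share value here is illustrative.
python3 - <<'EOF'
import yaml
doc = yaml.safe_load("shares:\n  - 172.30.5.100:/nfs1")
print(doc["shares"][0])
EOF
# → 172.30.5.100:/nfs1
```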
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
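The `grep ':.*:'` filter above keeps only the host-to-share mapping lines, which contain two colons (one after the hostname, one inside the `ip:/share` value), and drops plain YAML keys. A small sketch with hypothetical map content:

```shell
# Only lines with at least two colons (hostname: ip:/share) survive the filter.
# File path and contents are illustrative.
cat > /tmp/triliovault_nfs_map_output.yml <<'EOF'
TrilioNfsMap:
  overcloudtrain1-novacompute-0: 172.30.5.100:/nfs1
  overcloudtrain1-novacompute-1: 172.30.5.100:/nfs1
EOF
grep ':.*:' /tmp/triliovault_nfs_map_output.yml
# → prints the two compute-node mapping lines; 'TrilioNfsMap:' is dropped
```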
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
##=> If the trilio datamover api container does not start or is stuck in a restarting state, use the following logs to debug.
    docker logs trilio_dmapi
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
##=> If the trilio datamover container does not start or is stuck in a restarting state, use the following logs to debug.
    docker logs trilio_datamover
    tailf /var/log/containers/trilio-datamover/tvault-contego.log
    curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    chmod +x nova_userid.sh
    vi nova_userid.sh  # change nova user_id and group_id to uid & gid present on compute nodes. 
    ./nova_userid.sh
    id nova
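The point of nova_userid.sh is that the nova uid/gid must be identical on every compute node, so files written to the shared backup target stay readable everywhere. A hedged sketch of the check itself (falls back to the current user on machines without a nova account):

```shell
# Print uid/gid for the nova user so the values can be compared across nodes.
# The fallback to the current user is only for running the sketch elsewhere.
user=nova
id -u "$user" >/dev/null 2>&1 || user=$(id -un)
echo "user=$user uid=$(id -u "$user") gid=$(id -g "$user")"
```

Run this on every compute node; if the uid or gid values differ, adjust them with nova_userid.sh before taking backups.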
    git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/
    cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
    cp ansible/main-install.yml   /opt/openstack-ansible/playbooks/os-tvault-install.yml
    cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml
    cp ansible/tvault_pre_install.yml /opt/openstack-ansible/playbooks/
    - import_playbook: os-tvault-install.yml
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cat > /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml 
    component_skel:
      dmapi_api:
        belongs_to:
          - dmapi_all
    
    container_skel:
      dmapi_container:
        belongs_to:
          - tvault-dmapi_containers
        contains:
          - dmapi_api
    
    physical_skel:
      tvault-dmapi_containers:
        belongs_to:
          - all_containers
      tvault-dmapi_hosts:
        belongs_to:
          - hosts
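The `cat >` command above expects the file body to be pasted interactively and ended with Ctrl-D. The same inventory skeleton can be written non-interactively with a heredoc; a scratch path is used here for illustration, the guide's real path is /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml:

```shell
# Write the dmapi inventory skeleton in one shot via a heredoc.
target=/tmp/tvault-dmapi.yml
cat > "$target" <<'EOF'
component_skel:
  dmapi_api:
    belongs_to:
      - dmapi_all

container_skel:
  dmapi_container:
    belongs_to:
      - tvault-dmapi_containers
    contains:
      - dmapi_api

physical_skel:
  tvault-dmapi_containers:
    belongs_to:
      - all_containers
  tvault-dmapi_hosts:
    belongs_to:
      - hosts
EOF
grep -c 'belongs_to' "$target"
# → 4
```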
    #tvault-dmapi
tvault-dmapi_hosts:   # Add controller details in this section, as the Trilio DMAPI resides on the controller nodes.
      infra-1:            # controller host name. 
        ip: 172.26.0.3    # Ip address of controller
  infra-2:            # If there are multiple controllers, add each one's details in the same manner
        ip: 172.26.0.4    
        
    #tvault-datamover
tvault_compute_hosts: # Add compute details in this section, as the Trilio Datamover resides on the compute nodes.
      infra-1:            # compute host name. 
        ip: 172.26.0.7    # Ip address of compute node
  infra-2:            # If there are multiple compute nodes, add each one's details in the same manner
        ip: 172.26.0.8
    ##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
    #ip address of TVM
    IP_ADDRESS: sample_tvault_ip_address
    
    ##Time Zone
    TIME_ZONE: "Etc/UTC"
    
    ## Don't update or modify the value of TVAULT_PACKAGE_VERSION
## The default value is '4.2.64'
    TVAULT_PACKAGE_VERSION: 4.2.64
    
    # Update Openstack dist code name like ussuri etc.
    OPENSTACK_DIST: ussuri
    
    #Need to add the following statement in nova sudoers file
    #nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
#These changes are required for the Datamover; otherwise the Datamover will not work
#Are you sure? Please set the variable to
#  UPDATE_NOVA_SUDOERS_FILE: proceed
#otherwise the ansible tvault-contego installation will exit
    UPDATE_NOVA_SUDOERS_FILE: proceed
    
    ##### Select snapshot storage type #####
#Details for NFS as snapshot storage; NFS_SHARES entries should begin with "-".
    ##True/False
    NFS: False
    NFS_SHARES: 
              - sample_nfs_server_ip1:sample_share_path
              - sample_nfs_server_ip2:sample_share_path
    
    #if NFS_OPTS is empty then default value will be "nolock,soft,timeo=180,intr,lookupcache=none"
    NFS_OPTS: ""
    
    ## Valid for 'nfs' backup target only.
## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend, then
## set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'
    multi_ip_nfs_enabled: False
    
    #### Details for S3 as snapshot storage
    ##True/False
    S3: False
    VAULT_S3_ACCESS_KEY: sample_s3_access_key
    VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
    VAULT_S3_REGION_NAME: sample_s3_region_name
    VAULT_S3_BUCKET: sample_s3_bucket
    VAULT_S3_SIGNATURE_VERSION: default
    #### S3 Specific Backend Configurations
#### Provide one of the following two values in the s3_type variable; the string's case must match
    #Amazon/Other_S3_Compatible
    s3_type: sample_s3_type
    #### Required field(s) for all S3 backends except Amazon
    VAULT_S3_ENDPOINT_URL: ""
    #True/False
    VAULT_S3_SECURE: True
    VAULT_S3_SSL_CERT: ""
    
    ###details of datamover API
    ##If SSL is enabled "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
    #DMAPI_ENABLED_SSL_APIS: dmapi
    ##If SSL is disabled "DMAPI_ENABLED_SSL_APIS" value should be empty.
    DMAPI_ENABLED_SSL_APIS: ""
    DMAPI_SSL_CERT: ""
    DMAPI_SSL_KEY: ""
    
    ## Trilio dmapi_workers count
    ## Default value of dmapi_workers is 16
    dmapi_workers: 16
    
#### If any service is using a Ceph backend, set ceph_backend_enabled to True
    #True/False
    ceph_backend_enabled: False
    
    ## Provide Horizon Virtual Env path from Horizon_container
    ## e.g. '/openstack/venvs/horizon-23.1.0'
    horizon_virtual_env: '/openstack/venvs/horizon*'
    
## When more than one Nova virtual env exists on the compute node(s) and
## a specific Nova virtual env should be used from the existing ones,
## only then uncomment the var nova_virtual_env and pass a value like 'openstack/venvs/nova-23.2.0'
    
    #nova_virtual_env: 'openstack/venvs/nova-23.2.0'
    
    #Set verbosity level and run playbooks with -vvv option to display custom debug messages
    verbosity_level: 3
    
    #******************************************************************************************************************************************************************
    ###static fields for tvault contego extension ,Please Do not Edit Below Variables
    #******************************************************************************************************************************************************************
    #SSL path
    DMAPI_SSL_CERT_DIR: /opt/config-certs/dmapi
    VAULT_S3_SSL_CERT_DIR: /opt/config-certs/s3
    RABBITMQ_SSL_DIR: /opt/config-certs/rabbitmq
    DMAPI_SSL_CERT_PATH: /opt/config-certs/dmapi/dmapi-ca.pem
    DMAPI_SSL_KEY_PATH: /opt/config-certs/dmapi/dmapi.key
    VAULT_S3_SSL_CERT_PATH: /opt/config-certs/s3/ca_cert.pem
    RABBITMQ_SSL_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq.pem
    RABBITMQ_SSL_KEY_PATH: /opt/config-certs/rabbitmq/rabbitmq.key
    RABBITMQ_SSL_CA_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq-ca.pem
    
    PORT_NO: 8085
    PYPI_PORT: 8081
    DMAPI_USR: dmapi
    DMAPI_GRP: dmapi
    #tvault contego file path
    TVAULT_CONTEGO_CONF: /etc/tvault-contego/tvault-contego.conf
    TVAULT_OBJECT_STORE_CONF: /etc/tvault-object-store/tvault-object-store.conf
    NOVA_CONF_FILE: /etc/nova/nova.conf
    #Nova distribution specific configuration file path
    NOVA_DIST_CONF_FILE: /usr/share/nova/nova-dist.conf
    TVAULT_CONTEGO_EXT_USER: nova
    TVAULT_CONTEGO_EXT_GROUP: nova
    TVAULT_DATA_DIR_MODE: 0775
    TVAULT_DATA_DIR_OLD: /var/triliovault
    TVAULT_DATA_DIR: /var/triliovault-mounts
    TVAULT_CONTEGO_VIRTENV: /home/tvault
    TVAULT_CONTEGO_VIRTENV_PATH: "{{TVAULT_CONTEGO_VIRTENV}}/.virtenv"
    TVAULT_CONTEGO_EXT_BIN: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/tvault-contego"
    TVAULT_CONTEGO_EXT_PYTHON: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/python"
    TVAULT_CONTEGO_EXT_OBJECT_STORE: ""
    TVAULT_CONTEGO_EXT_BACKEND_TYPE: ""
    TVAULT_CONTEGO_EXT_S3: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/lib/python2.7/site-packages/contego/nova/extension/driver/s3vaultfuse.py"
    privsep_helper_file: /home/tvault/.virtenv/bin/privsep-helper
    pip_version: 7.1.2
    virsh_version: "1.2.8"
    contego_service_file_path: /etc/systemd/system/tvault-contego.service
    contego_service_ulimits_count: 65536
    contego_service_debian_path: /etc/init/tvault-contego.conf
    objstore_service_file_path:  /etc/systemd/system/tvault-object-store.service
    objstore_service_debian_path: /etc/init/tvault-object-store.conf
    ubuntu: "Ubuntu"
    centos: "CentOS"
    redhat: "RedHat"
    Amazon: "Amazon"
    Other_S3_Compatible: "Other_S3_Compatible"
    tvault_datamover_api: tvault-datamover-api
    datamover_service_file_path: /etc/systemd/system/tvault-datamover-api.service
    datamover_service_debian_path: /etc/init/tvault-datamover.conf
    datamover_log_dir: /var/log/dmapi
    trilio_yum_repo_file_path: /etc/yum.repos.d/trilio.repo
    
    
    verbosity_level: 3
    ## Valid for 'nfs' backup target only.
## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend, then
## set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'
    multi_ip_nfs_enabled: False
    cd triliovault-cfg-scripts/common/
    vi triliovault_nfs_map_input.yml
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    vi triliovault_nfs_map_output.yml
    cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml
    cd /opt/openstack-ansible/playbooks
    
    ## Run tvault_pre_install.yml to install lxc packages
    ansible-playbook tvault_pre_install.yml
    
    # To create Dmapi container
    openstack-ansible lxc-containers-create.yml 
    
    #To Deploy Trilio Components
    openstack-ansible os-tvault-install.yml
    
    #To configure Haproxy for Dmapi
    openstack-ansible haproxy-install.yml
    openstack-ansible setup-infrastructure.yml --syntax-check
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml
    lxc-ls                                           # Check the dmapi container is present on controller node.
    lxc-info -s controller_dmapi_container-a11984bf  # Confirm running status of the container
    systemctl status tvault-contego.service
    systemctl status tvault-object-store  # If Storage backend is S3
    df -h                                 # Verify the mount point is mounted on compute node(s)
    lxc-attach -n controller_horizon_container-1d9c055c                                   # To login on horizon container
    apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'              # For ubuntu based container
    dnf list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'     # For CentOS based container
     haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.
    workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
    workloadmgr workload-create --instance <instance-id=instance-uuid>
                                [--display-name <display-name>]
                                [--display-description <display-description>]
                                [--workload-type-id <workload-type-id>]
                                [--source-platform <source-platform>]
                                [--jobschedule <key=key-name>]
                                [--metadata <key=key-name>]
                                [--policy-id <policy_id>]
                                [--encryption <True/False>]
                                [--secret-uuid <secret_uuid>]
    workloadmgr workload-show <workload_id> [--verbose <verbose>]
    usage: workloadmgr workload-modify [--display-name <display-name>]
                                       [--display-description <display-description>]
                                       [--instance <instance-id=instance-uuid>]
                                       [--jobschedule <key=key-name>]
                                       [--metadata <key=key-name>]
                                       [--policy-id <policy_id>]
                                       <workload_id>
    workloadmgr workload-delete [--database_only <True/False>] <workload_id>
    workloadmgr workload-unlock <workload_id>
    workloadmgr workload-reset <workload_id>

Additional step: to ensure that cron is actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron]

Instead of the file tls-endpoints-public-ip.yaml, use ‘environments/trilio_env_tls_endpoints_public_ip.yaml’
Instead of the file tls-everywhere-endpoints-dns.yaml, use ‘environments/trilio_env_tls_everywhere_dns.yaml’
https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
Bug #1915242 “[Train] [CentOS7] Undercloud jobs puppet task ertm...” : Bugs : tripleo
https://github.com/saltedsignal/puppet-certmonger/pull/35/files
    hashtag
    Deliverables against T4O-4.3.0

Package/Container Names | Package Kind | Package Version/Container Tags

contegoclient | deb | 4.3.1
dmapi | deb | 4.3.1
python3-contegoclient | deb | 4.3.1
python3-dmapi

    hashtag
    Containers and Gitbranch

Name | Tag | Gitbranch

RHOSP13 containers | 4.3.0-rhosp13 | 4.3.0
RHOSP16.1 containers | 4.3.0-rhosp16.1 | 4.3.0
RHOSP16.2 containers | 4.3.0-rhosp16.2 | 4.3.0
RHOSP17.0 containers | 4.3.0-rhosp17.0 | 4.3.0

    hashtag
    Changelog

• Improved workload import performance.

• Fixes for issues reported by customers.

    hashtag
    Fixed Bugs and issues

    1. Trilio backup deleted after completion without manual intervention and backup not visible under workload.

    2. S3 mountpoint gets stalled when the datamover container is stopped.

    3. TVM uses outdated TLS protocols when HTTPS is activated for the Trilio API.

    hashtag
    Known issues

1. Import of multiple workloads doesn't get distributed evenly across all nodes

Observation: When the import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in a round-robin fashion. However, it currently runs the import on any one of the available nodes.

2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive, for the first time only

Observation: For workloads with a large number of VMs and a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout), and hence this issue might be observed.

    Comments:

    1. If a user hits this issue, then the import of the snapshot has already been triggered and now the user needs to wait till the snapshot import is done.

    2. The users are advised to wait for a couple of minutes before they recheck the snapshot details.

    3. Once the details of the snapshot are visible, the restore operation can be carried out.

3. Import of ALL workloads without specifying any workload id is NOT recommended

Observation: If the user runs the workload import command without any workload id, expecting that all eligible workloads will get imported, the command execution takes longer than expected.

    Comments:

1. As long as the import command has not returned, it is still running; if successful, it will return the job ID, and if not, it will throw an error message.

    2. The execution time may vary based on the number of workloads present in the backend target.

    3. It is recommended to run this command with specific workload IDs.

4. To import ALL workloads at once, all workload IDs must be provided as parameters to the import CLI command. The procedure for this is mentioned below.

4. Snapshots being deleted show status as available in the Horizon UI

Observation: A snapshot may show as available instead of deleting while the deletion is in progress.

    Workaround:

    Wait until all the delete operations have completed. Eventually, all the snapshots will be deleted successfully.

    After the first login, you will be prompted to change the admin password.

    Unlike previous versions of Trilio, the current version only requires you to configure the cluster once and the Trilio dashboard provides cluster-wide management capability.

    hashtag
    Uploading the OpenStack certificate bundle

    OpenStack endpoints can be configured to use TLS. In such a configuration the Trilio appliance needs to trust the certificates provided by the OpenStack endpoints.

To establish this trust, it is required to upload the OpenStack certificate bundle through the OS API certificate tab of the Trilio appliance dashboard.

    The certificate bundle is located on the controller nodes of the OpenStack installation.

    The default paths for each distribution are as follows:

    The uploaded certificates can be verified on the Trilio appliance at the following location.
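A certificate bundle is simply a concatenation of PEM blocks, so a quick sanity check before uploading is to count the BEGIN CERTIFICATE markers it contains. The default path below is an assumption (the real path varies per distribution, as noted above):

```shell
# Count certificates in the bundle before uploading it to the dashboard.
# Override BUNDLE with your distribution's bundle path.
bundle=${BUNDLE:-/etc/pki/tls/certs/ca-bundle.crt}
grep -c 'BEGIN CERTIFICATE' "$bundle"
```

A count of 0 (or a missing file) indicates the wrong path was picked for this distribution.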

    hashtag
    Details needed for the Trilio Appliance

    Once you log in to an unconfigured Trilio Appliance, the first page you encounter is the configurator. This tool needs specific details about the Trilio Appliance, OpenStack, and Backup Storage to proceed.

    hashtag
    Trilio Cluster information

The Trilio Cluster must be integrated into an existing OpenStack environment. The following fields ask for the details of your Trilio Cluster.

    • Controller Nodes

      • This is the list of Trilio virtual appliance IP addresses along with their hostnames.

      • Format: comma-separated list with pairs combined through '='

  • Example: 172.20.4.151=tvault-104-1,172.20.4.152=tvault-104-2,172.20.4.153=tvault-104-3

    circle-info

The Trilio Cluster supports only 1-node and 3-node clusters.

    • Virtual IP Address

  • This is the Trilio cluster's virtual IP address, which is mandatory

      • Format: IP/Subnet

      • Example: 172.20.4.150/24

    circle-exclamation

    The Virtual IP is mandatory even for single-node clusters and has to be different from any IP assigned to a Trilio Controller Node.

    • Name Server

      • List of nameservers, primarily used to resolve OpenStack service endpoints.

      • Format: comma-separated list

      • example: 10.10.10.1,172.20.4.1

    circle-exclamation

If defining OpenStack endpoint hostnames in the /etc/hosts file on the Trilio Appliance VM is preferred over a DNS solution, you may set the nameserver to 0.0.0.0, the default gateway.

    • Domain Search Order

      • The domain the Trilio Cluster will use.

      • Format: comma-separated list

      • example: trilio.io,trilio.demo

    • NTP Servers

      • NTP servers the Trilio Cluster will use

      • format: comma-separated list

    • Timezone

      • Timezone the Trilio Cluster will use internally

      • format: pre-populated list
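The Controller Nodes field above uses comma-separated `ip=hostname` pairs. A small sketch (a hypothetical helper, not part of the appliance) that parses the documented example value, which can be handy for sanity-checking the entry before submitting the form:

```shell
# Split the comma-separated ip=hostname pairs of the Controller Nodes field.
nodes='172.20.4.151=tvault-104-1,172.20.4.152=tvault-104-2,172.20.4.153=tvault-104-3'
echo "$nodes" | tr ',' '\n' | while IFS='=' read -r ip host; do
  echo "node $host -> $ip"
done
# → node tvault-104-1 -> 172.20.4.151 (and one line per remaining pair)
```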

    hashtag
    OpenStack Credentials information

    The Trilio Appliance integrates with one OpenStack environment. The following fields ask for the information required to access and connect with the OpenStack Cluster.

    • Keystone URL

      • The Keystone endpoint used to fetch authentication for configuration

      • format: URL

      • example: https://keystone.trilio.io:5000/v3

    • Endpoint Type

      • Defines which endpoint type will be used to communicate with the Openstack endpoints

      • format: predefined list of radio buttons

    circle-info

    When FQDNs are used for the Keystone endpoints it is necessary to configure at least one DNS server before the configuration.

    Absent a DNS server, the IPs should be defined in the /etc/hosts file on the Trilio Appliance, and the nameserver should be set to 0.0.0.0.

    Otherwise, the validation of the Openstack Credentials will fail.

    • Domain ID

      • domain the provided user and tenant are located in

      • format: ID

      • example: default

    • Administrator

      • Username of an account with the domain admin role

      • format: String

    • Password

      • password for the prior provided user

      • format: String

    circle-info

    Trilio requires domain admin role access. To provide domain admin role to a user, the following command can be used:

    openstack role add --domain <domain id> --user <username> admin

    circle-exclamation

The Trilio configurator verifies after every entry whether it is possible to log in to OpenStack using the provided credentials.

    This verification will fail until all entries are set and correct.

    When the verification is successful it is possible to choose the Admin tenant, the Region, and the Trustee role without error.

    • Admin Tenant

      • The tenant to be used together with the provided user

      • format: a pre-populated list

      • example: admin

    • Region

      • Openstack Region the user and tenant are located in

      • format: a pre-populated list

    • Trustee Role

      • The Openstack role required to be able to use Trilio functionalities

      • format: a pre-populated list

    triangle-exclamation

    When leveraging OpenStack Barbican for protecting encrypted volumes and offering encrypted backups, it's essential that the Trustee Role is assigned as 'Creator' or a role that possesses equivalent permissions to the Creator role.

    This is crucial because only the Creator role has the authority to create, read, and delete secrets within Barbican. The generation of encryption-enabled workloads would be unsuccessful if the Trustee Role does not possess the permissions associated with the 'Creator' role.

    hashtag
    Backup Storage Configuration information

    These fields request information about the backup target that the Trilio installation will use to store your backups.

    • OpenStack Distribution

      • Select the Distribution of OpenStack for Trilio integration

      • format: predefined list

      • example: RHOSP

    triangle-exclamation

    Some distributions of OpenStack require a special mount point to be used, so make the OpenStack Distribution selection carefully.

    • Backup Storage

      • Defines the Backup Storage protocol to use

      • format: predefined list of radio buttons

      • example: NFS

    hashtag
    Using the NFS protocol

    • NFS Export

      • The path under which the NFS Volumes to be used can be found

      • format: comma-separated list of NFS Volumes paths

      • example: 10.10.2.20:/upstream,10.10.5.100:/nfs2

    • NFS Options

      • NFS options used by the Trilio Cluster when mounting the NFS Exports

      • format: NFS options

    circle-info

On Cohesity NFS, if Input/Output errors are observed, try increasing the timeout and retrans parameter values in the NFS options.

Please use the predefined NFS options and only change them when it is known that changes are necessary.

Trilio tests against the predefined NFS options.

    hashtag
    Using the S3 protocol

    • S3 Compatible

      • Switch between Amazon and other S3 compatible storage solutions

      • format: predefined list

      • example: Amazon S3

    • (S3 compatible) Endpoint URL

      • URL to be used to reach and access the provided S3 compatible storage

      • format: URL

    • Access Key

      • Access Key necessary to login into the S3 storage

      • format: access key

    • Secret Key

      • Secret Key necessary to login into the S3 storage

      • format: secret key

    • Region

      • Configured Region for the S3 Bucket (keep the default for S3 compatible without Region)

      • format: String

    • Signature Version

      • S3 signature version to use for signing into the S3 storage

      • format: string

    • Bucket Name

      • Name of the bucket to be used as Backup target

      • format: string

    hashtag
    Using a Secured HTTPS Endpoint for Non-AWS S3 Storage

    When using a secure HTTPS endpoint for non-AWS S3 storage (for example Ceph), you should validate the Certificate Authority (CA) by uploading the corresponding CA certificate. The certificate can be uploaded in the "OS API Certificate" section, under the "Upload Client Certificate" subsection, as explained in Uploading the OpenStack Certificate Bundle.

    hashtag
    Advanced settings

    At the end of the configurator is the option to activate advanced settings.

    Activating this option provides the ability to configure the Keystone endpoints used for the Datamover API and Trilio.

    hashtag
    Setup Trilio and Datamover API endpoints.

Trilio generates Keystone endpoints for two services: the Trilio Datamover API and the Trilio Workloadmanager.

    OpenStack installations typically distribute endpoint types across various networks.

The advanced settings for both the Datamover API endpoints and the Trilio WorkloadManager endpoints enable configuration options that allow the user to accommodate such an environment.

    IP addresses supplied in these fields are added as additional VIPs to the Trilio cluster.

    Should a Fully Qualified Domain Name (FQDN) be used for those endpoints, the Trilio configurator will resolve the FQDN, subsequently identifying the associated IP addresses, which are then added as additional Virtual IP addresses (VIPs).
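The FQDN-to-VIP resolution described above can be illustrated with a short, stdlib-only Python sketch (the hostname is a placeholder; the configurator's actual resolution logic is internal to Trilio):

```python
import socket

def resolve_endpoint_vips(endpoint_host):
    """Resolve an endpoint FQDN to the set of IPv4 addresses that would
    be added as additional VIPs (illustrative only)."""
    infos = socket.getaddrinfo(endpoint_host, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

print(resolve_endpoint_vips("localhost"))
```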

    circle-exclamation

    It is recommended to verify the Datamover API settings against the ones configured during the installation of the Trilio components.

    circle-info

    Should these endpoints already exist in Keystone, their values will be prefilled and immutable. If changes are necessary, you must first remove the old Keystone endpoints.

    circle-info

    Providing a URL with https activates the TLS enabled configuration, which requires the upload of certificates and the connected private key.

    hashtag
    Set up an external database

    Trilio allows the use of an external MySQL or MariaDB database.

    This database needs to be prepared by creating the empty workloadmgr database, creating the workloadmgr user and setting the right permissions.

    An example command to create this database would be:

    Provide the connection string to the Trilio configurator.

    triangle-exclamation

    The database value can only be set upon an initial configuration of the Trilio solution.

    When the Cluster has been configured to use the internal database, then the connection string will not be shown in the next configuration attempt.

    In the case of an external database, the connection string will be shown but is immutable.

    hashtag
    Define the Trilio service user password

    Trilio is using a service user that is located in the OpenStack service project.

    The password for this service user will be generated randomly or can be defined in the advanced settings.

    hashtag
    Starting the configurator

    Once all entries have been set and all validations are error-free the configurator can be started.

    • Click Finish

    • Reconfirm in the pop-up that you want to start the configuration

    • Wait for the configurator to finish

    circle-info

Some elements of the configuration take longer than others. Even when it looks like the configurator is stuck, please wait until it finishes. Should the configurator not be finished after 6 hours have elapsed, please contact Trilio Support for assistance.

The configurator utilizes Ansible and Trilio internal API calls during the configuration process.

    Following each configuration block or upon completion of the entire configurator process, you have the opportunity to examine the output generated by Ansible.

    At the end of a successful configuration, the page will be forwarded to the configured VIP for the Trilio Appliance.

    Schedulers

    hashtag
    Disable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause

    Disables the scheduler of a given Workload
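As a quick sketch, the request URL for this call can be composed from the path parameters documented below (all values here are placeholders):

```python
def pause_workload_url(tvm_address, tenant_id, workload_id):
    """Build the workload scheduler 'pause' endpoint URL."""
    return f"https://{tvm_address}:8780/v1/{tenant_id}/workloads/{workload_id}/pause"

# Placeholder values for illustration
print(pause_workload_url("tvm.trilio.demo", "<tenant_id>", "<workload_id>"))
```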

    cd /home/stack
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    #Additionally keep the NFS share path noted
    #/var/lib/nova/triliovault-mounts/MTcyLjMwLjEuMzovcmhvc3BuZnM=
    
    ##Clone latest Trilio cfg scripts repository
    git clone --branch 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    Trilio Datamove container:        docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.3-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as shown in the following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo                  |
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env 
    sudo pip3 install PyYAML==5.1 
    
    ## On Python2 env 
    sudo pip install PyYAML==5.1
    ## On Python3 env 
    python3 ./generate_nfs_map.py 
     
    ## On Python2 env 
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  prod1-compute1.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo         kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  prod1-controller1.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo      kolla_start           5 days ago  Up 5 days ago         horizon
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
=> If the Trilio Datamover API container does not start or stays in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
     
    
=> If the Trilio Datamover container does not start or stays in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tailf /var/log/containers/trilio-datamover/tvault-contego.log
	i. Before proceeding with the upgrade OR reinitialization, fetch the list of ALL workload IDs which are NOT in error OR deleted state from the database.
		Query : select id from workloads where status not in ('deleted','error')
	ii. Use the IDs from this list to create the import CLI command parameters.
		Sample : --workloadids <wl_id1> --workloadids <wl_id2> …. etc. The shell command below does the same.
		wlIdList.txt should contain all workload IDs; one ID per line.
	iii. awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n'
	iv. Append the output of the above command to the import command.
		workloadmgr workload-importworkloads <Command output>
		Eg : workloadmgr workload-importworkloads --workloadids ff24945f-7bef-498d-98eb-d727ec85bc7b --workloadids a15948b4-942c-47e2-85c5-06cad697010f
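The awk one-liner above can be reproduced in Python; this is an illustrative equivalent that builds the --workloadids argument string from the lines of wlIdList.txt:

```python
def build_import_args(id_lines):
    """Turn workload IDs (one per line) into '--workloadids <id>' arguments."""
    ids = [line.strip() for line in id_lines if line.strip()]
    return " ".join(f"--workloadids {wid}" for wid in ids)

sample = ["ff24945f-7bef-498d-98eb-d727ec85bc7b",
          "a15948b4-942c-47e2-85c5-06cad697010f"]
print("workloadmgr workload-importworkloads " + build_import_args(sample))
```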
    RHOSP/TripleO: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    Kolla Ansible with CentOS: /etc/pki/tls/certs/ca-bundle.crt
    Kolla Ansible with Ubuntu:  /usr/local/share/ca-certificates/
    OpenStack Ansible (OSA) with Ubuntu in our lab: /etc/openstack_deploy/ssl/
OpenStack Ansible (OSA) with CentOS: /etc/openstack_deploy/ssl
    /etc/workloadmgr/ca-chain.pem
    create database workloadmgr_auto;
    CREATE USER 'trilio'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON workloadmgr_auto.* TO 'trilio'@'10.10.10.67' IDENTIFIED BY 'password';
mysql://trilio:password@10.10.10.67/workloadmgr_auto?charset=utf8
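To sanity-check the connection string format (user, host, and database as created by the GRANT statements above), here is a small stdlib-only Python sketch:

```python
from urllib.parse import urlparse

def parse_conn_string(conn):
    """Split a workloadmgr database connection string into its components."""
    parsed = urlparse(conn)
    return {
        "user": parsed.username,
        "host": parsed.hostname,
        "database": parsed.path.lstrip("/"),
    }

print(parse_conn_string("mysql://trilio:password@10.10.10.67/workloadmgr_auto?charset=utf8"))
```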

    deb

    4.3.1

    python3-s3-fuse-plugin

    deb

    4.3.1

    python3-tvault-contego

    deb

    4.3.1

    s3-fuse-plugin

    deb

    4.3.1

    tvault-contego

    deb

    4.3.1

    python3-tvault-horizon-plugin

    deb

    4.3.2

    tvault-horizon-plugin

    deb

    4.3.2

    python3-workloadmgrclient

    deb

    4.3.6

    python-workloadmgrclient

    deb

    4.3.6

    workloadmgr

    deb

    4.3.8

    python3-namedatomiclock

    deb

    1.1.3

    tvault-contego

    python

    4.3.1

    contegoclient

    python

    4.3.2

    dmapi

    python

    4.3.2

    s3fuse

    python

    4.3.2

    tvault-horizon-plugin

    python

    4.3.3

    tvault_configurator

    python

    4.3.7

    workloadmgrclient

    python

    4.3.7

    workloadmgr

    python

    4.3.9

    python3-trilio-fusepy-el9

    rpm

    3.0.1-1

    python3-trilio-fusepy

    rpm

    3.0.1-1

    trilio-fusepy

    rpm

    3.0.1-1

    puppet-triliovault

    rpm

    4.2.64-4.2

    contegoclient

    rpm

    4.3.1-4.3

    dmapi

    rpm

    4.3.1-4.3

    python3-contegoclient-el8

    rpm

    4.3.1-4.3

    python3-contegoclient-el9

    rpm

    4.3.1-4.3

    python3-dmapi-el9

    rpm

    4.3.1-4.3

    python3-dmapi

    rpm

    4.3.1-4.3

    python3-s3fuse-plugin-el9

    rpm

    4.3.1-4.3

    python3-s3fuse-plugin

    rpm

    4.3.1-4.3

    python3-tvault-contego-el9

    rpm

    4.3.1-4.3

    python3-tvault-contego

    rpm

    4.3.1-4.3

    python-s3fuse-plugin-cent7

    rpm

    4.3.1-4.3

    tvault-contego

    rpm

    4.3.1-4.3

    python3-tvault-horizon-plugin-el8

    rpm

    4.3.2-4.3

    python3-tvault-horizon-plugin-el9

    rpm

    4.3.2-4.3

    tvault-horizon-plugin

    rpm

    4.3.2-4.3

    python3-workloadmgrclient-el8

    rpm

    4.3.6-4.3

    python3-workloadmgrclient-el9

    rpm

    4.3.6-4.3

    workloadmgrclient

    rpm

    4.3.6-4.3

    Kolla Ansible Victoria containers

    4.3.0-victoria

    Kolla Ansible Wallaby containers

    4.3.0-wallaby

    Kolla Yoga Containers

    4.3.0-yoga

    Kolla Zed Containers

    4.3.0-zed

    TripleO Containers

    4.3.0-tripleo

    example: 0.pool.ntp.org,10.10.10.10
    example: UTC
    example: Public
    example: admin
    example: password
    example: RegionOne
    example: _member_
    example: nolock,soft,timeo=180,intr,lookupcache=none
  • NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

  • example: objects.trilio.io
    example: SFHSAFHPFFSVVBSVBSZRF
    example: bfAEURFGHsnvd3435BdfeF
    example: us-east-1
    example: default
    example: Trilio-backup
    hashtag
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

    ID of the Workload to disable the Scheduler in

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 11:52:56 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7

    hashtag
    Enable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume

    Enables the scheduler of a given Workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

ID of the Workload to enable the Scheduler in

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Scheduler Trust Status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Scheduler trust for a given Workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in

    workload_id

    string

ID of the Workload to validate the Scheduler trust for

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    triangle-exclamation

    All following API commands require an Authentication token against a user with admin-role in the authentication project.
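To obtain such a token, a Keystone v3 password authentication request is typically used. The following is a hedged, stdlib-only sketch of the request body (credential values are placeholders; consult the Keystone API documentation for the authoritative format):

```python
import json

def keystone_auth_body(username, password, project_name, domain_id="default"):
    """Build a Keystone v3 password-auth request body scoped to a project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": domain_id},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {"name": project_name, "domain": {"id": domain_id}}
            },
        }
    }

print(json.dumps(keystone_auth_body("admin", "password", "admin")))
```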

    hashtag
    Global Job Scheduler status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler

    Requests the status of the Global Job Scheduler

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in


    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Disable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable

    Requests disabling the Global Job Scheduler

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in


    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Enable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable

    Requests enabling the Global Job Scheduler

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project the Workload is located in


    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:06:01 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:31:49 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491
    
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2020-10-23T14:35:11.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "value":"871ca24f38454b14b867338cb0e9b46c",
          "description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-10-23T14:35:11.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
                "settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
                "settings_project_id":"c76b3355a164498aa95ddbc960adc238",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_domain_id":"default",
          "user":"ccddc7e7a015487fa02920f4d4979779",
          "tenant":"c76b3355a164498aa95ddbc960adc238"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:45:27 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b
    
    {"global_job_scheduler": true}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:49:29 GMT
    Content-Type: application/json
    Content-Length: 31
    Connection: keep-alive
    X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0
    
    {"global_job_scheduler": false}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:50:11 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc
    
    {"global_job_scheduler": true}

    E-Mail Notification Settings

E-Mail Notification Settings are managed through the settings API. Use the values from the following table to set up Email Notifications through the API.

    Setting name
    Settings Type
    Value type
    example

    smtp_default___recipient

    email_settings

    String

    [email protected]

    hashtag
    Create Setting

    POST https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Creates a Trilio setting.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body format

    Setting create requires a Body in json format, to provide the requested information.
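As an illustration, the request body for creating the smtp_port setting from the table above could be constructed like this (a sketch matching the generic body format shown in this section; the values are examples, not defaults):

```python
import json

def make_setting_body(name, setting_type, value):
    """Build the 'settings' request body for the Create Setting call."""
    return {
        "settings": [{
            "category": None,
            "name": name,
            "is_public": False,
            "is_hidden": False,
            "metadata": {},
            "type": setting_type,
            "value": value,
            "description": None,
        }]
    }

print(json.dumps(make_setting_body("smtp_port", "email_settings", "8080")))
```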

    hashtag
    Show Setting

    GET https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

    Shows all details of a specified setting

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Modify Setting

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Modifies the provided setting with the given details.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body format

Setting modify requires a Body in json format, to provide the information about the values to modify.

    hashtag
    Delete Setting

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

Deletes the specified setting.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    Installing on Kolla Openstack

    This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.

    hashtag
    1] Plan for Deployment

    circle-info

    Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.

    Managing Trusts

    triangle-exclamation

OpenStack Administrators should never need to work directly with the created trusts.

The cloud-trust is created during the Trilio configuration, and further trusts are created as necessary upon creating or modifying a workload.

    Backups-Admin Area

Trilio provides Backup-as-a-Service, which allows OpenStack users to manage and control their backups themselves. This does not eliminate the need for a Backup Administrator, who has an overview of the complete backup solution.

To provide Backup Administrators with the tools they need, Trilio for OpenStack provides a Backups-Admin area in Horizon in addition to the API and CLI.

    hashtag
    Access the Backups-Admin area

    To access the Backups-Admin area follow these steps:

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    smtp_default___sender

    email_settings

    String

    [email protected]

    smtp_port

    email_settings

    Integer

    587

    smtp_server_name

    email_settings

    String

    Mailserver_A

    smtp_server_username

    email_settings

    String

    admin

    smtp_server_password

    email_settings

    String

    password

    smtp_timeout

    email_settings

    Integer

    10

    smtp_email_enable

    email_settings

    Boolean

    True

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work with

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

ID of the Project/Tenant where to find the setting

    setting_name

    string

    Name of the setting to show

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

ID of the Tenant/Project to work with

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Tenant where to find the setting in |
    | setting_name | string | Name of the setting to delete |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication Token to use |
    | Accept | string | application/json |
    | User-Agent | string | python-workloadmgrclient |

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:55:43 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.890855",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:01:27 GMT
    Content-Type: application/json
    Content-Length: 380
    Connection: keep-alive
    X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c
    
    {
       "setting":{
          "created_at":"2021-02-04T11:55:43.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"smtp_port",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":null,
          "value":"8080",
          "description":null,
          "category":null,
          "type":"email_settings",
          "public":false,
          "hidden":false,
          "status":"available",
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:05:59 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:49:17 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473

    Refer to the acceptable values below for the placeholders triliovault_tag and kolla_base_distro, as per the OpenStack environment:

    | Openstack Version | triliovault_tag | kolla_base_distro |
    | --- | --- | --- |
    | Victoria | 4.3.2-victoria | ubuntu, centos |
    | Wallaby | 4.3.2-wallaby | ubuntu, centos |
    | Yoga | 4.3.2-yoga | ubuntu, centos |
    | Zed | 4.3.2-zed | |

    hashtag
    1.1] Select backup target type

    Backup target storage is used to store the backup images taken by Trilio. The details needed for configuration depend on the target type.

    The following backup target types are supported by Trilio. Select one of them and get it ready before proceeding to the next step.

    a) NFS

    • NFS share path

    b) Amazon S3

    • S3 Access Key • Secret Key • Region • Bucket name

    c) Other S3 compatible storage (like Ceph based S3)

    • S3 Access Key • Secret Key • Region • Endpoint URL (valid for S3 other than Amazon S3) • Bucket name

    hashtag
    2] Clone Trilio Deployment Scripts

    Clone the triliovault-cfg-scripts GitHub repository on the Kolla-ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.

    hashtag
    3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts

    hashtag
    3.1] Add Trilio global variables to globals.yml

    hashtag
    3.2] Add Trilio passwords to kolla passwords.yaml

    Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The appended passwords are empty; set them manually in /etc/kolla/passwords.yml.
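The append-and-set flow can be sketched as follows. This is a minimal illustration against scratch copies of the files, using a hypothetical key name (triliovault_example_password) rather than the actual key list shipped with Trilio:

```shell
# Illustrative only: on a real deployment the files are
# /etc/kolla/passwords.yml and triliovault_passwords.yml.
workdir=$(mktemp -d)
printf 'database_password: existing-secret\n' > "$workdir/passwords.yml"
# the shipped file contains keys with empty values
printf 'triliovault_example_password:\n' > "$workdir/triliovault_passwords.yml"
# 1) append the empty Trilio password keys
cat "$workdir/triliovault_passwords.yml" >> "$workdir/passwords.yml"
# 2) set the empty passwords manually (here via sed for brevity)
sed -i 's|^triliovault_example_password:.*|triliovault_example_password: MyS3cret|' "$workdir/passwords.yml"
grep 'triliovault_example_password' "$workdir/passwords.yml"
```

The same effect is achieved by editing /etc/kolla/passwords.yml in a text editor after the append.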

    hashtag
    3.3] Append Trilio site.yml content to kolla ansible’s site.yml

    hashtag
    3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.

    hashtag
    3.5] Configure multi-IP NFS

    circle-info

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    On kolla-ansible server node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    circle-info

    If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file too. If hostnames are used in the inventory file, then use the same hostnames in the NFS map input file.

    In short, the compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.

    vi triliovault_nfs_map_input.yml

    The triliovault_nfs_map_input.yml is explained here.
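As a rough illustration of the idea (the key names, share paths, and hostnames below are hypothetical; consult the referenced explanation for the authoritative schema), the map input pairs each NFS share IP with the compute hosts that should mount the volume through it:

```yaml
# Hypothetical sketch of triliovault_nfs_map_input.yml -- one NFS volume
# exposed through two IPs, split across the compute hosts.
multi_ip_nfs_map:
  - nfs_share: 192.168.10.21:/var/trilio-backups
    computes:
      - compute-01
      - compute-02
  - nfs_share: 192.168.10.22:/var/trilio-backups
    computes:
      - compute-03
```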

    Update PyYAML on the kolla-ansible server node only

    Expand the map file to create one to one mapping of compute and nfs share.

    Result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

    Open file 'triliovault_nfs_map_output.yml

    vi triliovault_nfs_map_output.yml

    available in the current directory and validate that all compute nodes are covered with all necessary nfs shares.

    Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'

    Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.

    hashtag
    4] Edit globals.yml to set Trilio parameters

    Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. You will find the Trilio related parameters at the end of the globals.yml file. Details like the Trilio build version, backup target type, backup target details, etc. need to be filled out.

    Following is the list of parameters that the user needs to edit.

    | Parameter | Defaults/choices | Comments |
    | --- | --- | --- |
    | triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla OpenStack version. The exact tag is mentioned in step 1. |
    | horizon_image_full | Uncomment | By default, the Trilio Horizon container is not deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the OpenStack Horizon container. |
    | triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team. |
    | triliovault_docker_password | <dockerhub-login-password> | |
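Taken together, the Trilio section at the end of /etc/kolla/globals.yml ends up looking roughly like the sketch below. All values are illustrative, and the authoritative parameter list is the one shipped in your globals.yml:

```yaml
# Illustrative excerpt of the Trilio block in /etc/kolla/globals.yml
triliovault_tag: "4.3.2-yoga"            # tag matching your OpenStack release (step 1)
#horizon_image_full: ...                  # uncomment to deploy the Trilio Horizon container
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
```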

    In the case of a registry other than Docker Hub, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry.

    Following are the triliovault container image URLs for the 4.3 releases. Replace the kolla_base_distro and triliovault_tag variables with their values.

    The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.

    circle-info

    Trilio supports source-based containers from the OpenStack Yoga release onwards.

    Below are the Source-based OpenStack deployment images

    Below are the Binary-based OpenStack deployment images

    hashtag
    5] Enable Trilio Snapshot mount feature

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

    Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variables. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

    For a default Kolla installation, the variable will look as follows afterward:
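A sketch of the result, with the pre-existing volume entries abbreviated since only the appended Trilio bind changes:

```yaml
nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  # ... remaining default volume entries unchanged ...
  - "/var/trilio:/var/trilio:shared"
```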

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

    After the change, the variable will look as follows for a default Kolla installation:

    In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.

    After the changes, the variable will look like the following:

    hashtag
    6] Pull Trilio container images

    Activate the login into dockerhub for Trilio tagged containers.

    Please get the Dockerhub login credentials from Trilio Sales/Support team

    Pull the Trilio container images from Dockerhub based on the existing inventory file. In the example, the inventory file is named multinode.

    hashtag
    7] Deploy Trilio

    All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.

    This is just an example command. You need to use your cloud deploy command.

    circle-info

    Post deployment on a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf. Then restart the datamover container.
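Inside the datamover container, the added option belongs in the defaults section of /etc/multipath.conf, roughly as follows (other settings in the section are left untouched):

```
defaults {
    # ... existing settings unchanged ...
    uxsock_timeout 60000    # 60 sec
}
```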

    hashtag
    8] Verify Trilio deployment

    Verify on the controller and compute nodes that the Trilio containers are in UP state.

    Following is a sample output of the commands from controller and compute nodes. triliovault_tag will have the value matching the OpenStack release of the deployment.

    hashtag
    9] Troubleshooting Tips

    hashtag
    9.1 ] Check Trilio containers and their startup logs

    To see all TrilioVault containers running on a specific node use the docker ps command.

    To check the startup logs use the docker logs <container name> command.

    hashtag
    9.2] Trilio Horizon tabs are not visible in Openstack

    Verify that the Trilio Appliance is configured. The Horizon tabs are only shown, when a configured Trilio appliance is available.

    Verify that the Trilio horizon container is installed and in a running state.

    hashtag
    9.3] Trilio Service logs

    • Trilio datamover api service logs on datamover api node

    • Trilio datamover service logs on datamover node

    hashtag
    10. Change the nova user id on the Trilio Nodes

    Note: This step needs to be done on Trilio Appliance node. Not on OpenStack node.

    Pre-requisite: You should have already launched Trilio appliance VM

    In the Kolla OpenStack distribution, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that 'nova' user and group id has changed to '42436'

    5. After this step, you can proceed to 'Configuring Trilio' section.

    hashtag
    11. Advanced configurations - [Optional]

    11.1] Trilio uses cinder's ceph user for interacting with the Ceph cinder storage. This user name is defined by the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.

    If the user edits this parameter value, cinder's ceph user and the triliovault datamover's ceph user will be updated upon the next kolla-ansible deploy command.
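The parameter is a single line in /etc/kolla/globals.yml; 'cinder' is the usual default and is shown here only as an illustration:

```yaml
# /etc/kolla/globals.yml -- Ceph user name shared by cinder and the
# triliovault datamover for Ceph access.
ceph_cinder_user: "cinder"
```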

    hashtag
    List Trusts

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts

    Provides the lists of trusts for the given Tenant.

    hashtag
    Path Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Tenant/Project to fetch the trusts from |

    hashtag
    Query Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | is_cloud_admin | boolean | true/false |

    hashtag
    Headers

    | Name | Type | Description |
    | --- | --- | --- |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication token to use |
    | Accept | string | application/json |
    | User-Agent | string | python-workloadmgrclient |
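The path, query, and header parameters above combine into a request of the following shape; address and tenant values are illustrative, and the curl line is shown as a comment rather than executed here:

```shell
# Build the List Trusts request URL from the documented parameters.
tvm_address="tvm.example.com"
tenant_id="4dfe98a43bfa404785a812020066b4d6"
url="https://${tvm_address}:8780/v1/${tenant_id}/trusts?is_cloud_admin=false"
echo "$url"
# curl -H "X-Auth-Token: $TOKEN" -H "X-Auth-Project-Id: $PROJECT" \
#      -H "Accept: application/json" -H "User-Agent: python-workloadmgrclient" "$url"
```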

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:21:57 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3
    
    {
       "trust":[
          {
             "created_at":"2020-11-26T13:10:53.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
    

    hashtag
    Create Trust

    POST https://$(tvm_address):8780/v1/$(tenant_id)/trusts

    Creates a trust for the provided Tenant/Project with the given details.

    hashtag
    Path Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Tenant/Project to create the Trust for |

    hashtag
    Headers

    | Name | Type | Description |
    | --- | --- | --- |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication token to use |
    | Content-Type | string | application/json |
    | Accept | string | application/json |

    hashtag
    Body Format

    hashtag
    Show Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Shows all details of a specified trust

    hashtag
    Path Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Project/Tenant where to find the Trust |
    | trust_id | string | ID of the Trust to show |

    hashtag
    Headers

    | Name | Type | Description |
    | --- | --- | --- |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication token to use |
    | Accept | string | application/json |
    | User-Agent | string | python-workloadmgrclient |

    hashtag
    Delete Trust

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Deletes the specified trust.

    hashtag
    Path Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Tenant where to find the Trust in |
    | trust_id | string | ID of the Trust to delete |

    hashtag
    Headers

    | Name | Type | Description |
    | --- | --- | --- |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication Token to use |
    | Accept | string | application/json |
    | User-Agent | string | python-workloadmgrclient |

    hashtag
    Validate Scheduler Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Trust of a given Workload.

    hashtag
    Path Parameters

    | Name | Type | Description |
    | --- | --- | --- |
    | tvm_address | string | IP or FQDN of Trilio Service |
    | tenant_id | string | ID of the Project/Tenant where to find the Workload |
    | workload_id | string | ID of the Workload to validate the Trust of |

    hashtag
    Headers

    | Name | Type | Description |
    | --- | --- | --- |
    | X-Auth-Project-Id | string | Project to run the authentication against |
    | X-Auth-Token | string | Authentication token to use |
    | Accept | string | application/json |
    | User-Agent | string | python-workloadmgrclient |

  • Login to Horizon using admin user.

  • Click on Admin Tab.

  • Navigate to Backups-Admin Tab.

  • Navigate to Trilio page.

  • The Backups-Admin area provides the following features.

    It is possible to reduce the shown information down to a single tenant, making it easy to see the exact impact the chosen Tenant has.

    hashtag
    Status overview

    The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:

    • Storage Usage (nfs only)

    • Number of protected VMs compared to number of existing VMs

    • Number of currently running Snapshots

    • Status of TVault Nodes

    • Status of Contego Nodes

    circle-info

    The status of nodes is filled when the services are running and in good status.

    hashtag
    Workloads tab

    This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and is therefore the default tab shown when opening the Backups-Admin area.

    The following information is shown:

    • User-ID that owns the Workload

    • Project that contains the Workload

    • Workload name

    • Workload Type

    • Availability Zone

    • Amount of protected VMs

    • Performance information about the last 30 backups

      • How much data was backed up (green bars)

      • How long the Backup took (red line)

    • Pie chart showing the amount of Full (Blue) Backups compared to Incremental (Red) Backups

    • Number of successful Backups

    • Number of failed Backups

    • Storage used by that Workload

    • Which Backup target is used

    • When is the next Snapshot run

    • What is the general interval of the Workload

    • Scheduler Status including a Switch to deactivate/activate the Workload

    hashtag
    Usage tab

    Administrators often need to figure out, where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps in these tasks by providing the following information:

    • Storage used by a Tenant

    • VMs protected by a Tenant

    It is possible to drill down to see the same information per workload and finally per protected VM.

    circle-info

    The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.

    hashtag
    Nodes tab

    This tab displays information about Trilio cluster nodes. The following information is shown:

    • Node name

    • Node ID

    • Trilio Version of the node

    • IP Address

    • Active Controller Node (True/False)

    • Status of the Node

    circle-info

    The Virtual IP is shown as its own node. It is typically shown directly below the currently active Controller Node.

    hashtag
    Data Movers tab (Trilio Data Mover Service)

    This tab displays information about the Trilio contego service. The following information is shown:

    • Service-Name

    • Compute Node the service is running on

    • Zone

    • Service Status from Openstack perspective (enabled/disabled)

    • Version of the Service

    • General Status

    • Last time the Status was updated

    hashtag
    Storage tab

    This tab displays information about the backup target storage. It contains the following information:

    • Storage Name

    circle-info

    Clicking on the Storage name provides an overview of all workloads stored on that storage.

    • Capacity of the storage

    • Total utilization of the storage

    • Status of the storage

    • Statistic information

      • Percentage of all storage that is used

      • Percentage of storage used for full backups

      • Amount of Full backups versus Incremental backups

    hashtag
    Audit tab

    Audit logs provide the sequence of workload related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:

    • Time of the entry

    • What task has been done

    • Project the task has performed in

    • User that performed the task

    The Audit log can be searched for strings, for example to find only entries done by a specific user.

    Additionally, the shown timeframe can be changed as necessary.

    hashtag
    License tab

    The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.

    circle-info

    A license validation is automatically done, when opening the tab.

    The following information about an active license is shown:

    • Organization (License name)

    • License ID

    • Purchase date - when was the license created

    • License Expiry Date

    • Maintenance Expiry Date

    • License value

    • License Edition

    • License Version

    • License Type

    • Description of the License

    • Evaluation (True/False)

    triangle-exclamation

    Trilio will stop all activities once a license is no longer valid or expired.

    hashtag
    Policy tab

    The policy tab gives Administrators the possibility to work with workload policies.

    circle-info

    Please use Workload Policies in the Admin guide to learn more about how to create and use Workload Policies.

    hashtag
    Settings tab

    This tab manages all global settings for the whole cloud. Trilio has two types of settings:

    1. Email settings

    2. Job scheduler settings.

    hashtag
    Email Settings

    These settings will be used by Trilio to send email reports of snapshots and restores to users.

    circle-exclamation

    Configuring the Email settings is a must-have to provide Email notification to Openstack users.

    The following information are required to configure the email settings:

    • SMTP Server

    • SMTP username

    • SMTP password

    • SMTP port

    • SMTP timeout

    • Sender email address

    circle-info

    A test email can be sent directly from the configuration page.

    To work with email settings through CLI use the following commands:

    To set an email setting for the first time or after deletion use:

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    • --type➡️settings type ➡️ set to email_settings

    • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

    • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

    • --metadata➡️optional metadata for the setting ➡️ Not required for email settings

    • <name>➡️name of the setting ➡️ Take from the list below

    • <value>➡️value of the setting ➡️ Take value type from the list below
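Assembled from the flags above, a first-time setting-create call takes the following shape. The values are illustrative; the command is only composed and echoed here, and would be run where the workloadmgr client is installed:

```shell
# Compose the CLI call from the documented flags; echoed, not executed.
setting_name="smtp_port"
setting_value="587"
cmd="workloadmgr setting-create --type email_settings --is-public False --is-hidden False ${setting_name} ${setting_value}"
echo "$cmd"
```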

    To update an already set email setting through CLI use:

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    • --type➡️settings type ➡️ set to email_settings

    • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

    • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

    • --metadata➡️optional metadata for the setting ➡️ Not required for email settings

    • <name>➡️name of the setting ➡️ Take from the list below

    • <value>➡️value of the setting ➡️ Take value type from the list below

    To show an already set email setting use:

    • --get_hidden➡️show hidden settings (True) or not (False) ➡️ Not required for email settings, use False if set

    • <setting_name>➡️name of the setting to show➡️ Take from the list below

    To delete a set email setting use:

    • <setting_name>➡️name of the setting to delete ➡️ Take from the list below

    | Setting name | Value type | Example |
    | --- | --- | --- |
    | smtp_default_recipient | String | [email protected] |
    | smtp_default_sender | String | [email protected] |
    | smtp_port | Integer | 587 |

    hashtag
    Disable/Enable Job Scheduler

    The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.

    To activate/deactivate the Global Job Scheduler through the Backups-Admin area:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin Tab.

    4. Navigate to Trilio page.

    5. Navigate to the Settings tab

    6. Click "Disable/Enable Job Scheduler"

    7. Check or Uncheck the box for "Job Scheduler Enabled"

    8. Confirm by clicking on "Change"

    The Global Job Scheduler can be controlled through CLI as well.

    To get the status of the Global Job Scheduler use:

    To deactivate the Global Job Scheduler use:

    To activate the Global Job Scheduler use:

    Upgrading on RHOSP

    hashtag
    0] Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • No Snapshot or Restore is running

    • Global job scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

    hashtag
    Deactivating the wlm-cron service

    The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.

    hashtag
    1] [On Undercloud node] Clone latest Trilio repository and upload Trilio puppet module

    circle-exclamation

    All commands need to be run as user 'stack' on undercloud node

    hashtag
    1.1] Clone Trilio cfg scripts repository

    Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.

    circle-info

    Available RHOSP_RELEASE_DIRECTORY values are:

    rhosp13, rhosp16.1, rhosp16.2, rhosp17.0

    hashtag
    1.2] If backup target type is 'Ceph based S3' with SSL:

    If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the puppet directory of the right release.

    hashtag
    2] Upload Trilio puppet module

    hashtag
    3] Update overcloud roles data file to include Trilio services

    Trilio has two services as explained below. You need to add these two services to your roles_data.yaml. If you do not have customized roles_data file, you can find your default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on undercloud.

    You need to find that roles_data file and edit it to add the following Trilio services.

    i) Trilio Datamover Api Service:

    Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi

    circle-info

    This service needs to be co-located with the database and keystone services. That is, you need to add this service to the same role as the keystone and database services.

    Typically this service should be deployed on controller nodes where keystone and the database run. If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.

    ii) Trilio Datamover Service:

    Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover

    This service should be deployed on the role where the nova-compute service is running. If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamover service to the Compute role.

    If you have defined custom roles, identify the role where the 'nova-compute' service is running and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.

    iii) Trilio Horizon Service:

    This service needs to share the same role as the OpenStack Horizon service. In the case of the pre-defined roles, the Horizon service runs on the Controller role. Add OS::TripleO::Services::TrilioHorizon to the identified role.
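For the pre-defined Controller role, the edits from (i) and (iii) end up as extra entries in its ServicesDefault list, sketched here with the surrounding entries abbreviated:

```yaml
# roles_data.yaml excerpt (existing entries abbreviated)
- name: Controller
  # ...
  ServicesDefault:
    # ... existing services unchanged ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioHorizon
```

The OS::TripleO::Services::TrilioDatamover entry goes into the Compute role's ServicesDefault list in the same way.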

    hashtag
    4] Prepare latest Trilio container images

    circle-exclamation

    All commands need to be run as user 'stack'

    Trilio containers are pushed to 'RedHat Container Registry'. Registry URL is ''. The Trilio container URLs are as follows:

    circle-info

    Read <HOTFIX-TAG-VERSION> as 4.3.2 in the below sections.

    hashtag
    4.1] available container images

    hashtag
    RHOSP 13

    hashtag
    RHOSP 16.1

    hashtag
    RHOSP 16.2

    hashtag
    RHOSP 17.0

    There are three registry methods available in RedHat Openstack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    hashtag
    4.2] Remote Registry

    circle-info

    Please refer to the list of available container images in section 4.1 to see which containers are available.

    Follow this section when 'Remote Registry' is used.

    For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    circle-info

    trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments

    hashtag
    Example

    hashtag
    4.2] Local Registry

    circle-info

    Please refer to the list of available container images in section 4.1 to see which containers are available.

    Follow this section when 'local registry' is used on the undercloud.

    In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml.

    hashtag
    RHOSP13 example

    hashtag
    RHOSP 16.1, RHOSP16.2 and RHOSP17.0 example

    The changes can be verified using the following commands.

    hashtag
    4.3] Red Hat Satellite Server

    circle-info

    Please refer to the list of available container images in section 4.1 to see which containers are available.

    Follow this section when a Satellite Server is used for the container registry.

    Pull the Trilio containers on the Red Hat Satellite using the given

    Populate the trilio_env.yaml with container urls.

    hashtag
    RHOSP 13 example

    hashtag
    RHOSP 16.1, RHOSP 16.2 and RHOSP 17.0

    hashtag
    5. Verify Trilio environment details

    It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.
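To spot parameters that changed between versions, the old and new files can be compared first, for example as sketched below (the paths assume the backup directory created when the previous triliovault-cfg-scripts clone was renamed):

```shell
# Compare the previous trilio_env.yaml with the freshly cloned one
diff -u /home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
        /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
```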

    Locations of the trilio_env.yaml:

    For more details about the trilio_env.yaml please check the corresponding documentation.

    hashtag
    6] Configure multi-IP NFS

    circle-info

    This section is only needed when the multi-IP feature for NFS is required.

    This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

    On the undercloud node, change the directory

    Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host to NFS share/IP map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    circle-info

    Run this command on the undercloud after sourcing 'stackrc'.

    Edit the input map file and fill in all the details. Refer to the documentation for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    circle-exclamation

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.

    The result will be written to the file 'triliovault_nfs_map_output.yml'.
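The point of the multi-IP feature is that the same NFS export can be reached via several IPs, and each compute node is assigned exactly one of them. Purely as an illustration of the expanded one-to-one result (hostnames and IPs are invented; the authoritative file structure is described in the referenced documentation):

```yaml
# Hypothetical expanded one-to-one mapping: each compute node mounts the
# same export through its own NFS IP
overcloud-novacompute-0.localdomain: 192.168.122.101:/nfs/tvault
overcloud-novacompute-1.localdomain: 192.168.122.102:/nfs/tvault
```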

    Validate output map file

    Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

    vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

    hashtag
    7] Update Overcloud Trilio components

    Use the following heat environment files and roles data file in the overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use the correct Trilio endpoint map file for the available Keystone endpoint configuration

    To include the new environment files use the '-e' option; for the roles data file use the '-r' option. An example overcloud deploy command is shown below:
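A minimal sketch of such a deploy command, assuming the default file locations from this guide (the roles_data.yaml path and any additional environment files from your existing deploy command are placeholders):

```shell
# Sketch only: add the Trilio files to your existing overcloud deploy command
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml
```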

    hashtag
    8] Verify deployment

    triangle-exclamation

    If the containers are in a restarting state or are not listed by the following commands, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.

    hashtag
    8.1] On Controller node

    Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. If the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.

    hashtag
    8.2] On Compute node

    Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

    hashtag
    8.3] On the node with Horizon service

    Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus Trilio's Horizon plugin.

    If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, use the workaround below.

    hashtag
    9] Enable mount-bind for NFS

    In T4O 4.2 and later releases, the way the mount point is derived has changed. It is necessary to set up a mount-bind to make T4O 4.1 or older backups available to T4O 4.2 and later releases.
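The background, as a sketch: T4O mounts an NFS share under a directory whose name is derived from the share string (base64 encoding is assumed here for illustration; verify the exact scheme on your deployment), so a changed derivation makes old backups invisible without a mount-bind:

```shell
# Illustration only: derive the mount directory name from the NFS share path
# by base64-encoding the share string (assumed scheme, verify on your setup)
share='192.168.122.101:/nfs/tvault'
echo "/var/triliovault-mounts/$(echo -n "$share" | base64)"
```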

    Please follow the respective documentation to set up the mount-bind for RHOSP.

    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    
    # For Centos and Ubuntu
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
    ## For Centos and Ubuntu
    # Take backup of globals.yml
    cp /etc/kolla/globals.yml /opt/
    
    # If the OpenStack release is other than 'zed', append the below Trilio global variables to globals.yml
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
    
    # If the OpenStack release is 'zed', append the below Trilio global variables to globals.yml
    cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml
    ## For Centos and Ubuntu
    # Take backup of passwords.yml
    cp /etc/kolla/passwords.yml /opt/
    
    # Append Trilio global variables to passwords.yml
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    # Edit '/etc/kolla/passwords.yml', go to the end of the file and set the Trilio passwords.
    # For Centos and Ubuntu
    # Take backup of site.yml
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
    
    # If the OpenStack release is 'yoga', append the below Trilio code to site.yml
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
    
    # If the OpenStack release is other than 'yoga', append the below Trilio code to site.yml
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
    For example, if your inventory file path is '/root/multinode', use the following command:
    
    cat ansible/triliovault_inventory.txt >> /root/multinode
    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
    
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu source based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu binary based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode deploy
    [controller] docker ps  | grep "trilio-"
    a2a3593f76db   trilio/kolla-centos-trilio-datamover-api:<triliovault_tag>       "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover_api
    5f573caa7b02   trilio/kolla-centos-trilio-horizon-plugin:<triliovault_tag>      "dumb-init --single-…"   23 hours ago    Up 23 hours              horizon
    
    [compute] docker ps | grep "trilio-"
    f6d443c2942c   trilio/kolla-centos-trilio-datamover:<triliovault_tag>          "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover
    docker ps -a | grep trilio
    docker logs trilio_datamover_api
    docker logs trilio_datamover
    docker ps | grep horizon
    /var/log/kolla/triliovault-datamover-api/dmapi.log
    /var/log/kolla/triliovault-datamover/tvault-contego.log
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:43:36 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0
    
    {
       "trust":[
          {
             "created_at":"2021-01-21T11:43:36.140407",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "value":"1c981a15e7a54242ae54eee6f8d32e6a",
             "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
             "category":"identity",
             "type":"trust_id",
             "public":false,
             "hidden":1,
             "status":"available",
             "is_public":false,
             "is_hidden":true,
             "metadata":[
                
             ]
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:39:12 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4
    
    {
       "trust":{
          "created_at":"2020-11-26T13:15:29.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"703dfabb4c5942f7a1960736dd84f4d4",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-11-26T13:15:29.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"86aceea1-9121-43f9-b55c-f862052374ab",
                "settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:41:51 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2021-01-21T11:43:36.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"1c981a15e7a54242ae54eee6f8d32e6a",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2021-01-21T11:43:36.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d98d283a-b096-4a68-826a-36f99781787d",
                "settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_domain_id":"default",
          "user":"adfa32d7746a4341b27377d6f7c61adb",
          "tenant":"4dfe98a43bfa404785a812020066b4d6"
       }
    }
    {
       "trusts":{
          "role_name":"member",
          "is_cloud_trust":false
       }
    }
    workloadmgr setting-create [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-update [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-show [--get_hidden {True,False}] <setting_name>
    workloadmgr setting-delete <setting_name>
    workloadmgr get-global-job-scheduler
    workloadmgr disable-global-job-scheduler
    workloadmgr enable-global-job-scheduler
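For example, a hidden SMTP setting like the ones listed below could be created with the following invocation (all values are illustrative, matching the setting-create synopsis above):

```shell
# Illustrative invocation of setting-create for a hidden SMTP setting
workloadmgr setting-create --description "SMTP server used for email reports" \
    --category "smtp" --is-public False --is-hidden True \
    smtp_server_name Mailserver_A
```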

    Supported values: ubuntu, rocky

    Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team.

    triliovault_docker_registry

    Default value: docker.io

    Edit this value if a different container registry is to be used for the Trilio containers. The containers need to be pulled from docker.io and pushed to the chosen registry first.
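The pull-and-push step for a different registry can be sketched as follows (the registry host is a placeholder, and <triliovault_tag> stays as your actual tag):

```shell
# Mirror one Trilio container from docker.io into a private registry
docker pull docker.io/trilio/kolla-ubuntu-trilio-datamover:<triliovault_tag>
docker tag docker.io/trilio/kolla-ubuntu-trilio-datamover:<triliovault_tag> \
    myregistry.example.com:5000/trilio/kolla-ubuntu-trilio-datamover:<triliovault_tag>
docker push myregistry.example.com:5000/trilio/kolla-ubuntu-trilio-datamover:<triliovault_tag>
```

Repeat the same steps for the datamover-api and horizon-plugin images, then point triliovault_docker_registry at the private registry.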

    triliovault_backup_target

    • nfs

    • amazon_s3

    • other_s3_compatible

    nfs if the backup target is NFS

    amazon_s3 if the backup target is Amazon S3

    other_s3_compatible if the backup target type is S3 but not Amazon S3.

    multi_ip_nfs_enabled

    yes no default: no

    This parameter is only valid if you want to use NFS shares with multiple IPs/endpoints as the backup target for TrilioVault.

    triliovault_nfs_shares

    <NFS-IP/FQDN>:/<NFS path>

    NFS share path example: '192.168.122.101:/nfs/tvault'

    triliovault_nfs_options

    Default value: 'nolock,soft,timeo=180,intr,lookupcache=none'. For Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'

    These parameters set the NFS mount options. Keep the default values unless a special requirement exists.

    triliovault_s3_access_key

    S3 Access Key

    Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_secret_key

    S3 Secret Key

    Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_region_name

    • Default value: us-east-1

    • S3 Region name

    Valid for amazon_s3 and other_s3_compatible

    If the S3 storage does not have a region parameter, keep the default.

    triliovault_s3_bucket_name

    S3 Bucket name

    Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_endpoint_url

    S3 Endpoint URL

    Valid for other_s3_compatible only

    triliovault_s3_ssl_enabled

    • True

    • False

    Valid for other_s3_compatible only

    Set to True for an SSL-enabled S3 endpoint URL.

    triliovault_s3_ssl_cert_file_name

    s3-cert.pem

    Valid for other_s3_compatible only, with SSL enabled and certificates that are self-signed or issued by a private authority. In this case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the ansible server. Create this directory if it does not exist already.

    triliovault_copy_ceph_s3_ssl_cert

    • True

    • False

    Valid for other_s3_compatible only

    Set to True when SSL is enabled with certificates that are self-signed or issued by a private authority.

    "version":"4.0.115",
    "name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
    "project_id":"4dfe98a43bfa404785a812020066b4d6",
    "user_id":"cloud_admin",
    "value":"dbe2e160d4c44d7894836a6029644ea0",
    "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
    "category":"identity",
    "type":"trust_id",
    "public":false,
    "hidden":true,
    "status":"available",
    "metadata":[
    {
    "created_at":"2020-11-26T13:10:54.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"e9ec386e-79cf-4f6b-8201-093315648afe",
    "settings_name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
    "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
    "key":"role_name",
    "value":"admin"
    }
    ]
    }
    ]
    }

    User-Agent (string): python-workloadmgrclient

    smtp_server_name (String): Mailserver_A

    smtp_server_username (String): admin

    smtp_server_password (String): password

    smtp_timeout (Integer): 10

    smtp_email_enable (Boolean): True

  • Instead of tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

  • Instead of tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

  • Instead of tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml

    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    cd /home/stack
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP13, RHOSP16.1 and RHOSP16.2
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Output of above command looks like following for RHOSP17.0
    Creating tarball...
    Tarball created.
    renamed '/tmp/puppet-modules-P3duCg9/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    # For RHOSP13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
       
    # For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    # For RHOSP17.0
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    
    ./prepare_trilio_images.sh <undercloud_ip> <container_tag>
    
    # Example:
    ./prepare_trilio_images.sh 192.168.13.34 <HOTFIX-TAG-VERSION>-rhosp13
    
    ## Verify changes
    # For RHOSP13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    
    sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER_TAG> 
    
    ## Run following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'. 
    ## In the below example 'trilio-undercloud.ctlplane.localdomain' is <UNDERCLOUD_REGISTRY_HOSTNAME>
    $ openstack tripleo container image list | grep keystone
    | docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.0-82                       |
    | docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.0-84   
    
    ## 'CONTAINER_TAG' format for RHOSP16.1: <HOTFIX-TAG-VERSION>-rhosp16.1
    ## 'CONTAINER_TAG' format for RHOSP16.2: <HOTFIX-TAG-VERSION>-rhosp16.2
    ## 'CONTAINER_TAG' format for RHOSP17.0: <HOTFIX-TAG-VERSION>-rhosp17.0
    
    ## Example
    sudo ./prepare_trilio_images.sh trilio-undercloud.ctlplane.localdomain <HOTFIX-TAG-VERSION>-rhosp16.1
    (undercloud) [stack@undercloud redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1                   |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1                  |
    
    -----------------------------------------------------------------------------------------------------
    
    (undercloud) [stack@undercloud redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    ## For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    ## For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    ## For RHOSP17.0
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    RHOSP13: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/environments/trilio_env.yaml
    RHOSP16.1: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml
    RHOSP16.2: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/environments/trilio_env.yaml
    RHOSP17.0: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/environments/trilio_env.yaml
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env 
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env 
    sudo pip install PyYAML==5.1
    ## On Python3 env 
    python3 ./generate_nfs_map.py 
     
    ## On Python2 env 
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Either of the below workarounds should be performed on all the controller nodes where the issue occurs for the horizon pod.
    
    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
    
    option-2: Restart the memcached pod (command: podman restart memcached)

    Upgrading on Kolla OpenStack

    Trilio supports the upgrade of Trilio-OpenStack components from any of the older releases (4.1 onwards) to the latest 4.3 hotfix releases without tearing down the older deployments.

    Refer to the below-mentioned acceptable values for the placeholders kolla_base_distro and triliovault_tag in this document as per the Openstack environment:

    Openstack Version
    triliovault_tag
    kolla_base_distro

    hashtag
    1] Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • Either 4.1 or 4.2 GA OR any hotfix patch against 4.1/4.2 should be already deployed

    • No Snapshot OR Restore is running

    • Global job scheduler should be disabled

    hashtag
    1.1] Deactivating the wlm-cron service

    The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.

    hashtag
    2] Clone latest configuration scripts

    Before the latest configuration scripts are loaded, it is recommended to take a backup of the existing configuration scripts folder and the Trilio ansible roles. The following commands can be used for this purpose:

    Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack.

    Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.

    hashtag
    3] Append Trilio variables

    hashtag
    3.1] Clean old Trilio variables and append new Trilio variables

    circle-exclamation

    This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.

    In case of no changes, this step can be skipped.

    This step is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; in that case, the /etc/kolla/globals.yml file needs to be updated accordingly.

    hashtag
    3.2] Clean old Trilio passwords and append new Trilio password variables

    circle-exclamation

    This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.

    In case of no changes, this step can be skipped.

    This step is required when password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml file needs to be updated.

    hashtag
    3.3] Append triliovault_site.yml content to kolla ansible's site.yml

    circle-exclamation

    This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.

    In case of no changes, this step can be skipped.

    This step is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml; in that case, the /usr/local/share/kolla-ansible/ansible/site.yml file needs to be updated accordingly.

    hashtag
    3.4] Append triliovault_inventory.txt to the kolla-ansible inventory file

    circle-exclamation

    This step is not always required. It is recommended to compare triliovault_inventory.txt with the Trilio entries in the /root/multinode file.

    In case of no changes, this step can be skipped.

    By default, the triliovault-datamover-api service gets installed on 'control' hosts and the triliovault-datamover service gets installed on 'compute' hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.

    T4O group names are 'triliovault-datamover-api' and 'triliovault-datamover'.
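    The group layout can be sketched as follows. This is a minimal illustration (not the shipped triliovault_inventory.txt), assuming the two T4O groups are mapped onto the standard kolla-ansible 'control' and 'compute' groups via Ansible's `:children` syntax:

```shell
# Sketch of the T4O groups as they would appear in a kolla-ansible
# multinode inventory (written to a temp file for illustration):
cat > /tmp/multinode_t4o_example <<'EOF'
[triliovault-datamover-api:children]
control

[triliovault-datamover:children]
compute
EOF

# Both T4O group headers should be present; prints 2:
grep -c '^\[triliovault-datamover' /tmp/multinode_t4o_example
```

Adjust the child groups if your cloud places the datamover API or datamover services on differently named host groups.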

    hashtag
    4] Configure multi-IP NFS as Trilio backup target

    circle-info

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    On kolla-ansible server node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    circle-info

    If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file. If hostnames are used there, use the same hostnames in the NFS map input file.

    The compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.

    vi triliovault_nfs_map_input.yml

    The triliovault_nfs_map_input.yml file is explained in the linked documentation.

    Update PyYAML on the kolla-ansible server node only

    Expand the map file to create a one-to-one mapping of compute and NFS share.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate the output map file

    Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all necessary NFS shares.

    vi triliovault_nfs_map_output.yml
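    A quick sanity check on the generated map can be done with the same two-colon grep pattern this guide uses to extract map entries. The host names and share below are illustrative only:

```shell
# Hypothetical one-to-one compute-to-share map, as produced by
# generate_nfs_map.py (written to a temp file for illustration):
cat > /tmp/triliovault_nfs_map_output_example.yml <<'EOF'
overcloudtrain1-novacompute-0: 192.168.122.101:/opt/tvault
overcloudtrain1-novacompute-1: 192.168.122.102:/opt/tvault
EOF

# Each mapped compute node line contains 'host: ip:/path', i.e. two
# colons; the count should equal the number of compute nodes. Prints 2:
grep -c ':.*:' /tmp/triliovault_nfs_map_output_example.yml
```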

    Append this output map file to triliovault_globals.yml. File path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml

    Ensure that multi_ip_nfs_enabled in the triliovault_globals.yml file is set to 'yes'.

    • A new parameter has been added to triliovault_globals.yml; set this parameter to 'yes' if the backup target NFS supports multiple endpoints/IPs. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml' multi_ip_nfs_enabled: 'yes'

    • Later, append the triliovault_globals.yml file to /etc/kolla/globals.yml

    hashtag
    5] Edit globals.yml to set T4O parameters

    Edit the /etc/kolla/globals.yml file to fill in the triliovault backup target and build details. You will find the triliovault-related parameters at the end of globals.yml. The user needs to fill in details like the triliovault build version, backup target type, backup target details, etc.

    Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments
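    The end result can be sketched as follows. This is a hedged example of the T4O block that ends up at the bottom of /etc/kolla/globals.yml for an NFS backup target; variable names are taken from the parameter list in this section, and the values are samples, not authoritative file contents:

```shell
# Sample T4O settings for an NFS backup target (illustration only;
# written to a temp file rather than /etc/kolla/globals.yml):
cat > /tmp/globals_t4o_example.yml <<'EOF'
triliovault_tag: '4.3.2-yoga'
triliovault_docker_registry: 'docker.io'
triliovault_backup_target: 'nfs'
triliovault_nfs_shares: '192.168.122.101:/opt/tvault'
triliovault_nfs_options: 'nolock,soft,timeo=180,intr,lookupcache=none'
dmapi_workers: 16
EOF

# Five triliovault_* parameters are set above; prints 5:
grep -c '^triliovault' /tmp/globals_t4o_example.yml
```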

    hashtag
    6] Enable T4O Snapshot mount feature

    circle-info

    This step is already part of the 4.2 GA installation procedure and should only be verified.

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

    Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

    For a default Kolla installation, the variable will look as follows afterward:

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

    After the change, the variable will look as follows for a default Kolla installation:

    In the case of using Ironic compute nodes one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append trilio mount /var/trilio:/var/trilio:shared to the list.

    After the changes the variable will look like the following:

    hashtag
    7] Pull containers in case of private repository

    In case the user doesn't want to use the Docker Hub registry for the triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.

    Following are the triliovault container image URLs for 4.3.2 releases. Replace kolla_base_distro and triliovault_tag variables with their values.

    The {{ kolla_base_distro }} variable can be 'centos', 'ubuntu', or 'rocky', depending on your base OpenStack distro. The {{ triliovault_tag }} value is mentioned at the start of this document.
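    How the two placeholders combine into a full image reference can be sketched as follows. The repository path used here is hypothetical, for illustration only; use the exact image URLs listed below for the actual pulls:

```shell
# Substituting the placeholders into a hypothetical image URL pattern:
kolla_base_distro="ubuntu"       # or centos/rocky, per your distro
triliovault_tag="4.3.2-yoga"     # tag for your OpenStack release
echo "docker.io/trilio/${kolla_base_distro}-source-trilio-datamover:${triliovault_tag}"
```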

    circle-info

    Trilio supports source-based containers from the OpenStack Yoga release onwards.

    Below are the Source-based OpenStack deployment images

    Below are the Binary-based OpenStack deployment images

    hashtag
    8] Pull T4O container images

    Log in to Docker Hub so that the Trilio tagged containers can be pulled.

    Please get the Dockerhub login credentials from Trilio Sales/Support team

    Run the below command from the directory with the multinode file to pull the required images.

    hashtag
    9] Run Kolla-Ansible upgrade command

    Run the below command from the directory with the multinode file to start the upgrade process.

    hashtag
    10] Verify Trilio deployment

    Verify on the controller and compute nodes that the Trilio containers are in UP state.

    Following is a sample output of commands from the controller and compute nodes. The triliovault_tag will have the value corresponding to the OpenStack release where the deployment is being done.

    hashtag
    11] Advance settings/configuration for Trilio services

    hashtag
    11.1] Customize HAproxy configuration parameters for Trilio datamover api service

    Following are the default HAProxy configuration parameters set for the triliovault datamover API service.

    These values work best for the triliovault dmapi service, and it is not recommended to change them. However, in some exceptional cases, if any of the above parameter values need to be changed, this can be done on the kolla-ansible server in the following file.

    After editing, run the kolla-ansible deploy command again to push these changes to the OpenStack cloud.

    After the kolla-ansible deploy, verify the changes in the following file, available on all controller/HAProxy nodes.

    hashtag
    12] Enable mount-bind for NFS

    In T4O 4.2 and later releases, the way the mount point is derived has changed. It is necessary to set up a mount-bind to make T4O 4.1 or older backups available to T4O 4.2 and onwards releases.

    Please follow the referenced procedure to ensure that backups taken from T4O 4.1 or older can be used with T4O 4.2 and onwards releases.

    Workload Quotas

    hashtag
    List Quota Types

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types

    Lists all available Quota Types

  • wlm-cron is disabled on the primary Trilio Appliance

  • Access to the gemfury repository to fetch new packages

  • triliovault_docker_registry

    Default: docker.io

    If users want to use a different container registry for the triliovault containers, then the user can edit this value. In that case, the user first needs to manually pull the triliovault containers from the default registry (docker.io) and push them to the other registry.

    triliovault_backup_target

    nfs

    amazon_s3

    ceph_s3

    'nfs': If the backup target is NFS

    'amazon_s3': If the backup target is Amazon S3

    'ceph_s3': If the backup target type is S3 but not amazon S3.

    multi_ip_nfs_enabled

    yes / no. Default: no

    This parameter is valid only if you want to use NFS share(s) with multiple IPs/endpoints as the backup target for TrilioVault.

    dmapi_workers

    Default: 16

    If the dmapi_workers field is not present in the config file, the default value will be equal to the number of cores on the node.

    triliovault_nfs_shares

    <NFS-IP/FQDN>:/<NFS path>

    Only with nfs for triliovault_backup_target

    User needs to provide NFS share path, e.g.: 192.168.122.101:/opt/tvault

    triliovault_nfs_options

    Default: 'nolock,soft,timeo=180,intr,lookupcache=none'. For Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'

    -Only with nfs for triliovault_backup_target -Keep default values if unclear

    triliovault_s3_access_key

    S3 Access Key

    Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 access key

    triliovault_s3_secret_key

    S3 Secret Key

    Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 secret key

    triliovault_s3_region_name

    S3 Region Name

    Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 region or keep default if no region required

    triliovault_s3_bucket_name

    S3 Bucket name

    Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 bucket

    triliovault_s3_endpoint_url

    S3 Endpoint URL

    Valid for other_s3_compatible only

    triliovault_s3_ssl_enabled

    True

    False

    Only with ceph_s3 for triliovault_backup_target

    Set to true if endpoint is on HTTPS

    triliovault_s3_ssl_cert_file_name

    s3-cert-pem

    Only with ceph_s3 for triliovault_backup_target and

    If SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority, the user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not already exist.

    triliovault_copy_ceph_s3_ssl_cert

    True

    False

    Set to true if:

    ceph_s3 for triliovault_backup_target and

    if SSL is enabled on S3 endpoint URL and SSL certificates are self-signed OR issued by a private authority

    Victoria

    4.3.2-victoria

    ubuntu centos

    Wallaby

    4.3.2-wallaby

    ubuntu centos

    Yoga

    4.3.2-yoga

    ubuntu centos

    Zed

    4.3.2-zed

    ubuntu rocky

    triliovault_tag

    <triliovault_tag>

    Use the triliovault tag as per your Kolla openstack version. Exact tag is mentioned at the start of this document

    horizon_image_full

    Uncomment

    By default, the Trilio Horizon container will not get deployed.

    Uncomment this parameter to deploy Trilio Horizon container instead of Openstack Horizon container.

    triliovault_docker_username

    <dockerhub-login-username>

    Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from Trilio Sales/Support team

    triliovault_docker_password

    <dockerhub-login-password>


    Password for default docker user of Trilio Get the Dockerhub login credentials from Trilio Sales/Support team

    hashtag
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
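    Using the path parameters and headers above, the request can be sketched as follows. The address, tenant ID, and token are placeholders; the curl call itself is commented out because it needs a live Trilio endpoint and a valid token:

```shell
# Placeholder values for the path parameters described above.
tvm_address="tvm.example.com"
tenant_id="c74d1a7d43f94edc8f425bc2c81a5f7b"
token="<X-Auth-Token>"

url="https://${tvm_address}:8780/v1/${tenant_id}/projects_quota_types"
echo "GET ${url}"
# With a reachable endpoint and a valid Keystone token:
# curl -sk -H "X-Auth-Token: ${token}" -H "Accept: application/json" "${url}"
```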

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:40:56 GMT
    Content-Type: application/json
    Content-Length: 1625
    Connection: keep-alive
    X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe
    
    {
       "quota_types":[
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
    
    

    hashtag
    Show Quota Type

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>

    Requests the details of a Quota Type

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project to work in

    quota_type_id

    string

    ID of the Quota Type to show

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Create allowed Quota

    POST https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Creates an allowed Quota with the given parameters

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to create the allowed Quota in

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    hashtag
    Body Format

    hashtag
    List allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Lists all allowed Quotas for a given project.

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to list allowed Quotas from

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Show allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>

    Shows details for a given allowed Quota

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to show

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    hashtag
    Update allowed Quota

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>

    Updates an allowed Quota with the given parameters

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to update

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    hashtag
    Body Format

    hashtag
    Delete allowed Quota

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>

    Deletes a given allowed Quota

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to delete

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Workload Import and Migration

    hashtag
    import Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads

    Provides the list of all importable workloads

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Query Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    orphaned Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads

    Provides the list of all orphaned workloads

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Query Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Import Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads

    Imports all or the provided workloads

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body format

    hashtag
    Track Workload Import Progress

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads/progress

    Track Workload Import Progress against jobid

    hashtag
    Path Parameters

    Name
    Type
    Description
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
        Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
       Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                   start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
    root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    mv triliovault-cfg-scripts triliovault-cfg-scripts_old
    mv /usr/local/share/kolla-ansible/ansible/roles/triliovault /opt/triliovault_old
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
    #Copy the backed-up original globals.yml, which does not contain the triliovault variables, over the current globals.yml
    cp /opt/globals.yml /etc/kolla/globals.yml
    
    #Append Trilio global variables to globals.yml
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
    #Take backup of the current passwords file
    cp /etc/kolla/passwords.yml /opt/password-<CURRENT-RELEASE>.yml
    
    #Reset the passwords file to the default one by restoring the backed-up original passwords.yml. This backup would have been taken during the previous install/upgrade.
    cp /opt/passwords.yml /etc/kolla/passwords.yml
    
    #Append Trilio password variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    #File /etc/kolla/passwords.yml to be edited to set passwords.
    #To set the passwords, it's recommended to use the same passwords as done during previous T4O deployment, as present in the password file backup (/opt/password-<CURRENT-RELEASE>.yml). 
    #Any additional passwords (in triliovault_passwords.yml), should be set by the user in /etc/kolla/passwords.yml.
    #Take backup of current site.yml file
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/site-<CURRENT-RELEASE>.yml
    
    #Reset the site.yml to the default one by restoring the backed-up original site.yml. This backup would have been taken during the previous install/upgrade.
    cp /opt/site.yml /usr/local/share/kolla-ansible/ansible/site.yml
    
    # If the OpenStack release is 'yoga', append the below Trilio code to site.yml
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml    
    
    # If the OpenStack release is other than 'yoga' append below Trilio code to site.yml 
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml                               
    For example:
    If your inventory file path is '/root/multinode', then use the following:
    #cleanup old T4O groups from /root/multinode and copy latest triliovault inventory file
    cat ansible/triliovault_inventory.txt >> /root/multinode
    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu source based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu binary based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode upgrade
    [controller] docker ps  | grep "trilio-"
    a2a3593f76db   trilio/kolla-centos-trilio-datamover-api:<triliovault_tag>       "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover_api
    5f573caa7b02   trilio/kolla-centos-trilio-horizon-plugin:<triliovault_tag>      "dumb-init --single-…"   23 hours ago    Up 23 hours              horizon
    
    [compute] docker ps | grep "trilio-"
    f6d443c2942c   trilio/kolla-centos-trilio-datamover:<triliovault_tag>          "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /usr/local/share/kolla-ansible/ansible/roles/triliovault/defaults/main.yml
    /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:44:43 GMT
    Content-Type: application/json
    Content-Length: 342
    Connection: keep-alive
    X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b
    
    {
       "quota_type":{
          "created_at":"2020-10-19T10:05:52.000000",
          "updated_at":"2020-10-19T10:07:32.000000",
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "display_name":"Workloads",
          "display_description":"Total number of workload creation allowed per project",
          "status":"available"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:51:51 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927
    
    {
       "allowed_quotas":[
          {
             
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:01:39 GMT
    Content-Type: application/json
    Content-Length: 766
    Connection: keep-alive
    X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0
    
    {
       "allowed_quotas":[
          {
             "id":"262b117d-e406-4209-8964-004b19a8d422",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":5,
             "high_watermark":4,
             "version":"4.0.115",
             "quota_type_name":"Workloads"
          },
          {
             "id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":-1,
             "high_watermark":-1,
             "version":"4.0.115",
             "quota_type_name":"Storage"
          },
          {
             "id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":50,
             "high_watermark":25,
             "version":"4.0.115",
             "quota_type_name":"VMs"
          }
       ]
    }
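The response above can be consumed programmatically. A minimal sketch of parsing `allowed_quotas`, assuming (as is conventional for quota APIs) that an `allowed_value` of `-1` denotes an unlimited quota; the helper name and trimmed sample payload are illustrative, not part of the product API:

```python
import json

# Trimmed sample modeled on the response above; values are illustrative.
response_body = '''
{
   "allowed_quotas":[
      {"allowed_value":5,  "high_watermark":4,  "quota_type_name":"Workloads"},
      {"allowed_value":-1, "high_watermark":-1, "quota_type_name":"Storage"}
   ]
}
'''

def summarize_quotas(body: str) -> dict:
    """Map each quota type to its allowed value, treating -1 as unlimited."""
    quotas = json.loads(body)["allowed_quotas"]
    return {
        q["quota_type_name"]: ("unlimited" if q["allowed_value"] == -1
                               else q["allowed_value"])
        for q in quotas
    }

print(summarize_quotas(response_body))
# → {'Workloads': 5, 'Storage': 'unlimited'}
```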
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:15:07 GMT
    Content-Type: application/json
    Content-Length: 268
    Connection: keep-alive
    X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7
    
    {
       "allowed_quotas":{
          "id":"262b117d-e406-4209-8964-004b19a8d422",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "allowed_value":5,
          "high_watermark":4,
          "version":"4.0.115",
          "quota_type_name":"Workloads"
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:24:04 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2
    
    {"allowed_quotas": [{}]}
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:33:09 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    {
       "allowed_quotas":[
          {
             "project_id":"<project_id>",
             "quota_type_id":"<quota_type_id>",
             "allowed_value":"<integer>",
             "high_watermark":"<Integer>"
          }
       ]
    }
    {
       "allowed_quotas":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "allowed_value":"20000",
          "high_watermark":"18000"
       }
    }
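The update body above can be assembled with a small helper. This is a sketch only; the function name is hypothetical, and it mirrors the example body, where `allowed_value` and `high_watermark` are passed as strings:

```python
import json

def build_allowed_quota_update(project_id: str, allowed_value: int,
                               high_watermark: int) -> str:
    """Build the JSON body for updating an allowed quota (illustrative helper)."""
    body = {
        "allowed_quotas": {
            "project_id": project_id,
            # The documented example body carries these as strings.
            "allowed_value": str(allowed_value),
            "high_watermark": str(high_watermark),
        }
    }
    return json.dumps(body)

payload = build_allowed_quota_update(
    "c76b3355a164498aa95ddbc960adc238", 20000, 18000)
```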
    "deleted":false,
    "version":"4.0.115",
    "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Workloads",
    "display_description":"Total number of workload creation allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"b7273a06-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Snapshots",
    "display_description":"Total number of snapshot creation allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"be323f58-2e08-11ea-889c-7440bb00b67d",
    "display_name":"VMs",
    "display_description":"Total number of VMs allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"c61324d0-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Volumes",
    "display_description":"Total number of volume attachments allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Storage",
    "display_description":"Total storage (in Bytes) allowed per project",
    "status":"available"
    }
    ]
    }

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    docker.io

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    restricts the output to the given project

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    migrate_cloud

    boolean

    True also shows Workloads from different clouds

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    jobid

    int

    jobid returned by the workload-importworkloads CLI command.

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:34:10 GMT
    Content-Type: application/json
    Content-Length: 7888
    Connection: keep-alive
    X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339
    
    {
       "workloads":[
          {
             "created_at":"2020-11-02T13:40:06.000000",
             "updated_at":"2020-11-09T09:53:30.000000",
             "id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "name":"Workload_1",
             "description":"no-description",
             "interval":null,
             "storage_usage":null,
             "instances":null,
             "metadata":[
                {
                   "created_at":"2020-11-09T09:57:23.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-05T10:27:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                   "value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
                },
                {
                   "created_at":"2020-11-09T09:37:20.000000",
                   "updated_at":"2020-11-09T09:57:23.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"61615532-6165-45a2-91e2-fbad9eb0b284",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"b083bb70-e384-4107-b951-8e9e7bbac380",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-02T13:40:24.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"backup_media_target",
                   "value":"10.10.2.20:/upstream"
                },
                {
                   "created_at":"2020-11-05T10:27:14.000000",
                   "updated_at":"2020-11-09T09:57:23.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"e33c1eea-c533-4945-864d-0da1fc002070",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":"2020-11-02T14:10:30.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"hostnames",
                   "value":"[\"cirros-1\", \"cirros-2\"]"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"093a1221-edb6-4957-8923-cf271f7e43ce",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"pause_at_snapshot",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"79baaba8-857e-410f-9d2a-8b14670c4722",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"policy_id",
                   "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"preferredgroup",
                   "value":"[]"
                },
                {
                   "created_at":"2020-11-02T14:10:30.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"topology",
                   "value":"\"\\\"\\\"\""
                },
                {
                   "created_at":"2020-11-02T13:40:23.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"workload_approx_backup_size",
                   "value":"6"
                }
             ],
             "jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
             "status":"locked",
             "error_msg":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
                }
             ],
             "scheduler_trust":null
          }
       ]
    }
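The `jobschedule` field in the response above is a Python pickle (protocol 0) payload serialized into a string. A minimal sketch of decoding a trimmed sample (the full strings decode the same way); note that `pickle.loads` must never be run on untrusted input:

```python
import pickle

# Trimmed sample in the same protocol-0 format as the response above.
jobschedule = ("(dp0\nVfullbackup_interval\np1\nV-1\np2\ns"
               "Vinterval\np3\nV5\np4\ns"
               "Vtimezone\np5\nVUTC\np6\ns.")

# The API returns the pickle as text; re-encode it to bytes before loading.
schedule = pickle.loads(jobschedule.encode("latin-1"))
print(schedule)
# → {'fullbackup_interval': '-1', 'interval': '5', 'timezone': 'UTC'}
```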
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:42:01 GMT
    Content-Type: application/json
    Content-Length: 120143
    Connection: keep-alive
    X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c
    
    {
       "workloads":[
          {
             "created_at":"2019-04-24T14:09:20.000000",
             "updated_at":"2019-05-16T09:10:17.000000",
             "id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
             "user_id":"6ef8135faedc4259baac5871e09f0044",
             "project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "name":"comdirect_test",
             "description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
             "interval":null,
             "storage_usage":null,
             "instances":null,
             "metadata":[
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:54.000000",
                   "updated_at":null,
                   "value":"ca544215-1182-4a8f-bf81-910f5470887a",
                   "version":"3.2.46",
                   "key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
                   "deleted_at":null,
                   "id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"10.10.2.20:/upstream",
                   "version":"3.2.46",
                   "key":"backup_media_target",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"02dd0630-7118-485c-9e42-b01d23aa882c"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:51.000000",
                   "updated_at":null,
                   "value":"51693eca-8714-49be-b409-f1f1709db595",
                   "version":"3.2.46",
                   "key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
                   "deleted_at":null,
                   "id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
                   "version":"3.2.46",
                   "key":"hostnames",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"0",
                   "version":"3.2.46",
                   "key":"pause_at_snapshot",
                   "deleted_at":null,
                   "id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[]",
                   "version":"3.2.46",
                   "key":"preferredgroup",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:11:49.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"\"\\\"\\\"\"",
                   "version":"3.2.46",
                   "key":"topology",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"77e436c0-0921-4919-97f4-feb58fb19e06"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"121",
                   "version":"3.2.46",
                   "key":"workload_approx_backup_size",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
                }
             ],
             "jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
             "status":"locked",
             "error_msg":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                }
             ],
             "scheduler_trust":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 11:03:55 GMT
    Content-Type: application/json
    Content-Length: 100
    Connection: keep-alive
    X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
    
    {
       "workloads":{
          "imported_workloads":[
             "faa03-f69a-45d5-a6fc-ae0119c77974"        
          ],
          "failed_workloads":[
     
          ]
       }
    }
    {
       "workload_ids":[
          "<workload_id>"
       ],
       "upgrade":true
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 22 Aug 2023 11:03:55 GMT
    Content-Type: application/json
    Content-Length: 100
    Connection: keep-alive
    X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
    
    {
       "jobid":[
          {
             "id":"1",
             "created_at":"22nd Aug 2023",
             "wllist":[
                {
                   "id":"123",
                   "name":"Test-WL-01",
                   "progress":10
                },
                {
                   "id":"124",
                   "name":"Test-WL-02",
                   "progress":25
                }
             ],
             "completedat":"22nd Aug 2023",
             "status":"In-Progress"
          }
       ]
    }

    Workloads

    hashtag
    List Workloads

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Provides the list of all workloads for the given tenant/project id

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Query Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description
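Assembling this call from the documented pieces (base URL on port 8780, `all_workloads` query parameter, and the authentication headers listed below) can be sketched as follows; the helper name and example values are illustrative, not part of the product API:

```python
from urllib.parse import urlencode

def list_workloads_request(tvm_address: str, tenant_id: str, token: str,
                           all_workloads: bool = False):
    """Build URL and headers for List Workloads (illustrative helper)."""
    url = f"https://{tvm_address}:8780/v1/{tenant_id}/workloads"
    if all_workloads:
        # admin role required - lists workloads of all tenants/projects
        url += "?" + urlencode({"all_workloads": "True"})
    headers = {
        "X-Auth-Project-Id": tenant_id,  # project to authenticate against
        "X-Auth-Token": token,
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    }
    return url, headers

url, headers = list_workloads_request("tvm.example.com", "tenant123", "TOKEN")
```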

    hashtag
    Create Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Creates a workload in the provided Tenant/Project with the given details.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body format

    Workload create requires a body in JSON format to provide the requested information.

    circle-exclamation

    Using a policy-id pulls the following information from the policy; values provided in the body are overwritten with the values from the policy.

    hashtag
    Show Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Shows all details of a specified workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Modify Workload

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Modifies a workload in the provided Tenant/Project with the given details.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body format

    Workload modify requires a body in JSON format to provide the information about the values to modify.

    circle-info

    All values in the body are optional.

    circle-exclamation

    Using a policy-id pulls the following information from the policy; values provided in the body are overwritten with the values from the policy.

    hashtag
    Delete Workload

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Deletes the specified Workload.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Query Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description
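The delete URL, including the optional `database_only` query parameter (which, per the table below, leaves the workload data on the backup target), can be sketched as follows; the helper name and example IDs are illustrative:

```python
from urllib.parse import urlencode

def delete_workload_url(tvm_address: str, tenant_id: str, workload_id: str,
                        database_only: bool = False) -> str:
    """Build the Delete Workload URL (illustrative helper).

    database_only=True removes only the database records, keeping the
    workload data on the backup target.
    """
    url = f"https://{tvm_address}:8780/v1/{tenant_id}/workloads/{workload_id}"
    if database_only:
        url += "?" + urlencode({"database_only": "True"})
    return url

print(delete_workload_url("tvm.example.com", "tenant123", "wl-42",
                          database_only=True))
# → https://tvm.example.com:8780/v1/tenant123/workloads/wl-42?database_only=True
```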

    hashtag
    Unlock Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock

    Unlocks the specified Workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Reset Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset

    Resets the defined workload

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to fetch the workloads from

    nfs_share

    string

    lists workloads located on a specific nfs-share

    all_workloads

    boolean

    admin role required - True lists workloads of all tenants/projects

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to create the workload in

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to show

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project where to find the workload in

    workload_id

    string

    ID of the Workload to modify

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to delete

    database_only

    boolean

    True leaves the Workload data on the Backup Target

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to unlock

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to reset

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 14:55:40 GMT
    Content-Type: application/json
    Content-Length: 3480
    Connection: keep-alive
    X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948
    
    {
       "workloads":[
          {
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
             "name":"worklaod1",
             "snapshots_info":"",
             "description":"no-description",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "status":"available",
             "created_at":"2020-10-26T12:07:01.000000",
             "updated_at":"2020-10-29T12:22:26.000000",
             "scheduler_trust":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
                }
             ]
          },
          {
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
             "name":"workload2",
             "snapshots_info":"",
             "description":"no-description",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "status":"available",
             "created_at":"2020-10-20T09:51:15.000000",
             "updated_at":"2020-10-29T10:03:33.000000",
             "scheduler_trust":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
                }
             ]
          }
       ]
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 15:42:02 GMT
    Content-Type: application/json
    Content-Length: 703
    Connection: keep-alive
    X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede
    
    {
       "workload":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "name":"API created",
          "snapshots_info":"",
          "description":"API description",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "status":"creating",
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:01.000000",
          "scheduler_trust":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ]
       }
    }
    retention_policy_type
    retention_policy_value
    interval
    {
       "workload":{
          "name":"<name of the Workload>",
          "description":"<description of workload>",
          "workload_type_id":"<ID of the chosen Workload Type>",
          "source_platform":"openstack",
          "instances":[
             {
                "instance-id":"<Instance ID>"
             },
             {
                "instance-id":"<Instance ID>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>",
             "timezone":"<timezone>",
             "start_date":"<Date format: MM/DD/YYYY>",
             "end_date":"<Date format: MM/DD/YYYY>",
             "start_time":"<Time format: HH:MM AM/PM>",
             "interval":"<Format: Integer hr>",
             "enabled":"<True/False>"
          },
          "metadata":{
             "<key>":"<value>",
             "policy_id":"<policy_id>"
          }
       }
    }
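
As a sketch, the template above can be filled in with concrete values like this; every ID below is a placeholder copied from the examples in this guide and must be replaced with values from your own cloud:

```python
import json

# All IDs and scheduler values are placeholders -- substitute your own.
payload = {
    "workload": {
        "name": "API created",
        "description": "API description",
        "workload_type_id": "f82ce76f-17fe-438b-aa37-7a023058e50d",
        "source_platform": "openstack",
        "instances": [
            {"instance-id": "08dab61c-6efd-44d3-a9ed-8e789d338c1b"},
            {"instance-id": "7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8"},
        ],
        "jobschedule": {
            "retention_policy_type": "Number of Snapshots to Keep",
            "retention_policy_value": "10",
            "timezone": "UTC",
            "start_date": "10/27/2020",
            "end_date": "12/27/2020",
            "start_time": "3:00 PM",
            "interval": "5 hr",
            "enabled": "True",
        },
        "metadata": {"policy_id": "b79aa5f3-405b-4da4-96e2-893abf7cb5fd"},
    }
}

# Send this string as the POST body with Content-Type: application/json.
body = json.dumps(payload)
```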
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:08:42 GMT
    Content-Type: application/json
    Content-Length: 1536
    Connection: keep-alive
    X-Compute-Request-Id: req-afb76abb-aa33-427e-8219-04fc2b91bce0
    
    {
       "workload":{
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:18.000000",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "availability_zone":"nova",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "name":"API created",
          "description":"API description",
          "interval":null,
          "storage_usage":{
             "usage":0,
             "full":{
                "snap_count":0,
                "usage":0
             },
             "incremental":{
                "snap_count":0,
                "usage":0
             }
          },
          "instances":[
             {
                "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                "name":"cirros-4",
                "metadata":{
                   
                }
             },
             {
                "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                "name":"cirros-3",
                "metadata":{
                   
                }
             }
          ],
          "metadata":{
             "hostnames":"[]",
             "meta":"data",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "preferredgroup":"[]",
             "workload_approx_backup_size":"6"
          },
          "jobschedule":{
             "retention_policy_type":"Number of Snapshots to Keep",
             "end_date":"15/27/2020",
             "start_time":"3:00 PM",
             "interval":"5",
             "enabled":false,
             "retention_policy_value":"10",
             "timezone":"UTC+2",
             "start_date":"10/27/2020",
             "fullbackup_interval":"-1",
             "appliance_timezone":"UTC",
             "global_jobscheduler":true
          },
          "status":"available",
          "error_msg":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ],
          "scheduler_trust":null
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:31:42 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137
    retention_policy_type
    retention_policy_value
    interval
    {
       "workload":{
          "name":"<name>",
          "description":"<description>",
          "instances":[
             {
                "instance-id":"<instance_id>"
             },
             {
                "instance-id":"<instance_id>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>",
             "timezone":"<timezone>",
             "start_time":"<HH:MM AM/PM>",
             "end_date":"<MM/DD/YYYY>",
             "interval":"<Integer hr>",
             "enabled":"<True/False>"
          },
          "metadata":{
             "meta":"data",
             "policy_id":"<policy_id>"
          }
       }
    }
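
A minimal sketch of assembling the modify call with only the Python standard library; the address, tenant ID, workload ID, and token are placeholders:

```python
import json
import urllib.request

# Placeholder values -- substitute your Trilio address, tenant, token, and workload.
tvm_address = "tvm.example.com"
tenant_id = "c76b3355a164498aa95ddbc960adc238"
workload_id = "c4e3aeeb-7d87-4c49-99ed-677e51ba715e"

payload = {
    "workload": {
        "name": "renamed workload",
        "description": "updated description",
        "metadata": {"policy_id": "b79aa5f3-405b-4da4-96e2-893abf7cb5fd"},
    }
}

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/workloads/{workload_id}",
    data=json.dumps(payload).encode(),
    method="PUT",
    headers={
        "X-Auth-Token": "<token>",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
)
# urllib.request.urlopen(req) returns HTTP 202 with an empty body on success.
```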
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:31:00 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:41:55 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:52:30 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive

    Snapshots

    hashtag
    List Snapshots

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots

    Lists all Snapshots.

    hashtag
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Projects to fetch the Snapshots from

    hashtag
    Query Parameters

    Name
    Type
    Description

    host

    string

    host name of the TVM that took the Snapshot

    workload_id

    string

    ID of the Workload to list the Snapshots of

    date_from

    string

    Starting date of Snapshots to show. Format: YYYY-MM-DDTHH:MM:SS

    date_to

    string

    Ending date of Snapshots to show. Format: YYYY-MM-DDTHH:MM:SS

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
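
Using only the Python standard library, the List Snapshots request described above can be assembled like this (the address, tenant ID, token, and workload filter are placeholders):

```python
import urllib.parse
import urllib.request

# Placeholders -- replace with your environment's values.
tvm_address = "tvm.example.com"
tenant_id = "4dfe98a43bfa404785a812020066b4d6"

# Optional filter from the query-parameter table above.
query = urllib.parse.urlencode(
    {"workload_id": "18b809de-d7c8-41e2-867d-4a306407fb11"}
)

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/snapshots?{query}",
    method="GET",
    headers={
        "X-Auth-Token": "<token>",
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    },
)
# urllib.request.urlopen(req) returns the "snapshots" list shown in the
# response example for this call.
```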

    hashtag
    Take Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Creates a Snapshot of the given Workload.

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    workload_id

    string

    ID of the Workload to take the Snapshot in

    hashtag
    Query Parameters

    Name
    Type
    Description

    full

    boolean

    True creates a full Snapshot

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    hashtag
    Body format

    When creating a Snapshot, additional information such as a name and description can be provided in the request body.

    circle-info

    This Body is completely optional
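
A sketch of taking a named full Snapshot; all IDs are placeholders, and the `full` query parameter comes from the table above:

```python
import json
import urllib.request

# Placeholder values -- substitute your own.
tvm_address = "tvm.example.com"
tenant_id = "c76b3355a164498aa95ddbc960adc238"
workload_id = "18b809de-d7c8-41e2-867d-4a306407fb11"

# The body is optional; when present it names and describes the Snapshot.
body = json.dumps({
    "snapshot": {
        "is_scheduled": False,
        "name": "API taken 2",
        "description": "API taken description 2",
    }
}).encode()

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/workloads/{workload_id}?full=True",
    data=body,
    method="POST",
    headers={
        "X-Auth-Token": "<token>",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
)
```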

    hashtag
    Show Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Shows the details of a specified Snapshot

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot from

    snapshot_id

    string

    ID of the Snapshot to show

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
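
The Show Snapshot response is ordinary JSON and can be inspected programmatically. A sketch that pulls the per-VM disk sizes out of a trimmed version of the response example in this section:

```python
import json

# A trimmed excerpt of the Show Snapshot response from this guide.
response = json.loads("""
{"snapshot": {"id": "2e56d167-bad7-43c7-8ede-a613c3fe7844",
              "status": "available",
              "snapshot_type": "full",
              "size": 44171264,
              "instances": [{"name": "cirros-2",
                             "vdisks": [{"volume_id": "51491d30-9818-4332-b056-1f174e65d3e3",
                                         "restore_size": 1073741824}]}]}}
""")

snap = response["snapshot"]
for vm in snap["instances"]:
    for disk in vm["vdisks"]:
        # Print each VM's disk and the size a restore of it would need.
        print(vm["name"], disk["volume_id"], disk["restore_size"])
```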

    hashtag
    Delete Snapshot

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Deletes a specified Snapshot

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to delete

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
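
A sketch of the delete call with the standard library; the address, tenant ID, Snapshot ID, and token are placeholders:

```python
import urllib.request

# Placeholder values -- substitute your own.
tvm_address = "tvm.example.com"
tenant_id = "c76b3355a164498aa95ddbc960adc238"
snapshot_id = "2e56d167-bad7-43c7-8ede-a613c3fe7844"

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/snapshots/{snapshot_id}",
    method="DELETE",
    headers={
        "X-Auth-Token": "<token>",
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    },
)
# urllib.request.urlopen(req) returns the task object shown in the
# response example for this call.
```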

    hashtag
    Cancel Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel

    Cancels the Snapshot process of a given Snapshot

    hashtag
    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to cancel

    hashtag
    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
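
Cancelling is a plain GET against the Snapshot's `/cancel` path. A sketch with placeholder address, tenant, Snapshot ID, and token:

```python
import urllib.request

# Placeholder values -- substitute your own.
tvm_address = "tvm.example.com"
tenant_id = "c76b3355a164498aa95ddbc960adc238"
snapshot_id = "2e56d167-bad7-43c7-8ede-a613c3fe7844"

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/snapshots/{snapshot_id}/cancel",
    headers={
        "X-Auth-Token": "<token>",
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    },
)
# With no request body attached, Request() defaults to the GET method.
```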

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 12:58:38 GMT
    Content-Type: application/json
    Content-Length: 266
    Connection: keep-alive
    X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9
    
    {
       "snapshots":[
          {
             "id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
             "created_at":"2020-11-02T14:03:18.000000",
             "status":"available",
             "snapshot_type":"full",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "name":"snapshot",
             "description":"-",
             "host":"TVM1"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 13:58:38 GMT
    Content-Type: application/json
    Content-Length: 283
    Connection: keep-alive
    X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.694637",
          "status":"creating",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:07:18 GMT
    Content-Type: application/json
    Content-Length: 6609
    Connection: keep-alive
    X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.000000",
          "updated_at":"2020-11-04T14:06:03.000000",
          "finished_at":"2020-11-04T14:06:03.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.80",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:10:91",
                      "ip_address":"192.168.100.80",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
                      "restore_size":1073741824,
                      "vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             },
             {
                "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.176",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:4d:27",
                      "ip_address":"192.168.100.176",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
                      "restore_size":1073741824,
                      "vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             }
          ],
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":"TVM1",
          "size":44171264,
          "restore_size":2147483648,
          "uploaded_size":44171264,
          "progress_percent":100,
          "progress_msg":"Snapshot of workload is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":428,
          "pinned":false,
          "metadata":[
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"backup_media_target",
                "value":"10.10.2.20:/upstream"
             },
             {
                "created_at":"2020-11-04T13:58:37.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"cancel_requested",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"data_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"hostnames",
                "value":"[\"cirros-1\", \"cirros-2\"]"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"object_store_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"pause_at_snapshot",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"572a0b21-a415-498f-b7fa-6144d850ef56",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"policy_id",
                "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"preferredgroup",
                "value":"[]"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"topology",
                "value":"\"\\\"\\\"\""
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"33762790-8743-4e20-9f50-3505a00dbe76",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"workload_approx_backup_size",
                "value":"6"
             }
          ],
          "restores_info":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:18:36 GMT
    Content-Type: application/json
    Content-Length: 56
    Connection: keep-alive
    X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc
    
    {"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
    
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:26:44 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d
    {
       "snapshot":{
          "is_scheduled":<true/false>,
          "name":"<name>",
          "description":"<description>"
       }
    }

    all

    boolean

    admin role required - True lists all Snapshots of all Workloads

    User-Agent

    string

    python-workloadmgrclient

    Workload Policies

    hashtag
    List Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Requests the list of available Workload Policies

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description
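
A sketch of the List Policies call; the address, tenant ID, and token are placeholders:

```python
import urllib.request

# Placeholder values -- substitute your own.
tvm_address = "tvm.example.com"
tenant_id = "c76b3355a164498aa95ddbc960adc238"

req = urllib.request.Request(
    url=f"https://{tvm_address}:8780/v1/{tenant_id}/workload_policy",
    headers={
        "X-Auth-Token": "<token>",
        "Accept": "application/json",
        "User-Agent": "python-workloadmgrclient",
    },
)
# urllib.request.urlopen(req) returns the "policy_list" shown in the
# response example for this call.
```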

    hashtag
    Show Policy

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Requests the details of a given policy

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    List assigned Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>

    Requests the list of Policies assigned to a Project.

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Create Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Creates a Policy with the given parameters

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body Format

    hashtag
    Update Policy

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Updates a Policy with the given information

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body Format

    hashtag
    Assign Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Assigns the Policy to the given Projects

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    hashtag
    Body Format

    hashtag
    Delete Policy

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Deletes a given Policy

    hashtag
    Path Parameters

    Name
    Type
    Description

    hashtag
    Headers

    Name
    Type
    Description

    Restores

    hashtag
    Definition

    A Restore is the workflow that brings backed-up VMs back from a Trilio Snapshot.


    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    policy_id

    string

    ID of the Policy to show

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    project_id

    string

    ID of the Project to fetch assigned Policies from

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

• tvm_address (string) ➡️ IP or FQDN of Trilio service

• tenant_id (string) ➡️ ID of Tenant/Project

• policy_id (string) ➡️ ID of the Policy to update

• X-Auth-Project-Id (string) ➡️ Project to authenticate against

• X-Auth-Token (string) ➡️ Authentication token to use

• Content-Type (string) ➡️ application/json

• Accept (string) ➡️ application/json

• tvm_address (string) ➡️ IP or FQDN of Trilio service

• tenant_id (string) ➡️ ID of Tenant/Project

• policy_id (string) ➡️ ID of the Policy to assign

• X-Auth-Project-Id (string) ➡️ Project to authenticate against

• X-Auth-Token (string) ➡️ Authentication token to use

• Content-Type (string) ➡️ application/json

• Accept (string) ➡️ application/json

• tvm_address (string) ➡️ IP or FQDN of Trilio service

• tenant_id (string) ➡️ ID of Tenant/Project

• policy_id (string) ➡️ ID of the Policy to delete

• X-Auth-Project-Id (string) ➡️ Project to authenticate against

• X-Auth-Token (string) ➡️ Authentication token to use

• Accept (string) ➡️ application/json

• User-Agent (string) ➡️ python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 13:56:08 GMT
    Content-Type: application/json
    Content-Length: 1399
    Connection: keep-alive
    X-Compute-Request-Id: req-4618161e-64e4-489a-b8fc-f3cb21d94096
    
    {
       "policy_list":[
          {
             "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "created_at":"2020-10-26T12:52:22.000000",
             "updated_at":"2020-10-26T12:52:22.000000",
             "status":"available",
             "name":"Gold",
             "description":"",
             "metadata":[
                
             ],
             "field_values":[
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"retention_policy_value",
                   "value":"10"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"interval",
                   "value":"5"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"79070c67-9021-4220-8a79-648ffeebc144",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"retention_policy_type",
                   "value":"Number of Snapshots to Keep"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"fullbackup_interval",
                   "value":"-1"
                }
             ]
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 14:18:42 GMT
    Content-Type: application/json
    Content-Length: 2160
    Connection: keep-alive
    X-Compute-Request-Id: req-0583fc35-0f80-4746-b280-c17b32cc4b25
    
    {
       "policy":{
          "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
          "created_at":"2020-10-26T12:52:22.000000",
          "updated_at":"2020-10-26T12:52:22.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"Gold",
          "description":"",
          "field_values":[
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"interval",
                "value":"5"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"79070c67-9021-4220-8a79-648ffeebc144",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-10-26T12:53:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"3e3f1b12-1b1f-452b-a9d2-b6e5fbf2ab18",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"4dfe98a43bfa404785a812020066b4d6",
                "policy_name":"Gold",
                "project_name":"admin"
             },
             {
                "created_at":"2020-10-29T15:39:13.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"Gold",
                "project_name":"robert"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:14:01 GMT
    Content-Type: application/json
    Content-Length: 338
    Connection: keep-alive
    X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2
    
    {
       "policies":[
          {
             "created_at":"2020-10-29T15:39:13.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "policy_name":"Gold",
             "project_name":"robert"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:24:03 GMT
    Content-Type: application/json
    Content-Length: 1413
    Connection: keep-alive
    X-Compute-Request-Id: req-05e05333-b967-4d4e-9c9b-561f1a7add5a
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"CLI created",
          "description":"CLI created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"4 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ]
       }
    }
    {
       "workload_policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"<String>",
          "display_description":"<String>",
          "metadata":{
             <key>:<value>
          }
       }
    }
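For illustration, the request body above can be assembled programmatically. The helper below is a hedged sketch (not part of any Trilio client); it sends all field values as strings, matching the policy responses shown earlier (e.g. "interval": "4 hr", "fullbackup_interval": "-1").

```python
import json

def build_policy_body(name, description, interval_hr, retention_value,
                      retention_type="Number of Snapshots to Keep",
                      fullbackup_interval=-1, metadata=None):
    """Assemble the workload_policy request body from the template above."""
    return {
        "workload_policy": {
            "field_values": {
                "fullbackup_interval": str(fullbackup_interval),
                "retention_policy_type": retention_type,
                "interval": "{} hr".format(interval_hr),
                "retention_policy_value": str(retention_value),
            },
            "display_name": name,
            "display_description": description,
            "metadata": metadata or {},
        }
    }

# Example: a "Gold" policy taking a snapshot every 4 hours, keeping 10.
body = build_policy_body("Gold", "Gold tier policy", 4, 10)
print(json.dumps(body, indent=2))
```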
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:32:13 GMT
    Content-Type: application/json
    Content-Length: 1515
    Connection: keep-alive
    X-Compute-Request-Id: req-9104cf1c-4025-48f5-be92-1a6b7117bf95
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ]
       }
    }
    {
       "policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"String",
          "display_description":"String",
          "metadata":{
             <key>:<value>
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:46:23 GMT
    Content-Type: application/json
    Content-Length: 2318
    Connection: keep-alive
    X-Compute-Request-Id: req-169a53e4-b1c9-4bd1-bf68-3416d177d868
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"4794ed95-d8d1-4572-93e8-cebd6d4df48f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"cbad43105e404c86a1cd07c48a737f9c",
                "policy_name":"API created",
                "project_name":"services"
             },
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"68f187a6-3526-4a35-8b2d-cb0e9f497dd8",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"API created",
                "project_name":"robert"
             }
          ]
       },
       "failed_ids":[
          
       ]
    }
    {
       "policy":{
          "remove_projects":[
             "<project_id>"
          ],
      "add_projects":[
         "<project_id>"
      ]
       }
    }
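As a sketch, the assignment body above can be built like this (an illustrative helper, not an official client function); project IDs are plain strings:

```python
import json

def build_assignment_body(add_projects=(), remove_projects=()):
    """Build the policy assignment body from the template above."""
    return {
        "policy": {
            "remove_projects": list(remove_projects),
            "add_projects": list(add_projects),
        }
    }

# Example: assign the policy to one project, remove it from none.
body = build_assignment_body(add_projects=["c76b3355a164498aa95ddbc960adc238"])
print(json.dumps(body))
```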
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:56:03 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    List of Restores

    hashtag
    Using Horizon

    To reach the list of Restores for a Snapshot follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the Snapshot Name

    9. Navigate to the Restores tab

    hashtag
    Using CLI

    • --snapshot_id <snapshot_id> ➡️ ID of the Snapshot to show the restores of

    hashtag
    Restores overview

    hashtag
    Using Horizon

    To reach the detailed Restore overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the Snapshot Name

    9. Navigate to the Restores tab

    10. Identify the restore to show

    11. Click the restore name

    hashtag
    Details Tab

    The Restore Details Tab shows the most important information about the Restore.

    • Name

    • Description

    • Restore Type

    • Status

    • Time taken

    • Size

    • Progress Message

    • Progress

    • Host

    • Restore Options

    circle-info

The Restore Options show the restore.json that was provided to Trilio.

    • List of VMs restored

      • restored VM Name

      • restored VM Status

      • restored VM ID

    hashtag
    Misc Tab

    The Misc tab provides additional Metadata information.

    • Creation Time

    • Restore ID

    • Snapshot ID containing the Restore

    • Workload

    hashtag
    Using CLI

    • <restore_id> ➡️ ID of the restore to be shown

• --output <output> ➡️ Option to get additional restore details. Specify --output metadata for restore metadata, --output networks, --output subnets, --output routers, or --output flavors.

    hashtag
    Delete a Restore

    Once a Restore is no longer needed, it can be safely deleted from a Workload.

    circle-info

Deleting a Restore only deletes the Trilio information about this Restore. No OpenStack resources are deleted.

    hashtag
    Using Horizon

    There are 2 possibilities to delete a Restore.

    hashtag
    Possibility 1: Single Restore deletion through the submenu

    To delete a single Restore through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the Snapshot Name

    9. Navigate to the Restore tab

    10. Click "Delete Restore" in the line of the restore in question

    11. Confirm by clicking "Delete Restore"

    hashtag
    Possibility 2: Multiple Restore deletion through a checkbox in Snapshot overview

    To delete one or more Restores through the Restore list do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshots in the Snapshot list

    8. Enter the Snapshot by clicking the Snapshot name

    9. Navigate to the Restore tab

    10. Check the checkbox for each Restore that shall be deleted

    11. Click "Delete Restore" in the menu above

    12. Confirm by clicking "Delete Restore"

    hashtag
    Using CLI

    • <restore_id> ➡️ ID of the restore to be deleted

    hashtag
    Cancel a Restore

    Ongoing Restores can be canceled.

    hashtag
    Using Horizon

    To cancel a Restore in Horizon follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the searched Snapshot in the Snapshot list

    8. Click the Snapshot Name

    9. Navigate to the Restore tab

    10. Identify the ongoing Restore

    11. Click "Cancel Restore" in the line of the restore in question

    12. Confirm by clicking "Cancel Restore"

    hashtag
    Using CLI

• <restore_id> ➡️ ID of the restore to be canceled

    hashtag
    One Click Restore

    The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

    • be located in the same cluster in the same datacenter

    • use the same storage domain

    • connect to the same network

    • have the same flavor

    The user can't change any Metadata.

    circle-info

The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one VM still exists, the One Click Restore will fail.

    circle-info

    The One Click Restore will automatically update the Workload to protect the restored VMs.

    hashtag
    Using Horizon

    There are 2 possibilities to start a One Click Restore.

    hashtag
    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click "One Click Restore" in the same line as the identified Snapshot

    9. (Optional) Provide a name / description

    10. Click "Create"

    hashtag
    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click the Snapshot Name

    9. Navigate to the "Restores" tab

    10. Click "One Click Restore"

    11. (Optional) Provide a name / description

    12. Click "Create"

    hashtag
    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️ Optional description for restore.

    hashtag
    Selective Restore

The Selective Restore is the most complex restore Trilio has to offer. It allows the user to adapt the restored VMs to their exact needs.

    With the selective restore the following things can be changed:

    • Which VMs are getting restored

    • Name of the restored VMs

    • Which networks to connect with

    • Which Storage domain to use

    • Which DataCenter / Cluster to restore into

    • Which flavor the restored VMs will use

    circle-info

The Selective Restore is always available and does not have any prerequisites.

    circle-info

The Selective Restore will automatically update the Workload to protect the created instance in case the original instance no longer exists.

    hashtag
    Using Horizon

    There are 2 possibilities to start a Selective Restore.

    hashtag
    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

    9. Click on "Selective Restore"

    10. Configure the Selective Restore as desired

    11. Click "Restore"

    hashtag
    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click the Snapshot Name

    9. Navigate to the "Restores" tab

    10. Click "Selective Restore"

    11. Configure the Selective Restore as desired

    12. Click "Restore"

    hashtag
    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️ Optional description for restore.

• --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json is read. You can use this file as a reference or replace the values in it.

    hashtag
    Inplace Restore

The Inplace Restore covers use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.

    It allows the user to restore only the data of a selected Volume, which is part of a backup.

    circle-info

The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio verifies this using the saved Object ID.

    circle-info

The Inplace Restore will not create any new OpenStack resources. Please use one of the other restore options if new Volumes or VMs are required.

    hashtag
    Using Horizon

    There are 2 possibilities to start an Inplace Restore.

    hashtag
    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

    9. Click on "Inplace Restore"

    10. Configure the Inplace Restore as desired

    11. Click "Restore"

    hashtag
    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    5. Click the workload name to enter the Workload overview

    6. Navigate to the Snapshots tab

    7. Identify the Snapshot to be restored

    8. Click the Snapshot Name

    9. Navigate to the "Restores" tab

    10. Click "Inplace Restore"

    11. Configure the Inplace Restore as desired

    12. Click "Restore"

    hashtag
    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️ Optional description for restore.

• --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json is read. You can use this file as a reference or replace the values in it.

    hashtag
    required restore.json for CLI

The workloadmgr CLI uses a restore.json file to define the restore parameters for the selective and the inplace restore.

An example restore.json for a selective restore is shown below, followed by a detailed analysis and explanation.

    circle-info

The restore.json requires extensive information about the backed up resources. All required information can be gathered from the Snapshot overview.

    hashtag
    General required information

Before the exact details of the restore are provided, it is necessary to set the general metadata for the restore.

• name ➡️ the name of the restore

• description ➡️ the description of the restore

• oneclickrestore <True/False> ➡️ whether the restore is a One Click Restore. Setting this to True overrides all other settings and starts a One Click Restore.

• restore_type <oneclick/selective/inplace> ➡️ defines the type of restore that is intended

• type openstack ➡️ defines that the restore is into an OpenStack cloud

• openstack ➡️ starts the exact definition of the restore
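The general metadata forms the top level of the restore.json. The following skeleton is a hedged illustration (all names and values are placeholders, not taken from a real deployment):

```python
# Top-level restore.json skeleton built from the fields described above.
restore_json = {
    "name": "selective restore example",             # name of the restore
    "description": "restore after data corruption",  # description of the restore
    "oneclickrestore": False,                        # True would force a One Click Restore
    "restore_type": "selective",                     # oneclick / selective / inplace
    "type": "openstack",                             # restore into an OpenStack cloud
    "openstack": {},                                 # exact restore definition goes here
}

print(restore_json["restore_type"])
```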

    hashtag
    Selective Restore required information

The Selective Restore requires extensive information to execute the restore as desired.

This information is divided into 3 components:

    • instances

    • restore_topology

    • networks_mapping

    hashtag
    Information required in instances

    This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.

    circle-info

Even VMs that are not to be restored are required inside the restore.json to allow a clean execution of the restore.

Each instance requires the following information:

    • id ➡️ original id of the instance

    • include <True/False> ➡️ Set True when the instance shall be restored

    circle-info

All further information is only required when the instance is part of the restore.

    • name ➡️ new name of the instance

    • availability_zone ➡️ Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"

• Nics ➡️ list of OpenStack Neutron ports that shall be attached to the instance. Each Neutron port consists of:

      • id ➡️ ID of the Neutron port to use

      • mac_address ➡️ Mac Address of the Neutron port

    circle-info

To use the next free IP available, set Nics to an empty list [ ]

    circle-info

Using an empty list for Nics combined with a Network Topology Restore will automatically restore the original IP address of the instance.

    • vdisks ➡️ List of all Volumes that are part of the instance. Each Volume requires the following information:

      • id ➡️ Original ID of the Volume

      • new_volume_type ➡️ The Volume Type to use for the restored Volume. Leave empty for Volume Type None

      • availability_zone ➡️ The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova

    • flavor➡️Defines the Flavor to use for the restored instance. Contains the following information:

      • ram➡️How much RAM the restored instance will have (in MB)

    circle-exclamation

    The root disk needs to be at least as big as the root disk of the backed up instance was.

    The following example describes a single instance with all values.

    hashtag
    Information required in network topology restore or network mapping

    triangle-exclamation

    Do not mix network topology restore together with network mapping.

    To activate a network topology restore set:

    To activate network mapping set:

When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:

    • networks ➡️ list of snapshot_network and target_network pairs

      • snapshot_network ➡️ the network backed up in the snapshot, contains the following:

        • id ➡️ Original ID of the network backed up

        • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

          • id ➡️ Original ID of the subnet backed up

      • target_network ➡️ the existing network to map to, contains the following

        • id ➡️ ID of the network to map to

    hashtag
    Full selective restore example

    hashtag
    Inplace Restore required information

The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.

    hashtag
    Information required in instances

    • id ➡️ ID of the instance inside the Snapshot

    • restore_boot_disk ➡️ Set to True if the boot disk of that VM shall be restored.

    circle-info

When the boot disk is at the same time a Cinder Disk, both values need to be set to True.

    • include ➡️ Set to True if at least one Volume from this instance shall be restored

    • vdisks ➡️ List of disks, that are connected to the instance. Each disk contains:

      • id ➡️ Original ID of the Volume

      • restore_cinder_volume ➡️ set to true if the Volume shall be restored

    hashtag
    Network mapping information required

No network information is required, but the fields have to exist with empty values for the restore to work.
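For example, the following fragment inside the openstack block satisfies this requirement (illustrative, mirroring the structure of the selective restore examples):

```json
{
    "restore_topology": false,
    "networks_mapping": {
        "networks": []
    }
}
```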

    hashtag
    Full Inplace restore example

    Installing on RHOSP

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.

    hashtag
    1. Prepare for deployment

    workloadmgr restore-list [--snapshot_id <snapshot_id>]
    workloadmgr restore-show [--output <output>] <restore_id>
    workloadmgr restore-delete <restores_id>
    workloadmgr restore-cancel <restore_id>
    workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
                                          [--display-description <display-description>]
                                          <snapshot_id>
    workloadmgr snapshot-selective-restore [--display-name <display-name>]
                                           [--display-description <display-description>]
                                           [--filename <filename>]
                                           <snapshot_id>
    workloadmgr snapshot-inplace-restore [--display-name <display-name>]
                                         [--display-description <display-description>]
                                         [--filename <filename>]
                                         <snapshot_id>
{
    'oneclickrestore': False,
    'restore_type': 'selective', 
    'type': 'openstack', 
    'openstack': 
        {
            'instances': 
                [
                    {
                        'include': True, 
                        'id': '890888bc-a001-4b62-a25b-484b34ac6e7e',
                        'name': 'cdcentOS-1', 
                        'availability_zone': '', 
                        'nics': [], 
                        'vdisks': 
                            [
                                {
                                    'id': '4cc2b474-1f1b-4054-a922-497ef5564624', 
                                    'new_volume_type': '', 
                                    'availability_zone': 'nova'
                                }
                            ], 
                        'flavor': 
                            {
                                'ram': 512, 
                                'ephemeral': 0, 
                                'vcpus': 1,
                                'swap': '',
                                'disk': 1, 
                                'id': '1'
                            }
                    }
                ], 
            'restore_topology': True, 
            'networks_mapping': 
                {
                    'networks': []
                }
        }
}
    'instances':[
      {
         'name':'cdcentOS-1-selective',
         'availability_zone':'US-East',
         'nics':[
           {
              'mac_address':'fa:16:3e:00:bd:60',
              'ip_address':'192.168.0.100',
              'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
              'network':{
                 'subnet':{
                    'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                 },
                 'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
              }
           }
         ],
         'vdisks':[
           {
              'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
              'new_volume_type':'ceph',
              'availability_zone':'nova'
           }
         ],
         'flavor':{
            'ram':2048,
            'ephemeral':0,
            'vcpus':1,
            'swap':'',
            'disk':20,
            'id':'2'
         },
         'include':True,
         'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
      }
    ]
    restore_topology:True
    restore_topology:False
    {
       'oneclickrestore':False,
       'openstack':{
          'instances':[
             {
                'name':'cdcentOS-1-selective',
                'availability_zone':'US-East',
                'nics':[
                   {
                      'mac_address':'fa:16:3e:00:bd:60',
                      'ip_address':'192.168.0.100',
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
                      'network':{
                         'subnet':{
                            'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                         },
                         'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
                      }
                   }
                ],
                'vdisks':[
                   {
                      'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
                      'new_volume_type':'ceph',
                      'availability_zone':'nova'
                   }
                ],
                'flavor':{
                   'ram':2048,
                   'ephemeral':0,
                   'vcpus':1,
                   'swap':'',
                   'disk':20,
                   'id':'2'
                },
                'include':True,
                'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
             }
          ],
          'restore_topology':False,
          'networks_mapping':{
             'networks':[
                {
                   'snapshot_network':{
                      'subnet':{
                         'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
                      },
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
                   },
                   'target_network':{
                      'subnet':{
                         'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                      },
                      'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
                      'name':'internal'
                   }
                }
             ]
          }
       },
       'restore_type':'selective',
       'type':'openstack'
    }
    {
       'oneclickrestore':False,
       'restore_type':'inplace',
       'type':'openstack',   
       'openstack':{
          'instances':[
             {
                'restore_boot_disk':True,
                'include':True,
                'id':'ba8c27ab-06ed-4451-9922-d919171078de',
                'vdisks':[
                   {
                      'restore_cinder_volume':True,
'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6'
                   }
                ]
             }
          ]
       }
    }
• ip_address ➡️ IP Address of the Neutron port
  • network ➡️ network the port is assigned to. Contains the following information:

    • id ➡️ ID of the network the Neutron port is part of

    • subnet➡️subnet the port is assigned to. Contains the following information:

• id ➡️ ID of the subnet the Neutron port is part of

  • ephemeral➡️How big the ephemeral disk of the instance will be (in GB)
  • vcpus➡️How many vcpus the restored instance will have available

  • swap➡️How big the Swap of the restored instance will be (in MB). Leave empty for none.

  • disk➡️Size of the root disk the instance will boot with

  • id➡️ID of the flavor that matches the provided information

• subnet ➡️ the subnet of the target network to map to, contains the following:
    • id ➡️ ID of the subnet to map to

    hashtag
    1.1] Select 'backup target' type

Backup target storage is used to store the backup images taken by Trilio. The details needed for configuration depend on the target type.

The following backup target types are supported by Trilio:

    a) NFS

    Need NFS share path

    b) Amazon S3

• S3 Access Key
• Secret Key
• Region
• Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

• S3 Access Key
• Secret Key
• Region
• Endpoint URL (valid for S3 other than Amazon S3)
• Bucket name

    hashtag
    1.2] Clone triliovault-cfg-scripts repository

The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud deploy command has to have run successfully already and the overcloud must be available.

    circle-exclamation

    All commands need to be run as user 'stack' on undercloud node

    The following command clones the triliovault-cfg-scripts github repository.

    Next access the Red Hat Director scripts according to the used RHOSP version.

    hashtag
    RHOSP 13

    hashtag
    RHOSP 16.1

    hashtag
    RHOSP 16.2

    hashtag
    RHOSP 17.0

    circle-exclamation

    The remaining documentation will use the following path for examples:

    hashtag
    1.3] If backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'

    hashtag
    2] Upload trilio puppet module

    The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.

The Trilio puppet module is uploaded to the overcloud as a swift deploy artifact with the heat resource name 'DeployArtifactURLs'. If you check Trilio's puppet module artifact file, it looks like the following.

Note: If your overcloud deploy command is using any other deploy artifact through an environment file, then you need to merge the Trilio deploy artifact URL and your URL into a single file.

• How to check whether your overcloud deploy environment files use deploy artifacts? Search for the string 'DeployArtifactURLs' in your environment files (only those mentioned in the overcloud deploy command with the -e option). If any such environment file mentioned in the overcloud deploy command with the '-e' option contains it, then your deploy command is using a deploy artifact.

• In that case you need to merge all deploy artifacts into a single file. Refer to the following steps.

Let's say your artifact file path is "/home/stack/templates/user-artifacts.yaml". Then refer to the following steps to merge both URLs into a single file and pass that new file to the overcloud deploy command with the '-e' option.
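As a self-contained sketch of the merge (all file names and URLs below are placeholders for illustration; the real Trilio URL line comes from /home/stack/.tripleo/environments/puppet-modules-url.yaml):

```shell
# Demo of merging two DeployArtifactURLs lists into one environment file.
mkdir -p /tmp/artifact-merge && cd /tmp/artifact-merge

# Existing user artifact file referenced by the deploy command
cat > user-artifacts.yaml <<'EOF'
parameter_defaults:
    DeployArtifactURLs:
    - 'http://registry.example/overcloud-artifacts/some-artifact.tar.gz'
EOF

# Artifact file generated by Trilio's upload_puppet_module.sh
cat > puppet-modules-url.yaml <<'EOF'
parameter_defaults:
    DeployArtifactURLs:
    - 'http://registry.example/overcloud-artifacts/puppet-modules.tar.gz'
EOF

# Append only the URL line of the Trilio file to the user file,
# so a single file carries both artifact URLs
grep http puppet-modules-url.yaml >> user-artifacts.yaml
cat user-artifacts.yaml
```

The merged user-artifacts.yaml is then the single file passed with the '-e' option to the overcloud deploy command.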

    hashtag
    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

    circle-info

If roles_data.yaml has not been customized, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    circle-exclamation

    All commands need to be run as user 'stack'

    hashtag
    3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database service. When the pre-defined roles are used, these services run on the role Controller. In case of custom defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.

    Add the following line to the identified role:

    hashtag
    3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. When the pre-defined roles are used, the nova-compute service runs on the role Compute. In case of custom defined roles, it is necessary to use the role the nova-compute service is using.

    Add the following line to the identified role:

    hashtag
3.3] Add Trilio Horizon Service to role data file

This service needs to share the same role as the OpenStack Horizon server. When the pre-defined roles are used, the Horizon service runs on the role Controller. Add the following line to the identified role:
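As an illustration, an excerpt of a roles_data.yaml with the Trilio services in place might look like the following (abbreviated; only the relevant ServicesDefault entries of the pre-defined roles are shown):

```yaml
- name: Controller
  ServicesDefault:
    # ... existing controller services ...
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::Horizon
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioHorizon
- name: Compute
  ServicesDefault:
    # ... existing compute services ...
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::TrilioDatamover
```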

    hashtag
    4] Prepare Trilio container images

    circle-exclamation

    All commands need to be run as user 'stack'

    Trilio containers are pushed to 'RedHat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.

    circle-exclamation

Please note that using the hotfix containers requires that the Trilio Appliance is upgraded to the desired hotfix level as well.

    circle-info

Read <HOTFIX-TAG-VERSION> as 4.3.2 in the below sections

    hashtag
    RHOSP 13

    hashtag
    RHOSP 16.1

    hashtag
    RHOSP 16.2

    hashtag
    RHOSP 17.0

    There are three registry methods available in RedHat Openstack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    hashtag
    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

In this method, container images get downloaded directly onto the overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to the Red Hat registry or any other private registry they want to use. Users need to provide the credentials of the registry in the 'containers-prepare-parameter.yaml' file.

1. Make sure the other OpenStack service images also use the same method to pull container images. If that is not the case, you cannot use this method.

2. Populate 'containers-prepare-parameter.yaml' with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true' and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials of the registry 'registry.redhat.io' will work for the 'registry.connect.redhat.com' registry too.


Red Hat document for the remote registry method: Herearrow-up-right

Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the above document by Red Hat.

3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.

4. Users need to manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:

    circle-info

    trilio_env.yaml file path:

    At this step, you have configured Trilio image urls in the necessary environment file.

    hashtag
    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud and update the trilio_env.yaml.

    hashtag
    RHOSP13

    hashtag
    RHOSP 16.1

    hashtag
    RHOSP16.2

    hashtag
    RHOSP17.0

At this step, you have downloaded the triliovault container images and configured the triliovault image URLs in the necessary environment file.

    hashtag
    4.3] Red Hat Satellite Server

    Follow this section when a Satellite Server is used for the container registry.

    Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.arrow-up-right

    Populate the trilio_env.yaml with container urls.

    hashtag
    RHOSP 13

    hashtag
    RHOSP 16.1

    hashtag
    RHOSP16.2

    hashtag
    RHOSP17.0

At this step, you have downloaded the triliovault container images into the Red Hat Satellite server and configured the triliovault image URLs in the necessary environment file.

    hashtag
    5] Configure multi-IP NFS

    circle-info

    This section is only required when the multi-IP feature for NFS is required.

This feature allows setting the IP used to access the NFS Volume per datamover instead of globally.

On the undercloud node, change the directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    circle-info

Run this command on the undercloud after sourcing 'stackrc'.

Edit the input map file and fill in all the details. Refer to this page for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    circle-exclamation

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

Validate the output map file.

Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

    hashtag
    6] Provide environment details in trilio-env.yaml

Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. The container image names have already been populated during the preparation of the container images. Still, it is recommended to verify the container URLs.

The following information is additionally required:

    • Network for the datamover api

    • datamover password

    • Backup target type {nfs/s3}

    • In case of NFS

      • list of NFS Shares

      • NFS options

      • MultiIPNfsEnabled

    circle-info

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    • In case of S3

      • S3 type {amazon_s3/ceph_s3}

      • S3 Access key

      • S3 Secret key

      • S3 Region name

      • S3 Bucket

      • S3 Endpoint URL

      • S3 Signature Version

      • S3 Auth Version

      • S3 SSL Enabled {true/false}

      • S3 SSL Cert

    circle-info

    Use ceph_s3 for any non-aws S3 backup targets.
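A sketch of how these values come together in trilio_env.yaml. The parameter names below follow the pattern of the file shipped in the scripts repository but must be verified against your copy; only MultiIPNfsEnabled is confirmed above, the rest are illustrative:

```yaml
parameter_defaults:
  # Backup target type: 'nfs' or 's3'
  BackupTargetType: 'nfs'
  # NFS settings (ignored when the backup target type is 's3')
  NfsShares: '192.168.1.33:/var/nfsshare'
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
  MultiIPNfsEnabled: false
  # S3 settings omitted; fill in access key, secret key, region,
  # bucket, endpoint URL, signature/auth version and SSL options
  # when BackupTargetType is 's3'
```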

    hashtag
    7] Advanced Settings/Configuration

    hashtag
7.1] Haproxy customized configuration for Trilio dmapi service

    circle-info

The existing default haproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the dmapi are observed or other reasons are known.

The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file.

    Trilio datamover haproxy default configuration from the above file looks as follows:

    The user can change the following configuration parameter values.

To change these default values, you need to do the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release information; valid values are rhosp13, rhosp16.1, rhosp16.2, rhosp17.0)

    For RHOSP13

    For RHOSP16.1

    For RHOSP16.2

    For RHOSP17.0

    ii) Search the following entries and edit as required

    iii) Save the changes.

    hashtag
    7.2] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

    • If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover Service container, they can use the following heat environment file. A variable named 'TrilioDatamoverOptVolumes' is available in this file.

    • This variable 'TrilioDatamoverOptVolumes' accepts list of volume/bind mounts.

• Users need to edit this file and add their volume mounts in the below format.

    • For example:

• In this volume mount "/mnt/dir2:/var/dir2", "/mnt/dir2" is a directory available on the compute host and "/var/dir2" is the mount point inside the datamover container.

• Next, users need to pass this file to the overcloud deploy command with the '-e' option like below.
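Based on the format above, the relevant part of that heat environment file would look like the following (the mount paths are the example ones from above):

```yaml
parameter_defaults:
  # List of <host_path>:<container_path> bind mounts for the
  # Trilio Datamover container
  TrilioDatamoverOptVolumes:
    - /mnt/dir2:/var/dir2
```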

    hashtag
    8] Deploy overcloud with trilio environment

    Use the following heat environment file and roles data file in overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use correct Trilio endpoint map file as per available Keystone endpoint configuration

      1. Instead of tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

2. Instead of tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

    4. If activated use the correct trilio_nfs_map.yaml file

To include the new environment files use the '-e' option and for the roles data file use the '-r' option. An example overcloud deploy command is shown below:
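The following sketch assembles such a command. The release directory and template paths are illustrative; keep all '-e' options of your existing deploy command, add the Trilio environment files, and include trilio_nfs_map.yaml only when multi-IP NFS is activated:

```shell
# Build an example overcloud deploy command including the Trilio files.
# All paths below are placeholders; adapt them to your environment.
RHOSP_DIR=/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2

deploy_cmd="openstack overcloud deploy --templates \
 -e ${RHOSP_DIR}/environments/trilio_env.yaml \
 -e ${RHOSP_DIR}/environments/trilio_nfs_map.yaml \
 -e ${RHOSP_DIR}/environments/trilio_env_tls_endpoints_public_dns.yaml \
 -r /home/stack/templates/roles_data.yaml"

# Print the assembled command for review before running it
echo "$deploy_cmd"
```

Review the echoed command, merge in the '-e' options of your existing deployment, and run it from the undercloud as user 'stack'.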

    circle-info

After deployment in a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with value 60000 (i.e. 60 sec) in /etc/multipath.conf. Restart the datamover container.

    hashtag
    9] Verify deployment

    triangle-exclamation

If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.

    hashtag
    9.1] On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. If the role for these containers is not "controller", check on the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    hashtag
    9.2] On Compute node

Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

    hashtag
    9.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack horizon + Trilio's horizon plugin.

If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround

    hashtag
    10] Troubleshooting for overcloud deployment failures

    Trilio components will be deployed using puppet scripts.

In case the overcloud deployment is failing, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.htmlarrow-up-right

    Red Hat Openstack Platform Directorarrow-up-right
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cd /home/stack
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp13/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP13, RHOSP16.1 and RHOSP16.2
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Command is same for RHOSP17.0 but the command output and file content would be different
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP17.0
    Creating tarball...
    Tarball created.
    renamed '/tmp/puppet-modules-P3duCg9/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## For RHOSP13, RHOSP16.1 and RHOSP16.2
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    ## For RHOSP17.0
    (undercloud) [stack@undercloud17-3 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    parameter_defaults:
      DeployArtifactFILEs:
      - /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz
    
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml | grep http >> /home/stack/templates/user-artifacts.yaml
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/templates/user-artifacts.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_57ba596219d143c8b076e9fcc4139f3g/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=dc972b7ce75226c278ab3fa8237d31cc1f2115sc&temp_url_expires=3446738365'
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    OS::TripleO::Services::TrilioHorizon
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    File Name: containers-prepare-parameter.yaml
    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          namespace: registry.redhat.io/...
          ...
      ...
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          myuser: 'p@55w0rd!'
        registry.connect.redhat.com:
          myuser: 'p@55w0rd!'
      ContainerImageRegistryLogin: true
    # For RHOSP13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    # For RHOSP16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    # For RHOSP17.0
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/scripts/
    
    ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp13
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    $ docker image list | grep '<HOTFIX-TAG-VERSION>-rhosp13'
    172.30.5.101:8787/trilio/trilio-datamover                  <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    registry.connect.redhat.com/trilio/trilio-datamover        <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    172.30.5.101:8787/trilio/trilio-datamover-api              <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-datamover-api    <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-horizon-plugin   <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    172.30.5.101:8787/trilio/trilio-horizon-plugin             <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.1
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    $ openstack tripleo container image list | grep '<HOTFIX-TAG-VERSION>-rhosp16.1'
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1  |
    -----------------------------------------------------------------------------------------------------
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.2
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    $ openstack tripleo container image list | grep '<HOTFIX-TAG-VERSION>-rhosp16.2'
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2  |
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp17.0
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    $ openstack tripleo container image list | grep '<HOTFIX-TAG-VERSION>-rhosp17.0'
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0  |
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    resource_registry:
      OS::TripleO::Services::TrilioDatamover: ../services/trilio-datamover.yaml
      OS::TripleO::Services::TrilioDatamoverApi: ../services/trilio-datamover-api.yaml
      OS::TripleO::Services::TrilioHorizon: ../services/trilio-horizon.yaml
    
      # NOTE: If there are additional customizations to the endpoint map (e.g. for
      # other integrations), this will need to be regenerated.
      OS::TripleO::EndpointMap: endpoint_map.yaml
    
    parameter_defaults:
    
       ## Enable Trilio's quota functionality on horizon
       ExtraConfig:
         horizon::customization_module: 'dashboards.overrides'
    
       ## Define network map for trilio datamover api service
       ServiceNetMap:
           TrilioDatamoverApiNetwork: internal_api
    
       ## Trilio Datamover Password for keystone and database
       TrilioDatamoverPassword: "test1234"
    
       ## Trilio container pull urls
       DockerTrilioDatamoverImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## If you do not want Trilio's horizon plugin to replace your horizon container, just comment out the following line.
       ContainerHorizonImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## Backup target type nfs/s3, used to store snapshots taken by triliovault
       BackupTargetType: 'nfs'
    
       ## If the backup target NFS share supports multiple IPs and you want to use more than one of them, then
       ## set this parameter to True. Otherwise keep it False.
       MultiIPNfsEnabled: False
    
       ## For backup target 'nfs'
       NfsShares: '192.168.122.101:/opt/tvault'
       NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
    
       ## For backup target 's3'
       ## S3 type: amazon_s3/ceph_s3
       S3Type: 'amazon_s3'
    
       ## S3 access key
       S3AccessKey: ''
    
       ## S3 secret key
       S3SecretKey: ''
    
       ## S3 region, if your s3 does not have any region, just keep the parameter as it is
       S3RegionName: ''
    
       ## S3 bucket name
       S3Bucket: ''
    
       ## S3 endpoint url, not required for Amazon S3, keep it as it is
       S3EndpointUrl: ''
    
       ## S3 signature version
       S3SignatureVersion: 'default'
    
       ## S3 Auth version
       S3AuthVersion: 'DEFAULT'
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint url then change it to 'True', otherwise keep it as 'False'
       S3SslEnabled: False
    
       ## If the S3 backend is not Amazon S3, SSL is enabled on the S3 endpoint URL, and the SSL certificates are self signed, then
       ## set this parameter to '/etc/tvault-contego/s3-cert.pem'; otherwise keep its value as an empty string.
       S3SslCert: ''
    
       ## Configures the 'dmapi_workers' parameter of the '/etc/dmapi/dmapi.conf' file.
       ## This value sets the number of dmapi processes spawned to handle incoming api requests.
       ## If your dmapi node has 'n' cpu cores, it is recommended to set this parameter to '4*n'.
       ## If the dmapi_workers field is not present in the config file, the default value equals the number of cores present on the node.
       DmApiWorkers: 16
    
       ## Don't edit following parameter
       EnablePackageInstall: True
    
    
       ## Load 'rbd' kernel module on all compute nodes
       ComputeParameters:
         ExtraKernelModules:
           rbd: {}
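Following the DmApiWorkers guidance above, the recommended value can be derived directly on the dmapi node; this is a sketch, not a Trilio-shipped command:

```shell
# Sketch: derive the recommended dmapi_workers value (4 x CPU cores)
# on the node that runs the dmapi service.
cores=$(nproc)
echo "DmApiWorkers: $(( 4 * cores ))"
```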
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    listen trilio_datamover_api
      bind 172.25.0.107:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.0.107:8784 transparent
      balance roundrobin
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      maxconn 50000
      option httpchk
      option httplog
      retries 5
      timeout check 10m
      timeout client 10m
      timeout connect 10m
      timeout http-request 10m
      timeout queue 10m
      timeout server 10m
      server overcloud-controller-0.internalapi.localdomain 172.25.0.106:8784 check fall 5 inter 2000 rise 2
    
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/services/trilio-datamover-api.yaml
              tripleo::haproxy::trilio_datamover_api::options:
                 'retries': '5'
                 'maxconn': '50000'
                 'balance': 'roundrobin'
                 'timeout http-request': '10m'
                 'timeout queue': '10m'
                 'timeout connect': '10m'
                 'timeout client': '10m'
                 'timeout server': '10m'
                 'timeout check': '10m'
    triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE>/environments/trilio_datamover_opt_volumes.yaml
    parameter_defaults:
      TrilioDatamoverOptVolumes:
        - /opt/dir1:/opt/dir1
        - /mnt/dir2:/var/dir2
    openstack overcloud deploy --templates \
    -e <> \
    .
    .
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_datamover_opt_volumes.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Either of the below workarounds should be performed on all the controller nodes where the issue occurs for the horizon pod.
    
    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
    
    option-2: Restart the memcached pod (command: podman restart memcached)
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    => If the trilio datamover api container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tail -f /var/log/containers/trilio-datamover-api/dmapi.log
    
    
    
    => If the trilio datamover container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tail -f /var/log/containers/trilio-datamover/tvault-contego.log
    Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml

    Example runbook for Disaster Recovery using NFS

    This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.

    The chosen scenario follows an actively used Trilio customer environment.

    hashtag
    Scenario

    There are two Openstack clouds available, "Openstack Cloud A" and "Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations write their Backups to NFS targets: "Trilio A" writes to "NFS A1" and "Trilio B" writes to "NFS B1". Each NFS Volume is synced against another NFS Volume on the other side: "NFS A1" syncs with "NFS B2" and "NFS B1" syncs with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.

    This scenario will cover the Disaster Recovery of a single Workload and of a complete Cloud. All processes are done by the Openstack administrator.

    hashtag
    Prerequisites for the Disaster Recovery process

    This runbook will assume that the following is true:

    • "Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license

    • "Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs

    • "Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side

    circle-exclamation

    For ease of writing, this runbook will assume from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".

    triangle-exclamation

    When shared Tenant networks are in use, the following additional requirement applies beyond the floating IP: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.

    hashtag
    Disaster Recovery of a single Workload

    In this Scenario a single Workload can undergo Disaster Recovery while both Clouds are still active. To do so, the following high-level process needs to be followed:

    1. Copy the Workload directories to the configured NFS Volume

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    hashtag
    Copy the Workload directories to the configured NFS Volume

    circle-info

    This process only shows how to get a Workload from "Openstack Cloud A" to "Openstack Cloud B". The vice versa process is similar.

    As only a single Workload is to be recovered, it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".

    hashtag
    Mount "NFS B2" Volume to a Trilio VM

    It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.

    hashtag
    Identify the Workload on the "NFS B2" Volume

    Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:

    If the Workload ID is not known, the Metadata available inside the Workload directories can be used to identify the correct Workload.

    hashtag
    Copy the Workload

    The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.
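As an illustration, the copy can be sketched as below. The stand-in directories demonstrate the commands; in the real DR, SRC is the mounted "NFS B2" Volume, DST is the base64 mount directory of "NFS B1", and the workload ID is hypothetical:

```shell
# Stand-in directories for demonstration; in production, for example:
#   SRC=/mnt/nfs-b2   (mounted "NFS B2" Volume)
#   DST=/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0   ("NFS B1")
SRC=${SRC:-$(mktemp -d)}
DST=${DST:-$(mktemp -d)}
WL=workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105   # hypothetical Workload ID

mkdir -p "$SRC/$WL/snapshots"      # stand-in workload content
cp -R "$SRC/$WL" "$DST/"           # copy with all subdirectories and files
chown -R "$(id -un)" "$DST/$WL"    # in production (as root): chown -R nova:nova "$DST/$WL"
```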

    hashtag
    Make the Mount-Paths available

    Trilio backups use qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    hashtag
    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.
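For example, using the share paths from this runbook's scenario:

```shell
# Encode an NFS-Share path into the mount-directory name derived from it
echo -n "10.10.2.20:/upstream_source" | base64
# MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# Decode an existing mount-directory name back into the NFS-Share path
echo "MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0" | base64 -d
# 10.20.3.22:/upstream_target
```

Note the `-n` on the first echo: a trailing newline would change the hash completely.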

    hashtag
    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

    In the scenario of this runbook, the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

    To keep the desired mount across a reboot, it is recommended to edit the fstab of all compute nodes accordingly.
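The steps above can be sketched as follows for this runbook's scenario; the bind direction assumes the synced data is already mounted under the Target Cloud path:

```shell
# Run on each compute node of the Target Cloud (requires root)
SRC_PATH=/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl   # original Source Cloud path
TGT_PATH=/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0   # existing Target Cloud mount

mkdir -p "$SRC_PATH"                  # create the missing Source Cloud mount path
mount --bind "$TGT_PATH" "$SRC_PATH"  # expose the workload data under both paths

# Keep the bind across reboots
echo "$TGT_PATH $SRC_PATH none bind 0 0" >> /etc/fstab
```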

    hashtag
    Reassign the workload

    Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.

    hashtag
    Add admin-user to required domains and projects

    To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

    hashtag
    Discover orphaned Workloads from NFS-Storage of Target Cloud

    Each Trilio installation maintains a database of workloads that are known to it. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.

    hashtag
    List available projects on Target Cloud in the Target Domain

    The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.

    hashtag
    List available users on the Target Cloud in the Target Project that have the right backup trustee role

    To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

    hashtag
    Reassign the workload to the target project

    Now that all information has been gathered, the workload can be reassigned to the target project.

    hashtag
    Verify the workload is available at the desired target_project

    After the workload has been assigned to the new project, it is recommended to verify that the workload is managed by the Target Trilio and is assigned to the right project and user.

    hashtag
    Restore the workload

    The reassigned workload can be restored using Horizon following the procedure described .

    This runbook will continue on the CLI only path.

    hashtag
    Prepare the selective restore by getting the snapshot information

    To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

    List all Snapshots of the workload to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot

    hashtag
    Prepare the selective restore by creating the restore.json file

    The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

    hashtag
    Run the selective restore

    To do the actual restore use the following command:

    hashtag
    Verify the restore

    To verify the success of the restore from a Trilio perspective, the restore status is checked.

    hashtag
    Clean up

    After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.

    hashtag
    Delete the workload

    Delete the workload that got restored.

    hashtag
    Remove the database entry

    The Trilio database follows the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.

    To allow the Trilio installation to be ready for another disaster recovery, it is necessary to completely delete the entries of the Workloads that have been restored.

    Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.

    This script can be found here:

    hashtag
    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.

    hashtag
    Disaster Recovery of a complete cloud

    This Scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery the following high-level process needs to be followed:

    1. Reconfigure the Target Trilio installation

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    hashtag
    Reconfigure the Target Trilio installation

    Before the Disaster Recovery Process can start, it is necessary to make the backups that are to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.

    circle-exclamation

    During the reconfiguration process, all backups of the Target Region will be on hold, and it is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    hashtag
    Add NFS B2 to the Trilio Appliance Cluster

    To add NFS B2 to the Trilio Appliance cluster, the Trilio can either be reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. It needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that line as a comma-separated list. A space after the comma is not necessary, but can be set.

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service
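A sketch of the resulting edit, assuming the NFS shares are listed in the vault_storage_nfs_export option (option name and file path are assumptions; the share paths are from this runbook's scenario):

```shell
# /etc/workloadmgr/workloadmgr.conf (path assumed)
# Before:
#   vault_storage_nfs_export = 10.10.2.20:/upstream
# After appending NFS B2 as a comma-separated list:
#   vault_storage_nfs_export = 10.10.2.20:/upstream,10.20.3.22:/upstream_target

# Restart the service afterwards on every Trilio Appliance
systemctl restart wlm-workloads
```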

    hashtag
    Add NFS B2 to the Trilio Datamovers

    triangle-exclamation

    Trilio integrates natively with the Openstack deployment tools. When using the Red Hat director or JuJu charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

    To add NFS B2 to the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that line as a comma-separated list. A space after the comma is not necessary, but can be set.

    Write and close the tvault-contego.conf

    Restart the tvault-contego service
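The same pattern applies to the Datamover configuration; the option name and file path are assumptions, and the share paths are from this runbook's scenario:

```shell
# /etc/tvault-contego/tvault-contego.conf (path assumed)
# NFS B2 appended as a comma-separated list:
#   vault_storage_nfs_export = 10.10.2.20:/upstream,10.20.3.22:/upstream_target

# Restart the Datamover service afterwards on every compute node
systemctl restart tvault-contego
```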

    hashtag
    Make the Mount-Paths available

    Trilio backups use qcow2 backing files, which make every incremental backup a full synthetic backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character differs between the provided NFS-Share paths, a completely different hash value is generated.

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    hashtag
    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.

    hashtag
    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

In the scenario of this runbook the workload comes from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

To keep the desired mount bind across reboots it is recommended to edit the fstab of all compute nodes accordingly.

    hashtag
    Reassign the workload

Trilio workloads have a clear ownership. When a workload is moved to a different cloud, its ownership must be changed. The ownership can only be changed by OpenStack administrators.

    hashtag
    Add admin-user to required domains and projects

To fulfill the required tasks a user with the admin role is used. This user will be needed until the workload has been restored. Therefore, it is necessary to give this user access to the desired Target Project on the Target Cloud.

    hashtag
    Discover orphaned Workloads from NFS-Storage of Target Cloud

Each Trilio installation maintains a database of workloads that are known to it. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the cloud the Trilio installation protects.

    hashtag
    List available projects on Target Cloud in the Target Domain

The identified orphaned workloads need to be assigned to their new projects. The following command lists all projects visible to the used admin-user in the target_domain.

    hashtag
    List available users on the Target Cloud in the Target Project that have the right backup trustee role

To allow the project owners to work with the workloads as well, each workload is assigned to a user with the backup trustee role that exists in the target project.

    hashtag
    Reassign the workload to the target project

Now that all information has been gathered, the workload can be reassigned to the target project.

    hashtag
    Verify the workload is available at the desired target_project

After the workload has been assigned to the new project it is recommended to verify that the workload is managed by the Target Trilio and assigned to the right project and user.

    hashtag
    Restore the workload

The reassigned workload can be restored using Horizon following the standard restore procedure.

This runbook continues on the CLI-only path.

    hashtag
    Prepare the selective restore by getting the snapshot information

To perform the necessary selective restore, a few pieces of information about the snapshot to be restored are required. The following process provides all necessary information.

    List all Snapshots of the workload to restore to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot

    hashtag
    Prepare the selective restore by creating the restore.json file

The selective restore uses a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.
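As a minimal sketch, a filled-in restore.json for one instance of this runbook's snapshot could look as follows. The instance name and ID and the flavor sizes are taken from the snapshot details gathered above; the flavor id '1' is a placeholder that must match an existing flavor in the Target Cloud. The file keeps the Python-literal style of the template and can be syntax-checked before running the restore:

```shell
# Create a minimal selective-restore file for a network topology restore
cat > restore.json <<'EOF'
{
   u'description': u'DR restore of Test Linux',
   u'oneclickrestore': False,
   u'restore_type': u'selective',
   u'type': u'openstack',
   u'name': u'DR restore',
   u'openstack': {
      u'instances': [
         {
            u'name': u'Test-Linux-1',
            u'availability_zone': u'nova',
            u'nics': [],
            u'vdisks': [],
            u'flavor': {
               u'ram': 512,
               u'ephemeral': 0,
               u'vcpus': 1,
               u'swap': u'',
               u'disk': 1,
               u'id': u'1'
            },
            u'include': True,
            u'id': u'38b620f1-24ae-41d7-b0ab-85ffc2d7958b'
         }
      ],
      u'restore_topology': True,
      u'networks_mapping': {
         u'networks': []
      }
   }
}
EOF

# Syntax check: the file must parse as a Python literal
python3 -c "import ast; ast.literal_eval(open('restore.json').read())" && echo OK
```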

    hashtag
    Run the selective restore

    To do the actual restore use the following command:

    hashtag
    Verify the restore

To verify the success of the restore from a Trilio perspective, check the restore status.

    hashtag
    Reconfigure the Target Trilio installation back to the original one

After the Disaster Recovery Process has finished it is necessary to return the Trilio installation to its original configuration. The following steps completely reconfigure the Trilio installation.

    circle-exclamation

During the reconfiguration process all backups of the Target Region are on hold. It is not recommended to create new backup jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    hashtag
Delete NFS B2 from the Trilio Appliance Cluster

To remove NFS B2 from the Trilio Appliance cluster, the Trilio can either be fully reconfigured, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. It needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

Delete NFS B2 from the comma-separated list

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service
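A sketch of this edit, assuming the default file location /etc/workloadmgr/workloadmgr.conf and the vault_storage_nfs_export parameter (both may differ in your installation):

```shell
# Open the appliance configuration file (repeat on every Trilio Appliance)
vi /etc/workloadmgr/workloadmgr.conf

# Remove NFS B2 from the comma-separated NFS mount list, e.g.:
# Before: vault_storage_nfs_export = <NFS B1-IP>:/<VOL-Path>,<NFS B2-IP>:/<VOL-Path>
# After:  vault_storage_nfs_export = <NFS B1-IP>:/<VOL-Path>

# Restart the workloadmgr services
systemctl restart wlm-workloads
```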

    hashtag
Delete NFS B2 from the Trilio Datamovers

    triangle-exclamation

Trilio integrates natively with the OpenStack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files of these orchestrators and update the Datamovers through them.

To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

Remove NFS B2 from the comma-separated list.

    Write and close the tvault-contego.conf

    Restart the tvault-contego service

    hashtag
    Clean up

After the Disaster Recovery Process has been successfully completed and the Trilio installation reconfigured to its original state, it is recommended to perform the following additional steps to be ready for the next Disaster Recovery Process.

    hashtag
    Remove the database entry

The Trilio database follows the OpenStack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore that gets deleted is only marked as deleted.

To make the Trilio installation ready for another disaster recovery, it is necessary to completely delete the database entries of the Workloads that have been restored.

Trilio provides and maintains a script to safely delete workload entries and all connected entities from the Trilio database.

This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

    hashtag
    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.
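Mirroring the role assignments made earlier, the admin user can be removed with `openstack role remove`, which takes the same arguments as `openstack role add` (placeholders as used earlier in this runbook):

```shell
# Remove the temporary admin access from the target project and domain again
openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>
openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>
openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
```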

    Restores

    hashtag
    List Restores

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail

    Lists Restores with details

    Access to a user with the admin role permissions on domain level

    # mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt
    workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    /…/workload_<id>/workload_db <<< Contains User ID and Project ID of Workload owner
/…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected by the Workload
# cp -R /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
    cluster_size: 65536
    
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    #vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/	/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl	none	bind	0 0
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
   u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr workload-delete <workload_id>
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    
    # vi /etc/workloadmgr/workloadmgr.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>,<NFS-IP/NFS-FQDN>:/<VOL-2-Path>
    # systemctl restart wlm-workloads
    # vi /etc/tvault-contego/tvault-contego.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    # systemctl restart tvault-contego
# qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
# echo -n 10.10.2.20:/upstream_source | base64
MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

# echo -n 10.20.3.22:/upstream_target | base64
MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
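Trilio names each mount directory after the Base64 encoding of the NFS export string, which is what the `echo ... | base64` commands above compute. A minimal Python sketch of this mapping (the `mount_dir` helper is illustrative, not a Trilio API):

```python
import base64

def mount_dir(nfs_export, base="/var/triliovault-mounts"):
    # The mount directory name is the Base64 of the "<IP-or-FQDN>:/<path>" export string
    encoded = base64.b64encode(nfs_export.encode()).decode()
    return f"{base}/{encoded}"

print(mount_dir("10.10.2.20:/upstream_source"))
# /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
```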
# mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
# vi /etc/fstab
/var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/	/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl	none	bind	0 0
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
   u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
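As a worked example, the template above can be filled programmatically and written to `restore.json`. A sketch under stated assumptions: the instance ID is taken from this guide's sample snapshot, while the restore name, description and availability zone are hypothetical placeholders.

```python
import json

# Illustrative values only; substitute IDs from your own snapshot.
restore_options = {
    "description": "Restore Test-Linux-1 with original network topology",
    "oneclickrestore": False,
    "restore_type": "selective",
    "type": "openstack",
    "name": "Selective Restore Example",
    "openstack": {
        "instances": [
            {
                "name": "Test-Linux-1-restored",
                "availability_zone": "nova",
                "nics": [],     # empty -> network topology restore
                "vdisks": [],   # keep original volume types/AZs
                "include": True,
                "id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b",
            }
        ],
        "restore_topology": True,
        "networks_mapping": {"networks": []},
    },
}

# Write the file consumed by: workloadmgr snapshot-selective-restore --filename restore.json <snapshot id>
with open("restore.json", "w") as f:
    json.dump(restore_options, f, indent=2)
```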
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
# vi /etc/workloadmgr/workloadmgr.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
# systemctl restart wlm-workloads
# vi /etc/tvault-contego/tvault-contego.conf
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
# systemctl restart tvault-contego
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    
    hashtag
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to fetch the Restores from

    hashtag
    Query Parameters

    Name
    Type
    Description

    snapshot_id

    string

    ID of the Snapshot to fetch the Restores from

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to run the authentication against
    X-Auth-Token (string): Authentication token to use
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient
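    As a minimal sketch, the "List Restores" call above can be issued with Python's standard library alone; the host name, tenant ID, snapshot ID, and token below are placeholders for values from your own deployment:

    ```python
    # Sketch of GET /v1/<tenant_id>/restores using only the standard library.
    # All argument values are illustrative placeholders.
    import json
    import urllib.parse
    import urllib.request

    def build_list_restores_url(tvm_address, tenant_id, snapshot_id=None):
        """URL for the List Restores endpoint, optionally filtered by snapshot."""
        url = f"https://{tvm_address}:8780/v1/{tenant_id}/restores"
        if snapshot_id:
            url += "?" + urllib.parse.urlencode({"snapshot_id": snapshot_id})
        return url

    def list_restores(tvm_address, tenant_id, token, snapshot_id=None):
        req = urllib.request.Request(
            build_list_restores_url(tvm_address, tenant_id, snapshot_id),
            headers={
                "X-Auth-Project-Id": tenant_id,
                "X-Auth-Token": token,
                "Accept": "application/json",
                "User-Agent": "python-workloadmgrclient",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["restores"]
    ```
    
    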

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 11:28:43 GMT
    Content-Type: application/json
    Content-Length: 4308
    Connection: keep-alive
    X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463
    
    {
       "restores":[
          {
             "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
             "created_at":"2020-11-05T10:17:40.000000",
             "updated_at":"2020-11-05T10:17:40.000000",
    
    

    hashtag
    Get Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Provides all details about the specified Restore

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to fetch the Restore from
    restore_id (string): ID of the Restore to show

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to run authentication against
    X-Auth-Token (string): Authentication token to use
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient
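    A minimal sketch of fetching a single Restore with the standard library, assuming the endpoint and headers listed above; all argument values are placeholders:

    ```python
    # Sketch of GET /v1/<tenant_id>/restores/<restore_id>.
    import json
    import urllib.request

    def get_restore_request(tvm_address, tenant_id, restore_id, token):
        """Build the GET request object for a single Restore."""
        return urllib.request.Request(
            f"https://{tvm_address}:8780/v1/{tenant_id}/restores/{restore_id}",
            headers={
                "X-Auth-Project-Id": tenant_id,
                "X-Auth-Token": token,
                "Accept": "application/json",
                "User-Agent": "python-workloadmgrclient",
            },
        )

    def get_restore(tvm_address, tenant_id, restore_id, token):
        req = get_restore_request(tvm_address, tenant_id, restore_id, token)
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["restore"]
    ```
    
    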

    hashtag
    Delete Restore

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Deletes the specified Restore

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to fetch the Restore from
    restore_id (string): ID of the Restore to be deleted

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to run authentication against
    X-Auth-Token (string): Authentication token to use
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient
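    Deleting a Restore is the same URL with the DELETE verb; a standard-library sketch (values are placeholders):

    ```python
    # Sketch of DELETE /v1/<tenant_id>/restores/<restore_id>.
    # urllib.request.Request takes an explicit method= argument.
    import urllib.request

    def delete_restore_request(tvm_address, tenant_id, restore_id, token):
        """Build the DELETE request; urllib.request.urlopen(req) sends it."""
        return urllib.request.Request(
            f"https://{tvm_address}:8780/v1/{tenant_id}/restores/{restore_id}",
            headers={
                "X-Auth-Project-Id": tenant_id,
                "X-Auth-Token": token,
                "Accept": "application/json",
                "User-Agent": "python-workloadmgrclient",
            },
            method="DELETE",
        )
    ```
    
    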

    hashtag
    Cancel Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel

    Cancels an ongoing Restore

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to fetch the Restore from
    restore_id (string): ID of the Restore to cancel

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to authenticate against
    X-Auth-Token (string): Authentication token to use
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient
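    Note that cancelling is a plain GET on the `/cancel` sub-resource, per the endpoint above; a standard-library sketch with placeholder values:

    ```python
    # Sketch of GET /v1/<tenant_id>/restores/<restore_id>/cancel.
    import urllib.request

    def cancel_restore_request(tvm_address, tenant_id, restore_id, token):
        """Build the cancel request (a GET on the /cancel sub-path)."""
        return urllib.request.Request(
            f"https://{tvm_address}:8780/v1/{tenant_id}/restores/{restore_id}/cancel",
            headers={
                "X-Auth-Project-Id": tenant_id,
                "X-Auth-Token": token,
                "Accept": "application/json",
                "User-Agent": "python-workloadmgrclient",
            },
        )
    ```
    
    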

    hashtag
    One Click Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to do the restore in
    snapshot_id (string): ID of the Snapshot to restore

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to authenticate against
    X-Auth-Token (string): Authentication token to use
    Content-Type (string): application/json
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient

    hashtag
    Body Format

    The One-Click restore requires a body to provide all necessary information in JSON format.
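    As a sketch, the One-Click body can be assembled and POSTed with the standard library; field names follow the Body Format example, while the function and argument names here are illustrative:

    ```python
    # Build and POST the One-Click restore body to the snapshot URL.
    import json
    import urllib.request

    def oneclick_restore_body(name="One Click Restore",
                              description="One Click Restore"):
        """Body matching the One-Click Body Format example."""
        return {
            "restore": {
                "options": {
                    "openstack": {},
                    "type": "openstack",
                    "oneclickrestore": True,
                    "vmware": {},
                    "restore_type": "oneclick",
                },
                "name": name,
                "description": description,
            }
        }

    def start_oneclick_restore(tvm_address, tenant_id, snapshot_id, token):
        req = urllib.request.Request(
            f"https://{tvm_address}:8780/v1/{tenant_id}/snapshots/{snapshot_id}",
            data=json.dumps(oneclick_restore_body()).encode("utf-8"),
            headers={
                "X-Auth-Project-Id": tenant_id,
                "X-Auth-Token": token,
                "Content-Type": "application/json",
                "Accept": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["restore"]
    ```
    
    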

    hashtag
    Selective Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information.

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to do the restore in
    snapshot_id (string): ID of the Snapshot to restore

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to authenticate against
    X-Auth-Token (string): Authentication token to use
    Content-Type (string): application/json
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient

    hashtag
    Body Format

    The Selective restore requires a body to provide all necessary information in JSON format.
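    A helper sketch that assembles the Selective restore body from per-instance options and an optional network mapping; the field names follow the Body Format example, while the helper name and the instance values shown are illustrative:

    ```python
    # Assemble the Selective restore body; `instances` is a list of
    # per-instance option dicts (include/id/name/flavor/vdisks/nics).
    def selective_restore_body(name, description, instances,
                               networks_mapping=None, restore_topology=False):
        openstack = {"instances": instances,
                     "restore_topology": restore_topology}
        if networks_mapping is not None:
            openstack["networks_mapping"] = networks_mapping
        return {
            "restore": {
                "name": name,
                "description": description,
                "options": {
                    "openstack": openstack,
                    "restore_type": "selective",
                    "type": "openstack",
                    "oneclickrestore": False,
                },
            }
        }

    # Example: restore one instance under a new name, exclude another.
    body = selective_restore_body(
        "API", "API Created",
        instances=[
            {"include": True, "id": "e33c1eea-c533-4945-864d-0da1fc002070",
             "name": "cirros-1-selective", "availability_zone": "nova"},
            {"include": False, "id": "67d6a100-fee6-4aa5-83a1-66b070d2eabe"},
        ],
    )
    ```
    
    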

    hashtag
    Inplace Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    hashtag
    Path Parameters

    tvm_address (string): IP or FQDN of the Trilio service
    tenant_id (string): ID of the Tenant/Project to do the restore in
    snapshot_id (string): ID of the Snapshot to restore

    hashtag
    Headers

    X-Auth-Project-Id (string): Project to authenticate against
    X-Auth-Token (string): Authentication token to use
    Content-Type (string): application/json
    Accept (string): application/json
    User-Agent (string): python-workloadmgrclient

    hashtag
    Body Format

    The In-place restore requires a body to provide all necessary information in JSON format.
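    A helper sketch for the In-place restore body; per-instance dicts carry the `restore_boot_disk` and `vdisks` flags as in the Body Format example, and the helper name plus the IDs shown are illustrative:

    ```python
    # Assemble the In-place restore body from per-instance options.
    def inplace_restore_body(name, description, instances):
        return {
            "restore": {
                "name": name,
                "description": description,
                "options": {
                    "restore_type": "inplace",
                    "type": "openstack",
                    "oneclickrestore": False,
                    "openstack": {"instances": instances},
                },
            }
        }

    # Example: restore the boot disk and one attached volume of an instance.
    body = inplace_restore_body(
        "API", "API description",
        [{
            "restore_boot_disk": True,
            "include": True,
            "id": "7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
            "vdisks": [{"restore_cinder_volume": True,
                        "id": "f6b3fef6-4b0e-487e-84b5-47a14da716ca"}],
        }],
    )
    ```
    
    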

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:04:45 GMT
    Content-Type: application/json
    Content-Length: 2639
    Connection: keep-alive
    X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f
    
    {
       "restore":{
          "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
          "created_at":"2020-11-05T10:17:40.000000",
          "updated_at":"2020-11-05T10:17:40.000000",
          "finished_at":"2020-11-05T10:27:20.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "snapshot_details":{
             "created_at":"2020-11-04T13:58:37.000000",
             "updated_at":"2020-11-05T10:27:22.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "snapshot_type":"full",
             "display_name":"API taken 2",
             "display_description":"API taken description 2",
             "size":44171264,
             "restore_size":2147483648,
             "uploaded_size":44171264,
             "progress_percent":100,
             "progress_msg":"Creating Instance: cirros-2",
             "warning_msg":null,
             "error_msg":null,
             "host":"TVM1",
             "finished_at":"2020-11-04T14:06:03.000000",
             "data_deleted":false,
             "pinned":false,
             "time_taken":428,
             "vault_storage_id":null,
             "status":"available"
          },
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                   "production":"1"
                }
             },
             {
                "id":"b083bb70-e384-4107-b951-8e9e7bbac380",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                   "production":"1"
                }
             }
          ],
          "networks":[
             
          ],
          "subnets":[
             
          ],
          "routers":[
             
          ],
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             }
          ],
          "name":"OneClick Restore",
          "description":"-",
          "host":"TVM2",
          "size":2147483648,
          "uploaded_size":2147483648,
          "progress_percent":100,
          "progress_msg":"Restore from snapshot is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":580,
          "restore_options":{
             "name":"OneClick Restore",
             "oneclickrestore":true,
             "restore_type":"oneclick",
             "openstack":{
                "instances":[
                   {
                      "name":"cirros-2",
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "availability_zone":"nova"
                   },
                   {
                      "name":"cirros-1",
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "availability_zone":"nova"
                   }
                ]
             },
             "type":"openstack",
             "description":"-"
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:21:07 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 15:13:30 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:30:56 GMT
    Content-Type: application/json
    Content-Length: 992
    Connection: keep-alive
    X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae
    
    {
       "restore":{
          "id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
          "created_at":"2020-11-05T14:30:56.048656",
          "updated_at":"2020-11-05T14:30:56.048656",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             }
          ],
          "name":"One Click Restore",
          "description":"One Click Restore",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{
                
             },
             "restore_type":"oneclick"
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 09:53:31 GMT
    Content-Type: application/json
    Content-Length: 1713
    Connection: keep-alive
    X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7
    
    {
       "restore":{
          "id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
          "created_at":"2020-11-09T09:53:31.037588",
          "updated_at":"2020-11-09T09:53:31.037588",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             }
          ],
          "name":"API",
          "description":"API Created",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                "instances":[
                   {
                      "vdisks":[
                         {
                            "new_volume_type":"iscsi",
                            "id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                            "availability_zone":"nova"
                         }
                      ],
                      "name":"cirros-1-selective",
                      "availability_zone":"nova",
                      "nics":[
                         
                      ],
                      "flavor":{
                         "vcpus":1,
                         "disk":1,
                         "swap":"",
                         "ram":512,
                         "ephemeral":0,
                         "id":"1"
                      },
                      "include":true,
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070"
                   },
                   {
                      "include":false,
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
                   }
                ],
                "restore_topology":false,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                            "name":"internal"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 12:53:03 GMT
    Content-Type: application/json
    Content-Length: 1341
    Connection: keep-alive
    X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743
    
    {
       "restore":{
          "id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
          "created_at":"2020-11-09T12:53:02.726757",
          "updated_at":"2020-11-09T12:53:02.726757",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             }
          ],
          "name":"API",
          "description":"API description",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
                         }
                      ]
                   },
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
                         }
                      ]
                   }
                ]
             }
          },
          "metadata":[
             
          ]
       }
    }
    {
       "restore":{
          "options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{},
             "restore_type":"oneclick"
          },
          "name":"One Click Restore",
          "description":"One Click Restore"
       }
    }
    {
       "restore":{
          "name":"<restore name>",
          "description":"<restore description>",
          "options":{
             "openstack":{
                "instances":[
                   {
                      "name":"<new name of instance>",
                      "include":<true/false>,
                      "id":"<original id of instance to be restored>",
                      "availability_zone":"<availability zone>",
                      "vdisks":[
                         {
                            "id":"<original ID of Volume>",
                            "new_volume_type":"<new volume type>",
                            "availability_zone":"<Volume availability zone>"
                         }
                      ],
                      "nics":[
                         {
                            "mac_address":"<mac address of the pre-created port>",
                            "ip_address":"<IP of the pre-created port>",
                            "id":"<ID of the pre-created port>",
                            "network":{
                               "subnet":{
                                  "id":"<ID of the subnet of the pre-created port>"
                               },
                               "id":"<ID of the network of the pre-created port>"
                            }
                         }
                      ],
                      "flavor":{
                         "vcpus":<Integer>,
                         "disk":<Integer>,
                         "swap":<Integer>,
                         "ram":<Integer>,
                         "ephemeral":<Integer>,
                         "id":<Integer>
                      }
                   }
                ],
                "restore_topology":<true/false>,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"<ID of the original Subnet>"
                            },
                            "id":"<ID of the original Network>"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"<ID of the target Subnet>"
                            },
                            "id":"<ID of the target Network>",
                            "name":"<name of the target network>"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          }
       }
    }
    {
       "restore":{
          "name":"<restore-name>",
          "description":"<restore-description>",
          "options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":<Boolean>,
                      "include":<Boolean>,
                      "id":"<ID of the instance the volumes are attached to>",
                      "vdisks":[
                         {
                            "restore_cinder_volume":<boolean>,
                            "id":"<ID of the Volume to restore>"
                         }
                      ]
                   }
                ]
             }
          }
       }
    }
    "finished_at":"2020-11-05T10:27:20.000000",
    "user_id":"ccddc7e7a015487fa02920f4d4979779",
    "project_id":"c76b3355a164498aa95ddbc960adc238",
    "status":"available",
    "restore_type":"restore",
    "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
    }
    ],
    "name":"OneClick Restore",
    "description":"-",
    "host":"TVM2",
    "size":2147483648,
    "uploaded_size":2147483648,
    "progress_percent":100,
    "progress_msg":"Restore from snapshot is complete",
    "warning_msg":null,
    "error_msg":null,
    "time_taken":580,
    "restore_options":{
    "name":"OneClick Restore",
    "oneclickrestore":true,
    "restore_type":"oneclick",
    "openstack":{
    "instances":[
    {
    "name":"cirros-2",
    "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
    "availability_zone":"nova"
    },
    {
    "name":"cirros-1",
    "id":"e33c1eea-c533-4945-864d-0da1fc002070",
    "availability_zone":"nova"
    }
    ]
    },
    "type":"openstack",
    "description":"-"
    },
    "metadata":[
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"91ab2495-1903-4d75-982b-08a4e480835b",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"data_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"e0e01eec-24e0-4abd-9b8c-19993a320e9f",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"object_store_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"eb909267-ba9b-41d1-8861-a9ec22d6fd84",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"restore_user_selected_value",
    "value":"Oneclick Restore"
    }
    ]
    },
    {
    "id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "created_at":"2020-11-04T14:37:31.000000",
    "updated_at":"2020-11-04T14:37:31.000000",
    "finished_at":"2020-11-04T14:45:27.000000",
    "user_id":"ccddc7e7a015487fa02920f4d4979779",
    "project_id":"c76b3355a164498aa95ddbc960adc238",
    "status":"error",
    "restore_type":"restore",
    "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
    }
    ],
    "name":"OneClick Restore",
    "description":"-",
    "host":"TVM2",
    "size":2147483648,
    "uploaded_size":2147483648,
    "progress_percent":100,
    "progress_msg":"",
    "warning_msg":null,
    "error_msg":"Failed restoring snapshot: Error creating instance e271bd6e-f53e-4ebc-875a-5787cc4dddf7",
    "time_taken":476,
    "restore_options":{
    "name":"OneClick Restore",
    "oneclickrestore":true,
    "restore_type":"oneclick",
    "openstack":{
    "instances":[
    {
    "name":"cirros-2",
    "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
    "availability_zone":"nova"
    },
    {
    "name":"cirros-1",
    "id":"e33c1eea-c533-4945-864d-0da1fc002070",
    "availability_zone":"nova"
    }
    ]
    },
    "type":"openstack",
    "description":"-"
    },
    "metadata":[
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"be6dc7e2-1be2-476b-9338-aed986be3b55",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"data_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"2e4330b7-6389-4e21-b31b-2503b5441c3e",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"object_store_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"561c806b-e38a-496c-a8de-dfe96cb3e956",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"restore_user_selected_value",
    "value":"Oneclick Restore"
    }
    ]
    }
    ]
    }
