T4O-4.2


Advanced Ceph configurations

Ceph is the most common open-source solution for providing block storage through OpenStack Cinder.

Ceph is a very flexible solution. Some of its configuration possibilities require additional steps in the Trilio solution.

Uninstall Trilio

The uninstallation of Trilio depends on the OpenStack distribution it is installed in. The high-level process, however, is the same for all distributions.

  1. Uninstall the Horizon Plugin or the Trilio Horizon container

  2. Uninstall the datamover-api container

  3. Uninstall the datamover

  4. Delete the Trilio Appliance Cluster

Uninstalling from Canonical OpenStack

Trilio does not yet provide the Juju Charms to deploy Trilio 4.1 in Canonical OpenStack. At the time of release, the Juju Charms had not yet been updated to Trilio 4.1. We will update this page once the Charms are available.

About Trilio for OpenStack

Trilio, by TrilioData, is a native OpenStack service that provides policy-based comprehensive backup and recovery for OpenStack workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data and Metadata of an environment) as full or incremental snapshots. These snapshots can be held in a variety of storage environments including NFS and AWS S3 compatible storage. With Trilio and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). With Trilio, IT departments are enabled to fully deploy OpenStack solutions and provide business assurance through enhanced data retention, protection and integrity.

With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload consisting of compute resources, network configurations and storage data as one unit. It also takes incremental backups that only capture the changes made since the last backup. Incremental snapshots save time and storage space as the backup only includes changes since the last backup. The benefits of using VAST for backup and restore can be summarized as follows:

  1. Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume and the incremental backups only include blocks of data changed since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.

  2. Faster and reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process will bring your application from zero to operational with just a click of a button.

  3. Easy migration of workloads between clouds. Trilio captures all the details of your application, and hence our migration includes your entire application stack without leaving anything to guesswork.

  4. Lower Total Cost of Ownership through policy and automation. Our tenant-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

Apply the Trilio license

After the Trilio VM has been configured and all components are installed, the license can be applied.

The license can be applied either through the admin tab in Horizon or through the CLI.

Apply license through Horizon

To apply the license through Horizon follow these steps:

Upgrade Trilio

Starting with Trilio for OpenStack 4.0, Trilio for OpenStack allows in-place upgrades.

The following versions can be upgraded to each other:

Old                  New
4.0 GA (4.0.92)      4.0 SP1 (4.0.115)
4.0 GA (4.0.92)      4.1 latest Hotfix
4.0 GA (4.0.92)      4.2 GA (4.2.64)
4.1 GA (4.1.94)      4.1 latest Hotfix
4.1 Hotfix           4.1 latest Hotfix
4.1 GA (4.1.94)      4.2 GA (4.2.64)
4.1 Hotfix           4.2 GA (4.2.64)

Trilio for OpenStack Architecture

Backup-as-a-Service

Trilio is an add-on service to OpenStack cloud infrastructure and provides backup and disaster recovery functions for tenant workloads. Trilio is very similar to other OpenStack services such as nova, cinder, and glance, and adheres to all tenets of OpenStack. It is a stateless service that scales with your cloud.


The upgrade process consists of upgrading the Trilio appliance and the OpenStack components and depends on the underlying operating system.

The upgrade of Trilio for Canonical OpenStack is managed through the charms.

Enable mount-bind for NFS

T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available to T4O 4.2.

Please follow this documentation to set up the mount bind for Canonical OpenStack.


Preparing the installation

It is recommended to consider the following elements prior to the installation of Trilio for OpenStack.

Tenant Quotas

Trilio uses Cinder snapshots for calculating full and incremental backups. For full backups, Trilio creates Cinder snapshots for all the volumes in the backup job. It then leaves these Cinder snapshots behind for calculating the incremental backup image during the next backup. During an incremental backup operation it creates new Cinder snapshots, calculates the changed blocks between the new snapshots and the old snapshots that were left behind during full/previous backups, then deletes the old snapshots but leaves the newly created snapshots behind. It is therefore important that each tenant using the Trilio backup functionality has sufficient Cinder snapshot quota to accommodate these additional snapshots. The guideline is to add 2 snapshots to the volume snapshot quota for every volume that is added to backups for that tenant. You may also increase the volume quota for the tenant by the same amount, because Trilio briefly creates a volume from a snapshot to read data from the snapshot for backup purposes. During a restore process, Trilio creates additional instances and Cinder volumes. To accommodate restore operations, a tenant should have sufficient quota for Nova instances and Cinder volumes. Otherwise, restore operations will result in failures.
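
A rough sketch of how an OpenStack administrator might raise these quotas for a tenant (the project name and values below are examples only, not prescriptions from this guide):

# assuming admin credentials are sourced; size the values to the number of protected volumes
openstack quota set --snapshots 40 --volumes 40 <project-name>
# verify the new limits
openstack quota show <project-name>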

AWS S3 eventual consistency

AWS S3 object consistency model includes:

  1. Read-after-write

  2. Read-after-update

  3. Read-after-delete

Each of them describes how an object reaches its consistent state after it is created, updated, or deleted. None of them provides strong consistency, and there is a lag time for an object to reach the consistent state. Though Trilio employs mechanisms to work around the limitations of the eventual consistency of AWS S3, when an object reaches its consistent state is not deterministic. There is no official statement from AWS on how long it takes for an object to reach a consistent state. However, read-after-write has a shorter time to reach consistency compared to other IO patterns, and our solution is designed to maximize the read-after-write IO pattern. The time in which an object reaches eventual consistency also depends on the AWS region. For example, the aws-standard region does not have as strong a consistency model as us-east or us-west. We suggest using these regions when creating S3 buckets for Trilio. Though the read-after-update IO pattern is hard to avoid completely, we employ ample delays in accessing objects to accommodate larger durations for objects to get into a consistent state. However, in rare occasions, backups may still fail and need to be restarted.

Trilio Cluster

Trilio can be deployed as a single node or a three-node cluster. It is highly recommended that Trilio is deployed as a three-node cluster for fault tolerance and load balancing. Starting with the 3.0 release, Trilio requires an additional IP for the cluster; this is required for both single-node and three-node deployments. The cluster IP, a.k.a. virtual IP, is used for managing the cluster and for registering the Trilio service endpoint in the Keystone service catalog.

Installing Trilio Components

Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:

  1. Install the Trilio dm-api service on the control plane

  2. Install the Trilio datamover service on the compute plane

  3. Install the Trilio Horizon plugin into the Horizon service

How these steps look in detail depends on the OpenStack distribution Trilio is installed in. Each supported OpenStack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from beginning to end.


Additions for multiple CEPH configurations

It is possible to configure Cinder to have multiple configurations and keyrings for CEPH.

In this case, the Trilio Datamover file needs to be extended with the CEPH information.

For Trilio to be able to work in such an environment it is required to put copies of each of these configurations and keyrings into a separate directory, which is then made known to the Trilio Datamover inside a [ceph] block in the tvault-contego.conf.

A tvault-contego.conf file with the extended [ceph] block would look like this.

[DEFAULT]

vault_storage_type = nfs
vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none


vault_data_directory_old = /var/triliovault
vault_data_directory = /var/trilio/triliovault-mounts
log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
debug = False
verbose = True
max_uploads_pending = 3
max_commit_pending = 3

dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//

[dmapi_database]
connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi


[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes

[ceph]
keyring_ext = .volumes.keyring
ceph_dir = /etc/ceph/directory1/,/etc/ceph/directory2/

[contego_sys_admin]
helper_command = sudo /usr/bin/privsep-helper


[conductor]
use_local = True

[oslo_messaging_rabbit]
ssl = false

[cinder]
http_retries = 10

Additions for multiple Ceph users

It is possible to configure Cinder and Ceph to use different Ceph users for different Ceph pools and Cinder volume types, or to have the Nova boot volumes and Cinder block volumes controlled by different users.

In the case of multiple Ceph users, it is required to adjust the keyring extension in the tvault-contego.conf inside the [ceph] block.

The following example will try all files with the extension .keyring that are located inside /etc/ceph to access the Ceph cluster for a Trilio related task.

[DEFAULT]

vault_storage_type = nfs
vault_storage_nfs_export = 192.168.1.34:/mnt/tvault/tvm5
vault_storage_nfs_options = nolock,soft,timeo=180,intr,lookupcache=none


vault_data_directory_old = /var/triliovault
vault_data_directory = /var/trilio/triliovault-mounts
log_file = /var/log/kolla/triliovault-datamover/tvault-contego.log
debug = False
verbose = True
max_uploads_pending = 3
max_commit_pending = 3

dmapi_transport_url = rabbit://openstack:[email protected]:5672,openstack:[email protected]:5672,openstack:[email protected]:5672//

[dmapi_database]
connection = mysql+pymysql://dmapi:x5nvYXnAn4rXmCHfWTK8h3wwShA4vxMq3gE2jH57@kolla-victoriaR-internal.triliodata.demo:3306/dmapi


[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = volumes

[ceph]
keyring_ext = .keyring
ceph_dir = /etc/ceph/

[contego_sys_admin]
helper_command = sudo /usr/bin/privsep-helper


[conductor]
use_local = True

[oslo_messaging_rabbit]
ssl = false

[cinder]
http_retries = 10
  • Login to Horizon using the admin user.

  • Click on the Admin Tab.

  • Navigate to Backups-Admin

  • Navigate to Trilio

  • Navigate to License

  • Click "Update License"

  • Click "Choose File"

  • Choose the license file on the client system

  • Click "Apply"

Apply license through CLI

    • <license_file> ➡️ path to the license file

    workloadmgr license-create <license_file>
    Main Components
    Trilio architecture overview

    Trilio has four main software components:

    1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.

    2. Trilio API is a python module that is installed on all OpenStack controller nodes where the nova-api service is running.

    3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

    4. The Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

    Service Endpoints

    Service endpoints overview

    Trilio is both a provider and a consumer in the OpenStack ecosystem. It uses other OpenStack services such as nova, cinder, glance, neutron, and keystone and provides its own service to OpenStack tenants. To accommodate all possible OpenStack deployments, Trilio can be configured to use either public or internal URLs of services. Likewise, Trilio provides its own public, internal and admin URLs.

    Network Topology

    Example network topology

    This figure represents a typical network topology. Trilio exposes its public URL endpoint on the public network, and the Trilio virtual appliances and data movers typically use either the internal network or a dedicated backup network for storing and retrieving backup images from the backup store.

    Trilio acts as a data protection service, providing Backup-as-a-Service.

    Requirements

    Trilio has four main software components:

    1. Trilio ships as a QCOW2 image. Users can instantiate one or more VMs from the QCOW2 image on standalone KVM boxes.

    2. Trilio API is a python module that is an extension to the nova-api service. This module is installed on all OpenStack controller nodes.

    3. Trilio Datamover is a python module that is installed on every OpenStack compute node.

    4. The Trilio horizon plugin is installed as an add-on to horizon servers. This module is installed on every server that runs the horizon service.

    System requirements Trilio Appliance

    The Trilio Appliance is not supported as an instance inside OpenStack.

    The Trilio Appliance gets delivered as a qcow2 image, which gets attached to a virtual machine.

    Trilio supports KVM-based hypervisors on x86 architectures, with the following properties:

    Software      Supported
    libvirt       2.0.0 and above
    QEMU          2.0.0 and above
    qemu-img      2.6.0 and above

    The recommended size of the VM for the Trilio Appliance is:

    When running Trilio in production, a 3-node cluster of the Trilio appliance is recommended for high availability and load balancing.

    Resource      Value
    vCPU          8
    RAM           24 GB

    The qcow2 image itself defines the 40GB disk size of the VM.

    In the rare case of the Trilio Appliance database or log files getting larger than the 40GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the Trilio Appliance.

    Software Requirements

    In addition to the Trilio Appliance, Trilio contains components which are installed directly into OpenStack itself.

    Each OpenStack distribution comes with a set of supported operating systems. Please check the support matrix to see which OpenStack distribution is supported with which operating system.

    Additionally, it is necessary to have the nfs-common packages installed on the compute nodes in case of using the NFS protocol for the backup target.
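
    A quick sketch of installing those packages (package names depend on the distribution; nfs-utils is the RHEL/CentOS equivalent of nfs-common):

    # Ubuntu / Debian based compute nodes
    apt-get install -y nfs-common

    # RHEL / CentOS based compute nodes
    yum install -y nfs-utils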

    OpenStack Barbican for encryption

    Since Trilio for OpenStack 4.2, T4O is capable of providing encrypted backups.

    The Trilio for OpenStack solution is leveraging the OpenStack Barbican service to provide encryption capabilities for its backups.

    To be precise, T4O is using secrets provided by Barbican to encrypt and decrypt backups. Barbican is therefore required to utilize this feature.


    Uninstalling from Kolla OpenStack

    Clean triliovault_datamover_api container

    The container needs to be cleaned on all nodes where the triliovault_datamover_api container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    The following steps need to be done to clean the triliovault_datamover_api container:

    Stop the triliovault_datamover_api container.

    Remove the triliovault_datamover_api container.

    Clean /etc/kolla/triliovault-datamover-api directory.

    Clean log directory of triliovault_datamover_api container.

    Clean triliovault_datamover container

    The container needs to be cleaned on all nodes where the triliovault_datamover container is running. The Kolla Openstack inventory file helps to identify the nodes with the service.

    The following steps need to be done to clean the triliovault_datamover container:

    Stop the triliovault_datamover container.

    Remove the triliovault_datamover container.

    Clean /etc/kolla/triliovault-datamover directory.

    Clean log directory of triliovault_datamover container.

    Clean haproxy of Trilio Datamover API

    The Trilio Datamover API entries need to be cleaned on all haproxy nodes. The Kolla Openstack inventory file helps to identify the nodes with the service.

    The following steps need to be done to clean the haproxy container:

    Clean Kolla Ansible deployment procedure

    Delete all Trilio related entries from:

    To cross-verify the uninstallation, undo all steps done during the installation.

    Trilio entries can be found in:

    • /usr/local/share/kolla-ansible/ansible/roles/ ➡️ There is a role triliovault

    • /etc/kolla/globals.yml➡️ Trilio entries had been appended at the end of the file

    • /etc/kolla/passwords.yml ➡️ Trilio entries had been appended at the end of the file

    • /usr/local/share/kolla-ansible/ansible/site.yml ➡️ Trilio entries had been appended at the end of the file

    • /root/multinode ➡️ Trilio entries had been appended at the end of this example inventory file

    Revert to original Horizon container

    Run the deploy command to replace the Trilio Horizon container with the original Kolla Ansible Horizon container.

    Clean Keystone resources

    Trilio created a dmapi service with a dmapi user.

    Clean Trilio database resources

    The Trilio Datamover API service has its own database in the OpenStack database.

    Login to the OpenStack database as the root user or a user with similar privileges.

    Delete dmapi database and user.

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrilioVault VM disk from the KVM Host storage

    Trilio network considerations

    Trilio integrates natively with OpenStack. This includes that Trilio communicates completely through APIs using the OpenStack endpoints. Trilio also generates its own OpenStack endpoints. In addition, the Trilio appliance and the compute nodes write to and read from the backup target. These points affect the network planning for the Trilio installation.

    Existing endpoints in Openstack

    Openstack knows 3 types of endpoints:

    Switch Backup Target on Kolla-ansible

    Unmount old mount point

    The first step is to remove the datamover container and to unmount the old mounts. This is necessary to make sure that the new datamover container with the new backup target does not get any interference from the old backup target.

    Change the Trilio GUI password

    Change Trilio Dashboard password

    To change the Trilio GUI password do:

    • Login into the Trilio Dashboard

    Reconfigure the Trilio Cluster

    The Trilio appliance can be reconfigured at any time to adjust the Trilio cluster to any changes in the Openstack environment or the general backup solution.

    To reconfigure the Trilio Cluster go to the "Configure" page. The configure page shows the current configuration of the Trilio cluster.

    The configuration page also gives access to the ansible playbooks of the last successful configuration.

    To start the reconfiguration of the Trilio Cluster click "Reconfigure" at the end of the table.

    Follow the guide afterwards.

    Once the Trilio configurator has started, it needs to run through successfully to continue to use Trilio.

    Reinitialize Trilio

    The Trilio Appliance can be reinitialized, which will delete all workload related values from the Trilio database.

    To reinitialize the Trilio Appliance do:

    • Login into the Trilio Dashboard

    • Click on "admin" in the upper right corner to open the submenu

    Using the workloadmgr CLI tool on the Trilio Appliance

    To use the workloadmgr CLI tool on the Trilio appliance it is only necessary to activate the virtual environment of the workloadmgr.

    An rc-file to authenticate against OpenStack is required.
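
    A minimal sketch of such a session (the rc-file name admin-openrc.sh is only an example; the virtual environment path matches the one used elsewhere in this guide):

    # activate the workloadmgr virtual environment on the Trilio appliance
    source /home/stack/myansible/bin/activate
    # authenticate against OpenStack using an rc-file
    source admin-openrc.sh
    # verify the CLI is working, e.g. by listing workloads
    workloadmgr workload-list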

    Click on "admin" in the upper right corner to open the submenu
  • Choose "Reset Password"

  • Set the new Trilio password

  • Choose "Reinitialize"
  • Verify that you want to reinitialize the Trilio

  • The cluster will not roll back to its last working state in case of any errors.

    When the reconfiguration is required to switch to an external database, it is necessary to reinitialize the Trilio appliance and configure it from scratch.

    Configuring Trilio

    Download Trilio logs

    Download Trilio logs

    It is possible to download the Trilio logs directly through the Trilio web gui.

    To download logs through the Trilio web gui:

    • Login into the Trilio web gui

    • Go to "Logs"

    • Choose the log to be downloaded

      • Each log for every Trilio Appliance can be downloaded separately

      • or a zip of all logfiles can be created and downloaded

    This will download the current log files. Already rotated logs need to be downloaded through SSH from the Trilio appliance directly. All logs, including rotated old logs, can be found at:

    /var/log/workloadmgr/


  • append Kolla Ansible yml files
    clone Trilio Ansible role
    source /home/stack/myansible/bin/activate
    Update globals.yml

    Edit the globals.yml file to contain the new backup target.

    Follow the installation documentation to learn about the globals.yml Trilio variables.

    Deploy Trilio components with new backup target

    Verify the successful change in backup target

    Reconfigure the Trilio Appliance

    Follow the documentation to reconfigure the Trilio appliance with the new backup target.

    root@controller:~# kolla-ansible -i multinode deploy
    docker stop triliovault_datamover_api
    docker rm triliovault_datamover_api
    rm -rf /etc/kolla/triliovault-datamover-api
    rm -rf /var/log/kolla/triliovault-datamover-api/
    docker stop triliovault_datamover
    docker rm triliovault_datamover
    rm -rf /etc/kolla/triliovault-datamover
    rm -rf /var/log/kolla/triliovault-datamover/
    rm /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    docker restart haproxy
    kolla-ansible -i multinode deploy
    openstack service delete dmapi
    openstack user delete dmapi
    mysql -u root -p
    DROP DATABASE dmapi;
    DROP USER dmapi;
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    #check current mount point
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #Stop triliovault_datamover
    [root@compute ~]# docker stop triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #Delete triliovault_datamover
    [root@compute ~]# docker rm triliovault_datamover
    triliovault_datamover
    [root@compute ~]#
    
    #unmount mount point
    [root@compute ~]# umount /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #check mount point is unmounted successfully 
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   26M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    [root@compute ~]#
    
    #Delete mounted dir from compute node
    [root@compute trilio]# rm -rf /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    ##Check that all Containers are up and running
    #Controller node
    root@controller:~# docker ps -a | grep trilio
    583b8d42ab42        trilio/ubuntu-binary-trilio-datamover-api:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               openstack-nova-api-triliodata-plugin
    3be25d3819ac        trilio/ubuntu-binary-trilio-horizon-plugin:4.1.36-ussuri   "dumb-init --single-…"   4 days ago          Up 4 days                               horizon
    
    #Compute node
    root@compute:~# docker ps -a | grep trilio
    bf52face23fb        trilio/ubuntu-binary-trilio-datamover:4.1.36-ussuri    "dumb-init --single-…"   3 days ago          Up 3 days                               trilio-datamover
    
    ## Verify the backup target has been changed successfully
    # In case of switch to NFS
    [root@compute ~]# df -h
    Filesystem                      Size  Used Avail Use% Mounted on
    devtmpfs                        7.8G     0  7.8G   0% /dev
    tmpfs                           7.8G     0  7.8G   0% /dev/shm
    tmpfs                           7.8G   26M  7.8G   1% /run
    tmpfs                           7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root             280G   12G  269G   5% /
    /dev/sda1                       976M  197M  713M  22% /boot
    192.168.1.34:/mnt/tvault/42436  2.5T 1005G  1.5T  41% /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM0Oi9tbnQvdHZhdWx0LzQyNDM2
    
    #In case of switch to S3
    [root@compute ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    tmpfs                7.8G   34M  7.8G   1% /run
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    /dev/mapper/cl-root  280G   12G  269G   5% /
    /dev/sda1            976M  197M  713M  22% /boot
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts
    
    ##Reverify in the triliovault_datamover containers
    [root@compute ~]# docker exec -it triliovault_datamover bash
    (triliovault-datamover)[nova@compute /]$ df -h
    Filesystem           Size  Used Avail Use% Mounted on
    overlay              280G   12G  269G   5% /
    tmpfs                7.8G     0  7.8G   0% /sys/fs/cgroup
    devtmpfs             7.8G     0  7.8G   0% /dev
    tmpfs                7.8G     0  7.8G   0% /dev/shm
    /dev/mapper/cl-root  280G   12G  269G   5% /etc/iscsi
    tmpfs                6.3G     0  6.3G   0% /var/triliovault/tmpfs
    Trilio             -     -  0.0K    - /var/trilio/triliovault-mounts
  • Public Endpoints

  • Internal Endpoints

  • Admin Endpoints

    Each of these endpoint types is designed for a specific purpose. Public endpoints are meant to be used by the OpenStack end-users to work with OpenStack. Internal endpoints are meant to be used by the OpenStack services to communicate with each other. Admin endpoints are meant to be used by OpenStack administrators.

    Out of those 3 endpoint types, only the admin endpoint sometimes contains APIs which are not available on any other endpoint type.

    To learn more about Openstack endpoints please visit the official Openstack documentation.

    Openstack endpoints required by Trilio

    Trilio communicates with all OpenStack services on a defined endpoint type. Which endpoint type Trilio uses to communicate with OpenStack is decided during the configuration of the Trilio appliance.

    There is one exception: The Trilio Appliance always requires access to the Keystone admin endpoint.

    The following network requirement can be identified this way:

    • Trilio appliance needs access to the Keystone admin endpoint on the admin endpoint network

    • Trilio appliance needs access to all endpoints of one type

    Recommendation: Provide access to all Openstack Endpoint types

    Trilio recommends providing the Trilio appliance with full access to all OpenStack endpoint types, following the OpenStack standards and best practices.

    Trilio generates its own endpoints as well. These endpoints point directly towards the Trilio Appliance. This means that using those endpoints will not send the API calls towards the OpenStack controller nodes first, but directly to the Trilio VM.

    Following the OpenStack standards and best practices, it is therefore recommended to put the Trilio endpoints on the same networks as the already existing OpenStack endpoints. This extends the purpose of each endpoint type to the Trilio service (a sketch of the resulting endpoint registration follows the list below):

    • The public endpoint to be used by Openstack users when using Trilio CLI or API

    • The internal endpoint to communicate with the Openstack services

    • The admin endpoint to use the required admin only APIs of Keystone
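
    As a hedged illustration only, the following sketch shows what such endpoint registrations could look like; the service name, port, and URL format are assumptions and the real values are created during the Trilio configuration:

    # register the Trilio workloads service and its endpoints (illustrative names and URLs)
    openstack service create --name TrilioVaultWLM --description "Trilio Workload Manager" workloads
    openstack endpoint create --region RegionOne workloads public   http://<Trilio-VIP>:8780/v1/$(project_id)s
    openstack endpoint create --region RegionOne workloads internal http://<Trilio-VIP>:8780/v1/$(project_id)s
    openstack endpoint create --region RegionOne workloads admin    http://<Trilio-VIP>:8780/v1/$(project_id)s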

    Backup target access required by Trilio

    The Trilio solution uses backup target storage to securely place the backup data. Trilio divides its backup data into two parts:

    1. Metadata

    2. Volume Disk Data

    The first type of data is generated by the Trilio appliance through communicating with the OpenStack endpoints. All metadata that is stored together with a backup is written by the Trilio Appliance to the backup target in JSON format.

    The second type of data is generated by the Trilio Datamover service running on the compute nodes. The Datamover service reads the volume data from the Cinder or Nova storage and transfers this data as a qcow2 image to the backup target. Each Datamover service is hereby responsible for the VMs running on its compute node.

    The network requirements are therefore:

    • The Trilio appliance needs access to the backup target

    • Every compute node needs access to the backup target

    Example of a typical Trilio network integration

    Most Trilio customers are following the Openstack standards and best practices to have the public, internal, and admin endpoints on separate networks. They also typically don't have any network yet, which can access the desired backup target.

    The starting network configuration typically looks like this:

    Typical Openstack Network configuration before Trilio gets installed

    Following the OpenStack standards and Trilio's recommendation, the Trilio Appliance is placed on all those 3 networks. Furthermore, access to the backup target is required by the Trilio Appliance and the compute nodes; here this is done by adding a 4th network.

    The resulting network configuration would look like this:

    Typical Openstack network configuration with Trilio installed

    It is of course possible to combine networks as necessary. As long as the required network access is available, Trilio will work.

    Other examples of Trilio network integrations

    Each Openstack installation is different and so is the network configuration. There are endless possibilities of how to configure the Openstack network and how to implement the Trilio appliance into this network. The following three examples have been seen in production:

    The first example is from a manufacturing company, which wanted to split the networks by function and decided to put the Trilio backup target on the internal network as the backup and recovery function was identified as an Openstack internal solution. This example looks complex but integrates Trilio just as recommended.

    The split them all network example

    The second example is from a financial institute that wanted to be sure that the OpenStack users have no direct uncontrolled network access to the OpenStack infrastructure. Following this example requires additional work, as the internal HA-Proxy needs to be configured to correctly translate the API calls towards the Trilio appliance.

    The no trust network example

    The third example is from a service company that was forced to treat Trilio as an external 3rd party solution, as we require a virtual machine running outside of Openstack. This kind of network configuration requires good planning on the Trilio endpoints and firewall rules.

    Trilio as third party component network example

    Uninstalling from Ansible OpenStack

    Uninstall Trilio Services

    The Trilio Ansible OpenStack playbook can be run to uninstall the Trilio services.

    Destroy Trilio Datamover API container

    To cleanly remove the Trilio Datamover API container run the following Ansible playbook.

    Clean openstack_user_config.yml

    Remove the tvault-dmapi_hosts and tvault_compute_hosts entries from /etc/openstack_deploy/openstack_user_config.yml

    Remove Trilio haproxy settings in user_variables.yml

    Remove Trilio Datamover API settings from /etc/openstack_deploy/user_variables.yml

    Remove Trilio Datamover API inventory file

    Remove Trilio Datamover API service endpoints

    Delete Trilio Datamover API database and user

    • Go inside galera container.

    • Login as root user in mysql database engine.

    • Drop dmapi database.

    • Drop dmapi user

    Remove dmapi rabbitmq user from rabbitmq container

    • Go inside rabbitmq container.

    • Delete dmapi user.

    • Delete dmapi vhost.

    Clean haproxy

    Remove /etc/haproxy/conf.d/datamover_service file.

    Remove HAproxy configuration entry from /etc/haproxy/haproxy.cfg file.

    Restart the HAproxy service.

    Remove certificates from Compute nodes

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

    Delete the TrilioVault VM disk from the KVM Host storage

    Enabling T4O 4.1 or older backups when using NFS backup target

    Trilio for OpenStack generates a base64 hash value for every NFS backup target connected to the T4O solution. This enables T4O to mount multiple NFS backup targets to the same T4O installation.

    The mountpoints are generated utilizing a hash value inside the mountpoint, providing a unique mount for every NFS backup target.

    This mountpoint is then used inside the incremental backups to point to the qcow2 backing files. The backing file path is required as a full path and cannot be set as a relative path. This is a limitation of the qcow2 format.

    T4O 4.2 has changed how the hash value gets calculated. T4O 4.1 and prior calculated the hash value out of the complete NFS path provided as shown in the example below.

    T4O 4.2 is now only considering the NFS directory part for the hash value as shown below.

    It is therefore necessary to make older backups taken by T4O 4.1 or prior available for T4O 4.2 to restore.

    This can be done by one of two methods:

    1. Rebase all backups and change the backing file path

    2. Make the old mount point available again and point it to the new one using mount bind

    Rebase all backups

    This method takes a significant amount of time, depending on the number of backups that need to be rebased. It is therefore recommended to determine the required time through a test workload.

    Trilio provides a script that takes care of the rebase procedure. This script can be downloaded from the following location.

    Copy and use the script from the Trilio appliance as the nova user.

    The nova user is required, as this user owns the backup files; using any other user will change the ownership and lead to unrestorable backups.

    The script requires the complete path to the workload directory on the backup target.

    This needs to be repeated until all workloads have been rebased.

    Use mount bind with the old mount point

    This method is a temporary solution, which is to be kept in place until all workloads have gone through a complete retention cycle.

    Once all Workloads only contain backups created by T4O 4.2 it is no longer required to keep the mount bind active.

    This method is generating a second mount point based on the old hash value calculation and then mounting the new mount point to the old one. By doing this mount bind both mount points will be available and point to the same backup target.

    To use this method the following information needs to be available:

    • old mount path

    • new mount path

    Examples of how to calculate the base64 hash value are shown below.

    Old mountpoint hash value:

    New mountpoint hash value:

    The mount bind needs to be done for all Trilio appliances and Datamover services.

    Trilio Appliance

    To enable the mount bind on the Trilio appliance follow these steps:

    1. Create the old mountpoint directory: mkdir -p <old_mount_path>

    2. Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>

    3. Set permissions for the mountpoint: chmod 777 <old_mount_path>

    It is recommended to use df -h to identify the current mountpoint as RHOSP, TripleO and Kolla Ansible OpenStack are using a different path than Ansible OpenStack or Canonical OpenStack.

    An example is given below.
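
    A minimal sketch, reusing the 10.10.2.20:/Trilio_Backup share from the hash examples in this guide and assuming the /var/trilio/triliovault-mounts base path; adjust the base directory to whatever df -h shows on your node:

    # identify the current (new) mountpoint
    df -h | grep triliovault-mounts

    # create the old (pre-4.2) mountpoint and bind the new one to it
    mkdir -p /var/trilio/triliovault-mounts/MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    mount --bind /var/trilio/triliovault-mounts/L1RyaWxpb19CYWNrdXA= /var/trilio/triliovault-mounts/MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    chmod 777 /var/trilio/triliovault-mounts/MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==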

    RHOSP/TripleO Datamover

    The following steps need to be done on the overcloud compute node. They don't need to be done inside any container.

    To enable the mount bind for the Datamover follow these steps:

    1. Create the old mountpoint directory: mkdir -p <old_mount_path>

    2. Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>

    3. Set permissions for the mountpoint: chmod 777 <old_mount_path>

    Kolla Ansible OpenStack Datamover

    The following steps need to be done on the compute node. They don't need to be done inside any container.

    To enable the mount bind for the Datamover follow these steps:

    1. Create the old mountpoint directory: mkdir -p <old_mount_path>

    2. Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>

    3. Set permissions for the mountpoint: chmod 777 <old_mount_path>

    Ansible OpenStack Datamover

    The following steps need to be done on the compute node. They don't need to be done inside any container.

    To enable the mount bind for the Datamover follow these steps:

    1. Create the old mountpoint directory: mkdir -p <old_mount_path>

    2. Run the mount --bind command: mount --bind <new_mount_path> <old_mount_path>

    3. Set permissions for the mountpoint: chmod 777 <old_mount_path>

    Canonical Openstack WLM & Datamover containers

    For Canonical OpenStack the creation of the mountpoint and the mount bind is done through Juju using the following commands.

    To create the mountpoint, if it doesn't already exist:

    To create the mount bind

    Multi-IP NFS Backup target mapping file configuration

    Introduction

    Filename and location: triliovault-cfg-scripts/common/triliovault_nfs_map_input.yml

    This file is used only when the user wants to configure a 'multiple IP/endpoints based NFS share' as a backup target for Trilio. In all other cases, like single-IP NFS or S3, this file is not used; follow the regular install documentation.

    If the user is using a multiple-IP/endpoints based NFS share as a backup target for TrilioVault, then TrilioVault mounts any one IP/endpoint on a given compute node for a given NFS share. Users can distribute NFS share IPs/endpoints across compute nodes.

    The 'triliovault_nfs_map_input.yml' file is a tool that users can use to distribute/load balance NFS share endpoints across compute nodes in a given cloud.

    Note: Two IPs/endpoints of the same NFS share on a single compute node is an invalid scenario and is not required, because in the backend the data is stored at the same place.

    Examples

    Using hostnames

    Here, the user has one NFS share exposed with three IP addresses: 192.168.1.34, 192.168.1.35, and 192.168.1.33. The share directory path is /var/share1.

    So, this NFS share supports the following full paths that clients can mount:

    There are 32 compute nodes in the OpenStack cloud. 30 node hostnames have the following naming pattern

    The remaining 2 node hostnames do not follow any format/pattern.

    Now the mapping file will look like this

    Using IPs

    Compute node IP range used here: 172.30.3.11-40, plus 172.30.4.40 and 172.30.4.50. A total of 32 compute nodes.

    Other complex examples are available on GitHub at triliovault-cfg-scripts/common/examples-multi-ip-nfs-map (trilioData/triliovault-cfg-scripts repository).

    Getting the correct compute hostnames/IPs

    RHOSP or TripleO

    Use the following command to get compute hostnames. Check the 'Name' column and use these exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    In the following command output, 'overcloudtrain1-novacompute-0' and 'overcloudtrain1-novacompute-1' are the correct hostnames.

    Run this command on the undercloud after sourcing 'stackrc'.

    Kolla-Ansible OpenStack

    Compute hostnames/IPs should match between the kolla-ansible inventory file and triliovault_nfs_map_input.yml.

    If IP addresses are used in the kolla-ansible inventory file, then the same IP addresses should be used in the 'triliovault_nfs_map_input.yml' file too. If the kolla-ansible deploy command looks like the following,

    kolla-ansible -i multinode deploy

    then the inventory file is 'multinode'. Generally, it is available at /root/multinode.

    OpenStack Ansible

    Compute hostnames/IPs should match between the OpenStack Ansible inventory file and triliovault_nfs_map_input.yml.

    If IP addresses have been used in the OpenStack Ansible inventory file, then you should use the same IP addresses in the 'triliovault_nfs_map_input.yml' file too.

    Generally, the inventory file is available at /etc/openstack_deploy/openstack_user_config.yml.

    The OpenStack Ansible deploy command looks like the following:

    openstack-ansible os-tvault-install.yml

    OpenStack Ansible automatically picks up the values that the user has set in the file openstack_user_config.yml.

    Set Trilio GUI login banner

    To configure the banner shown upon accessing the Trilio Appliance GUI, do the following:

    1. Login to the Trilio Appliance console

    2. Edit the banner.yaml at /etc/tvault-config/banner.yaml

    3. Restart the tvault-config service

    The content of the banner.yaml looks as follows and can be edited as required:

    Rebasing existing workloads

    The Trilio solution is using the qcow2 backing file to provide full synthetic backups.

    Especially when the NFS backup target is used, there are scenarios at which this backing file needs to be updated to a new mount path.

    To make this process easier and more streamlined, Trilio provides the following rebase tool.

    Trilio Appliance Dashboard

    The Trilio Appliance Dashboard gives an overview of the running services and their Status inside the Cluster. The dashboard can be accessed using the virtual IP.

    If service status panels on the dashboard page are not visible then access the virtual IP on port 3001 (https://<T4O-VIP>:3001/) and accept the SSL exception, and then refresh the dashboard page.


    Reset the Trilio GUI password

    In case the Trilio Dashboard password has been lost, it can be reset as long as SSH access to the appliance is available.

    To reset the password to its default do the following:

    The dashboard login will be reset to:

    Clean up Trilio database

    The Trilio database follows an older OpenStack database schema. This schema does not actually delete anything; instead, it sets the deleted flag for any data that is no longer valid/deleted.

    This leads to ever-growing databases, which can lead to performance degradation.

    To counter this a database clean-up and optimization tool has been made available.

    cd /opt/openstack-ansible/playbooks
    openstack-ansible os-tvault-install.yml --tags "tvault-all-uninstall"
    # echo -n 10.10.2.20:/Trilio_Backup | base64
    MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    # echo -n /Trilio_Backup | base64
    L1RyaWxpb19CYWNrdXA=

    General Troubleshooting Tips

    Troubleshooting inside a complex environment like OpenStack can be very time-consuming. The following tips will help to speed up the troubleshooting process and identify root causes.

    What is happening where

    Openstack and Trilio are divided into multiple services. Each service has a very specific purpose that is called during a backup or recovery procedure. Knowing which service is doing what helps to understand where the error is happening, allowing more focused troubleshooting.

    Trilio cluster

    The Trilio Cluster is the Controller of Trilio. It receives all Workload related requests from the users.

    Every task of a backup or restore process is triggered and managed from here. This includes the creation of the directory structure and initial metadata files on the Backup Target.

    During a backup process

    During a backup process the Trilio cluster is also responsible for gathering the metadata about the backed-up VMs and networks from the OpenStack environment. It sends API calls towards the OpenStack endpoints on the configured endpoint type to fetch this information. Once the metadata has been received, the Trilio Cluster writes it as JSON files on the Backup Target.

    The Trilio cluster also sends the Cinder snapshot command.

    During a restore process

    During a restore process the Trilio cluster reads the VM metadata from its database and uses the metadata to create the shell for the restore. It sends API calls to the OpenStack environment to create the necessary resources.

    dmapi

    The dmapi service is the connector between the Trilio cluster and the datamover running on the compute nodes.

    The purpose of the dmapi service is to identify which compute node is responsible for the current backup or restore task. To do so, the dmapi service connects to the nova API database, requesting the compute host of a provided VM.

    Once the compute host has been identified, the dmapi forwards the command from the Trilio Cluster to the datamover running on the identified compute host.

    datamover

    The datamover is the Trilio service running on the compute nodes.

    Each datamover is responsible for the VMs running on top of its compute node. A datamover cannot work with VMs running on a different compute node.

    The datamover is controlling the freeze and thaw of VMs as well as the actual movement of the data.

    Everything on the Backup Target is happening as user nova

    Trilio reads and writes on the Backup Target as nova:nova.

    The POSIX user-id and group-id of nova:nova need to be aligned between the Trilio Cluster and all compute nodes. Otherwise backups or restores may fail with permission or file-not-found issues.

    Alternative ways to achieve the goal are possible, as long as all required nodes can fully write and read as nova:nova on the Backup Target.

    It is recommended to verify the required permissions on the Backup Target in case of any errors during the data transfer phase or in case of any file permission errors.
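
    A quick sketch of how to verify this alignment (the uid/gid value 162 is only an example; what matters is that the values match on all nodes):

    # run on the Trilio appliance and on every compute node, then compare the output
    id nova
    uid=162(nova) gid=162(nova) groups=162(nova)

    # verify ownership of the backup target mount
    ls -ld /var/trilio/triliovault-mounts/*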

    Input/Output Error with Cohesity NFS

    On Cohesity NFS, if an Input/Output error is observed, increase the timeo and retrans parameter values in your NFS options.
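
    A hedged example of what such a change could look like in the datamover configuration (the values are environment specific and should be tuned as needed):

    # tvault-contego.conf - NFS options with increased timeout and retry values
    vault_storage_nfs_options = nolock,soft,timeo=600,retrans=5,intr,lookupcache=none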

    Timeout receiving packet error in multipath iscsi environment

    Log in to all datamover containers and add uxsock_timeout with a value of 60000 (which equals 60 seconds) inside /etc/multipath.conf. Restart the datamover container afterwards.
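
    A minimal sketch of the resulting /etc/multipath.conf fragment (placing the option in the defaults section is an assumption here):

    # /etc/multipath.conf
    defaults {
        uxsock_timeout 60000
    }

    # restart the datamover container afterwards
    docker restart triliovault_datamover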

    Trilio Trustee Role

    Trilio uses RBAC to allow the usage of Trilio features to users. The role used for this is the Trilio trustee role.

    This trustee role is absolutely required and cannot be overwritten using the admin role.

    It is recommended to verify the assignment of the Trilio Trustee Role in case of any permission errors from Trilio during creation of Workloads, backups or restores.

    Openstack Quotas

    Trilio creates Cinder snapshots and temporary Cinder volumes. The OpenStack quotas need to allow that.

    Every disk that is getting backed up requires one temporary Cinder volume.

    Every Cinder volume that is getting backed up requires two Cinder snapshots. The second Cinder snapshot is temporary to calculate the incremental.

    Trilio Configurator

    Once Trilio is configured use virtual IP to access its dashboard. If service status panels on the dashboard page are not visible then access the virtual IP on port 3001 (https://<T4O-VIP>:3001/) and accept the SSL exception, and then refresh the dashboard page.

    header: 
    header_color: blue 
    body_text_color: "#DC143C" 
    body_text: 
    header_font_size: 25px 
    body_text_font_size: 22px
    [root@TVM1 ~]# source /home/stack/myansible/bin/activate
    (myansible) [root@TVM1 ~]# cd /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator
    (myansible) [root@TVM1 tvault_configurator]# python recreate_conf.py
    (myansible) [root@TVM1 tvault_configurator]# systemctl restart tvault-config
    Username: admin
    Password: password
    cd /opt/openstack-ansible/playbooks
    openstack-ansible lxc-containers-destroy.yml --limit "DMAPI CONTAINER_NAME"
    #tvault-dmapi
    tvault-dmapi_hosts:
      infra-1:
        ip: 172.26.0.3
      infra-2:
        ip: 172.26.0.4
        
    #tvault-datamover
    tvault_compute_hosts:
      infra-1:
        ip: 172.26.0.7
      infra-2:
        ip: 172.26.0.8
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    rm /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
     source cloudadmin.rc
     openstack endpoint delete "internal datamover service endpoint_id"
     openstack endpoint delete "public datamover service endpoint_id"
     openstack endpoint delete "admin datamover service endpoint_id"
    lxc-attach -n "GALERA CONTAINER NAME"
    mysql -u root -p "root password"
    DROP DATABASE dmapi;
    DROP USER dmapi;
    lxc-attach -n "RABBITMQ CONTAINER NAME"
    rabbitmqctl delete_user dmapi
    rabbitmqctl delete_vhost /dmapi
    rm  /etc/haproxy/conf.d/datamover_service
    frontend datamover_service-front-1
        bind ussuriubuntu.triliodata.demo:8784 ssl crt /etc/ssl/private/haproxy.pem ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        option httplog
        option forwardfor except 127.0.0.0/8
        reqadd X-Forwarded-Proto:\ https
        mode http
        default_backend datamover_service-back
    
    frontend datamover_service-front-2
        bind 172.26.1.2:8784
        option httplog
        option forwardfor except 127.0.0.0/8
        mode http
        default_backend datamover_service-back
    
    
    backend datamover_service-back
        mode http
        balance leastconn
        stick store-request src
        stick-table type ip size 256k expire 30m
        option forwardfor
        option httplog
        option httpchk GET / HTTP/1.0\r\nUser-agent:\ osa-haproxy-healthcheck
    
    
        server controller_dmapi_container-bf17d5b3 172.26.1.75:8784 check port 8784 inter 12000 rise 1 fall 1
    systemctl restart haproxy
    rm -rf /opt/config-certs/rabbitmq
    rm -rf /opt/config-certs/s3
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    192.168.1.33:/var/share1 
    192.168.1.34:/var/share1 
    192.168.1.35:/var/share1
    prod-compute-1.trilio.demo 
    prod-compute-2.trilio.demo 
    prod-compute-3.trilio.demo 
    . 
    . 
    . 
    prod-compute-30.trilio.demo
    compute_bare.trilio.demo 
    compute_virtual
    multi_ip_nfs_shares: 
     - "192.168.1.34:/var/share1": ['prod-compute-[1:10].trilio.demo', 'compute_bare.trilio.demo'] 
       "192.168.1.35:/var/share1": ['prod-compute-[11:20].trilio.demo', 'compute_virtual'] 
       "192.168.1.33:/var/share1": ['prod-compute-[21:30].trilio.demo'] 
    
    single_ip_nfs_shares: []
    multi_ip_nfs_shares: 
     - "192.168.1.34:/var/share1": ['172.30.3.[11:20]', '172.30.4.40'] 
       "192.168.1.35:/var/share1": ['172.30.3.[21:30]', '172.30.4.50'] 
       "192.168.1.33:/var/share1": ['172.30.3.[31:40]'] 
    
    single_ip_nfs_shares: []
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+

    It shows for each Trilio Appliance the Status of the following Trilio services:

    • wlm-workloads

    • wlm-scheduler

    • wlm-api

    • wlm-cron

    The wlm-cron service runs on only one Trilio appliance at any time. That it is shown as inactive on the other nodes is not an error.

    To give administrators an overview of the HA status, the dashboard also shows the service status for:

    • Pacemaker

    • RabbitMQ

    • MySQL Galera Cluster

    Disaster Recovery

Trilio Workloads are designed to allow a Disaster Recovery without the need to back up the Trilio database.

As long as the Trilio Workloads exist on the Backup Target Storage and a Trilio installation has access to them, it is possible to restore the Workloads.

    Disaster Recovery Process

1. Install and configure Trilio for the target cloud

2. Notify users of their Workloads being available

This procedure is designed to be applicable to all Openstack installations using Trilio. It is meant as a starting point to develop the exact Disaster Recovery process of a specific environment.

In case the workloads shall be restored instead of only notifying the users, it is necessary to have a user in each Project that has the privileges required to perform the restores.

    Mount-paths

Trilio incremental Snapshots use a backing file that points to the prior backup taken, which makes every Trilio incremental backup a synthetic full backup.

Trilio is using qcow2 backing files for this feature:

As can be seen in the example, the backing file is an absolute path. This path has to exist so that the backing files can be accessed.

Trilio uses a base64 encoding of the NFS share path for the NFS mount-paths, to allow the configuration of multiple NFS Volumes at the same time. The base64 value is calculated from the provided NFS path.

When the path of the backing file is not available on the Trilio appliance and the Compute nodes, restores of incremental backups will fail.

The tested and recommended method to make the backing files available is creating the required directory path and using mount --bind to make the path available for the backups.

Running the mount --bind command makes the necessary path available only until the next reboot. If access to the path is required beyond a reboot, the bind mount has to be added to /etc/fstab.
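As a sketch, assuming the 10.10.2.20:/upstream share from the example further below and a placeholder for the currently mounted path:

    # Determine the base64 value Trilio expects for the NFS share
    echo -n 10.10.2.20:/upstream | base64
    # MTAuMTAuMi4yMDovdXBzdHJlYW0=

    # Create the expected directory and bind-mount the existing mount point onto it
    mkdir -p /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=
    mount --bind <current-mount-path> /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=

    # To keep the path available after a reboot, add a bind entry to /etc/fstab:
    # <current-mount-path> /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0= none bind 0 0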

    E-Mail Notifications

    Definition

    Trilio can notify users via E-Mail upon the completion of backup and restore jobs.

    The E-Mail will be sent to the owner of the Workload.

    Requirements to activate E-Mail Notifications

    To use the E-mail notifications, two requirements need to be met.

    Both requirements need to be set or configured by the Openstack Administrator. Please contact your Openstack Administrator to verify the requirements.

    User E-Mail assigned

As the E-Mail will be sent to the owner of the Workload, the Openstack User who created the workload is required to have an E-Mail address associated.

    Trilio E-Mail Server configured

    Trilio needs to know which E-Mail server to use, to send the E-mail notifications. Backup Administrators can do this in the "Backup Admin" area.

    Activate/Deactivate the E-Mail Notifications

    E-Mail notifications are activated tenant wide. To activate the E-Mail notification feature for a tenant follow these steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Settings

    4. Check/Uncheck the box for "Enable Email Alerts"

    Example E-Mails

The following screenshots show example E-mails sent by Trilio.

    Workload Encryption with Barbican

    Learn about encrypting Trilio workloads with Barbican

    Introduction

    Integrating Trilio for OpenStack 4.2 with Barbican facilitates encryption support for the qcow2-data segment of Trilio backups. However, the JSON files housing the backed-up OpenStack metadata remain unencrypted.

    This capability necessitates the presence of the OpenStack Barbican service. Absence of the Barbican service will result in the omission of encryption options for Workloads within the Horizon interface.

    Trilio for OpenStack (T4O) 4.2 exclusively retrieves secrets from Barbican, without generating, modifying, or deleting any secrets within the Barbican platform.

    Barbican secrets are indispensable for executing backups or restorations in encryption-enabled Workloads. The onus lies on the OpenStack project user to supply these secrets and guarantee their accurate availability.

    Installing on Canonical OpenStack

    Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.

    Those JuJu Charms are publicly available as Open Source Charms.

    Trilio is providing the JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack from Yoga release onwards only. JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack up to wallaby release are developed and maintained by Canonical.

    Managing Trusts

Trilio is using the Openstack Keystone Trust system, which enables the Trilio service user to act in the name of another Openstack user.

    This system is used during all backup and restore features.

    Openstack Administrators should never have the need to directly work with the trusts created.

    The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

    Trusts can only be worked with via CLI

    Frequently Asked Questions

    Frequently Asked Questions about Trilio for OpenStack

    1. Can Trilio for OpenStack restore instance UUIDs?

    Answer: NO

    Trilio for OpenStack does not restore Instance UUIDs (also known as Instance IDs). The only scenario where we do not modify the Instance UUID is during an Inplace Restore, where we only recover the data without creating new instances.

    When Trilio for OpenStack restores virtual machines (VMs), it effectively creates new instances. This means that new Virtual Machine Instance UUIDs are generated for the restored VMs. We achieve this by orchestrating a call to Nova, which creates new VMs with new UUIDs.

    Migrating encrypted Workloads

    Same cloud - different owner

Migration within the same cloud to a different owner:

• Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project A — User B

• Cloud A — Domain A — Project A — User A => Cloud A — Domain A — Project B — User B

• Cloud A — Domain A — Project A — User A => Cloud A — Domain B — Project B — User B

    Steps used:

    Switch NFS Backing file

    Trilio is using a base64 hash for the mount point of NFS Backup targets. This hash makes sure, that multiple NFS Shares can be used with the same Trilio installation.

    This base64 hash is part of the Trilio incremental backups as an absolute path of the backing files. This requires the usage of mount bind during a DR scenario or quick migration scenario.

In the case that there is time for a thorough migration, there is another option to change the backing file and make the Trilio backups available on a different NFS Share: updating the backing file to the new NFS Share mount point.

    Backing file change script

    Trilio is providing a shell script for the purpose of changing the backing file. This script is used after the Trilio appliance has been reconfigured to use the new NFS share.

    ./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
    # echo -n 10.10.2.20:/Trilio_Backup | base64
    MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==
    # echo -n /Trilio_Backup | base64
    L1RyaWxpb19CYWNrdXA=
    mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/lib/nova/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/lib/nova/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/trilio/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/trilio/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=
    chmod 777 /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/
    juju exec [-m <model>] --application trilio-data-mover "sudo -u nova mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
juju exec [-m <model>] --application trilio-wlm "sudo -u nova mkdir /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    juju exec [-m <model>] --application trilio-data-mover "sudo mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    juju exec [-m <model>] --application trilio-wlm "sudo mount --bind /var/triliovault-mounts/L21udC90dmF1bHQvdHZtNA==/ /var/triliovault-mounts/MTkyLjE2OC4xLjM1Oi9tbnQvdHZhdWx0L3R2bTQ=/"
    Screenshot of a notification E-mail for a successful Snapshot
    Screenshot of a notification E-Mail for a failed Snapshot
    Screenshot of a notification E-Mail for a successful Restore
    Configure
    Verify required mount-paths and create if necessary
    Reassign Workloads
    List all trusts

    Show a trust

    • <trust_id> ➡️ ID of the trust to show

    Create a trust

    • <role_name> ➡️Name of the role that trust is created for

    • --is_cloud_trust {True,False} ➡️ Set to True if creating the cloud admin trust. While creating the cloud trust, use the same user and tenant that were used to configure Trilio and keep the role admin.

    Delete a trust

    • <trust_id> ➡️ ID of the trust to be deleted
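As a usage sketch (the role name admin is illustrative), the cloud admin trust could be created and verified with the trust commands listed further below:

    # Create the cloud admin trust with the admin role
    # (run as the user/tenant that was used to configure Trilio)
    workloadmgr trust-create --is_cloud_trust True admin

    # List the trusts and show the details of one of them
    workloadmgr trust-list
    workloadmgr trust-show <trust_id>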

    1. Create a secret for Project A in Domain A via User A.

    2. Create an encrypted workload in Project A in Domain A via User A. Take a snapshot.

    3. Reassign the workload to the new owner.

    4. Load the rc file of User A and provide read-only rights through ACL to the new owner

    openstack acl user add --user <userB_id> <secret_href> --insecure

    Different cloud

    Migration between clouds: Cloud A — Domain A — Project A — User A => Cloud B — Domain B — Project B — User B

    Steps used:

    1. Create a secret for Project A in Domain A via User A.

    2. Create an encrypted workload in Project A in Domain A via User A. Trigger snapshot.

    3. Reassign workload to Cloud B - Domain B — Project B — User B

    4. Load RC file of User B.

    5. Create a secret for Project B in Domain B via User B with the same payload used in Cloud A.

    6. Create token via “openstack token issue --insecure”

    7. Add the migrated workload's metadata to the new secret (provide the issued token as X-Auth-Token and the workload id in the metadata as shown below)

    qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 21M
    cluster_size: 65536
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    # echo -n 10.10.2.20:/upstream | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW0=
    #mount --bind <mount-path1> <mount-path2>
    #vi /etc/fstab
    <mount-path1> <mount-path2>	none bind	0 0
    workloadmgr trust-list
    workloadmgr trust-show <trust_id>
    workloadmgr trust-create [--is_cloud_trust {True,False}] <role_name>
    workloadmgr trust-delete <trust_id>
    curl -i -X PUT \
       -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
       -H "Content-Type:application/json" \
       -d \
    '{
      "metadata": {
          "workload_id": "c13243a3-74c8-4f23-b3ac-771460d76130",
          "workload_name": "workload-c13243a3-74c8-4f23-b3ac-771460d76130"
        }
    }' \
     'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'
     
     
    curl -i -X GET \
       -H "X-Auth-Token:gAAAAABh0ttjiKRPpVNPBjRjZywzsgVton2HbMHUFrbTXDhVL1w2zCHF61erouo4ZUjGyHVoIQMG-NyGLdR7nexmgOmG7ed66LJ3IMVul1LC6CPzqmIaEIM48H0kc-BGvhV0pvX8VMZiozgFdiFnqYHPDvnLRdh7cK6_X5dw4FHx_XPmkhx7PsQ" \
     'https://kolla-victoria-ubuntu20-1.triliodata.demo:9311/v1/secrets/f3b2fce0-3c7b-4728-b178-7eb8b8ebc966/metadata'

    In order to employ encrypted Workloads, the Trilio trustee role must be capable of interacting with the Barbican service to access and retrieve secrets from Barbican. By default, only the 'admin' and 'creator' roles are endowed with these permissions.

    Encryption availability for a Workload is confined to the Workload creation stage. Post-creation, the encryption status of a Workload is irreversible; it cannot be altered or toggled.

    Note : When leveraging OpenStack Barbican for protecting encrypted volumes and offering encrypted backups, it's essential that the Trustee Role is assigned as 'Creator' or a role that possesses equivalent permissions to the Creator role.

    This is crucial because only the Creator role has the authority to create, read, and delete secrets within Barbican. The generation of encryption-enabled workloads would be unsuccessful if the Trustee Role does not possess the permissions associated with the 'Creator' role.

The following secret configurations are supported for AES-256:

    Mode(s)       | Content Types            | Payload Content Encoding | Secret Type    | Payload/Secret File
    cbc, xts      | text/plain               | None                     | passphrase     | plaintext
    ctr, cbc, xts | application/octet-stream | base64                   | symmetric keys | encoded with base64
    cbc, ctr, xts | text/plain               | None                     | opaque         | plaintext

    By default Barbican will generate secrets of the following type:

    • Algorithm: AES-256 (All supported types utilize this algorithm)

    • Mode: cbc

    • content type: application/octet-stream

    • payload-content-encoding: base64

    • secret type: opaque

    • payload: plaintext

    Prerequisite

For encrypted workloads Barbican should be enabled on Openstack. While configuring TVO-4.2, the trustee role should be kept as creator.

Additionally, every user who will be interacting with TrilioVault for any operation should also have the creator role assigned (see the sketch below).
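A minimal sketch of assigning the creator role via the OpenStack CLI (user and project values are placeholders):

    openstack role add --user <user_id> --project <project_id> creator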

    1. Secret creation

A secret can be created via the OpenStack CLI by following the steps below

    1. Source the rc file of the user who is going to create the encrypted workload.

    2. Create any supported type of secret and fetch the secret UUID using OpenStack secret cli.

      1. Refer to this guide for more information about managing Barbican secrets

      2. Below is the example of symmetric secret type creation & fetch the secret UUID.

        1. Generate a new 256-bit key using order create

    ()[root@overcloudtrain1-controller-0 /]# openstack secret order create --name secret2 --algorithm aes --mode ctr --bit-length 256 --payload-content-type=application/octet-stream key
    +----------------+--------------------------------------------------------------------------------------------+
    | Field          | Value                                                                                      |
    +----------------+--------------------------------------------------------------------------------------------+
    | Order href     | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d  |
    | Type           | Key                                                                                        |
    | Container href | N/A                                                                                        |
    | Secret href    | None                                                                                       |
    | Created        | None                                                                                       |
    | Status         | None                                                                                       |
    | Error code     | None                                                                                       |
    | Error message  | None                                                                                       |
    +----------------+--------------------------------------------------------------------------------------------+

    b. View the details of the order to identify the location of the generated key, shown here as the Secret href value:

    ()[root@overcloudtrain1-controller-0 /]# openstack secret order get https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d
    +----------------+---------------------------------------------------------------------------------------------+
    | Field          | Value                                                                                       |
    +----------------+---------------------------------------------------------------------------------------------+
    | Order href     | https://overcloudtrain1.trilio.local:13311/v1/orders/641bac2c-b5b2-4a7a-9172-8fe0cf55425d   |
    | Type           | Key                                                                                         |
    | Container href | N/A                                                                                         |
    | Secret href    | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c  |
    | Created        | 2023-07-12T11:38:17+00:00                                                                   |
    | Status         | ACTIVE                                                                                      |
    | Error code     | None                                                                                        |
    | Error message  | None                                                                                        |
    +----------------+---------------------------------------------------------------------------------------------+

    c. Fetch the secret UUID via below command (use Secret href)

    ()[root@overcloudtrain1-controller-0 /]# openstack secret get https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c
    +---------------+---------------------------------------------------------------------------------------------+
    | Field         | Value                                                                                       |
    +---------------+---------------------------------------------------------------------------------------------+
    | Secret href   | https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c  |
    | Name          | secret2                                                                                     |
    | Created       | 2023-07-12T11:38:17+00:00                                                                   |
    | Status        | ACTIVE                                                                                      |
    | Content types | {'default': 'application/octet-stream'}                                                     |
    | Algorithm     | aes                                                                                         |
    | Bit length    | 256                                                                                         |
    | Secret type   | symmetric                                                                                   |
    | Mode          | ctr                                                                                         |
    | Expiration    | None                                                                                        |
    +---------------+---------------------------------------------------------------------------------------------+
    ()[root@overcloudtrain1-controller-0 /]#

    Note down the last value from Secret href URL which is UUID (8-4-4-4-12 format).

    For example secret UUID for the Secret href URL https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c will be f2c96fe2-6ae7-4985-b98c-e571ba05403c

    This UUID will be used further for creating the encrypted workload.
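As a small helper sketch, the UUID can also be extracted from the Secret href in a shell (using the href from the example above):

    SECRET_HREF=https://overcloudtrain1.trilio.local:13311/v1/secrets/f2c96fe2-6ae7-4985-b98c-e571ba05403c
    SECRET_UUID=${SECRET_HREF##*/}   # keep only the last path segment
    echo $SECRET_UUID                # f2c96fe2-6ae7-4985-b98c-e571ba05403c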

    2. Workload creation

    1. Login to the OpenStack Horizon dashboard with the user who created the secret and go to Project -> Backup -> Workloads

    2. Click on Create Workload.

    3. Select the checkbox Enable Encryption.

    4. Provide the UUID noted in the steps above in the Secret UUID text box.

    Create Workload
    5. Follow the usual procedure for the further tabs (Workload member, Schedule, Policy & Options) and click on Create.

    6. The Workload will be created and the value of the Encryption field will be True.

    Workloads

    Encrypted workload migration

    For migration of encrypted workload please follow: Migrating Encrypted Workloads

    Upgrade from older release to 4.2

    For upgrade from older release to 4.2 please follow: Upgrade Trilio

    For Trilio configuration please follow: Configuring Trilio

    Canonical OpenStack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.

    The documentation of the charms can be found here:

    Juju charms for OpenStack Yoga release onwards

    Charm names

    Channel

    Supported releases

    latest/edge

    Jammy (Ubuntu 22.04)

    latest/edge

    Jammy (Ubuntu 22.04)

    latest/edge

    Jammy (Ubuntu 22.04)

    Juju charms for other supported OpenStack releases upto wallaby

    Charm names

    Channel

    Supported releases

    4.2/stable

    Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)

    4.2/stable

    Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)

    4.2/stable

    Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)

    Prerequisite

Have a Canonical OpenStack base setup deployed for a required release like Jammy Zed/Yoga, Focal Yoga/Wallaby/Victoria/Ussuri, or Bionic Ussuri/Queens.

    Steps to install the Trilio charms

    1. Export the OpenStack base bundle

    2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.

    Some sample Trilio overlay bundles can be found here.

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as a lxd container(s).

    3. If file search functionality is required, provision any additional node(s) that will be required for deploying the Trilio Workload manager (trilio-wlm) as a VM instead of lxd container(s).

    4. Commission the additional node from MAAS UI.

    5. Do a dry run to check if the Trilio bundle is working

    6. Do the deployment

    7. Wait till all the Trilio units are deployed successfully. Check the status via juju status command.

    8. Once the deployment is complete, perform the below operations:

    1. Create the cloud admin trust

    2. Add the license

    Note: Reach out to the Trilio support team for the license file.

    For multipath enabled environments, perform the following actions

    1. Log into each nova compute node

    2. Add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf (see the sketch below)

    3. Restart the tvault-contego service
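A minimal sketch of the corresponding /etc/multipath.conf change, assuming the option is added to the defaults section of an existing configuration:

    defaults {
        # raise the multipathd socket timeout to 60 seconds (60000 ms) as required by Trilio
        uxsock_timeout 60000
    }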

    Sample Trilio overlay bundles

For bionic-queens, the openstack-origin parameter value for the trilio-dm-api charm must be cloud:bionic-train

    For the AWS S3 storage backend, we need to use `http://s3.amazonaws.com` as S3 end-point URL.

    A few Sample overlay bundles for different OpenStack versions can be found HERE.

    By following this approach, we maintain the principles of OpenStack and auditing. We do not update or modify existing database entries when objects are deleted and subsequently recovered. Instead, all deletions are marked as such, and new instances, including the recovered ones, are created as new objects in the Nova tables. This ensures compliance and preserves the integrity of the OpenStack environment.

    2. Can Trilio for OpenStack restore MAC addresses?

    Answer: YES

Trilio can restore a VM's MAC address; however, there is a caveat when restoring a virtual machine (VM) to a different IP address: a new MAC address will be assigned to the VM.

    In the case of a One-Click Restore, the original MAC addresses and IP addresses will be recovered, but the VM will be created with a new UUID, as mentioned in question #1.

    When performing a Selective Restore, you have the option to recover the original MAC address. To do so, you need to select the original IP address from the available dropdown menu during the recovery process.

    By choosing the original IP address, Trilio for OpenStack will ensure that the VM is restored with its original MAC address, providing more flexibility and customization in the restoration process.

    Example of Selective Restore with original MAC (and IP address):

    1. In this example, we have taken a Trilio backup of a VM called prod-1.

    2. The VM is deleted and we perform a Selective Restore of the VM called prod-1, selecting the IP address it was originally assigned from the drop-down menu:

    3. Trilio then restores the VM with the original MAC address:

    4. If you leave the option as "Choose next available IP address", a new MAC will be assigned to the VM instead, as Neutron maps all MAC addresses to IP addresses on the Subnet - so logically a new IP will result in a new MAC address.

    Downloading the shell script

    The Shell script is publicly available at:

    Pre-Requisites

    The following requirements need to be met before the change of the backing file can be attempted.

    • The Trilio Appliance has been reconfigured with the new NFS Share

      • Please check here for reconfiguring the Trilio Appliance

    • The Openstack environment has been reconfigured with the new NFS Share

      • Please check for Red Hat Openstack Platform

      • Please check for Canonical Openstack

      • Please check for Kolla Ansible Openstack

      • Please check for Ansible Openstack

    • The workloads are available on the new NFS Share

    • The workloads are owned by nova:nova user

    Usage

The shell script changes one workload at a time.

The shell script has to be run as the nova user, otherwise the ownership will get changed and the backup can no longer be used by Trilio.

    Run the following command:

    with

    • /var/triliovault-mounts/<base64>/ being the new NFS mount path

    • workload_<workload_id> being the workload to rebase
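A hypothetical invocation, assuming the new NFS share is 10.10.2.20:/Trilio_Backup (base64 value MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA== as computed in the example below) and using a placeholder workload ID:

    sudo -u nova ./backing_file_update.sh /var/triliovault-mounts/MTAuMTAuMi4yMDovVHJpbGlvX0JhY2t1cA==/workload_<workload_id>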

    Logging of the procedure

    The shell script is generating the following log file at the following location:

The log file does not get overwritten when the script is run multiple times. Each run of the script appends to the existing log file.

    T4O 4.2 HF2 Release notes

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    Fixed issues and bugs

    Snapshot getting hung in upload phase for multipath environments

An issue has been fixed which led to Snapshots being stuck in the upload phase when using an FC multipath connected Cinder Storage.

    Backups fail due to an old Trilio Cinder Snapshot not being in available state

An issue has been fixed which led to Trilio-created Cinder Snapshots that are not in the state available blocking the creation of new Cinder Snapshots.

    Latin characters in restore names lead to failed restores

    The support for Latin characters has been enhanced to no longer impact restores.

    Uninstalling from RHOSP

    Clean Trilio Datamover API service

The following steps need to be run on all nodes that have the Trilio Datamover API service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamoverApi.

Once the role that runs the Trilio Datamover API service has been identified, the following commands will clean the nodes of the service.

    Run all commands as root or user with sudo permissions.

    Stop trilio_dmapi container.

    Remove trilio_dmapi container.

    Clean Trilio Datamover API service conf directory.

    Clean Trilio Datamover API service log directory.

    Clean Trilio Datamover Service

The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.

Once the role that runs the Trilio Datamover service has been identified, the following commands will clean the nodes of the service.

    Run all commands as root or user with sudo permissions.

    Stop trilio_datamover container.

    Remove trilio_datamover container.

    Unmount Trilio Backup Target on compute host.

    Clean Trilio Datamover service conf directory.

    Clean log directory of Trilio Datamover service.

    Clean Trilio haproxy resources

The following steps need to be run on all nodes that have the haproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.

Once the role that runs the haproxy service has been identified, the following commands will clean the nodes of all Trilio resources.

    Run all commands as root or user with sudo permissions.

    Edit the following file inside the haproxy container and remove all Trilio entries.

    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

    An example of these entries is given below.

    Restart the haproxy container once all edits have been done.

    Clean Trilio Keystone resources

    Trilio registers services and users in Keystone. Those need to be unregistered and deleted.

    Clean Trilio database resources

    Trilio creates a database for the dmapi service. This database needs to be cleaned.

    Login into the database cluster

    Run the following SQL statements to clean the database.

    Revert overcloud deploy command

    Remove the following entries from roles_data.yaml used in the overcloud deploy command.

    • OS::TripleO::Services::TrilioDatamoverApi

    • OS::TripleO::Services::TrilioDatamover

    In the case that the overcloud deploy command used prior to the deployment of Trilio is still available, it can directly be used.

    Follow these steps to clean the overcloud deploy command from all Trilio entries.

    1. Remove trilio_env.yaml entry

2. Remove the trilio endpoint map file. Replace it with the original map file if one exists.

    Revert back to original RHOSP Horizon container

    Run the cleaned overcloud deploy command.

    Destroy the Trilio VM Cluster

    List all VMs running on the KVM node

    Destroy the Trilio VMs

    Undefine the Trilio VMs

Delete the TrilioVault VM disk from the KVM Host storage

    Upgrade Trilio Appliance

    The Trilio appliance of T4O 4.2 is running a different Kernel than the Trilio appliance for T4O 4.1 or older.

    When upgrading from T4O 4.1 or older it is recommended to replace the Trilio appliance entirely and not do an in-place upgrade. This document provides both online/offline upgrades of Trilio Appliance from any of the older TVO-4.2-based releases to the latest TVO-4.2 release.

    Generic Prerequisites

    The prerequisites should already be fulfilled from upgrading the Trilio components on the Controller and Compute nodes.

    • Please ensure to complete the upgrade of all the Trilio components on the Openstack controller & compute nodes before starting the rolling upgrade of Trilio.

    • The mentioned Gemfury repository should be accessible from Trilio VM.

    • Either 4.2 GA OR any hotfix patch against 4.2 should be already deployed for performing upgrades mentioned in the current document.

    Deactivating the wlm-cron service

The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.

    Steps to upgrade Trilio having Internet access

    1. Download the hf_upgrade.sh script on all T4O nodes where the upgrade is to be done.

    You can check the usage of this script .

    Either perform the Step 1.1 Upgrade to the latest Hotfix/Maintenance release or 1.2 Upgrade to the intermediate Hotfix/Maintenance release depending on your requirement.

    1.1 Upgrade to the latest Hotfix/Maintenance release

    1.2 Upgrade to the intermediate Hotfix/Maintenance release

    a. Download the hf_upgrade.sh of desired Hotfix/Maintenance release

You have to replace the Gitbranch in the below command with the actual branch name of the Hotfix/Maintenance release you wish to upgrade to.

You will get that git branch name from the Release Notes page of that particular Hotfix/Maintenance release.

b. Edit the downloaded hf_upgrade.sh and set BRANCH_NAME to triliodata-hotfix-4-2 for any Hotfix/Maintenance release; OFFLINE_PKG_NAME must be set to the particular hotfix package name.

    You would find the offline package name for particular release at http://repos.trilio.io:8283/triliodata-hotfix-4-2/offlinePkgs/

For example, if you wish to upgrade to the 4.2.7 release, then BRANCH_NAME would be triliodata-hotfix-4-2 and OFFLINE_PKG_NAME must be set to 4.2.7-offlinePkgs.tar.gz in the script.

    2. Run the following command to download and Install the upgraded packages

    3. Run the following command to enable the wlm-cron service

    Steps to upgrade Trilio having No Internet access

    Here the packages need to be downloaded on a separate host which has internet access. The script ./hf_upgrade.sh can be used with the below-mentioned option to download the required package

    1. Download the hf_upgrade.sh script on a separate host which has internet access

    Use this script to download the upgraded packages.

    You can check the usage of this script .

    Either perform the Step 1.1 Upgrade to the latest Hotfix/Maintenance release or 1.2 Upgrade to the intermediate Hotfix/Maintenance release depending on your requirement.

    1.1 Upgrade to the latest Hotfix/Maintenance release

    1.2 Upgrade to the intermediate Hotfix/Maintenance release.

    a. Download the hf_upgrade.sh of desired Hotfix/Maintenance release

You have to replace the Gitbranch in the below command with the actual branch name of the Hotfix/Maintenance release you wish to upgrade to.

You will get that git branch name from the Release Notes page of that particular Hotfix/Maintenance release.

b. Edit the downloaded hf_upgrade.sh, set BRANCH_NAME to triliodata-hotfix-4-2 for any Hotfix/Maintenance release and OFFLINE_PKG_NAME to the particular hotfix package name, and then run the script.

    You would find the offline package name for particular release at http://repos.trilio.io:8283/triliodata-hotfix-4-2/offlinePkgs/

For example, if you wish to upgrade to the 4.2.7 release, then BRANCH_NAME would be triliodata-hotfix-4-2 and OFFLINE_PKG_NAME must be set to 4.2.7-offlinePkgs.tar.gz in the script.

    2. Copy the downloaded 4.2-offlinePkgs.tar.gz package and hf_upgrade.sh script to all the Trilio nodes

    3. Now access the appliance VMs and install the copied offline packages on each of the Trilio appliances by running the below-mentioned command:

    4. Once the installation is done, run the following command to enable the wlm-cron service

    Post upgrade steps

    1. Verify all wlm services on all Trilio nodes

2. Make sure all wlm services are up and running with Python version 3.8.x

    3. Check the mount point using “df -h” command

    4. Grafana dashboard shows the correct wlm service status on all T4O nodes.

    Additional check for wlm-cron on the primary node

    [RHOSP, TripleO and Kolla only] Verify nova UID/GID for nova user on the Appliance

Red Hat OpenStack, TripleO, and Kolla Ansible Openstack are using the nova UID/GID of 42436 inside their containers instead of 162:162, which is the standard in other Openstack environments.

Please verify that the nova UID/GID on the Trilio Appliance is still 42436. To do so, follow the document provided here:
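A quick way to check this on the appliance (sketch; the expected values are taken from the statement above):

    id nova
    # expected output contains: uid=42436(nova) gid=42436(nova)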

    T4O 4.2 HF1 Release Notes

    Release Versions

    Packages

    Name
    Type
    Version

    Containers and Gitbranch

    Name
    Tag

    Changelog

    New Qualifications

    This Hotfix extends the Support Matrix of T4O 4.2 as follows:

    • Canonical Openstack Wallaby based on Focal (20.04) Support

    • Kolla Ansible Openstack Wallaby on Ubuntu 20.04 and CentOS Stream

    • Openstack Ansible Wallaby on Ubuntu 20.04 and CentOS Stream

    Fixed Bugs and issues

    Rebase permission error

    An issue has been fixed which prevented the correct rebase of T4O incremental backups in the case that root didn't have the required permissions on the backup target.

    Change Certificates used by Trilio

    The following Trilio services are providing certificates for secured access to the Trilio solution.

    Service          | Port used | Description
    TVault-Config    | 443       | Webservice providing the TrilioVault Dashboard
    Nginx (wlm-api)  | 8780      | Provides the VIP for the wlm-api service
    Nginx (Grafana)  | 3001      | VIP for the dashboard of the Grafana service running on the Trilio VM

    Changing the certificate of TVault-Config and Nginx for Grafana Service

    The TVault-Config service and the Nginx Resource for the Grafana Dashboard are using the same certificate.

    The certificate used is a symlink to a host-specific certificate. Each Trilio VM has its own self-signed certificate by default which is getting recreated every time the TVault-Config service is restarted.

When the certificate for the TVault-Config and Nginx (Grafana) is to be changed to a customer-chosen certificate, it is required to deactivate the recreation of the certificates upon service restart.

    Trilio is planning to change this behavior to make it easier for customers to change the certificate in the future.

    1. Login into the Trilio VM via SSH

    2. Edit the following file: /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/tvault_config_bottle.py

    3. Look for create_ssl_certificates() in the main function

    The resulting main function will look like this:

    Afterward, the certificates can be replaced manually by overwriting the files.
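A sketch of replacing the files manually, assuming the symlink layout shown in the listing further below (server.crt/server.key/server.pem pointing to host-specific files) and that the .pem file is the concatenation of certificate and key:

    cp /path/to/custom.crt /etc/tvault/ssl/TVM1.crt
    cp /path/to/custom.key /etc/tvault/ssl/TVM1.key
    cat /path/to/custom.crt /path/to/custom.key > /etc/tvault/ssl/TVM1.pem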

    Once the certificates have been replaced by the desired ones restart the TVault-Config service and the Nginx pcs resource.

    Changing the certificate used by Nginx for wlm-api service

    The certificate provided by the Nginx for the wlm-api service is set during configuration when HTTPS endpoints are configured for the Trilio appliance. This certificate is provided to the end-user or Openstack every time an API call to the Trilio solution is sent.

    The certificate and its related private key can be changed through the OS API certificate tab.

This tab contains the section Upload Server certificate | Private key. Use this section to update the wlm-api certificate as required.

The certificate and its private key can also be changed through reconfiguration.

    Shutdown/Restart the Trilio cluster

    To gracefully shutdown/restart the Trilio cluster the following steps are recommended.

    Verify no snapshots or restores are running

    It is recommended to verify that no snapshots or restores are running on the Trilio Cluster.

    Schedulers

    Definition

    Every Workload has its own schedule. Those schedules can be activated, deactivated and modified.

    A schedule is defined by:

    • Status (Enabled/Disabled)

    Important log files

    On the Trilio Nodes

    The Trilio Cluster contains multiple log files.

    The main log is workloadmgr-workloads.log, which contains all logs about ongoing and past Trilio backup and restore tasks. It can be found at:

    /var/log/workloadmgr/workloadmgr-workloads.log

    The next important log is the workloadmgr-api.log, which contains all logs about API calls received by the Trilio Cluster. It can be found at:

    solutions/openstack/backing-file-update at master · trilioData/solutions (GitHub)
    juju export-bundle --filename openstack_base_file.yaml
    juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>
    juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    Juju 3.x
    juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    juju attach-resource trilio-wlm license=<Path to trilio license file>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-license
    Juju 3.x
    juju run --wait trilio-wlm/leader create-license
    Please ask your Trilio Customer Success Manager or Engineer.
    This page will be updated once the script is publicly available.
    ./backing_file_update.sh /var/triliovault-mounts/<base64>/workload_<workload_id>
    /tmp/backing_file_update.log
    /var/log/workloadmgr/workloadmgr-api.log

    The log for the third service is the workloadmgr-scheduler.log, which contains all logs about the internal job scheduling between Trilio nodes in the Trilio Cluster.

    /var/log/workloadmgr/workloadmgr-scheduler.log

    The last but not least service running on the Trilio Nodes is the wlm-cron service, which is controlling the scheduled automated backups.

    /var/log/workloadmgr/workloadmgr-workloads.log

In the case of using S3 as a backup target, there is also a log file that keeps track of the S3-Fuse plugin used to connect with the S3 storage.

    /var/log/workloadmgr/s3vaultfuse.py.log

Canonical Openstack keeps these logs inside the workloadmgr container.

    Trilio Datamover service logs on RHOSP

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/containers/trilio-datamover-api/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/containers/trilio-datamover/tvault-contego.log

In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/containers/trilio-datamover/tvault-object-store.log

    Trilio Datamover service logs on Kolla Openstack

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running under:

    /var/log/kolla/trilio-datamover-api/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is located on the nodes, typically compute, where the Trilio Datamover container is running under:

    /var/log/kolla/triliovault-datamover/tvault-contego.log

In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/kolla/trilio-datamover/tvault-object-store.log

    Trilio Datamover service logs on Ansible Openstack

    Datamover API log

    The log for the Trilio Datamover API service is located on the nodes, typically controller, where the Trilio Datamover API container is running. Log into the dmapi container using lxc-attach command (example below).

    lxc-attach -n controller_dmapi_container-a11984bf

    The log file is then located under:

    /var/log/dmapi/dmapi.log

    Datamover log

    The log for the Trilio Datamover service is typically located on the compute nodes and the logs can be found here:

    /var/log/tvault-contego/tvault-contego.log

In case S3 is being used, the log for the S3 Fuse plugin is located on the same nodes under:

    /var/log/tvault-object-store/tvault-object-store.log



    dmapi

    deb package

    4.2.64

    python3-dmapi

    deb package

    4.2.64

    tvault-contego

    deb package

    4.2.64.5

    python3-tvault-contego

    deb package

    4.2.64.5

    tvault-horizon-plugin

    deb package

    4.2.64

    python3-tvault-horizon-plugin

    deb package

    4.2.64

    s3-fuse-plugin

    deb package

    4.2.64

    python3-s3-fuse-plugin

    deb package

    4.2.64

    workloadmgr

    deb package

    4.2.64.5

    workloadmgrclient

    deb package

    4.2.64

    dmapi

    rpm package

    4.2.64-4.2

    python3-dmapi

    rpm package

    4.2.64-4.2

    puppet-triliovault

    rpm package

    4.2.64-4.2

    python3-contegoclient-el8

    rpm package

    4.2.64-4.2

    tvault-contego

    rpm package

    4.2.64.5-4.2

    python3-tvault-contego

    rpm package

    4.2.64.5-4.2

    tvault-horizon-plugin

    rpm package

    4.2.64-4.2

    python3-tvault-horizon plugin-el8

    rpm package

    4.2.64-4.2

    python-s3fuse-plugin-cent7

    rpm package

    4.2.64-4.2

    python3-s3fuse-plugin

    rpm package

    4.2.64-4.2

    python3-trilio-fusepy

    rpm package

    3.0.1-1

    trilio-fusepy

    rpm package

    3.0.1-1

    workloadmgrclient

    rpm package

    4.2.64-4.2

    python3-workloadmgrclient-el8

    rpm package

    4.2.64-4.2

    4.2.64-hotfix-2-wallaby

    TripleO Train containers

    4.2.64-hotfix-2-tripleo

    s3fuse

    python package

    4.2.64

    tvault-configurator

    python package

    4.2.64.5

    workloadmgr

    python package

    4.2.64.5

    workloadmgrclient

    python package

    4.2.64

    contegoclient

    python package

    Gitbranch

    hotfix-2-TVO/4.2

    RHOSP13 containers

    4.2.64-hotfix-2-rhosp13

    RHOSP16.1 containers

    4.2.64-hotfix-2-rhosp16.1

    RHOSP16.2 containers

    4.2.64-hotfix-2-rhosp16.2

    Kolla Ansible Victoria containers

    4.2.64-hotfix-2-victoria

    4.2.64

    Kolla Ansible Wallaby containers

    python3-dmapi

    deb package

    4.2.64

    tvault-contego

    deb package

    4.2.64.1

    python3-tvault-contego

    deb package

    4.2.64.1

    tvault-horizon-plugin

    deb package

    4.2.64

    python3-tvault-horizon-plugin

    deb package

    4.2.64

    s3-fuse-plugin

    deb package

    4.2.64

    python3-s3-fuse-plugin

    deb package

    4.2.64

    workloadmgr

    deb package

    4.2.64.1

    workloadmgrclient

    deb package

    4.2.64

    dmapi

    rpm package

    4.2.64-4.2

    python3-dmapi

    rpm package

    4.2.64-4.2

    puppet-triliovault

    rpm package

    4.2.64-4.2

    python3-contegoclient-el8

    rpm package

    4.2.64-4.2

    tvault-contego

    rpm package

    4.2.64.1-4.2

    python3-tvault-contego

    rpm package

    4.2.64.1-4.2

    tvault-horizon-plugin

    rpm package

    4.2.64-4.2

    python3-tvault-horizon plugin-el8

    rpm package

    4.2.64-4.2

    python-s3fuse-plugin-cent7

    rpm package

    4.2.64-4.2

    python3-s3fuse-plugin

    rpm package

    4.2.64-4.2

    python3-trilio-fusepy

    rpm package

    3.0.1-1

    trilio-fusepy

    rpm package

    3.0.1-1

    workloadmgrclient

    rpm package

    4.2.64-4.2

    python3-workloadmgrclient-el8

    rpm package

    4.2.64-4.2

    4.2.64-hotfix-1-wallaby

    TripleO Train containers

    4.2.64-hotfix-1-tripleo

    s3fuse

    python package

    4.2.64

    tvault-configurator

    python package

    4.2.64.1

    workloadmgr

    python package

    4.2.64.1

    workloadmgrclient

    python package

    4.2.64

    dmapi

    deb package

    Gitbranch

    hotfix-1-TVO/4.2

    RHOSP13 containers

    4.2.64-hotfix-1-rhosp13

    RHOSP16.1 containers

    4.2.64-hotfix-1-rhosp16.1

    RHOSP16.2 containers

    4.2.64-hotfix-1-rhosp16.2

    Kolla Ansible Victoria containers

    4.2.64-hotfix-1-victoria

    4.2.64

    Kolla Ansible Wallaby containers

    Please ensure the following points before starting the upgrade process:
• No snapshot or restore should be running.

    • Global job-scheduler should be disabled.

    • wlm-cron should be disabled and any lingering process should be killed.

    Enable Global Job Scheduler

    Update nova UID/GID on the appliance
    Comment out create_ssl_certificates()
  • Repeat for all nodes of the Trilio cluster


    Upload Server certificate | Private key block
Stopping or restarting the Trilio cluster will cancel all actively running backup or restore jobs. These jobs will be marked as errored after the system has come up again.

    This can be verified using the following two commands:

    Identify the master node for the VIP(s) and wlm-cron service

    The Trilio cluster is using the pacemaker service for setting the VIP(s) of the cluster and controlling the active node for the wlm-cron service. The identified node will be the last to shut down in case that the whole cluster gets shut down.

    This can be checked using the following command:

In the following example the master node is tvm1.

    Shutdown/Restart of a single node in the cluster

A single node in the cluster can be shut down or restarted without issues. All services will come up and the RabbitMQ and Galera services will rejoin the remaining cluster.

    When the master node gets shutdown or restarted the VIP(s) and the wlm-cron service will switch to one of the remaining cluster nodes.

    Stop the services on the node

    To speed up the shutdown/restart process it is recommended to stop the Trilio services, the RabbitMQ service, and the MariaDB service on the node.
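A sketch of the stop commands, assuming the wlm service names shown elsewhere in this guide and the standard RabbitMQ/MariaDB unit names (adjust to your appliance):

    systemctl stop wlm-workloads wlm-scheduler wlm-api
    systemctl stop rabbitmq-server
    systemctl stop mariadb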

    The wlm-cron service and the VIP(s) are not getting stopped when only the master node gets rebooted or shut down. The pacemaker will automatically move the wlm-cron service and the VIP(s) to one of the remaining nodes.

    Shutdown/Restart the node

    After the services have been stopped the node can be restarted or shut down using standard Linux commands.

    Restarting the complete cluster node by node

    Restarting the whole cluster node by node follows the same procedure as restarting a single node, with the difference that each restarted node needs to be fully started again before the next node can be restarted.

    Shutdown/Restart the complete cluster as a whole

    When the complete cluster needs to get stopped and restarted at the same time the following procedure needs to be completed.

    The procedure on a high level is:

    • Shutdown the two slave nodes

    • Shutdown the master node

    • Start the master node

    • Enable the Galera cluster

    • Start the two slave nodes

    Shutdown the two slave nodes

    Before shutting down the two slave nodes it is recommended to stop running Trilio services, the RabbitMQ server, and the MariaDB on the nodes.

    Afterward, the nodes can be shut down.

    Shutdown the master node

Before shutting down the master node it is recommended to stop the running Trilio services, the RabbitMQ server, the MariaDB, the wlm-cron and the VIP(s) resource in Pacemaker.

    Afterward, the node can be shut down.

    Start the master node

The first server that gets booted will be the master node. It is highly recommended to boot the old master node first again.

Not booting the old master node first can lead to data loss when the Galera Cluster is restarted.

Enable the Galera cluster

Login to the freshly started master node and run the following command. This will restart the Galera cluster with this node as master.

    Start the slave nodes

After the master node has been booted and the Galera cluster started, the remaining nodes can be started and will automatically rejoin the Trilio cluster.

    # For RHOSP13
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    docker stop trilio_dmapi
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_dmapi.service
    systemctl stop tripleo_trilio_dmapi.service
    podman stop trilio_dmapi
    # For RHOSP13
    docker rm trilio_dmapi
    docker rm trilio_datamover_api_init_log
    docker rm trilio_datamover_api_db_sync
    
    # For RHOSP16 onwards
    podman rm trilio_dmapi
    podman rm trilio_datamover_api_init_log
    podman rm trilio_datamover_api_db_sync
    
    ## If present, remove below container as well
    podman rm container-puppet-triliodmapi
    rm -rf /var/lib/config-data/puppet-generated/triliodmapi
    rm /var/lib/config-data/puppet-generated/triliodmapi.md5sum
    rm -rf /var/lib/config-data/triliodmapi*
    rm -rf /var/log/containers/trilio-datamover-api/
    # For RHOSP13
    docker stop trilio_datamover
    
    # For RHOSP16 onwards
    systemctl disable tripleo_trilio_datamover.service
    systemctl stop tripleo_trilio_datamover.service
    podman stop trilio_datamover
    # For RHOSP13
    docker rm trilio_datamover
    
    # For RHOSP16 onwards
    podman rm trilio_datamover
    
    ## If present, remove below container as well
    podman rm container-puppet-triliodmapi
    ## Following steps applicable for all supported RHOSP releases.
    
    # Check triliovault backup target mount point
    mount | grep trilio
    
    # Unmount it
    -- If it's NFS	(COPY UUID_DIR from your compute host using above command)
    umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
    
    -- If it's S3
    umount /var/lib/nova/triliovault-mounts
    
    # Verify that it's unmounted		
    mount | grep trilio
    	
    df -h  | grep trilio
    
    # Remove mount point directory after verifying that backup target unmounted successfully.
    # Otherwise actual data from backup target may get cleaned.	
    
    rm -rf /var/lib/nova/triliovault-mounts
    rm -rf /var/lib/config-data/puppet-generated/triliodm/
    rm /var/lib/config-data/puppet-generated/triliodm.md5sum
    rm -rf /var/lib/config-data/triliodm*
    rm -rf /var/log/containers/trilio-datamover/
    listen trilio_datamover_api
      bind 172.25.3.60:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.3.60:8784 transparent
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      option httpchk
      option httplog
      server overcloud-controller-0.internalapi.localdomain 172.25.3.59:8784 check fall 5 inter 2000 rise 2
    # For RHOSP13
    docker restart haproxy-bundle-docker-0
    
    # For RHOSP16 onwards
    podman restart haproxy-bundle-podman-0
    openstack service delete dmapi
    openstack user delete dmapi
    ## On RHOSP13, run following command on node where database service runs
    docker exec -ti -u root galera-bundle-docker-0 mysql -u root
    
    ## On RHOSP16
    podman exec -it galera-bundle-podman-0 mysql -u root
    ## Clean database
    DROP DATABASE dmapi;
    
    ## Clean dmapi user
    => List 'dmapi' user accounts
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    +-------+-------------+
    | user  | host        |
    +-------+-------------+
    | dmapi | 172.25.2.10 |
    | dmapi | 172.25.2.8  |
    +-------+-------------+
    2 rows in set (0.00 sec)
    
    => Delete those user accounts
    MariaDB [mysql]> DROP USER dmapi@'172.25.2.10';
    Query OK, 0 rows affected (0.82 sec)
    
    MariaDB [mysql]> DROP USER dmapi@'172.25.2.8';
    Query OK, 0 rows affected (0.05 sec)
    
    => Verify that dmapi user got cleaned
    MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
    Empty set (0.00 sec)
    virsh list
    virsh destroy <Trilio VM Name or ID>
    virsh undefine <Trilio VM name>
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    cd /opt/ 
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/TVO/4.2.8/TVOAppliance/hf_upgrade.sh
    chmod +x hf_upgrade.sh
    cd /opt/ 
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/<Gitbranch>/TVOAppliance/hf_upgrade.sh
    chmod +x hf_upgrade.sh
    ./hf_upgrade.sh --all  
    OR 
    ./hf_upgrade.sh -a
    
    pcs resource enable wlm-cron
    
    cd /<tvo_packages_download_path>/ 
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/TVO/4.2.8/TVOAppliance/hf_upgrade.sh
    
    # Download the upgraded packages
    ./hf_upgrade.sh --downloadonly
    
    cd /opt/
    wget https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/<Gitbranch>/TVOAppliance/hf_upgrade.sh
    chmod +x hf_upgrade.sh
    # Download the upgraded packages
    ./hf_upgrade.sh --downloadonly
    ```
    scp <tvo_packages_download_path>/* root@<TrilioVault_node_IP>:/<path_to_upgrade_package>/
    ```
    ```
    cd <path_to_upgrade_package>/
    ./hf_upgrade.sh --installonly
    
    ```
    ```
    pcs resource enable wlm-cron
    ```
    systemctl status tvault-config wlm-workloads wlm-api wlm-scheduler
    pcs status (on primary node)
    systemctl status wlm-cron (on primary node)
    systemctl status tvault-object-store (only if Trilio configured with S3 backend storage)
    ps -ef | grep workloadmgr-cron | grep -v grep
    # Above command should show only 2 processes running; sample below
    
    [root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep
    nova      8841     1  2 Jul28 ?        00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    nova      8898  8841  0 Jul28 ?        00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    [root@TVM1 ssl]# cd /etc/tvault/ssl/
    [root@TVM1 ssl]# ls -lisa server*
     577678 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.crt -> TVM1.crt
     577672 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.key -> TVM1.key
    1178820 0 lrwxrwxrwx 1 root root 8 Jan 21 14:36 server.pem -> TVM1.pem
    def main():
        # configure the networking
        #create_ssl_certificates()
    
        http_thread = Thread(target=main_http)
        http_thread.daemon = True  # thread dies with the program
        http_thread.start()
    
        bottle.debug(True)
        srv = SSLWSGIRefServer(host='::', port=443)
        bottle.run(server=srv, app=app, quiet=False, reloader=False)
    [root@TVM1 ~]# systemctl restart tvault-config
    [root@TVM1 ~]# pcs resource restart lb_nginx-clone
    lb_nginx-clone successfully restarted
    workloadmgr snapshot-list --all=True
    workloadmgr restore-list
    pcs status
    pcs status
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: tvm3 (version 1.1.23-1.el7_9.1-9acf116022) - partition with quorum
    Last updated: Thu Aug 26 12:10:32 2021
    Last change: Thu Aug 26 08:02:51 2021 by root via crm_resource on tvm1
    
    3 nodes configured
    8 resource instances configured
    
    Online: [ tvm1 tvm2 tvm3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started tvm1
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started tvm1
     wlm-cron       (systemd:wlm-cron):     Started tvm1
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ tvm1 ]
         Stopped: [ tvm2 tvm3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    reboot
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    shutdown
    systemctl stop wlm-api
    systemctl stop wlm-scheduler
    systemctl stop wlm-workloads
    systemctl stop mysqld
    rabbitmqctl stop
    pcs resource stop wlm-cron
    pcs resource stop lb_nginx-clone
    shutdown
    galera_new_cluster

    Start Day/Time

  • End Day

  • Hrs between 2 snapshots

  • Disable a schedule

    Using Horizon

    To disable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

    8. Uncheck "Enabled"

    9. Click "Update"

    Using CLI

• --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler shall be disabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
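A minimal CLI example, assuming the workload IDs shown are placeholders for real workload UUIDs:

```
workloadmgr disable-scheduler --workloadids <workloadid_1> --workloadids <workloadid_2>
```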

    Enable a schedule

    Using Horizon

To enable the scheduler of a single Workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Navigate to the tab "Schedule"

8. Check "Enabled"

    9. Click "Update"

    Using CLI

• --workloadids <workloadid> ➡️ Requires at least one workload ID. Specify the ID of the workload whose scheduler shall be enabled. Specify the option multiple times to include multiple workloads: --workloadids <workloadid> --workloadids <workloadid>
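A minimal CLI example, assuming the workload IDs shown are placeholders for real workload UUIDs:

```
workloadmgr enable-scheduler --workloadids <workloadid_1> --workloadids <workloadid_2>
```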

    Modify a schedule

    To modify a schedule the workload itself needs to be modified.

    Please follow this procedure to modify the workload.

    Verify the scheduler trust is working

    Trilio is using the Openstack Keystone Trust system which enables the Trilio service user to act in the name of another Openstack user.

    This system is used during all backup and restore features.

    Using Horizon

As a trust is bound to a specific user for each Workload, the Trilio Horizon plugin shows the status of the scheduler trust on the Workload list page.

Screenshot of a Workload with established scheduler trust

    Using CLI

    • <workload_id> ➡️ ID of the workload to validate
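A minimal usage example; the UUID shown is a placeholder for a real workload ID:

```
workloadmgr scheduler-trust-validate 18b809de-d7c8-41e2-867d-4a306407fb35
```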

    latest/edge

    Jammy (Ubuntu 22.04)

    trilio-charmers-trilio-wlm-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-dm-api-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-data-mover-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-horizon-plugin-focal

    latest/edge

    Focal (Ubuntu 20.04)

    4.2/stable

    Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04)

    trilio-charmers-trilio-wlm-jammy
    trilio-charmers-trilio-dm-api-jammy
    trilio-charmers-trilio-data-mover-jammy
    trilio-charmers-trilio-horizon-plugin-jammy
    trilio-wlm
    trilio-data-mover
    trilio-dm-api
    trilio-horizon-plugin

    Workload Import & Migration

    Each Trilio Workload has a dedicated owner. The ownership of a Workload is defined by:

    • Openstack User - The Openstack User-ID is assigned to a Workload

    • Openstack Project - The Openstack Project-ID is assigned to a Workload

    • Openstack Cloud - The Trilio Serviceuser-ID is assigned to a Workload

    Openstack Users can update the User ownership of a Workload by modifying the Workload.

This ownership ensures that only the owners of a Workload are able to work with it.

    Openstack Administrators can reassign Workloads or reimport Workloads from older Trilio installations.

    Import workloads

Workload import allows importing Workloads that exist on the Backup Target into the Trilio database.

    The Workload import is designed to import Workloads, which are owned by the Cloud.

    It will not import or list any Workloads that are owned by a different cloud.

    To get a list of importable Workloads use the following CLI command:

    • --project_id <project_id> ➡️ List workloads belongs to given project only.

    To import Workloads into the Trilio database use the following CLI command:

    • --workloadids <workloadid> ➡️ Specify workload ids to import only specified workloads. Repeat option for multiple workloads.
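A minimal sketch of the import flow; the project and workload IDs are placeholders:

```
# List importable workloads of a specific project
workloadmgr workload-get-importworkloads-list --project_id <project_id>

# Import two specific workloads into the Trilio database
workloadmgr workload-importworkloads --workloadids <workloadid_1> --workloadids <workloadid_2>
```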

    Orphaned Workloads

The definition of an orphaned Workload is from the perspective of a specific Trilio installation. Any Workload that is located on the Backup Target Storage, but is not known to this Trilio installation, is considered orphaned.

Further, a distinction is made between Workloads that were previously owned by Projects/Users in the same cloud and Workloads that were migrated from a different cloud.

    The following CLI command provides the list of orphaned workloads:

    • --migrate_cloud {True,False} ➡️ Set to True if you want to list workloads from other clouds as well. Default is False.

    • --generate_yaml {True,False} ➡️ Set to True if want to generate output file in yaml format, which would be further used as input for workload reassign API.

Running this command against a Backup Target with many Workloads can take some time, as Trilio reads the complete storage and verifies every Workload it finds against the Workloads known in the database.
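A minimal example that lists orphaned Workloads from all clouds and generates a YAML file that can later be used for the reassign API:

```
workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True --generate_yaml True
```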

    Reassigning Workloads

    Openstack administrators are able to reassign a Workload to a new owner. This involves the possibility to migrate a Workload from one cloud to another or between projects.

Reassigning a Workload only changes the database of the target Trilio installation. If the Workload was previously managed by a different Trilio installation, that installation will not be updated.

    Use the following CLI command to reassign a Workload:

    • --old_tenant_ids <old_tenant_id>➡️ Specify old tenant ids from which workloads need to reassign to new tenant. Specify multiple times to choose Workloads from multiple tenants.

    • --new_tenant_id <new_tenant_id> ➡️ Specify new tenant id to which workloads need to reassign from old tenant. Only one target tenant can be specified.

    • --workload_ids <workload_id>

    A sample mapping file with explanations is shown below:

    Set network accessibility of Trilio GUI

By default the Trilio GUI is available on all NICs on port 443.

    To limit this to only one IP the following steps need to be applied.

    Network Setup

By default, the Trilio Appliance provides the possibility of 4 VIPs.

    • A general VIP which can be used for everything

    • A public VIP for the public endpoint

    • An internal VIP for the internal endpoint

    • An admin VIP for the admin endpoint

Should an additional VIP be required to restrict the access of the Trilio Dashboard to it, the new VIP needs to be created as a new resource inside the PCS cluster.

    Nginx setup

    When the new dashboard_ip has been created or decided, then the next step is to set up the proxy forwarding inside Nginx, which will make the Trilio GUI available through port 8000.

All of the following steps need to be done on all Trilio appliances of the cluster.

    1. Create new conf file at /etc/nginx/conf.d/tvault-dashboard.conf. Replace variables dashboard_ip and virtual_ip as configured or decided.

    2. edit /etc/nginx/nginx.conf and uncomment line #include /etc/nginx/conf.d/*.conf;

    Limit the access of the Dashboard

    The configured dashboard_ip will always end on the nginx service on port 8000 and will then be forwarded to the local dashboard service on port 443.

    This configuration limits the required access to the local dashboard service to the Trilio appliance cluster itself. All other connections on port 443 can be dropped.

    The following commands will set the required iptable rules.

    Verify the accessibility as required

At this point the Trilio GUI is only reachable on the dashboard_ip on port 8000. Accessing the Trilio GUI through any other IP or on port 443 is not allowed.

    T4O 4.2 HF5 Release Notes

    Release Versions

    Packages

    File Search

    Definition

    The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

    Navigating to the file search tab in Horizon

solutions/openstack/CleanWlmDatabase/cleanAndOptimizeWLMDB at master · trilioData/solutions (GitHub)
    workloadmgr disable-scheduler --workloadids <workloadid>
    workloadmgr enable-scheduler --workloadids <workloadid>
    workloadmgr scheduler-trust-validate <workload_id>
• --workload_ids <workload_id> ➡️ Specify the workload_ids which need to be reassigned to the new tenant. If not provided, all the workloads from the old tenant will get reassigned to the new tenant. Specify multiple times for multiple workloads.
• --user_id <user_id> ➡️ Specify the user ID to which workloads need to be reassigned from the old tenant. Only one target user can be specified.

• --migrate_cloud {True,False} ➡️ Set to True if you want to reassign workloads from other clouds as well. Default is False.

• --map_file ➡️ Provide the file path (relative or absolute) including the file name of the reassign map file. Provide a list of old workloads mapped to new tenants. The format for this file is YAML.

  • check nginx syntax: nginx -t
  • reload nginx conf: nginx -s reload

  • Verify if the new cluster resource is visible or not using pcs resource command and by accessing the dashboard_ip.

  • The file search tab is part of every workload overview. To reach it follow these steps:
    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload a file search shall be done in

    5. Click the workload name to enter the Workload overview

    6. Click File Search to enter the file search tab

    Configuring and starting a file search Horizon

    A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

    To run a file search the following elements need to be decided and configured

    Choose the VM the file search shall run against

    Under VM Name/ID choose the VM that the search is done upon. The drop down menu provides a list of all VMs that are part of any Snapshot in the Workload.

VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

    Set the File Path

    The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

    The File Path has to start with a '/'

    Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'

    The file search does not go into deeper directories and always searches on the directory provided in the File Path

    Example File Path for all files inside /etc : /etc/*

    Define the Snapshots to search in

    "Filter Snapshots by" is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

    There are 3 possibilities for a pre-filtering:

    1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

    2. Last Snapshots - Choose between the last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

    3. Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.

After the pre-filtering is done, all matching Snapshots are automatically pre-selected. Uncheck any Snapshot that shall not be searched.

    When no Snapshot is chosen the file search will not start.

    Start the File Search and retrieve the results in Horizon

    To start a File Search the following elements need to be set:

    • A VM to search in has to be chosen

    • A valid File Path provided

    • At least one Snapshot to search in selected

    Once those have been set click "Search" to start the file search.

    Do not navigate to any other Horizon tab or website after starting the File Search. Results are lost and the search has to be repeated to regain them.

    After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.

For each found file or folder the following information is provided:

    • POSIX permissions

    • Amount of links pointing to the file or folder

    • User ID who owns the file or folder

    • Group ID assigned to the file or folder

    • Actual size in Bytes of the file or folder

    • Time of creation

    • Time of last modification

    • Time of last access

    • Full path to the found file or folder

Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option at the top of the table. It is also possible to directly mount the Snapshot using the "Mount Snapshot" button at the end of the table.

    Doing a CLI File Search

    • <vm_id> ➡️ ID of the VM to be searched

    • <file_path> ➡️ Path of the file to search for

• --snapshotids <snapshotid> ➡️ Search only in the specified Snapshot IDs.

• --end_filter <end_filter> ➡️ Display only the last Snapshots, e.g. the last 10 Snapshots. The default 0 means all Snapshots are searched.

• --start_filter <start_filter> ➡️ Start the search from the given Snapshot number, e.g. starting from Snapshot 5. The default 0 means the search starts from the first Snapshot.

• --date_from <date_from> ➡️ From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is taken by default.

• --date_to <date_to> ➡️ To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get Snapshots within the same day; results are inclusive/exclusive for date_from and date_to.
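A minimal CLI example, assuming the VM ID is a placeholder and searching for all files directly under /etc in the last 10 Snapshots:

```
workloadmgr filepath-search --end_filter 10 <vm_id> '/etc/*'
```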

    workloadmgr workload-get-importworkloads-list [--project_id <project_id>]
    workloadmgr workload-importworkloads [--workloadids <workloadid>]
    workloadmgr workload-get-orphaned-workloads-list [--migrate_cloud {True,False}]
                                                     [--generate_yaml {True,False}]
    workloadmgr workload-reassign-workloads
                                            [--old_tenant_ids <old_tenant_id>]
                                            [--new_tenant_id <new_tenant_id>]
                                            [--workload_ids <workload_id>]
                                            [--user_id <user_id>]
                                            [--migrate_cloud {True,False}]
                                            [--map_file <map_file>]
    reassign_mappings:
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    
       - old_tenant_ids: [] #user can provide list of old_tenant_ids or workload_ids
         new_tenant_id: new_tenant_id
         user_id: user_id
         workload_ids: [] #user can provide list of old_tenant_ids or workload_ids
         migrate_cloud: True/False #Set to True if want to reassign workloads from
                      # other clouds as well. Default is False
    pcs resource create dashboard_ip ocf:heartbeat:IPaddr2 ip=<new_vip> cidr_netmask=<netmask> nic=<new_nw_interface> op monitor interval=30s
    pcs constraint colocation add dashboard_ip virtual_ip
    server {
        listen <dashboard_ip>:8000 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:443;
        }
    }
    server {
        listen <dashboard_ip>:3001 ssl ;
        ssl_certificate "/opt/stack/data/cert/workloadmgr.cert";
        ssl_certificate_key "/opt/stack/data/cert/workloadmgr.key";
        keepalive_timeout 65;
        proxy_read_timeout 1800;
        access_log on;
        location / {
            proxy_set_header Host $host:$server_port;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass https://<virtual_ip>:3001;
        }
    }
    
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp -s tvm1,tvm2,tvm3 --dport 443 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j DROP
    iptables -A INPUT -p tcp --dport 443 -j DROP
    https://<dashboard_ip>:8000
    workloadmgr filepath-search [--snapshotids <snapshotid>]
                                [--end_filter <end_filter>]
                                [--start_filter <start_filter>]
                                [--date_from <date_from>]
                                [--date_to <date_to>]
                                <vm_id> <file_path>
Deliverables against TVO-4.2.HF5

| Package/Container Names | Package Kind | Package Version/Container Tags |
| --- | --- | --- |
| contego | deb | 4.2.64 |
| contegoclient | rpm | 4.2.64-4.2 |
| contegoclient | deb | 4.2.64 |

    contegoclient

Following packages changed/added in the current release

| Package/Container Names | Package Kind | Package/Container Version/Tags |
| --- | --- | --- |
| python3-s3-fuse-plugin | deb | 4.2.64.1 |
| python3-tvault-contego | deb | 4.2.64.10 |
| s3-fuse-plugin | deb | 4.2.64.1 |

    tvault-contego

Containers and Gitbranch

| Name | Tag |
| --- | --- |
| Gitbranch | hotfix-5-TVO/4.2 |
| RHOSP13 containers | 4.2.64-hotfix-5-rhosp13 |
| RHOSP16.1 containers | 4.2.64-hotfix-5-rhosp16.1 |
| RHOSP16.2 containers | 4.2.64-hotfix-5-rhosp16.2 |
| Kolla Ansible Victoria containers | 4.2.64-hotfix-5-victoria |

    Changelog

    • End of support for Openstack Ansible from 4.2.HF5

    • Bug fixes targeted for 4.2.HF5 release

    Fixed Bugs and issues

    1. Kolla openstack source based container for wallaby

    2. Cinder volumes NFS folder not mounted to datamover container

    3. Cinder has IBM GPFS mounted as NFS

    4. RHOSP16.1 breaks horizon 4.2hf4

    5. Backup/restore does not work when NFS has access to Nova user just NOVA

    6. Snapshot for encrypted volume is failing with error

    7. Cinder attached volume backup is taking time and also created 4 temporary volumes

    8. Cinder booted VM backup failing

    Post Installation Health-Check

After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be done to verify that the Trilio installation is healthy.

    Verify the Trilio Appliance

    Verify the services are up

Trilio is using 4 main services on the Trilio Appliance:

• wlm-api

• wlm-scheduler

• wlm-workloads

• wlm-cron

    Those can be verified to be up and running using the systemctl status command.

    Check the Trilio pacemaker and nginx cluster

    The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.

    Verify API connectivity of the Trilio Appliance

    Checking the availability of the Trilio API on the chosen endpoints is recommended.

    The following example curl command lists the available workload-types and verifies that the connection is available and working:

    Please check the API guide for more commands and how to generate the X-Auth-Token.

    Verify Trilio components on OpenStack

    On OpenStack Ansible

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

In order to check the dmapi service, go to the dmapi container, which resides on the controller nodes, and run the below command:

    Datamover service (tvault-contego)

The datamover service is running on each compute node. Log in to the compute node and run the below command:

    On Kolla Ansible OpenStack

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.

    Datamover service (tvault-contego)

    Run the following command on "nova-compute" nodes and make sure the container is in a started state.

    Trilio Horizon integration

    Run the following command on horizon nodes and make sure the container is in a started state.
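A quick sketch of these checks; the exact container names can differ per deployment, so the grep patterns below are assumptions:

```
# On nova-api nodes
docker ps | grep triliovault_datamover_api

# On nova-compute nodes
docker ps | grep triliovault_datamover

# On horizon nodes
docker ps | grep horizon
```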

    On Canonical OpenStack

Run the following command on MAAS nodes and make sure all Trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in active state.
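A minimal check using juju, filtering for the Trilio applications listed above:

```
juju status | grep -E "trilio-data-mover|trilio-dm-api|trilio-horizon-plugin|trilio-wlm"
```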

    On Red Hat OpenStack and TripleO

    On controller node

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    On compute node

    Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.
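A quick sketch of the container check on RHOSP nodes; the exact Trilio container names depend on the deployment, so the grep patterns are assumptions:

```
# On controller nodes (expect the dmapi container and the Trilio Horizon container)
podman ps | grep -iE "trilio|horizon"

# On compute nodes (expect the Trilio datamover container)
podman ps | grep -i trilio
```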

    On overcloud

    Please check dmapi endpoints on overcloud node.

    T4O 4.2.8 Release Notes

    Release Versions

    Packages

    Deliverables against T4O-4.2.8

    Following packages changed/added in the current release

    Containers and Gitbranch

    Changelog

    • Support for Kolla Zed (Ubuntu & Rocky) OpenStack

    • Verification of Jira issues targeted for 4.2.8 release

    Fixed Bugs and issues

    1. Backup failing on cinder backend Fiber Channel.

    2. Single VM backup (configurable) at a time out of workload having multiple VMs.

    3. Snapshot and Selective/One Click Restore failing on quobyte.

    4. Inplace restore fails saying no space left on device.

    Known issues

    Deleting snapshots show status as available in horizon UI

Observation: A Snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.

    Workaround:

Wait for some time until all the delete operations are completed. Eventually all the snapshots will be deleted successfully.

    Install workloadmgr CLI client

    About the workloadmgr CLI client

    The workloadmgr CLI client is provided as rpm and deb packages.

It has been tested against the following operating systems:

    • CentOS7, CentOS8

    • Ubuntu 18.04, Ubuntu 20.04

    Installing the workloadmgr client will automatically install all required Openstack clients as well.

Further, the installation of the workloadmgr client integrates the client into the global openstack Python client, if available.

    The required connection strings and package names can be found on the Trilio Dashboard under the Downloads tab.

    Install workloadmgr client rpm package on CentOS7/8

    The Trilio workload manager CLI client has several requirements that need to be met before the client can be installed without dependency issues.

    Preparing the workloadmgr client installation

The following steps need to be done to prepare the installation of the workloadmgr client:

    1. Add required repositories

      1. epel-release

      2. for CentOS7: centos-release-openstack-stein

      3. for CentOS8: centos-release-openstack-train

    These repositories are required to fulfill the following dependencies:

    On CentOS7 Python2: python-pbr,python-prettytable,python2-requests,python2-simplejson,python2-six,pytz,PyYAML,python2-openstackclient

    On CentOS8 Python3: python3-pbr,python3-prettytable,python3-requests,python3-simplejson,python3-six,python3-pyyaml,python3-pytz,python3-openstackclient

    Installing the workloadmgr client

    There are 2 possibilities for how the workloadmgr client packages can be installed.

    Download from the Trilio Appliance and install directly

The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For CentOS7: wget http://<TVM-IP>:8085/yum-repo/queens/workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

For CentOS8: wget http://<TVM-IP>:8085/yum-repo/queens/python3-workloadmgrclient-<Trilio-Version>-<TVault-Release>.noarch.rpm

To identify the Trilio Version and Trilio Release, log in to the Trilio Dashboard and check the upper left corner.

    The yum package manager is used to install the workloadmgr client package:

    yum install workloadmgrclient-<Trilio-Version>-<Trilio-Release>.noarch.rpm

    An example installation can be found below:

    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio yum repository file /etc/yum.repos.d/trilio.repo and enter the following details into the repository file:
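A minimal sketch of creating such a .repo file, assuming the base URL is taken from the Trilio Dashboard Downloads tab:

```
cat > /etc/yum.repos.d/trilio.repo <<'EOF'
[trilio]
name=Trilio Repository
baseurl=<Trilio-repository-URL-from-the-Downloads-tab>
enabled=1
gpgcheck=0
EOF
```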

    Install the workloadmgr client issuing the following command:

For CentOS7: yum install workloadmgrclient For CentOS8: yum install python3-workloadmgrclient-el8

    An example installation can be found below:

    Install workloadmgr client deb packages on Ubuntu

    The Trilio workloadmgr client packages for Ubuntu are only available from the online repository.

    Preparing the workloadmgr client installation

    There is no preparation required. All dependencies are automatically resolved by the standard repositories provided by Ubuntu.

    Installing the Workloadmgr client

    There are 2 possibilities for how the workloadmgr client packages can be installed.

    Download from the Trilio Appliance and install directly

The Trilio appliance ships the workloadmgr client version that matches the Trilio version of the appliance. These clients will always work with their respective Trilio versions.

    The workloadmgr client can be directly downloaded using the following command:

    For Python2: curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python-workloadmgrclient_<Trilio-Version>_all.deb

    For Python3:curl -Og6 http://<TVM-IP>:8085/deb-repo/deb-repo/python3-workloadmgrclient_<Trilio-Version>_all.deb

To identify the Trilio Version and Trilio Release, log in to the Trilio Dashboard and check the upper left corner.

    The apt package manager is used to install the workloadmgr client package:

    For Python2:apt-get install ./python-workloadmgrclient_<Trilio-Version>_all.deb -y For Python3:apt-get install ./python3-workloadmgrclient_<Trilio-Version>_all.deb -y

    An example installation can be found below:

    Installing from the Trilio online repository

    To install the latest available workloadmgr package for a Trilio release from the Trilio repository the following steps need to be done:

Create the Trilio apt repository file /etc/apt/sources.list.d/fury.list and enter the following details into the repository file:

    run apt update to make the new repository available.

    The apt package manager is used to install the workloadmgr client package:

    For Python2:apt-get install python-workloadmgrclient For Python3:apt-get install python3-workloadmgrclient

    An example installation can be seen below:

    T4O 4.2.7 Release Notes

    Release Versions

    Packages

    Deliverables against T4O-4.2.7

    Following packages changed/added in the current release

    Containers and Gitbranch

    Changelog

    • Support for RHOSP17

    • Support for Canonical zed Openstack

    • Support for physical delete from DB against all delete operations

    • Verification of Jira issues targeted for 4.2.7 release

    Fixed Bugs and issues

    1. workloadmgr trust-create command is failing with error - There are multiple role entities named 'admin'

    2. Selective restore failing: Invalid volume

    3. Backport DB cleanup code from 5.0 to 4.2

    Known issues

    Deleting snapshots show status as available in horizon UI

Observation: A Snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.

    Workaround:

Wait for some time until all the delete operations are completed. Eventually all the snapshots will be deleted successfully.

    T4O 4.2.6 Release Notes

    Release Versions

    Packages

    Upgrading on Ansible OpenStack

    Upgrading from Trilio 4.1 to a higher version

    Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.

    Pre-requisites

    Workload Policies

Trilio’s tenant-driven backup service gives tenants control over backup policies. However, sometimes this may be too much control for tenants, and cloud admins may want to limit which policies tenants are allowed to use. For example, a tenant may become overzealous and take full backups at a 1 hr interval. If every tenant were to pursue this backup policy, it would put a severe strain on the cloud infrastructure. Instead, if the cloud admin can define predefined backup policies and each tenant is limited to those policies, then cloud administrators can exert better control over the backup service.

    Workload policy is similar to nova flavor where a tenant cannot create arbitrary instances. Instead, each tenant is only allowed to use the nova flavors published by the admin.

    List and showing available Workload policies

    File Search

    Start File Search

    POST https://$(tvm_address):8780/v1/$(tenant_id)/search

    Starts a File Search with the given parameters

    python

    4.2.64

    puppet-triliovault

    rpm

    4.2.64-4.2

    python3-contegoclient

    deb

    4.2.64

    python3-contegoclient-el8

    rpm

    4.2.64-4.2

    python3-trilio-fusepy

    rpm

    3.0.1-1

    trilio-fusepy

    rpm

    3.0.1-1

    python3-workloadmgrclient

    deb

    4.2.64.1

    python3-workloadmgrclient-el8

    rpm

    4.2.64.1-4.2

    python-workloadmgrclient

    deb

    4.2.64.1

    workloadmgrclient

    python

    4.2.64.1

    workloadmgrclient

    rpm

    4.2.64.1-4.2

    dmapi

    python

    4.2.64.1

    dmapi

    rpm

    4.2.64.1-4.2

    dmapi

    deb

    4.2.64.1

    python3-dmapi

    deb

    4.2.64.1

    python3-dmapi

    rpm

    4.2.64.1-4.2

    workloadmgr

    deb

    4.2.64.10

    workloadmgr

    python

    4.2.64.10

    tvault_configurator

    python

    4.2.64.10

    deb

    4.2.64.10

    python3-tvault-contego

    rpm

    4.2.64.10-4.2

    tvault-contego

    rpm

    4.2.64.10-4.2

    tvault-contego

    python

    4.2.64.3

    tvault-horizon-plugin

    deb

    4.2.64.3

    tvault-horizon-plugin

    rpm

    4.2.64.3-4.2

    tvault-horizon-plugin

    python

    4.2.64.2

    python3-tvault-horizon-plugin

    deb

    4.2.64.3

    python3-tvault-horizon-plugin-el8

    rpm

    4.2.64.3-4.2

    python-s3fuse-plugin-cent7

    rpm

    4.2.64.1-4.2

    python3-s3fuse-plugin

    rpm

    4.2.64.1-4.2

    s3fuse

    python

    4.2.64.2

    Kolla Ansible Wallaby containers

    4.2.64-hotfix-5-wallaby

    Kolla Yoga Containers

    4.2.64-hotfix-5-yoga

    TripleO Containers

    4.2.63-hotfix-5-tripleo

    4.2.64.15

    contego

    deb

    4.2.64

    contegoclient

    deb

    4.2.64

    python3-contegoclient

    deb

    4.2.64

    dmapi

    deb

    4.2.64.2

    python-workloadmgrclient

    deb

    4.2.64.2

    python3-dmapi

    deb

    4.2.64.2

    python3-workloadmgrclient

    deb

    4.2.64.2

    python3-s3-fuse-plugin

    deb

    4.2.64.3

    s3-fuse-plugin

    deb

    4.2.64.3

    contegoclient

    rpm

    4.2.64-4.2

    puppet-triliovault

    rpm

    4.2.64-4.2

    python3-contegoclient-el8

    rpm

    4.2.64-4.2

    python3-trilio-fusepy

    rpm

    3.0.1-1

    trilio-fusepy

    rpm

    3.0.1-1

    dmapi

    rpm

    4.2.64.2-4.2

    python3-dmapi

    rpm

    4.2.64.2-4.2

    python3-workloadmgrclient-el8

    rpm

    4.2.64.2-4.2

    workloadmgrclient

    rpm

    4.2.64.2-4.2

    python-s3fuse-plugin-cent7

    rpm

    4.2.64.3-4.2

    python3-s3fuse-plugin

    rpm

    4.2.64.3-4.2

    4.2.64.22

    python3-tvault-contego

    deb

    4.2.64.26

    python3-tvault-horizon-plugin

    deb

    4.2.64.5

    tvault-contego

    deb

    4.2.64.26

    tvault-horizon-plugin

    deb

    4.2.64.5

    workloadmgr

    deb

    4.2.64.21

    python3-tvault-contego

    rpm

    4.2.64.26-4.2

    python3-tvault-contego-el9

    rpm

    4.2.64.14-4.2

    python3-tvault-horizon-plugin-el8

    rpm

    4.2.64.5-4.2

    tvault-contego

    rpm

    4.2.64.26-4.2

    tvault-horizon-plugin

    rpm

    4.2.64.5-4.2

    4.2.8-wallaby

    Kolla Yoga Containers

    4.2.8-yoga

    Kolla Zed Containers

    4.2.8-zed

    TripleO Containers

    4.2.8-tripleo

    Specific workload fails to be restored; says VMDiskResourceSnap could not be found.

  • Restores are failing; error says security group rules already exist.

  • Snapshot intermittently fails with IO error while backup uploading.

  • Stopping multipath volumes is taking huge time.

  • Upgrading or deploying RHOSP 16.2.5 results in TrilioVaultWLM service and respective endpoints disappearing.

| Package/Container Names | Package Kind | Package Version/Container Tags |
| --- | --- | --- |
| contegoclient | python | 4.2.64 |
| s3fuse | python | 4.2.64.4 |
| dmapi | python | 4.2.64.2 |

    workloadmgr

| Package Names | Package Kind | Package/Container Version/Tags |
| --- | --- | --- |
| tvault_configurator | python | 4.2.64.22 |
| tvault-contego | python | 4.2.64.18 |
| tvault-horizon-plugin | python | 4.2.64.4 |

    workloadmgr

| Name | Tag |
| --- | --- |
| Gitbranch | TVO/4.2.8 |
| RHOSP13 containers | 4.2.8-rhosp13 |
| RHOSP16.1 containers | 4.2.8-rhosp16.1 |
| RHOSP16.2 containers | 4.2.8-rhosp16.2 |
| Kolla Ansible Victoria containers | 4.2.8-victoria |

    python

    python

    Kolla Ansible Wallaby containers

    4.2.64

    puppet-triliovault

    rpm

    4.2.64-4.2

    python3-contegoclient

    deb

    4.2.64

    python3-contegoclient-el8

    rpm

    4.2.64-4.2

    python3-trilio-fusepy

    rpm

    3.0.1-1

    trilio-fusepy

    rpm

    3.0.1-1

    4.2.64.12

    tvault_configurator

    python

    4.2.64.16

    workloadmgr

    python

    4.2.64.15

    workloadmgr

    deb

    4.2.64.15

    python3-tvault-contego

    deb

    4.2.64.20

    python3-workloadmgrclient

    deb

    4.2.64.2

    python3-workloadmgrclient-el8

    rpm

    4.2.64.2-4.2

    python-workloadmgrclient

    deb

    4.2.64.2

    workloadmgrclient

    python

    4.2.64.2

    workloadmgrclient

    rpm

    4.2.64.2-4.2

    dmapi

    python

    4.2.64.2

    dmapi

    rpm

    4.2.64.2-4.2

    dmapi

    deb

    4.2.64.2

    python3-dmapi

    deb

    4.2.64.2

    python3-dmapi

    rpm

    4.2.64.2-4.2

    python3-s3-fuse-plugin

    deb

    4.2.64.3

    python3-tvault-horizon-plugin

    deb

    4.2.64.4

    s3-fuse-plugin

    deb

    4.2.64.3

    tvault-horizon-plugin

    deb

    4.2.64.4

    s3fuse

    python

    4.2.64.4

    tvault-horizon-plugin

    python

    4.2.64.3

    python-s3fuse-plugin-cent7

    rpm

    4.2.64.3-4.2

    python3-s3fuse-plugin

    rpm

    4.2.64.3-4.2

    python3-tvault-horizon-plugin-el8

    rpm

    4.2.64.4-4.2

    tvault-horizon-plugin

    rpm

    4.2.64.4-4.2

    python3-s3fuse-plugin-el9

    rpm

    4.2.64.3-4.2

    python3-contegoclient-el9

    rpm

    4.2.64.1-4.2

    python3-trilio-fusepy-el9

    rpm

    3.0.1-1

    python3-tvault-horizon-plugin-el9

    rpm

    4.2.64.2-4.2

    python3-workloadmgrclient-el9

    rpm

    4.2.64.2-4.2

    python3-dmapi-el9

    rpm

    4.2.64.2-4.2

    python3-tvault-contego-el9

    rpm

    4.2.64.8-4.2

    4.2.7-wallaby

    Kolla Yoga Containers

    4.2.7-yoga

    TripleO Containers

    4.2.7-tripleo

| Package/Container Names | Package Kind | Package Version/Container Tags |
| --- | --- | --- |
| contego | deb | 4.2.64 |
| contegoclient | rpm | 4.2.64-4.2 |
| contegoclient | deb | 4.2.64 |

    contegoclient

| Package/Container Names | Package Kind | Package/Container Version/Tags |
| --- | --- | --- |
| tvault-contego | deb | 4.2.64.20 |
| python3-tvault-contego | rpm | 4.2.64.20-4.2 |
| tvault-contego | rpm | 4.2.64.20-4.2 |

    tvault-contego

| Name | Tag |
| --- | --- |
| Gitbranch | TVO/4.2.7 |
| RHOSP13 containers | 4.2.7-rhosp13 |
| RHOSP16.1 containers | 4.2.7-rhosp16.1 |
| RHOSP16.2 containers | 4.2.7-rhosp16.2 |
| Kolla Ansible Victoria containers | 4.2.7-victoria |

    python

    python

    Kolla Ansible Wallaby containers

    wlm-cron

    install base packages

    1. yum -y install epel-release

    2. for CentOS7: yum -y install centos-release-openstack-stein

    3. for CentOS8: yum -y install centos-release-openstack-train

    Please ensure the following points are met before starting the upgrade process:
    • No Snapshot or Restore is running

    • The Global-Job-Scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

• Access to the Gemfury repository to fetch new packages

  Note: For a single IP-based NFS share as the backup target, refer to this rolling upgrade document for Ansible OpenStack. If multiple IP-based NFS shares are used, follow the Ansible OpenStack installation document instead.

    Deactivating the wlm-cron service

    The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.

    Update the repositories

    Deb-based (Ubuntu)

    Add the Gemfury repository on each of the DMAPI containers, Horizon containers & Compute nodes.

    Create a file /etc/apt/sources.list.d/fury.list and add the below line to it.

    The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
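A quick sketch of such a check on a Deb-based node, assuming the Gemfury repository line has already been added to /etc/apt/sources.list.d/fury.list; the grep pattern is only illustrative:

```
apt-get update
apt list --upgradeable 2>/dev/null | grep -E "dmapi|tvault|workloadmgr"
```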

    RPM-based (CentOS)

    Add Trilio repo on each of the DMAPI containers, Horizon containers & Compute nodes.

    Modify the file /etc/yum.repos.d/trilio.repo and add the below line in it.

    The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.

    Upgrade tvault-datamover-api package

    The following steps represent the best practice procedure to upgrade the DMAPI service.

    1. Login to DMAPI container

    2. Take a backup of the DMAPI configuration in /etc/dmapi/

    3. use apt list --upgradeable to identify the package used for the dmapi service

    4. Update the DMAPI package

    5. restore the backed-up config files into /etc/dmapi/

    6. Restart the DMAPI service

    7. Check the status of the DMAPI service

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    Deb-based (Ubuntu)
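A minimal sketch of the above steps for the Deb-based case, assuming the python3 packages and the default configuration location /etc/dmapi/:

```
# 2. Back up the DMAPI configuration
cp -R /etc/dmapi /etc/dmapi.bak

# 3./4. Identify and upgrade the DMAPI package
apt list --upgradeable | grep dmapi
apt-get install python3-dmapi

# 5. Restore the backed-up configuration
cp -R /etc/dmapi.bak/* /etc/dmapi/

# 6./7. Restart and check the DMAPI service
systemctl restart tvault-datamover-api
systemctl status tvault-datamover-api
```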

    RPM-based (CentOS)

    Upgrade Horizon plugin

    The following steps represent the best practice procedure to update the Horizon plugin.

    1. Login to Horizon Container

2. use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient and tvault-horizon-plugin

    3. Install the tvault-horizon-plugin package in the required python version

    4. Install the workloadmgrclient package

    5. Install the contegoclient

    6. Restart the Horizon webserver

    7. Check the installed version of the workloadmgrclient

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    Deb-based (Ubuntu)

    RPM-based (CentOS)

    Upgrade the tvault-contego service

    The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.

    1. Login into the compute node

2. Take a backup of the config files at /etc/tvault-contego/ and /etc/tvault-object-store (if S3)

    3. Unmount storage mount path

    4. Upgrade the tvault-contego package in the required python version

    5. (S3 only) upgrade the s3-fuse-plugin package

    6. Restore the config files

    7. (S3 only) Restart the tvault-object-store service

    8. Restart the tvault-contego service

    9. Check the status of the service(s)

    These steps are done with the following commands. This example is assuming that the more common python3 packages are used.

    NFS as Storage Backend

    • Take a backup of the config files

    • Check the mount path of the NFS storage using the command df -h and unmount the path using umount command. e.g.

    • Upgrade the Trilio packages:

    Deb-based (Ubuntu):

    RPM-based (CentOS):

    • Restore the config files, restart the service and verify the mount point
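A combined sketch of these steps for a Deb-based compute node with NFS as the backup target; the mount path is a placeholder for the actual Trilio NFS mount point shown by df -h:

```
# Back up the configuration
cp -R /etc/tvault-contego /etc/tvault-contego.bak

# Identify and unmount the Trilio NFS mount point
df -h
umount <trilio-nfs-mount-path>

# Upgrade the datamover package
apt-get install python3-tvault-contego

# Restore the configuration, restart the service and verify the mount point
cp -R /etc/tvault-contego.bak/* /etc/tvault-contego/
systemctl restart tvault-contego
df -h
```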

    S3 as Storage Backend

    • Take a backup of the config files

    • Check the mount path of the S3 storage using the command df -h and unmount the path using umount command.

    • Upgrade the Trilio packages

    Deb-based (Ubuntu):

    RPM-based (CentOS):

    • Restore the config files, restart the service and verify the mount point

    Advance settings/configuration

    Customize HAproxy cfg parameters for Trilio datamover api service

Following are the haproxy cfg parameters recommended for optimal performance of the dmapi service. File location on the controller: /etc/haproxy/haproxy.cfg

    If values were already updated during any of the previous releases, further steps can be skipped.

    Parameters timeout client, timeout server, and balance for DMAPI service

    Remove the below content, if present in the file/etc/openstack_deploy/user_variables.ymlon the ansible host.

    Add the below lines at end of the file /etc/openstack_deploy/user_variables.yml on the ansible host.

    Update Haproxy configuration using the below command on the ansible host.

    Enable mount-bind for NFS

    T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2

    Please follow this documentation for detailed steps to set up mount bind.

    Using Horizon

    To see all available Workload policies in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    5. Navigate to Policy

The following information is shown in the policy tab for each available policy:

    • Creation time

    • name

    • description

    • status

    • set interval

    • set retention type

    • set retention value

    Using CLI

    • <policy_id>➡️ Id of the policy to show

    Create a policy

    Using Horizon

    To create a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    5. Navigate to Policy

    6. Click new policy

    7. provide a policy name on the Details tab

    8. provide a description on the Details tab

    9. provide the RPO in the Policy tab

    10. Choose the Snapshot Retention Type

    11. provide the Retention value

    12. Choose the Full Backup Interval

    13. Click create

    Using CLI

• --policy-fields <key=key-name> ➡️ Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (number of incremental snapshots between full backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

    • --metadata <key=keyname> ➡️ Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

    • <display_name> ➡️ the name the policy will get
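A sketch of creating such a policy from the CLI; the workloadmgr policy-create command name, the argument order and the values shown are assumptions based on the options above:

```
workloadmgr policy-create \
    --display-description "Hourly snapshots, keep the last 30" \
    --policy-fields interval='1 hr' \
    --policy-fields retention_policy_type='Number of Snapshots to Keep' \
    --policy-fields retention_policy_value='30' \
    --policy-fields fullbackup_interval='2' \
    "Gold"
```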

    Edit a policy

    Using Horizon

    To edit a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    5. Navigate to Policy

    6. identify the policy to edit

    7. click on "Edit policy" at the end of the line of the chosen policy

    8. edit the policy as desired - all values can be changed

    9. Click "Update"

    Using CLI

    • --display-name <display-name>➡️Name of the policy

    • --display-description <display_description> ➡️ Optional policy description. (Default=No description)

• --policy-fields <key=key-name> ➡️ Specify the following key-value pairs for policy fields. Specify the option multiple times to include multiple keys. 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30' 'fullbackup_interval' : '-1' (number of incremental snapshots between full backups, from 1 to 999; '-1' for 'NEVER' and '0' for 'ALWAYS'). For example: --policy-fields interval='1 hr' --policy-fields retention_policy_type='Number of Snapshots to Keep' --policy-fields retention_policy_value='30' --policy-fields fullbackup_interval='2'

    • --metadata <key=keyname> ➡️ Specify a key value pairs to include in the workload_type metadata Specify option multiple times to include multiple keys. key=value

• <policy_id> ➡️ ID of the policy to be edited

    Assign/Remove a policy

    Using Horizon

    To assign or remove a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    5. Navigate to Policy

    6. identify the policy to assign/remove

    7. click on the small arrow at the end of the line of the chosen policy to open the submenu

    8. click "Add/Remove Projects"

    9. Choose projects to add or remove by using the plus/minus buttons

    10. Click "Apply"

    Using CLI

    • --add_project <project_id> ➡️ ID of the project to assign policy to. Use multiple times to assign multiple projects.

    • --remove_project <project_id> ➡️ ID of the project to remove policy from. Use multiple times to remove multiple projects.

    • <policy_id>➡️policy to be assigned or removed
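A sketch of assigning a policy to two projects from the CLI; the workloadmgr policy-assign command name is an assumption and the IDs are placeholders:

```
workloadmgr policy-assign --add_project <project_id_1> --add_project <project_id_2> <policy_id>
```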

    Delete a policy

    Using Horizon

    To delete a policy in Horizon follow these steps:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin

    4. Navigate to Trilio

    5. Navigate to Policy

    6. identify the policy to assign/remove

    7. click on the small arrow at the end of the line of the chosen policy to open the submenu

    8. click "Delete Policy"

    9. Confirm by clicking "Delete"

    Using CLI

    • <policy_id> ➡️ID of the policy to be deleted

    systemctl | grep wlm
      wlm-api.service          loaded active running   workloadmanager api service
      wlm-cron.service         loaded active running   Cluster Controlled wlm-cron
      wlm-scheduler.service    loaded active running   Cluster Controlled wlm-scheduler
      wlm-workloads.service    loaded active running   workloadmanager workloads service
    systemctl status wlm-api
    ######
    ● wlm-api.service - workloadmanager api service
       Loaded: loaded (/etc/systemd/system/wlm-api.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:41:19 UTC; 2 months 21 days ago
     Main PID: 4688 (workloadmgr-api)
       CGroup: /system.slice/wlm-api.service
               ├─4688 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9342 (workloadmgr-sch)
       CGroup: /system.slice/wlm-scheduler.service
               └─9342 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:51:05 UTC; 2 months 21 days ago
     Main PID: 606 (workloadmgr-wor)
       CGroup: /system.slice/wlm-workloads.service
               ├─ 606 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-cron
    ######
    ● wlm-cron.service - Cluster Controlled wlm-cron
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-cron.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9209 (workloadmgr-cro)
       CGroup: /system.slice/wlm-cron.service
               ├─9209 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: TVM1 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
    Last updated: Mon Jan 24 13:42:01 2022
    Last change: Tue Nov  2 19:07:04 2021 by root via crm_resource on TVM2
    
    3 nodes configured
    9 resources configured
    
    Online: [ TVM1 TVM2 TVM3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started TVM2
     wlm-cron       (systemd:wlm-cron):     Started TVM2
     wlm-scheduler  (systemd:wlm-scheduler):        Started TVM2
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ TVM2 ]
         Stopped: [ TVM1 TVM3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    curl http://10.10.2.34:8780/v1/8e16700ae3614da4ba80a4e57d60cdb9/workload_types/detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-workloadmgrclient" -H "Accept: application/json" -H "X-Auth-Token: gAAAAABe40NVFEtJeePpk1F9QGGh1LiGnHJVLlgZx9t0HRrK9rC5vqKZJRkpAcW1oPH6Q9K9peuHiQrBHEs1-g75Na4xOEESR0LmQJUZP6n37fLfDL_D-hlnjHJZ68iNisIP1fkm9FGSyoyt6IqjO9E7_YVRCTCqNLJ67ZkqHuJh1CXwShvjvjw
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
lxc-attach -n <dmapi-container-name>  (go to dmapi container)
    root@controller-dmapi-container-08df1e06:~# systemctl status tvault-datamover-api.service
    ● tvault-datamover-api.service - TrilioData DataMover API service
         Loaded: loaded (/lib/systemd/system/tvault-datamover-api.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 11:53:39 UTC; 1 day 17h ago
       Main PID: 23888 (dmapi-api)
          Tasks: 289 (limit: 57729)
         Memory: 607.7M
         CGroup: /system.slice/tvault-datamover-api.service
                 ├─23888 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23893 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23894 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23895 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23896 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23897 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23898 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23899 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23900 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23901 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23902 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23903 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23904 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23905 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23906 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23907 /usr/bin/python3 /usr/bin/dmapi-api
                 └─23908 /usr/bin/python3 /usr/bin/dmapi-api
    
    Jan 12 11:53:39 controller-dmapi-container-08df1e06 systemd[1]: Started TrilioData DataMover API service.
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    
    root@compute:~# systemctl status tvault-contego
    ● tvault-contego.service - Tvault contego
         Loaded: loaded (/etc/systemd/system/tvault-contego.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-01-14 05:45:19 UTC; 2s ago
       Main PID: 1489651 (python3)
          Tasks: 19 (limit: 67404)
         Memory: 6.7G (max: 10.0G)
         CGroup: /system.slice/tvault-contego.service
                 ├─ 998543 /bin/qemu-nbd -c /dev/nbd45 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998772 /bin/qemu-nbd -c /dev/nbd73 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998931 /bin/qemu-nbd -c /dev/nbd100 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─ 999147 /bin/qemu-nbd -c /dev/nbd35 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─1371322 /bin/qemu-nbd -c /dev/nbd63 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─1371524 /bin/qemu-nbd -c /dev/nbd91 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 └─1489651 /openstack/venvs/nova-22.3.1/bin/python3 /usr/bin/tvault-contego --config-file=/etc/nova/nova.conf --config-file=/etc/tvault-contego/tvault-cont>
    
    Jan 14 05:45:19 compute systemd[1]: Started Tvault contego.
    Jan 14 05:45:20 compute sudo[1489653]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/nova-22.3.1/bin/nova-rootwrap /etc/nova/rootwrap.conf umou>
    Jan 14 05:45:20 compute sudo[1489653]: pam_unix(sudo:session): session opened for user root by (uid=0)
    Jan 14 05:45:21 compute python3[1489655]: umount: /var/triliovault-mounts/VHJpbGlvVmF1bHQ=: no mount point specified.
    Jan 14 05:45:21 compute sudo[1489653]: pam_unix(sudo:session): session closed for user root
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] CPU Control group m>
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] I/O Control Group m>
    lines 1-22/22 (END)
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    3f979c15cedc   trilio/centos-binary-trilio-datamover-api:4.2.50-victoria                     "dumb-init --single-…"   3 days ago    Up 3 days                         triliovault_datamover_api
    [root@compute1 ~]# docker ps | grep triliovault_datamover
    2f1ece820a59   trilio/centos-binary-trilio-datamover:4.2.50-victoria                        "dumb-init --single-…"   3 days ago    Up 3 days                        triliovault_datamover
    [root@controller ~]# docker ps | grep horizon
    4a004c786d47   trilio/centos-binary-trilio-horizon-plugin:4.2.50-victoria                    "dumb-init --single-…"   3 days ago    Up 3 days (unhealthy)             horizon
    root@jujumaas:~# juju status | grep trilio
    trilio-data-mover       4.2.51   active       3  trilio-data-mover       jujucharms    9  ubuntu
    trilio-dm-api           4.2.51   active       1  trilio-dm-api           jujucharms    7  ubuntu
    trilio-horizon-plugin   4.2.51   active       1  trilio-horizon-plugin   jujucharms    6  ubuntu
    trilio-wlm              4.2.51   active       1  trilio-wlm              jujucharms    9  ubuntu
      trilio-data-mover/8        active    idle            172.17.1.5                         Unit is ready
      trilio-data-mover/6        active    idle            172.17.1.6                         Unit is ready
      trilio-data-mover/7*       active    idle            172.17.1.7                         Unit is ready
      trilio-horizon-plugin/2*   active    idle            172.17.1.16                        Unit is ready
    trilio-dm-api/2*             active    idle   1/lxd/4  172.17.1.27     8784/tcp           Unit is ready
    trilio-wlm/2*                active    idle   7        172.17.1.28     8780/tcp           Unit is ready
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp16 onwards/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain1-controller-0 heat-admin]# podman ps | grep trilio-
    e3530d6f7bec  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:4.2.47-rhosp16.1           kolla_start           2 weeks ago   Up 2 weeks ago          trilio_dmapi
    f93f7019f934  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:4.2.47-rhosp16.1          kolla_start           2 weeks ago   Up 2 weeks ago          horizon
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    
    [root@controller ~]# wget http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    --2021-03-08 15:36:37--  http://10.10.2.15:8085/yum-repo/queens/workloadmgrclient-4.0.115-4.0.noarch.rpm
    Connecting to 10.10.2.15:8085... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 155976 (152K) [application/x-rpm]
    Saving to: ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’
    
    100%[======================================>] 1,55,976    --.-K/s   in 0.001s
    
    2021-03-08 15:36:37 (125 MB/s) - ‘workloadmgrclient-4.0.115-4.0.noarch.rpm’ saved [155976/155976]
    
    [root@controller ~]# yum install workloadmgrclient-4.0.115-4.0.noarch.rpm
    Loaded plugins: fastestmirror
Examining workloadmgrclient-4.0.115-4.0.noarch.rpm: workloadmgrclient-4.0.115-4.0.noarch
    Marking workloadmgrclient-4.0.115-4.0.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.115-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
    ================================================================================
     Package         Arch   Version     Repository                             Size
    ================================================================================
    Installing:
     workloadmgrclient
                     noarch 4.0.115-4.0 /workloadmgrclient-4.0.115-4.0.noarch 700 k
    
    Transaction Summary
    ================================================================================
    Install  1 Package
    
    Total size: 700 k
    Installed size: 700 k
    Is this ok [y/d/N]: y
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : workloadmgrclient-4.0.115-4.0.noarch                         1/1
      Verifying  : workloadmgrclient-4.0.115-4.0.noarch                         1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.115-4.0
    
    Complete!
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-<Trilio-Release>/yum/
    enabled=1
    gpgcheck=0
    [root@controller ~]# cat /etc/yum.repos.d/trilio.repo
    [trilio]
    name=Trilio Repository
    baseurl=http://trilio:[email protected]:8283/triliovault-4.0/yum/
    enabled=1
    gpgcheck=0
    
    [root@controller ~]# yum install workloadmgrclient
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * base: centos-canada.vdssunucu.com.tr
     * centos-ceph-nautilus: mirror.its.dal.ca
     * centos-nfs-ganesha28: centos.mirror.colo-serv.net
     * centos-openstack-train: centos-canada.vdssunucu.com.tr
     * centos-qemu-ev: centos-canada.vdssunucu.com.tr
     * extras: centos-canada.vdssunucu.com.tr
     * updates: centos-canada.vdssunucu.com.tr
base                                               | 3.6 kB  00:00:00
centos-ceph-nautilus                               | 3.0 kB  00:00:00
centos-nfs-ganesha28                               | 3.0 kB  00:00:00
centos-openstack-train                             | 3.0 kB  00:00:00
centos-qemu-ev                                     | 3.0 kB  00:00:00
extras                                             | 2.9 kB  00:00:00
trilio                                             | 2.9 kB  00:00:00
updates                                            | 2.9 kB  00:00:00
(1/3): extras/7/x86_64/primary_db                  | 225 kB  00:00:00
(2/3): centos-openstack-train/7/x86_64/primary_db  | 1.1 MB  00:00:00
(3/3): updates/7/x86_64/primary_db                 | 5.7 MB  00:00:00
    Resolving Dependencies
    --> Running transaction check
    ---> Package workloadmgrclient.noarch 0:4.0.116-4.0 will be installed
    --> Finished Dependency Resolution
    
    Dependencies Resolved
    
================================================================================
 Package                 Arch      Version           Repository           Size
================================================================================
Installing:
 workloadmgrclient       noarch    4.0.116-4.0       trilio              152 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 152 k
Installed size: 700 k
Is this ok [y/d/N]: y
Downloading packages:
workloadmgrclient-4.0.116-4.0.noarch.rpm                    | 152 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : workloadmgrclient-4.0.116-4.0.noarch                         1/1
  Verifying  : workloadmgrclient-4.0.116-4.0.noarch                         1/1
    
    Installed:
      workloadmgrclient.noarch 0:4.0.116-4.0
    
    Complete!
    root@ubuntu:~# curl -Og6 http://10.10.2.15:8085/deb-repo/deb-repo/python3-workloadmgrclient_4.0.115_all.deb
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  116k  100  116k    0     0   899k      0 --:--:-- --:--:-- --:--:--  982k
    
    root@ubuntu:~# apt-get install ./python3-workloadmgrclient_4.0.115_all.deb -y
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'python3-workloadmgrclient' instead of './python3-workloadmgrclient_4.0.115_all.deb'
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...
    deb [trusted=yes] https://apt.fury.io/triliodata-<Trilio-Version>/ /
    root@ubuntu:~# cat /etc/apt/sources.list.d/fury.list
    deb [trusted=yes] https://apt.fury.io/triliodata-4-0/ /
    
    root@ubuntu:~# apt update
    Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
    Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
    Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
    Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
    Ign:5 https://apt.fury.io/triliodata-4-0  InRelease
    Ign:6 https://apt.fury.io/triliodata-4-0  Release
    Ign:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Get:7 https://apt.fury.io/triliodata-4-0  Packages
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Ign:8 https://apt.fury.io/triliodata-4-0  Translation-en
    Fetched 84.0 kB in 12s (6930 B/s)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    All packages are up to date.
    
    root@ubuntu:~# apt-get install python3-workloadmgrclient
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      python3-workloadmgrclient
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/120 kB of archives.
    After this operation, 736 kB of additional disk space will be used.
    Selecting previously unselected package python3-workloadmgrclient.
    (Reading database ... 65533 files and directories currently installed.)
    Preparing to unpack .../python3-workloadmgrclient_4.0.115_all.deb ...
    Unpacking python3-workloadmgrclient (4.0.115) ...
    Setting up python3-workloadmgrclient (4.0.115) ...
    [root@compute ~]# df -h
    df: /var/trilio/triliovault-mounts: Transport endpoint is not connected
    Filesystem                     Size  Used Avail Use% Mounted on
    devtmpfs                        28G     0   28G   0% /dev
    tmpfs                           28G     0   28G   0% /dev/shm
    tmpfs                           28G  928K   28G   1% /run
    tmpfs                           28G     0   28G   0% /sys/fs/cgroup
    /dev/mapper/cl_centos8-root     70G   13G   58G  19% /
    /dev/vda1                     1014M  231M  784M  23% /boot
    tmpfs                          5.5G     0  5.5G   0% /run/user/0
    172.25.0.10:/mnt/tvault/42436  2.5T  1.8T  674G  74% /var/triliovault-mounts/MTcyLjI1LjAuMTA6L21udC90dmF1bHQvNDI0MzY=
    [root@compute ~]# umount /var/triliovault-mounts/MTcyLjI1LjAuMTA6L21udC90dmF1bHQvNDI0MzY=
    
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    deb [trusted=yes] https://apt.fury.io/triliodata-4-2/ /
    apt-get update
    apt list --upgradable
    [triliovault-4-2]
    name=triliovault-4-2
    baseurl=http://trilio:[email protected]:8283/triliodata-4-2/yum/
    gpgcheck=0
    enabled=1
    yum repolist
yum check-update
    lxc-ls #grep container name for dmapi service
    lxc-attach -n <dmapi container name>
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    apt list --upgradable
    apt install python3-dmapi --upgrade
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-ls #grep container name for dmapi service
    lxc-attach -n <dmapi container name>
    tar -czvf dmapi_config.tar.gz /etc/dmapi
    yum list installed | grep dmapi  ##use dnf if yum not available
    yum check-update python3-dmapi   ##use dnf if yum not available
    yum upgrade python3-dmapi        ##use dnf if yum not available
    tar -xzvf dmapi_config.tar.gz -C /
    systemctl restart tvault-datamover-api
    systemctl status tvault-datamover-api
    lxc-attach -n controller_horizon_container-ead7cc60
    apt list --upgradable
    apt install python3-tvault-horizon-plugin --upgrade
    apt install python3-workloadmgrclient --upgrade
    apt install python3-contegoclient --upgrade
    systemctl restart apache2
    workloadmgr --version
    lxc-attach -n controller_horizon_container-ead7cc60
    yum list installed | grep trilio ##use dnf if yum not available
    yum upgrade python3-contegoclient-el8 python3-tvault-horizon-plugin-el8 python3-workloadmgrclient-el8  ##use dnf if yum not available
    systemctl restart httpd
    workloadmgr --version
    tar -czvf contego_config.tar.gz /etc/tvault-contego/ 
    apt install python3-tvault-contego --upgrade
    apt install python3-s3-fuse-plugin --upgrade
    
    yum upgrade python3-tvault-contego   ##use dnf if yum not available
    yum upgrade python3-s3fuse-plugin   ##use dnf if yum not available
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-contego
    systemctl status tvault-contego
    #To check if backend storage got mounted successfully
    df -h
    tar -czvf contego_config.tar.gz /etc/tvault-contego/ /etc/tvault-object-store/
    e.g. 
    root@compute:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev             28G     0   28G   0% /dev
    tmpfs           5.5G  1.4M  5.5G   1% /run
    /dev/vda3       124G   16G  102G  13% /
    tmpfs            28G   20K   28G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs            28G     0   28G   0% /sys/fs/cgroup
    /dev/vda1       456M  297M  126M  71% /boot
    tmpfs           6.3G     0  6.3G   0% /var/triliovault/tmpfs
    Trilio        -     -  0.0K    - /var/triliovault-mounts
    tmpfs           5.5G     0  5.5G   0% /run/user/0
    [root@compute ~]# umount /var/triliovault-mounts
    apt install python3-tvault-contego --upgrade
    apt install python3-s3-fuse-plugin --upgrade
    
    yum upgrade python3-tvault-contego   ##use dnf if yum not available
    yum upgrade python3-s3fuse-plugin   ##use dnf if yum not available
    tar -xzvf  contego_config.tar.gz -C /
    systemctl restart tvault-contego
    systemctl status tvault-contego
    #To check if backend storage got mounted successfully
    df -h
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cd /opt/openstack-ansible/playbooks
    openstack-ansible haproxy-install.yml
    workloadmgr policy-list
    workloadmgr policy-show <policy_id>
    workloadmgr policy-create --policy-fields <key=key-name>
                              [--display-description <display_description>]
                              [--metadata <key=key-name>]
                              <display_name>
    workloadmgr policy-update [--display-name <display-name>]
                              [--display-description <display-description>]
                              [--policy-fields <key=key-name>]
                              [--metadata <key=key-name>]
                              <policy_id>
    workloadmgr policy-assign [--add_project <project_id>]
                              [--remove_project <project_id>]
                              <policy_id>
    workloadmgr policy-delete <policy_id>
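As an illustration, a hedged example of creating a policy, assigning it to a project, and checking the result. The policy field names and values shown (interval, retention_policy_type, retention_policy_value, fullbackup_interval) and the policy name are examples only and should be verified against your deployment:

workloadmgr policy-create --policy-fields interval='24 hrs' \
                          --policy-fields retention_policy_type='Number of Snapshots to Keep' \
                          --policy-fields retention_policy_value='10' \
                          --policy-fields fullbackup_interval='7' \
                          --display-description "Example Gold tier policy" \
                          "Gold"
workloadmgr policy-assign --add_project <project_id> <policy_id>
workloadmgr policy-show <policy_id>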
Deliverables against TVO-4.2.6

Package/Container Names | Package Kind | Package Version/Container Tags
contego                 | deb          | 4.2.64
contegoclient           | rpm          | 4.2.64-4.2
contegoclient           | deb          | 4.2.64
contegoclient           |              |

Following packages changed/added in the current release

Package/Container Names | Package Kind | Package/Container Version/Tags
tvault-contego          | deb          | 4.2.64.13
python3-tvault-contego  | rpm          | 4.2.64.13-4.2
tvault-contego          | rpm          | 4.2.64.13-4.2
tvault-contego          |              |

Containers and Gitbranch

Name                              | Tag
Gitbranch                         | TVO/4.2.6
RHOSP13 containers                | 4.2.6-rhosp13
RHOSP16.1 containers              | 4.2.6-rhosp16.1
RHOSP16.2 containers              | 4.2.6-rhosp16.2
Kolla Ansible Victoria containers | 4.2.6-victoria

    Changelog

    • Verification of Jira issues targeted for 4.2.6 release

    • Cohesity NFS/S3 storage backend support

    Fixed Bugs and issues

    1. Mounting a snapshot fails if the first disk of the File Recovery Manager VM is a CDROM

    2. TVM HA configuration fails when one of the chosen Controller's hostname is part of any other TVM's hostname

    3. Validation of keystone and s3 endpoint is stuck on the TVM UI

    4. Temp volume creation taking too long

    5. The httpd service is in a failed state on a freshly deployed T4O

    6. The sshd option UseDNS should be set to "no" to avoid issues

    Known issues

    [RHOSP 16.1.8 and RHOSP 16.2.4] Trilio Horizon container in reboot loop

    Observation : Post upgrade from a previous release/hotfix or on fresh deployment, the Trilio Horizon container is in a reboot loop

    Workaround:

    Either of the below workarounds should be performed on the controller where the issue occurs for the horizon pod.

    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service).

    option-2: Restart the memcached pod (command: podman restart memcached).

    [Snapshot mount] Unable to see correct content after mounting snapshot

    Observation : Snapshot mount using RHEL8 File recovery manager image is not showing any mounted device

    Workaround:

Use a RHEL7 image for the File Recovery Manager VM instead of RHEL8.

[Backup failure for HPE Nimble storage volumes with "timeout receiving packet" error]

Observation : Snapshots fail for VMs that have volumes backed by a Nimble iSCSI SAN with a "timeout receiving packet" error

Workaround: add the uxsock_timeout parameter

Log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, then restart the datamover container.

    [Input/Output error while writing to Cohesity NFS share]

    Observation : Input/Output error during qemu-img convert operation while writing to Cohesity NFS share

Workaround: For Cohesity NFS, use the below NFS options during Trilio configuration and datamover deployment. If the issue still persists, increase the timeo and retrans parameter values in the NFS options.

    nfs_options = nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

Path Parameters

Name         | Type   | Description
tvm_address  | string | IP or FQDN of Trilio service
tenant_id    | string | ID of the Tenant/Project to run the search in

Headers

Name              | Type   | Description
X-Auth-Project-Id | string | Project to authenticate against
X-Auth-Token      | string | Authentication token to use
Content-Type      | string | application/json
Accept            | string | application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:23:25 GMT
    Content-Type: application/json
    Content-Length: 244
    Connection: keep-alive
    X-Compute-Request-Id: req-bdfd3fb8-5cbf-4108-885f-63160426b2fa
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.698534",
          "updated_at":null,
          "id":14,
          "deleted_at":null,
          "status":"executing",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":null,
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
    

    Body format

    Get File Search Results

GET https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id>

Provides the status and results of the File Search with the given ID

Path Parameters

Name         | Type   | Description
tvm_address  | string | IP or FQDN of Trilio Service
tenant_id    | string | ID of the Tenant/Project to run the search in
search_id    | string | ID of the File Search to get

Headers

Name              | Type   | Description
X-Auth-Project-Id | string | Project to authenticate against
X-Auth-Token      | string | Authentication token to use
Accept            | string | application/json
User-Agent        | string | python-workloadmgrclient
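As a sketch, the request can be issued with curl in the same way as the earlier workload_types example; the token, project name, and search ID values are placeholders:

curl https://$(tvm_address):8780/v1/$(tenant_id)/search/<search_id> -X GET \
     -H "X-Auth-Project-Id: <project_name>" \
     -H "X-Auth-Token: <token>" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"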

    Installing on TripleO Train

    1. Prepare for deployment

    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    The following backup target types are supported by Trilio

    a) NFS

    Need NFS share path

    b) Amazon S3

- S3 Access Key
- Secret Key
- Region
- Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name

    1.2] Clone triliovault-cfg-scripts repository

The following steps are to be done on the 'undercloud' node of an already installed TripleO environment. The overcloud-deploy command has to have been run successfully already and the overcloud should be available.

    All commands need to be run as user 'stack' on undercloud node

TripleO CentOS8 is not supported anymore as CentOS Linux 8 has reached End of Life on December 31st, 2021.

    The following command clones the triliovault-cfg-scripts github repository.
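For example (a sketch; replace the branch with the one matching your Trilio release):

cd /home/stack
git clone -b <trilio-release-branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/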

    Please note that the Trilio Appliance needs to get updated to the latest HF as well.

    1.3] If the backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files

    2] Upload Trilio puppet module

    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

If the roles_data.yaml has not been customized, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    All commands need to be run as user 'stack'

    3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database service. In the case of the pre-defined roles, these services run on the role Controller. In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

    Add the following line to the identified role:

    3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute. In the case of custom-defined roles, it is necessary to use the role where the nova-compute service is deployed.

    Add the following line to the identified role:

3.3] Add Trilio Horizon Service to role data file

This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:

4] Prepare Trilio container images

    All commands need to be run as user 'stack'

In the sections below, read <HOTFIX-TAG-VERSION> as 4.2.8

    Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.

    CentOS7

    There are two registry methods available in TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

    For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments

    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    Run the following script. Script pulls the triliovault containers and updates the triliovault environment file with URLs.

    The changes can be verified using the following commands.

    5] Configure multi-IP NFS

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP to access the NFS Volume per datamover instead of globally.

    On Undercloud node, change the directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/IP map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml' file.

    Run this command on undercloud by sourcing 'stackrc'.

Edit the input map file and fill in all the details. Refer to the documentation for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyYAML on the undercloud node only

    If pip isn't available please install pip on the undercloud.
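A minimal sketch, assuming pip3 is available for the stack user (pin a PyYAML version if your environment requires it):

pip3 install --user PyYAML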

    Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

Open file 'triliovault_nfs_map_output.yml'

    vi triliovault_nfs_map_output.yml

    available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.
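For example, only the additional '-e' argument is sketched here; the rest of the deploy command stays as used in step 7:

openstack overcloud deploy --templates \
  <existing deploy arguments and environment files> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml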

    Ensure that MultiIPNfsEnabled is set to true in trilio_env.yaml file and that NFS is used as the backup target.

    6] Fill in Trilio environment details

    Fill Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml, triliovault environment file is self-explanatory. Fill in details of the backup target, verify image URLs, and other details.

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    7] Install Trilio on Overcloud

    Use the following heat environment file and roles data file in overcloud deploy command

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains overcloud roles data with Trilio roles added.

3. Use the correct Trilio endpoint map file as per your keystone endpoint configuration.
   - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
   - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
   - Instead of tls-everywhere-endpoints-dns.yaml

    Deploy command with triliovault environment file looks like the following.
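A minimal sketch of such a command, assuming default tripleo-heat-templates locations; keep all environment files and options already used for your existing overcloud deployment, and substitute the endpoint map file according to the list above (the roles_data.yaml path is an assumption):

openstack overcloud deploy --templates \
  <existing deploy arguments and environment files> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml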

Post deployment, for multipath-enabled environments, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, then restart the datamover container.

    8] Verify the deployment

If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.

    8.1] On the Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. If the role for these containers is not "Controller", check the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    8.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    8.3] On the node with Horizon service

    Make sure horizon container is in running state. Please note that 'Horizon' container is replaced with Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    10] Troubleshooting for overcloud deployment failures

    Trilio components will be deployed using puppet scripts.

In case the overcloud deployment fails, run the following command to get the list of errors. The following document also provides valuable insights:

    Configuring Trilio

    Learn about configuring Trilio for OpenStack

The configuration process used by Trilio for OpenStack heavily utilizes Ansible scripts. In recent years, Ansible has emerged as a leading tool for configuration management, and Trilio makes extensive use of Ansible playbooks to configure the Trilio cluster. To address any potential Trilio configuration issues, it is crucial for users to have a fundamental understanding of Ansible playbook output.

    Given the inherent repeatability of Ansible modules, the Trilio configuration can be run as many times as needed to alter or reconfigure the Trilio cluster.

    Upon booting the VM, direct your browser (preferably Chrome or Firefox) to the Trilio node's IP address. This will take you to the Trilio Dashboard, which houses the Trilio configurator.

    The user is: admin The default password is: password

    After the first login, you will be prompted to change the admin password.

    Unlike previous versions of Trilio, the current version only requires you to configure the cluster once and the Trilio dashboard provides cluster-wide management capability.

    Uploading the OpenStack certificate bundle

    OpenStack endpoints can be configured to use TLS. In such a configuration the Trilio appliance needs to trust the certificates provided by the OpenStack endpoints.

    To achieve this trust it is required to upload the OpenStack certificate bundle through the OS API certificate tab of the Trilio appliance Dashboard.

    The certificate bundle is located on the controller nodes of the OpenStack installation.

    The default paths for each distribution are as follows:

    The uploaded certificates can be verified on the Trilio appliance at the following location.

    Details needed for the Trilio Appliance

    Once you log in to an unconfigured Trilio Appliance, the first page you encounter is the configurator. This tool needs specific details about the Trilio Appliance, OpenStack, and Backup Storage to proceed.

    Trilio Cluster information

The Trilio Cluster must be integrated into an existing OpenStack environment. The following fields ask for the details of your Trilio Cluster.

    • Controller Nodes

      • This is the list of Trilio virtual appliance IP addresses along with their hostnames.

      • Format: comma-separated list with pairs combined through '='

    The Trilio Cluster supports only 1 node and 3 node clusters.

    • Virtual IP Address

      • This is the Trilio cluster IP address which is mandatory

      • Format: IP/Subnet

      • Example: 172.20.4.150/24

    The Virtual IP is mandatory even for single-node clusters and has to be different from any IP assigned to a Trilio Controller Node.

    • Name Server

      • List of nameservers, primarily used to resolve OpenStack service endpoints.

      • Format: comma-separated list

      • example: 10.10.10.1,172.20.4.1

If defining OpenStack endpoint hostnames in the /etc/hosts file on the Trilio Appliance VM is preferred over a DNS solution, you may set the nameserver to 0.0.0.0, the default gateway.

    • Domain Search Order

      • The domain the Trilio Cluster will use.

      • Format: comma-separated list

      • example: trilio.io,trilio.demo

    OpenStack Credentials information

    The Trilio Appliance integrates with one OpenStack environment. The following fields ask for the information required to access and connect with the OpenStack Cluster.

    • Keystone URL

      • The Keystone endpoint used to fetch authentication for configuration

      • format: URL

      • example: https://keystone.trilio.io:5000/v3

    When FQDNs are used for the Keystone endpoints it is necessary to configure at least one DNS server before the configuration.

    Absent a DNS server, the IPs should be defined in the /etc/hosts file on the Trilio Appliance, and the nameserver should be set to 0.0.0.0.

    Otherwise, the validation of the Openstack Credentials will fail.

    • Domain ID

      • domain the provided user and tenant are located in

      • format: ID

      • example: default

    Trilio requires domain admin role access. To provide domain admin role to a user, the following command can be used:

    openstack role add --domain <domain id> --user <username> admin

The Trilio configurator verifies after every entry whether it is possible to log in to OpenStack using the provided credentials.

    This verification will fail until all entries are set and correct.

    When the verification is successful it is possible to choose the Admin tenant, the Region, and the Trustee role without error.

    • Admin Tenant

      • The tenant to be used together with the provided user

      • format: a pre-populated list

      • example: admin

    When leveraging OpenStack Barbican for protecting encrypted volumes and offering encrypted backups, it's essential that the Trustee Role is assigned as 'Creator' or a role that possesses equivalent permissions to the Creator role.

    This is crucial because only the Creator role has the authority to create, read, and delete secrets within Barbican. The generation of encryption-enabled workloads would be unsuccessful if the Trustee Role does not possess the permissions associated with the 'Creator' role.

    Backup Storage Configuration information

    These fields request information about the backup target that the Trilio installation will use to store your backups.

    • OpenStack Distribution

      • Select the Distribution of OpenStack for Trilio integration

      • format: predefined list

      • example: RHOSP

    Some distributions of OpenStack require a special mount point to be used, so make the OpenStack Distribution selection carefully.

    • Backup Storage

      • Defines the Backup Storage protocol to use

      • format: predefined list of radio buttons

      • example: NFS

    Using the NFS protocol

    • NFS Export

      • The path under which the NFS Volumes to be used can be found

      • format: comma-separated list of NFS Volumes paths

      • example: 10.10.2.20:/upstream,10.10.5.100:/nfs2

On Cohesity NFS, if Input/Output errors are observed, try increasing the timeo and retrans parameter values in the NFS options

Please use the predefined NFS options and only change them when it is known that changes are necessary.

    Trilio is testing against the predefined NFS options.

    Using the S3 protocol

    • S3 Compatible

      • Switch between Amazon and other S3 compatible storage solutions

      • format: predefined list

      • example: Amazon S3

    Using a Secured HTTPS Endpoint for Non-AWS S3 Storage

When using a secure HTTPS endpoint for non-AWS S3 storage (for example Ceph), you should validate the Certificate Authority (CA) by uploading the corresponding CA certificate. The certificate can be uploaded in the "OS API Certificate" section, under the "Upload Client Certificate" subsection.

    Workload Import

    Check this box in case of reinitialization or reinstallation of the Trilio Appliance to import all matching Workloads located on the provided Backup Target.

Workloads that are not assigned to an existing tenant will fail to import and need to be imported manually once the configuration is done.

    Advanced settings

    At the end of the configurator is the option to activate advanced settings.

    Activating this option provides the ability to configure the Keystone endpoints used for the Datamover API and Trilio.

    Setup Trilio and Datamover API endpoints.

Trilio generates Keystone endpoints for 2 services: the Trilio Datamover API and the Trilio Workloadmanager.

    OpenStack installations typically distribute endpoint types across various networks.

    The advanced settings for both the Datamover API endpoints and TrilioWorkloadManager endpoints enable Trilio configuration options which allow the user to accommodate for such an environment.

    IP addresses supplied in these fields are added as additional VIPs to the Trilio cluster.

    Should a Fully Qualified Domain Name (FQDN) be used for those endpoints, the Trilio configurator will resolve the FQDN, subsequently identifying the associated IP addresses, which are then added as additional Virtual IP addresses (VIPs).

    It is recommended to verify the Datamover API settings against the ones configured during the installation of the Trilio components.

    Should these endpoints already exist in Keystone, their values will be prefilled and immutable. If changes are necessary, you must first remove the old Keystone endpoints.

    Providing a URL with https activates the TLS enabled configuration, which requires the upload of certificates and the connected private key.

    Set up an external database

    Trilio allows the use of an external MySQL or MariaDB database.

    This database needs to be prepared by creating the empty workloadmgr database, creating the workloadmgr user and setting the right permissions.

    An example command to create this database would be:
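    A minimal sketch, assuming the database name workloadmgr_auto, the database user trilio, and placeholder values for the password and the Trilio appliance IP:

    create database workloadmgr_auto;
    CREATE USER 'trilio'@'localhost' IDENTIFIED BY '<password>';
    GRANT ALL PRIVILEGES ON workloadmgr_auto.* TO 'trilio'@'<trilio-appliance-ip>' IDENTIFIED BY '<password>';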

    Provide the connection string to the Trilio configurator.
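    A sketch of the expected connection string format, using placeholders for the password and the database host:

    mysql://trilio:<password>@<database-host>/workloadmgr_auto?charset=utf8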

    The database value can only be set upon an initial configuration of the Trilio solution.

    When the Cluster has been configured to use the internal database, then the connection string will not be shown in the next configuration attempt.

    In the case of an external database, the connection string will be shown but is immutable.

    Define the Trilio service user password

    Trilio is using a service user that is located in the OpenStack service project.

    The password for this service user will be generated randomly or can be defined in the advanced settings.

    Starting the configurator

    Once all entries have been set and all validations are error-free the configurator can be started.

    • Click Finish

    • Reconfirm in the pop-up that you want to start the configuration

    • Wait for the configurator to finish

    Some elements of the configuration take longer than others. Even when it looks like the configurator is stuck, please wait till the configurator finishes. Should the configurator not be finished after 6 hours have elapsed, please contact Trilio Support for assistance.

    The configurator utilizes Ansible and Trilio internal API calls during the configuration process.

    Following each configuration block or upon completion of the entire configurator process, you have the opportunity to examine the output generated by Ansible.

    At the end of a successful configuration, the page will be forwarded to the configured VIP for the Trilio Appliance.

    Restart Trilio Services

    In complex environments it is sometimes necessary to restart a single service or the complete solution. Restarting the complete node where a service is running is rarely possible or the ideal solution.

    This page describes the services run by Trilio and how to restart them.

    Trilio Appliance Services

    The Trilio Appliance is the controller of Trilio. Most services on the Appliance are running in a High Availability mode on a 3-node cluster.

    wlm-api

    The wlm-api service takes the API calls against the Trilio Appliance. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-api service run on each Trilio node:
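    For example:

    systemctl restart wlm-api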

    wlm-scheduler

    The wlm-scheduler service is taking job requests and identifies which Trilio node should take the request. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-scheduler service run on each Trilio node:
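    For example:

    systemctl restart wlm-scheduler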

    wlm-workloads

    The wlm-workloads service is the task worker of Trilio executing all jobs given to the Trilio node. It is running in active-active mode on all nodes of the Trilio cluster.

    To restart the wlm-workloads service run on each Trilio node:
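    For example:

    systemctl restart wlm-workloads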

    wlm-cron

    The wlm-cron service is responsible for starting scheduled Backups according to the configuration of the Tenant Workloads. It is running in active-passive mode and is controlled by the pacemaker cluster.

    To restart the wlm-cron service run on the Trilio node with the VIP assigned:
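    For example:

    pcs resource restart wlm-cron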

    VIP resources

    The Trilio appliance is running 1 to 4 virtual IPs on the Trilio cluster. These are controlled by the pacemaker cluster and provided through NGINX.

    To restart these resources, the pacemaker NGINX resource gets restarted:
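    For example:

    pcs resource restart lb_nginx-clone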

    RabbitMQ

    The Trilio cluster is using RabbitMQ as messaging service. It is running in active-active mode on all nodes of the Trilio cluster.

    RabbitMQ is a complex system in itself. This guide will only provide the basic commands to do a restart of a node and check the health of the cluster afterward. For complete documentation of how to restart RabbitMQ, please follow the official RabbitMQ documentation.

    To restart a RabbitMQ node run on each Trilio node:
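    For example, stop the node, start it again in detached mode, and then verify the cluster status:

    rabbitmqctl stop
    rabbitmq-server -detached
    rabbitmqctl cluster_status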

    It is recommended to wait for the node to rejoin and sync with the cluster before restarting another RabbitMQ node.

    When the complete cluster is getting stopped and restarted it is important to keep the order of nodes in mind. The last node to be stopped needs to be the first node to be started.

    Galera Cluster (MariaDB)

    The Galera Cluster is managing the Trilio MariaDB database. It is running in active-active mode on all nodes of the Trilio cluster.

    Galera Cluster is a complex system in itself. This guide will only provide the basic commands to do a restart of a node and check the health of the cluster afterward. For complete documentation of how to restart Galera clusters, please follow the official Galera documentation.

    When restarting Galera two different use-cases need to be considered:

    • Restarting a single node

    • Restarting the whole cluster

    Restarting a single node

    A single node can be restarted without any issues. It will automatically rejoin the cluster and sync against the remaining nodes.

    The following commands will gracefully stop and restart the mysqld service.
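    For example:

    systemctl stop mysqld
    systemctl start mysqld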

    After a restart the cluster will start the syncing process. Do not restart node after node in order to achieve a complete cluster restart.

    Check the cluster health after the restart.

    Restarting the complete cluster

    Restarting a complete cluster requires some additional steps as the Galera cluster is basically destroyed once all nodes have been shut down. It needs to be rebuilt afterwards.

    First gracefully shutdown the Galera cluster on all nodes:

    The second step is to identify the Galera node with the latest dataset. This can be achieved by reading the grastate.dat file on the Trilio nodes.

    When this documentation is followed the last mysqld service that got shut down will be the one with the latest dataset.

    The value to check for is the seqno.

    The node with the highest seqno is the node that contains the latest data. This node will also contain safe_to_bootstrap: 1 to indicate that the Galera cluster can be rebuilt from this node.
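    For example:

    cat /var/lib/mysql/grastate.dat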

    On the identified node the new cluster is getting generated with the following command:
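    For example:

    galera_new_cluster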

    Running galera_new_cluster on the wrong node will lead to data loss as this command will set the node the command is issued on as the first node of the cluster. All nodes which join afterward will sync against the data of this first node.

    After the command has been issued, the mysqld service is running on this node. Now the other nodes can be restarted one by one. The started nodes will automatically rejoin the cluster and sync against the master node. Once a synced status has been reached, each node is a primary node in the cluster.

    Check the Cluster health after all services are up again.

    Verify Health of the Galera Cluster

    Verify the cluster health by running the following commands against the MariaDB instance on each Trilio node. The values returned by these statements have to be the same on every node.
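    For example, from the MariaDB prompt:

    show status like 'wsrep_incoming_addresses';
    show status like 'wsrep_cluster_size';
    show status like 'wsrep_cluster_state_uuid';
    show status like 'wsrep_local_state_comment';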

    Canonical workloadmgr container services

    Canonical Openstack is not using the Trilio Appliance. In Canonical environments the Trilio controller is part of the JuJu deployment as the workloadmgr container.

    To restart the services inside this container the following commands are to be issued.

    Single Node deployment

    HA deployment

    On all nodes:

    On a single node:
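    A sketch of the corresponding commands; the unit name and number depend on the deployment:

    # Single Node deployment
    juju ssh <workloadmgr unit name>/<unit-number>
    systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron

    # HA deployment, on all nodes
    juju ssh <workloadmgr unit name>/<unit-number>
    systemctl restart wlm-api wlm-scheduler wlm-workloads

    # HA deployment, on a single node
    juju ssh <workloadmgr unit name>/<unit-number>
    crm_resource --restart -r res_trilio_wlm_wlm_cron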

    Trilio dmapi service

    The Trilio dmapi service is running on the Openstack controller nodes. Depending on the Openstack Distribution Trilio is installed on, different commands are used to restart the dmapi service.

    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
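    For example, assuming the default container name trilio_dmapi:

    docker restart trilio_dmapi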

    RHOSP16

    RHOSP16 is running the Trilio services as podman containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
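    For example, assuming the default container name trilio_dmapi:

    podman restart trilio_dmapi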

    Canonical

    Canonical is running the Trilio services in JuJu controlled LXD containers. The dmapi service can be restarted by issuing the following command from the MAAS node.
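    For example; the unit name and number depend on the deployment:

    juju ssh <trilio-dm-api unit name>/<unit-number>
    sudo systemctl restart tvault-datamover-api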

    Kolla-Ansible Openstack

    Kolla-Ansible Openstack is running the Trilio services as docker containers. The dmapi service can be restarted by issuing the following command on the host running the dmapi service.
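    For example, assuming the default container name triliovault_datamover_api:

    docker restart triliovault_datamover_api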

    Ansible Openstack

    Ansible Openstack is running the Trilio services as LXC containers. The dmapi service can be restarted by issuing the following commands on the host running the dmapi service.
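    For example; the container name depends on the deployment:

    lxc-stop -n <dmapi container name>
    lxc-start -n <dmapi container name>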

    Trilio datamover service (tvault-contego)

    The Trilio datamover service is running on the Openstack compute nodes. Depending on the Openstack Distribution Trilio is installed on, different commands are used to restart the datamover service.

    RHOSP13

    RHOSP13 is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.
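    For example, assuming the default container name trilio_datamover:

    docker restart trilio_datamover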

    RHOSP16

    RHOSP16 is running the Trilio services as podman containers. The datamover service can be restarted by issuing the following command on the compute node.
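    For example, assuming the default container name trilio_datamover:

    podman restart trilio_datamover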

    Canonical

    Canonical is running the Trilio services in JuJu controlled LXD containers. The datamover service can be restarted by issuing the following command from the MAAS node.
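    For example; the unit name and number depend on the deployment:

    juju ssh <trilio-data-mover unit name>/<unit-number>
    sudo systemctl restart tvault-contego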

    Kolla-Ansible Openstack

    Kolla-Ansible Openstack is running the Trilio services as docker containers. The datamover service can be restarted by issuing the following command on the compute node.
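    For example, assuming the default container name triliovault_datamover:

    docker restart triliovault_datamover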

    Ansible Openstack

    Ansible Openstack is running the Trilio datamover service directly on the compute node. The datamover service can be restarted by issuing the following command on the compute node.
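    For example:

    service tvault-contego restart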

    Workload Quotas

    Trilio enables Openstack administrators to set Project Quotas against the usage of Trilio.

    The following Quotas can be set:

    • Number of Workloads a Project is allowed to have

    • Number of Snapshots a Project is allowed to have

    • Number of VMs a Project is allowed to protect

    • Amount of Storage a Project is allowed to use on the Backup Target

    Work with Workload Quotas via Horizon

    The Trilio Quota feature is available for all supported Openstack versions and distributions, but only Train and higher releases include the Horizon integration of the Quota feature.

    Workload Quotas are managed like any other Project Quotas.

    1. Login into Horizon as user with admin role

    2. Navigate to Identity

    3. Navigate to Projects

    4. Identify the Project to modify or show the quotas on

    5. Use the small arrow next to "Manage Members" to open the submenu

    6. Choose "Modify Quotas"

    7. Navigate to "Workload Manager"

    8. Edit Quotas as desired

    9. Click "Save"

    Work with Workload Quotas via CLI

    List available Quota Types

    Trilio is providing several different Quotas. The following command allows listing those.
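    For example:

    workloadmgr project-quota-type-list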

    Trilio 4.1 does not yet have the Quota Type Volume integrated. Using it will not generate any Quotas a Tenant has to adhere to.

    Show Quota Type Details

    The following command will show the details of a provided Quota Type.
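    For example:

    workloadmgr project-quota-type-show <quota_type_id>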

    • <quota_type_id> ➡️ID of the Quota Type to show

    Create a Quota

    The following command will create a Quota for a given project and set the provided value.
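    For example:

    workloadmgr project-allowed-quota-create --quota-type-id <quota_type_id>
                                             --allowed-value <allowed_value>
                                             --high-watermark <high_watermark>
                                             --project-id <project_id>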

    • <quota_type_id> ➡️ID of the Quota Type to be created

    • <allowed_value>➡️ Value to set for this Quota Type

      • <high_watermark>➡️ Value to set for High Watermark warnings

      • <project_id>➡️ Project to assign the quota to

    The high watermark is automatically set to 80% of the allowed value when set via Horizon.

    A created Quota will generate an allowed_quota_object with its own ID. This ID is needed when continuing to work with the created Quota.

    List allowed Quotas

    The following command lists all Trilio Quotas set for a given project.
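    For example:

    workloadmgr project-allowed-quota-list <project_id>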

    • <project_id>➡️ Project to list the Quotas from

    Show allowed Quota

    The following command shows the details about a provided allowed Quota.
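    For example:

    workloadmgr project-allowed-quota-show <allowed_quota_id>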

    • <allowed_quota_id> ➡️ID of the allowed Quota to show.

    Update allowed Quota

    The following command shows how to update the value of an already existing allowed Quota.
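    For example:

    workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
                                             [--high-watermark <high_watermark>]
                                             [--project-id <project_id>]
                                             <allowed_quota_id>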

    • <allowed_value>➡️ Value to set for this Quota Type

    • <high_watermark>➡️ Value to set for High Watermark warnings

    • <project_id>➡️ Project to assign the quota to

    • <allowed_quota_id> ➡️ ID of the allowed Quota to update

    Delete allowed Quota

    The following command will delete an allowed Quota and sets the value of the connected Quota Type back to unlimited for the affected project.
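    For example:

    workloadmgr project-allowed-quota-delete <allowed_quota_id>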

    • <allowed_quota_id> ➡️ID of the allowed Quota to delete


    Trilio 4.2 Release Notes

    Trilio 4.2 introduces new features and capabilities:

    • Backup and Recovery of encrypted Cinder volumes (Barbican support)

    • Encryption of Workloads (Barbican support)

    • Support for multi-IP NFS backup targets

    • Database clean up utility

    • Backup rebase utility

    Spinning up the Trilio VM

    Learn about spinning up the Trilio VM

    For Canonical Openstack it is not necessary to spin up the Trilio VM. However, Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as lxd container(s).

    The Trilio Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor.

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 13:24:28 GMT
    Content-Type: application/json
    Content-Length: 819
    Connection: keep-alive
    X-Compute-Request-Id: req-d57bea9a-9968-4357-8743-e0b906466063
    
    {
       "file_search":{
          "created_at":"2020-11-09T13:23:25.000000",
          "updated_at":"2020-11-09T13:23:48.000000",
          "id":14,
          "deleted_at":null,
          "status":"completed",
          "error_msg":null,
          "filepath":"/etc/h*",
          "json_resp":"[
                          {
                             "ed4f29e8-7544-4e1c-af8a-a76031211926":[
                                {
                                   "/dev/vda1":[
                                      "/etc/hostname",
                                      "/etc/hosts"
                                   ],
                                   "/etc/hostname":{
                                      "dev":"2049",
                                      "ino":"32",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"1",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455255",
                                      "mtime":"1603455255",
                                      "ctime":"1603455255"
                                   },
                                   "/etc/hosts":{
                                      "dev":"2049",
                                      "ino":"127",
                                      "mode":"33204",
                                      "nlink":"1",
                                      "uid":"0",
                                      "gid":"0",
                                      "rdev":"0",
                                      "size":"37",
                                      "blksize":"1024",
                                      "blocks":"2",
                                      "atime":"1603455257",
                                      "mtime":"1431011050",
                                      "ctime":"1431017172"
                                   }
                                }
                             ]
                          }
                      ]",
          "vm_id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b"
       }
    }
    {
       "file_search":{
          "start":<Integer>,
          "end":<Integer>,
          "filepath":"<Reg-Ex String>",
          "date_from":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "date_to":<Date Format: YYYY-MM-DDTHH:MM:SS>,
          "snapshot_ids":[
             "<Snapshot-ID>"
          ],
          "vm_id":"<VM-ID>"
       }
    }

    Name | Type | Version
    --- | --- | ---
     | python | 4.2.64
    puppet-triliovault | rpm | 4.2.64-4.2
    python3-contegoclient | deb | 4.2.64
    python3-contegoclient-el8 | rpm | 4.2.64-4.2
    python3-trilio-fusepy | rpm | 3.0.1-1
    trilio-fusepy | rpm | 3.0.1-1
    python3-workloadmgrclient | deb | 4.2.64.1
    python3-workloadmgrclient-el8 | rpm | 4.2.64.1-4.2
    python-workloadmgrclient | deb | 4.2.64.1
    workloadmgrclient | python | 4.2.64.1
    workloadmgrclient | rpm | 4.2.64.1-4.2
    dmapi | python | 4.2.64.1
    dmapi | rpm | 4.2.64.1-4.2
    dmapi | deb | 4.2.64.1
    python3-dmapi | deb | 4.2.64.1
    python3-dmapi | rpm | 4.2.64.1-4.2
    python3-s3-fuse-plugin | deb | 4.2.64.1
    python3-tvault-horizon-plugin | deb | 4.2.64.3
    s3-fuse-plugin | deb | 4.2.64.1
    tvault-horizon-plugin | deb | 4.2.64.3
    s3fuse | python | 4.2.64.2
    tvault-horizon-plugin | python | 4.2.64.2
    python-s3fuse-plugin-cent7 | rpm | 4.2.64.1-4.2
    python3-s3fuse-plugin | rpm | 4.2.64.1-4.2
    python3-tvault-horizon-plugin-el8 | rpm | 4.2.64.3-4.2
    tvault-horizon-plugin | rpm | 4.2.64.3-4.2
     | python | 4.2.64.6
    tvault_configurator | python | 4.2.64.15
    workloadmgr | python | 4.2.64.11
    workloadmgr | deb | 4.2.64.11
    python3-tvault-contego | deb | 4.2.64.13

    Name | Tag
    --- | ---
    Kolla Ansible Wallaby containers | 4.2.6-wallaby
    Kolla Yoga Containers | 4.2.6-yoga
    TripleO Containers | 4.2.6-tripleo


    User-Agent | string | python-workloadmgrclient


    NTP Servers

    • NTP servers the Trilio Cluster will use

    • format: comma-separated list

    • example: 0.pool.ntp.org,10.10.10.10

  • Timezone

    • Timezone the Trilio Cluster will use internally

    • format: pre-populated list

    • example: UTC

  • Endpoint Type

    • Defines which endpoint type will be used to communicate with the Openstack endpoints

    • format: predefined list of radio buttons

    • example: Public

  • Administrator

    • Username of an account with the domain admin role

    • format: String

    • example: admin

  • Password

    • password for the prior provided user

    • format: String

    • example: password

  • Region

    • Openstack Region the user and tenant are located in

    • format: a pre-populated list

    • example: RegionOne

  • Trustee Role

    • The Openstack role required to be able to use Trilio functionalities

    • format: a pre-populated list

    • example: _member_

  • NFS Options

    • NFS options used by the Trilio Cluster when mounting the NFS Exports

    • format: NFS options

    • example: nolock,soft,timeo=180,intr,lookupcache=none

    • NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

  • (S3 compatible) Endpoint URL

    • URL to be used to reach and access the provided S3 compatible storage

    • format: URL

    • example: objects.trilio.io

  • Access Key

    • Access Key necessary to login into the S3 storage

    • format: access key

    • example: SFHSAFHPFFSVVBSVBSZRF

  • Secret Key

    • Secret Key necessary to login into the S3 storage

    • format: secret key

    • example: bfAEURFGHsnvd3435BdfeF

  • Region

    • Configured Region for the S3 Bucket (keep the default for S3 compatible without Region)

    • format: String

    • example: us-east-1

  • Signature Version

    • S3 signature version to use for signing into the S3 storage

    • format: string

    • example: default

  • Bucket Name

    • Name of the bucket to be used as Backup target

    • format: string

    • example: Trilio-backup


    This guide shows the tested way to spin up the Trilio Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards.

    Creating the cloud-init image

    The Trilio appliance is utilizing cloud-init to provide the initial network and user configuration.

    Cloud-init is reading its information either from a metadata server or from a provided cd image. Trilio is utilizing the cd image.

    Needed tools

    To create the cloud-init image it is required to have genisoimage available.
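    For example:

    #For RHEL and CentOS
    yum install genisoimage
    #For Ubuntu
    apt-get install genisoimage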

    Providing the Metadata

    Cloud-init is using two files for its metadata.

    The first file is called meta-data and contains the information about the network configuration. Below is an example of this file.

    Keep the hostname localhost. The hostname gets changed through the configuration step. Changing the hostname will lead to the tvault-config service not properly starting, blocking further configuration.

    The instance-id has to match the VM name in virsh

    The second file is called user-data and contains little scripts and information to set up for example the user passwords. Below is an example of this file.

    creating the image file

    Both files meta-data and user-data are needed. Even when one of them is empty, it is needed to create a working cloud-init image.

    The image is getting created using genisoimage following this general command:

    genisoimage -output <name>.iso -volid cidata -joliet -rock </path/user-data> </path/meta-data>

    An example of this command is shown below.
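    For example:

    genisoimage -output tvault-firstboot-config.iso -volid cidata -joliet -rock user-data meta-data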

    Spinning up the Trilio appliance

    The Trilio Appliance qcow2 image can be downloaded from the Trilio customer portal. Please contact your Trilio sales or technical lead to get access to the portal.

    After the cloud-init image has been created the Trilio appliance can be spun up on the desired KVM server.

    Extract the Trilio QCOW2 tar file using the following command :

    See below an example command, how to spin up the Trilio appliance using virsh and the created iso image.

    It is of course possible to spin up the Trilio appliance without a cloud-init iso-image. It will spin up with default values.

    Uninstalling cloud-init after first boot

    Once the Trilio appliance is up and running with its initial configuration, it is recommended to uninstall cloud-init.

    If cloud-init is not uninstalled, it will rerun the network configuration upon every boot, setting the network configuration back to DHCP if no metadata is provided.

    To uninstall cloud-init, follow the example below.
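    For example, on the CentOS-based appliance:

    sudo yum remove cloud-init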

    [RHOSP, TripleO and Kolla only] Verify nova UID/GID for nova user on the Appliance

    Red Hat OpenStack, TripleO, and Kolla Ansible Openstack are using the nova UID/GID of 42436 inside their containers instead of 162:162 which is the standard in other Openstack environments.

    Please verify that the nova UID/GID on the Trilio Appliance is still 42436.
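    For example:

    id nova
    ## expected output: uid=42436(nova) gid=42436(nova) groups=42436(nova),...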

    In case of the UID/GID being changed back to 162:162 follow these steps to set it back to 42436:42436.

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that the nova user and group id have changed to '42436'

    Updating the appliance to the latest minor version

    It is recommended to directly update the Trilio appliance to the latest version.

    To do so follow the minor update guide provided here:

    • Online update Trilio Appliance

    • Offline update Trilio Appliance

    cd /home/stack
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of the above command looks like the following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates the following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    'OS::TripleO::Services::TrilioHorizon'
    Trilio Datamove container:        docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.1-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>
    
    Options OS_platform: [centos7]
    Options container_tool_available_on_undercloud: [docker, podman]
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
    Here, 'undercloud.ctlplane.ooo.prod1' is undercloud registry hostname. Use it in our command like following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep '<HOTFIX-TAG-VERSION>-tripleo' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo                  |
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    ##=> If trilio datamover api containers does not start well or in restarting state, use following logs to debug.
    docker logs trilio_dmapi
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
    ##=> If trilio datamover containers does not start well or in restarting state, use following logs to debug.
    docker logs trilio_datamover
    tailf /var/log/containers/trilio-datamover/tvault-contego.log
    RHOSP/TripleO: /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
    Kolla Ansible with CentOS: /etc/pki/tls/certs/ca-bundle.crt
    Kolla Ansible with Ubuntu:  /usr/local/share/ca-certificates/
    OpenStack Ansible (OSA) with Ubuntu in our lab: /etc/openstack_deploy/ssl/
    OpenStack Ansible (OSA) with CentOS: /etc/openstack_deploy/ssl
    /etc/workloadmgr/ca-chain.pem
    create database workloadmgr_auto;
    CREATE USER 'trilio'@'localhost' IDENTIFIED BY 'password';
    GRANT ALL PRIVILEGES ON workloadmgr_auto.* TO 'trilio'@'10.10.10.67' IDENTIFIED BY 'password';
    mysql://trilio:<password>@<database-host>/workloadmgr_auto?charset=utf8
    systemctl restart wlm-api
    systemctl restart wlm-scheduler
    systemctl restart wlm-workloads
    pcs resource restart wlm-cron
    pcs resource restart lb_nginx-clone
    [root@TVM1 ~]# rabbitmqctl stop
    Stopping and halting node rabbit@TVM1 ...
    [root@TVM1 ~]# rabbitmq-server -detached
    Warning: PID file not written; -detached was passed.
    [root@TVM1 ~]# rabbitmqctl cluster_status
    Cluster status of node rabbit@TVM1 ...
    [{nodes,[{disc,[rabbit@TVM1,rabbit@TVM2,rabbit@TVM3]}]},
     {running_nodes,[rabbit@TVM2,rabbit@TVM3,rabbit@TVM1]},
     {cluster_name,<<"rabbit@TVM1">>},
     {partitions,[{rabbit@TVM2,[rabbit@TVM1,rabbit@TVM3]},
                  {rabbit@TVM3,[rabbit@TVM1,rabbit@TVM2]}]},
     {alarms,[{rabbit@TVM2,[]},{rabbit@TVM3,[]},{rabbit@TVM1,[]}]}]
    systemctl stop mysqld
    systemctl start mysqld
    systemctl stop mysqld
    cat /var/lib/mysql/grastate.dat
    
    # GALERA saved state
    version: 2.1
    uuid:    353e129f-11f2-11eb-b3f7-76f39b7b455d
    seqno:   213576545367
    safe_to_bootstrap: 1
    galera_new_cluster
    systemctl start mysqld
    MariaDB [(none)]> show status like 'wsrep_incoming_addresses';
    +--------------------------+-------------------------------------------------+
    | Variable_name            | Value                                           |
    +--------------------------+-------------------------------------------------+
    | wsrep_incoming_addresses | 10.10.2.13:3306,10.10.2.14:3306,10.10.2.12:3306 |
    +--------------------------+-------------------------------------------------+
    1 row in set (0.01 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_size';
    +--------------------+-------+
    | Variable_name      | Value |
    +--------------------+-------+
    | wsrep_cluster_size | 3     |
    +--------------------+-------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_cluster_state_uuid';
    +--------------------------+--------------------------------------+
    | Variable_name            | Value                                |
    +--------------------------+--------------------------------------+
    | wsrep_cluster_state_uuid | 353e129f-11f2-11eb-b3f7-76f39b7b455d |
    +--------------------------+--------------------------------------+
    1 row in set (0.00 sec)
    
    MariaDB [(none)]> show status like 'wsrep_local_state_comment';
    +---------------------------+--------+
    | Variable_name             | Value  |
    +---------------------------+--------+
    | wsrep_local_state_comment | Synced |
    +---------------------------+--------+
    1 row in set (0.01 sec)
    juju ssh <workloadmgr unit name>/<unit-number>
    systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron
    juju ssh <workloadmgr unit name>/<unit-number>
    systemctl restart wlm-api wlm-scheduler wlm-workloads
    juju ssh <workloadmgr unit name>/<unit-number>
    crm_resource --restart -r res_trilio_wlm_wlm_cron
    docker restart trilio_dmapi
    podman restart trilio_dmapi
    juju ssh <trilio-dm-api unit name>/<unit-number>
    sudo systemctl restart tvault-datamover-api
    docker restart triliovault_datamover_api
    lxc-stop -n <dmapi container name>
    lxc-start -n <dmapi container name>
    docker restart trilio_datamover
    podman restart trilio_datamover
    juju ssh <trilio-data-mover unit name>/<unit-number>
    sudo systemctl restart tvault-contego
    docker restart triliovault_datamover
    service tvault-contego restart
    workloadmgr project-quota-type-list
    workloadmgr project-quota-type-show <quota_type_id>
    workloadmgr project-allowed-quota-create --quota-type-id quota_type_id
                                             --allowed-value allowed_value 
                                             --high-watermark high_watermark 
                                             --project-id project_id
    workloadmgr project-allowed-quota-list <project_id>
    workloadmgr project-allowed-quota-show <allowed_quota_id>
    workloadmgr project-allowed-quota-update [--allowed-value <allowed_value>]
                                             [--high-watermark <high_watermark>]
                                             [--project-id <project_id>]
                                             <allowed_quota_id>
    workloadmgr project-allowed-quota-delete <allowed_quota_id>
    #For RHEL and centos
    yum install genisoimage
    #For Ubuntu 
    apt-get install genisoimage
    [root@kvm]# cat meta-data
    instance-id: triliovault
    network-interfaces: |
       auto eth0
       iface eth0 inet static
       address 158.69.170.20
       netmask 255.255.255.0
       gateway 158.69.170.30
    
       dns-nameservers 11.11.0.51
    local-hostname: localhost
    [root@kvm]# cat user-data
    #cloud-config
    chpasswd:
      list: |
        root:password1
        stack:password2
      expire: False
    genisoimage  -output tvault-firstboot-config.iso -volid cidata -joliet -rock user-data meta-data
    tar Jxvf TrilioVault_file.tar.xz
    virt-install -n triliovault-vm  --memory 24576 --vcpus 8 \
    --os-type linux \ 
    --disk tvault-appliance-os-3.0.154.qcow2,device=disk,bus=virtio,size=40 \
    --network bridge=virbr0,model=virtio \
    --network bridge=virbr1,model=virtio \
    --graphics none \
    --import \
    --disk path=tvault-firstboot-config.iso,device=cdrom
    sudo yum remove cloud-init
    [root@TVM1 ~]# id nova
    uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)


    Trilio 4.2.64

    Release Versions

    Packages

    Name | Type | Version
    --- | --- | ---
    s3fuse | python package | 4.2.64
    tvault-configurator | python package | 4.2.64
    workloadmgr | python package | 4.2.64

    Containers and Gitbranch

    Name | Tag
    --- | ---
    Gitbranch | stable/4.2
    RHOSP13 containers | 4.2.64-rhosp13
    RHOSP16.1 containers | 4.2.64-rhosp16.1
    RHOSP16.2 containers | 4.2.64-rhosp16.2
    Kolla Ansible Victoria containers | 4.2.64-victoria

    Backup and Recovery of encrypted Cinder Volumes (Barbican support)

    This functionality is not yet available for Canonical OpenStack. An update will be provided once it is available for Canonical OpenStack too.

    This functionality is not available for RHOSP13 or TripleO Train on CentOS7. The reason is a dependency package, which is not available in RHEL7 or CentOS7.

    The OpenStack Barbican service enables the OpenStack Cinder service to provide encrypted Volumes. These volumes are software encrypted by the Cinder service with the secret used for the encryption being managed by the Barbican service.

    Trilio for OpenStack 4.2 is integrating into OpenStack Barbican to enable T4O to provide native backup and recovery of the encrypted Cinder volume.

    Any workload containing an encrypted Cinder volume has to create encrypted backups too. It is not possible to create unencrypted Workloads for encrypted Cinder Volumes.

    T4O 4.2 is providing encryption on the Workload level. All VMs that are part of an encrypted Workload will have their Cinder Volume data encrypted.

    This functionality is not available for encrypted Nova boot volumes. Encrypted Nova boot volumes can not be backed up. Unencrypted Nova boot volumes can be backed up and put into an encrypted Workload.

    Encryption of Workloads (Barbican support)

    Activating this feature and using encrypted Workload will lead to longer backup times. The following timings have been seen in Trilio labs:

    Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB

    1. For unencrypted WL : 62 min

    2. For encrypted WL : 82 min

    Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB

    1. For unencrypted WL : 10 min

    2. For encrypted WL : 18 min

    This functionality is not yet available for Canonical OpenStack. An update will be provided once it is available for Canonical OpenStack too.

    This functionality is not available for RHOSP13 or TripleO Train on CentOS7. The reason is a dependency package, which is not available in RHEL7 or CentOS7.

    The integration of Trilio for OpenStack 4.2 into Barbican enables T4O to provide encryption for the qcow2-data component of the Trilio backups. The json files containing the backed-up OpenStack metadata stay unencrypted.

    This functionality requires the OpenStack Barbican service to be present. Without the Barbican service, the possibilities to encrypt Workloads will not be shown inside Horizon.

    T4O 4.2 is only consuming secrets from Barbican. It is not creating, editing, or deleting any secret inside Barbican.

    Barbican secrets are required to run backups or restores in encryption-enabled Workloads. It is the OpenStack project user's responsibility to provide secrets and to ensure that the correct secrets are available.

    To utilize encrypted Workloads the Trilio trustee role needs to be able to engage with the Barbican service to read and fetch secrets from Barbican. The only default roles enabled with these permissions are the admin and the creator roles.

    The possibility to encrypt a Workload is only provided during Workload creation. Once a workload has been created it is no longer possible to change, whether the Workload is encrypted or not.

    Every encrypted Workload is consuming a unique Barbican secret. It is not possible to assign the same secret to two Workloads.

    The following secret configurations are supported.

    Algorithm | mode | content types | payload-content-encoding | secret type | payload | secretfile
    --- | --- | --- | --- | --- | --- | ---
    AES-256 | ctr | text/plain | None | passphrase | plaintext | 

    A default Barbican installation will generate secrets of the following type:

    • Algorithm: AES-256

    • Mode: cbc

    • content type: application/octet-stream

    • payload-content-encoding: base64

    • secret type: opaque

    • payload: plaintext

    Barbican can be configured to use other secret vaults as a backend. Trilio has not tested secret vaults other than the default one provided by the Barbican service.

    Other secret vaults are supported under best-effort support.

    Encrypted Workloads can be migrated to different projects or clouds just like normal Workloads. The Barbican secrets used for the encryption need to be made available inside the target environment. This already applies when the Workload gets assigned to a different owner.

    Support for Multi-IP NFS backends

    This functionality is not yet available for Canonical OpenStack. An update will be provided once it is available for Canonical OpenStack too.

    Many Trilio customers are using software-defined storage solutions for a scalable backup target. These software-defined storage solutions often provide the capability to spread read and write operations over multiple nodes of the storage cluster. Each of these nodes has its own access IP to interact with. All nodes are still writing to the same logical volume, which is available through the NFS protocol.

    Trilio for Openstack 4.2 is supporting such solutions by enabling each Datamover to use a different IP for the same backup target.

    Every Datamover and the Trilio appliance are still consuming one NFS path per Volume.

    The requirement for this functionality is that all NFS paths spread across the Trilio solution are accessing the same NFS Volume. Using this functionality to provide the Trilio solution with different backup targets to different Datamovers will lead to backup and restore failures.

    This is achieved by changing the method to calculate the T4O mount point. It is now only considering the volume path instead of the complete NFS path. Example below.
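    A rough illustration, assuming the NFS export 10.10.2.20:/upstream used in earlier examples; the mount point hash is now derived from the volume path only, so it is identical for every access IP:

    # Old method: hash input is the complete NFS path (differs per IP)
    echo -n "10.10.2.20:/upstream" | base64    # MTAuMTAuMi4yMDovdXBzdHJlYW0=
    # New method: hash input is the volume path only (same for all IPs)
    echo -n "/upstream" | base64               # L3Vwc3RyZWFt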

    Database cleanup utility

    Trilio for OpenStack is following an older OpenStack database schema and model. Under this model no data inside the database ever gets truly deleted; only a flag is set to indicate that the dataset is no longer considered active.

    This has the advantage that it is always possible to trace back any objects and their existence timeline inside the database for analytical purposes.

    The big disadvantage of this model is that the database itself is ever-growing with every activity that T4O is doing.

    Over time it is possible for the database to reach sizes, that normal activities like listing Workloads are taking so long, that other tasks like taking backups are impacted.

    To counter this issue, Trilio provides a new utility which deletes all no longer required database entries, reducing the load when the T4O solution needs to access the database.

    Running this utility will completely delete all elements that are not required for active Workloads or Snapshots. It is recommended to create a database dump first when the possibility to analyze past data is required.

    Trilio will revamp its Database schema and usage in a future T4O release.

    Backup rebase utility

    Trilio for OpenStack is providing full synthetic backups utilizing the backing-file functionality of qcow2 images.

    This functionality enables Trilio backups to run incremental forever, without having to restore every incremental backup. Instead only the latest backup needs to be restored as all missing blocks are fetched through the backing file chain from older backups.

    The backing file of a particular backup is hardcoded inside the qcow2 image itself. These backing files need to consist of the full path and can't use relative paths. An example is provided below.

    T4O is using a base64 hash value inside this path for NFS backup targets. This backup target path needs to be resolvable for the qcow2 image to find its backing file.
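    A hypothetical example of such a backing file entry, assuming the common /var/triliovault-mounts/<base64-hash> mount layout and the NFS export 10.10.2.20:/upstream used in earlier examples:

    qemu-img info <incremental-backup-image> | grep 'backing file'
    # backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_<workload_id>/.../<full-backup-image>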

    The hash value of the mount path can change when the backup is moved to a different backup target or when the T4O calculation method is changed like it is for T4O 4.2.

    Trilio is now providing a utility that will change the backing file path for a given workload.

    The runtime of this utility is increasing exponentially with increasing backing file chain.

    Known issues

    [Canonical OpenStack] Selective restore fails for migrated workloads

    Observation:

    • after migrating workloads to a different Canonical OpenStack environment the restore fails with a permission denied error in the tmp directory

    Workaround:

    Run the sudo sysctl fs.protected_regular=0 command on the wlm units.

    [Intermittent Canonical OpenStack Queens] In-Place restore doesn't work with ext3/ext4 file systems

    Observation:

    • In-Place restore succeeds logically

    • Data is not getting replaced

    Workaround:

    • Running the In-Place restore a second time

    • Running a selective restore and reattaching the restored Volume

    [intermittent] [Canonical OpenStack] Retention not honored after mount/unmount operation

    Observation:

    • After a mount or unmount operation the ownership of the backup stays qemu:qemu which leads to T4O no longer being able to run required merge commands

    Workaround:

    • identify the workload with failed retention policy

    • run: chown -R nova:nova <WL_DIR>

    • verify next backup applies the retention policy

    CLI commands get-importworkload-list and get-orphaned-workloads-list show wrong list of workloads

    Observation:

    • independent of the command all workloads located on the backup target are listed

    Workaround:

    • use the project id in the command to show only workloads that can be imported for that project: workloadmgr workload-get-importworkloads-list --project_id <project_id>

    File-search not displaying files in lvm created logical volumes

    Observation:

    • File search returns an empty list for lvm controlled logical volumes

    • fdisk created logical volumes work as desired

    File-search not displaying files when root directory doesn't contain read permissions for group

    Observation:

    • File search returns an empty list when root doesn't have read permissions for groups

    • File search is run as user nova

    Unable to create encrypted Workload if T4O gets reconfigured with creator trustee role

    Observation:

    • T4O initially configured with trustee role member (not able to create encrypted workloads)

    • After reconfiguration to trustee role creator encrypted workloads are still not creatable

    Workaround:

    • After reconfiguration create one unencrypted workload

    • Then create encrypted workloads as required

    Post restore of encrypted incremental snapshots CentOS instance is not getting booted

    Observation:

    • Restoring of a CentOS-based instance fails to start the instance in the case of restoring from an encrypted incremental snapshot

    Workaround:

    • Use only full backups for CentOS-based instances in combination with encrypted Workloads

    Installing on Ansible Openstack

    Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.

    Change the nova user id on the Trilio Nodes

    The 'nova' user ID and Group ID on the Trilio nodes need to be set the same as on the compute node(s). Trilio is by default using the nova user ID (UID) and Group ID (GID) 162:162. Ansible OpenStack does not always use the 'nova' user id 162 on the compute node. Do the following steps on all Trilio nodes in case the nova UID & GID are not in sync with the compute node(s).

    1. Download the shell script that will change the user-id

    2. Assign executable permissions

    3. Edit script to use the correct nova id

    4. Execute the script

    Prepare deployment host

    Clone triliovault-cfg-scripts from github repository on Ansible Host.

    Available values for <branch>: hotfix-4-TVO/4.2
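    For example, using the repository URL referenced elsewhere in this guide:

    git clone -b hotfix-4-TVO/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git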

    Copy Ansible roles and vars to required places.

    In case of installing on OSA Victoria or OSA Wallaby edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria or wallaby respectively

    Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.

    Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml

    Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml

    Add the following content to the created file.

    Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.

    Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml

    Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.

    Note:

    1. From 4.2HF4 onwards the default prefilled value, i.e. 4.2.64, will be used for TVAULT_PACKAGE_VERSION.

    2. In case of more than one nova virtual environment: if the user wants to install the tvault-contego service in a specific nova virtual environment on the compute node(s), uncomment the variable nova_virtual_env and set its value.

    Configure Multi-IP NFS

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    A new parameter was added to the /etc/openstack_deploy/user_tvault_vars.yml file for Multi-IP NFS.

    Change the Directory

    Edit file 'triliovault_nfs_map_input.yml' in current directory and provide compute host and NFS share/IP map.

    Please refer to the documentation of this file to learn about its format.

    Update pyyaml on the Openstack Ansible server node only

    Execute generate_nfs_map.py file to create one to one mapping of compute and nfs share.

    Result will be in file - 'triliovault_nfs_map_output.yml' of the current directory

    Validate the output map file: open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are mapped with all necessary nfs shares.

    Append the content of triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml

    Deploy Trilio components

    Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.

    If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:

    Verify the Trilio deployment

    Verify triliovault datamover api service deployed and started well. Run the below commands on controller node(s).

    Verify the triliovault datamover service deployed and started well on the compute node(s). Run the following command on the compute node(s).

    Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.

    Run the following command on Horizon container.

    Verify the haproxy settings on the controller node using the below commands.

    Update to the latest hotfix

    After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.

    To update the environment, follow the latest hotfix update procedure.

    Upgrading on TripleO Train [CentOS7]

    1. Generic Pre-requisites

    1. Please ensure following points before starting the upgrade process:

      1. Either 4.1 GA OR any hotfix patch against 4.1 should be already deployed for performing upgrades mentioned in the current document.

      2. No snapshot OR restore to be running.

      3. Global job scheduler should be disabled.

      4. wlm-cron should be disabled (on primary T4O node).

        1. pcs resource disable wlm-cron

        2. Check : systemctl status wlm-cron OR pcs resource show wlm-cron

        3. Additional step : To ensure that cron is actually stopped, search for any lingering processes against wlm-cron and kill them. [Cmd : ps -ef | grep -i workloadmgr-cron]

    2. [On Undercloud node] Clone triliovault repo and upload trilio puppet module

    Run all the commands with 'stack' user

    2.1 Clone the latest configuration scripts

    2.2 Backup target is “Ceph based S3” with SSL

    If the backup target is Ceph S3 with SSL and SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files'.

    2.3 Upload triliovault puppet module

    3. Prepare Trilio container images

    In this step, we are going to pull triliovault container images to the user’s registry.

    Trilio containers are pushed to 'Dockerhub'. The registry URL is 'docker.io'. The following are the triliovault container pull URLs.

    In the sections below, read <HOTFIX-TAG-VERSION> as 4.2.8.

    3.1 Trilio container URLs

    Trilio container URLs for TripleO Train CentOS7:

    There are two registry methods available in the TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    Identify which method you are using. Below, both methods to pull and configure Trilio's container images for the overcloud deployment are explained.

    3.2 Remote Registry

    If you are using the 'Remote Registry' method follow this section. You don't need to pull anything. You just need to populate the following container URLs in trilio env yaml.

    • Populate 'environments/trilio_env.yaml' file with triliovault container urls. Changes look like the following.

    3.3 Registry on Undercloud

    If you are using 'local registry' on undercloud, follow this section.

    • Run the following script. Script pulls the triliovault containers and updates the triliovault environment file with URLs.

    The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in 'environments/trilio_env.yaml'. Verify the changes using the following command.

    4. Configure multi-IP NFS

    This section is only required when the multi-IP feature for NFS is used.

    This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

    On Undercloud node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    Get the compute hostnames from the following command. Check ‘Name' column. Use exact hostnames in 'triliovault_nfs_map_input.yml' file.

    Run this command on undercloud by sourcing 'stackrc'.

    Edit the input map file and fill in all the details. Refer to this page for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

    Open the file 'triliovault_nfs_map_output.yml'

    vi triliovault_nfs_map_output.yml

    The file is available in the current directory; validate that all compute nodes are covered with all the necessary NFS shares.

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in trilio_env.yaml file and that nfs is used as backup target.

    5. Fill in triliovault environment details

    • Refer to the old 'trilio_env.yaml' file (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) and update the new trilio_env.yaml file accordingly.

    • Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The triliovault environment file is self-explanatory: fill in the backup target details, verify the image URLs, and check the other details.

    For Cohesity NFS, use the following NFS options: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    6. roles_data.yaml file changes

    A new separate triliovault service is introduced in the Trilio 4.2 release for the Trilio Horizon plugin. Users need to add the following service to the roles_data.yaml file; this service needs to be co-located with the OpenStack Horizon service.

    OS::TripleO::Services::TrilioHorizon
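    As an illustration only (the roles_data.yaml path is an assumption taken from the deploy command later in this section), locate the role that already runs the OpenStack Horizon service and add the Trilio service to that role's service list:

    grep -n 'OS::TripleO::Services::Horizon$' /home/stack/templates/roles_data.yaml
    # In the ServicesDefault list of that same role, add the line:
    #   - OS::TripleO::Services::TrilioHorizon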

    7. Install Trilio on Overcloud

    Use the following heat environment file and roles data file in overcloud deploy command

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains the overcloud roles data with the Trilio roles added. This file does not need to be changed; you can use the old roles_data.yaml file.

    3. Use the correct trilio endpoint map file as per your keystone endpoint configuration.
    - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
    - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
    - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'

    The deploy command with the triliovault environment file looks like the following.

    8. Steps to verify correct deployment

    8.1 On overcloud controller node(s)

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly and you need to revisit the above steps.

    8.2 On overcloud compute node(s)

    Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. You need to revisit the above steps.

    8.3 On OpenStack node where OpenStack Horizon Service is running

    Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin

    9. Troubleshooting if any failures

    Trilio components will be deployed using puppet scripts.

    In case the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html

    10. Known Issues/Limitations

    10.1 Overcloud deploy fails with the following error. Valid for Train CentOS7 only.

    This is not a Trilio issue. It is a TripleO issue (Bug #1915242) that is not fixed in Train CentOS7. It is fixed in higher versions of TripleO.

    The undercloud jobs puppet task certmonger_certificate[haproxy-external-cert] fails with 'Unrecognized parameter or wrong value type'.

    Workaround:

    Apply the fix directly on the setup; it is not merged in Train CentOS7. PR: https://github.com/saltedsignal/puppet-certmonger/pull/35/files. The fix is needed on the controller and compute nodes in /usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb

    11. Enable mount-bind for NFS

    Note: The below-mentioned steps are required only if the backup target is NFS.

    Please refer to this page for detailed steps to set up the mount bind.

    Snapshots

    Definition

    A Snapshot is a single Trilio backup of a workload including all data and metadata. It contains the information of all VM's that are protected by the workload.

    List of Snapshots

    Using Horizon

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to show the details on

    The List of Snapshots for the chosen Workload contains the following additional information:

    • Creation Time

    • Name of the Snapshot

    • Description of the Snapshot

    • Total amount of Restores from this Snapshot

    Using CLI

    • --workload_id <workload_id> ➡️ Filter results by workload_id

    • --tvault_node <host> ➡️ List all the snapshot operations scheduled on a tvault node(Default=None)

    • --date_from <date_from>
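    For example, an illustrative invocation that filters the snapshot list by workload (the workload ID is a placeholder):

    workloadmgr snapshot-list --workload_id 4bafaa03-f69a-45d5-a6fc-ae0119c77974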

    Creating a Snapshot

    Snapshots are automatically created by the Trilio scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.

    Using Horizon

    There are 2 possibilities to create a snapshot on demand.

    Possibility 1: From the Workloads overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that shall create a Snapshot

    Possibility 2: From the Workload Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that shall create a Snapshot

    Using CLI

    • <workload_id>➡️ID of the workload to snapshot.

    • --full➡️ Specify if a full snapshot is required.

    • --display-name <display-name>➡️
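    For example, an illustrative invocation that creates a full snapshot with a name and description (the workload ID and texts are placeholders):

    workloadmgr workload-snapshot --full --display-name "manual-full" --display-description "on-demand full snapshot" 4bafaa03-f69a-45d5-a6fc-ae0119c77974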

    Snapshot overview

    Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

    Using Horizon

    To reach the Snapshot Overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Details Tab

    The Snapshot Details Tab shows the most important information about the Snapshot.

    • Snapshot Name / Description

    • Snapshot Type

    • Time Taken

    • Size

    Restores Tab

    The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

    Please refer to the User Guide to learn more about Restores.

    Misc. Tab

    The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

    • Creation Time

    • Last Update time

    • Snapshot ID

    • Workload ID of the Workload containing the Snapshot

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be shown

    • --output <output> ➡️ Option to get additional snapshot details. Specify --output metadata for snapshot metadata, --output networks for the snapshot VMs' networks, or --output disks for the snapshot VMs' disks

    Delete Snapshots

    Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

    The retention policy will automatically delete the oldest Snapshots according to the configured policy.

    You have to delete all Snapshots to be able to delete a Workload.

    Deleting a Trilio Snapshot will not delete any Openstack Cinder Snapshots. Those need to be deleted separately if desired.

    Using Horizon

    There are 2 possibilities to delete a Snapshot.

    Possibility 1: Single Snapshot deletion through the submenu

    To delete a single Snapshot through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

    To delete one or more Snapshots through the Snapshot overview do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be deleted

    Snapshot Cancel

    Ongoing Snapshots can be canceled.

    Canceled Snapshots will be treated like errored Snapshots

    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to cancel

    Using CLI

    • <snapshot_id>➡️ID of the snapshot to be canceled

    Snapshot Mount

    Mount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/mount

    Mounts a Snapshot to the provided File Recovery Manager

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format
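    A minimal curl sketch of the mount request, assuming $token holds a valid authentication token and using placeholder IDs; the JSON body follows the format shown in the examples at the end of this page:

    curl -X POST "https://<tvm_address>:8780/v1/<tenant_id>/snapshots/<snapshot_id>/mount" \
         -H "X-Auth-Token: $token" -H "Content-Type: application/json" -H "Accept: application/json" \
         -d '{"mount": {"mount_vm_id": "15185195-cd8d-4f6f-95ca-25983a34ed92", "options": {}}}'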

    List of mounted Snapshots in Tenant

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a Tenant

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    List of mounted Snapshots in Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/snapshots/mounted/list

    Provides the list of all Snapshots mounted in a specified Workload

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Dismount Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/dismount

    Unmounts a Snapshot of the provided File Recovery Manager

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format
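    A minimal curl sketch of the dismount request, under the same assumptions as the mount example above:

    curl -X POST "https://<tvm_address>:8780/v1/<tenant_id>/snapshots/<snapshot_id>/dismount" \
         -H "X-Auth-Token: $token" -H "Content-Type: application/json" -H "Accept: application/json" \
         -d '{"mount": {"options": null}}'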

    # echo -n /Trilio_Backup | base64
    L1RyaWxpb19CYWNrdXA=
    /var/triliovault-mounts/L1RyaWxpb19CYWNrdXA=/
    qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 21M
    cluster_size: 65536
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false

    Package                            Type             Version
    workloadmgrclient                  python package   4.2.64
    dmapi                              deb package      4.2.64
    python3-dmapi                      deb package      4.2.64
    tvault-contego                     deb package      4.2.64
    python3-tvault-contego             deb package      4.2.64
    tvault-horizon-plugin              deb package      4.2.64
    python3-tvault-horizon-plugin      deb package      4.2.64
    s3-fuse-plugin                     deb package      4.2.64
    python3-s3-fuse-plugin             deb package      4.2.64
    workloadmgr                        deb package      4.2.64
    workloadmgrclient                  deb package      4.2.64
    python3-namedatomiclock            deb package      1.1.3
    dmapi                              rpm package      4.2.64-4.2
    python3-dmapi                      rpm package      4.2.64-4.2
    tvault-contego                     rpm package      4.2.64-4.2
    python3-tvault-contego             rpm package      4.2.64-4.2
    tvault-horizon-plugin              rpm package      4.2.64-4.2
    python3-tvault-horizon plugin-el8  rpm package      4.2.64-4.2
    python-s3fuse-plugin-cent7         rpm package      3.0.1-1
    python3-s3fuse-plugin              rpm package      3.0.1-1
    workloadmgrclient                  rpm package      4.2.64-4.2

    TripleO Train containers

    4.2.64-tripleo

    plaintext

    xts

    application/octet-stream

    base64

    symmetric keys

    encoded with base64

    cbc

    opaque

    Verify that 'nova' user and group id has changed to the desired value

    In case more than one horizon plugin is configured on OpenStack, the user can specify under which horizon virtual environment to install the Trilio Horizon Plugin by setting the horizon_virtual_env parameter. The default value of horizon_virtual_env is '/openstack/venvs/horizon*'.

  • NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10


    Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Total amount of succeeded Restores

  • Total amount of failed Restores

  • Snapshot Type

  • Snapshot Size

  • Snapshot Status

  • ➡️
    From date in format 'YYYY-MM-DDTHH:MM:SS', e.g. 2016-10-10T00:00:00. If no time is specified, 00:00 is used by default
  • --date_to <date_to> ➡️ To date in format 'YYYY-MM-DDTHH:MM:SS' (default is the current day). Specify HH:MM:SS to get snapshots within the same day; inclusive/exclusive results for date_from and date_to

  • --all {True,False} ➡️ List all snapshots of all the projects (valid for admin users only)

  • Click "Create Snapshot"

  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Click "Create Snapshot"

  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • Optional snapshot name. (Default=None)
  • --display-description <display-description>➡️Optional snapshot description. (Default=None)

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Which VMs are part of the Snapshot

  • For each VM in the Snapshot:

    • Instance Info - Name & Status

    • Security Group(s) - Name & Type

    • Flavor - vCPUs, Disk & RAM

    • Networks - IP, Networkname & Mac Address

    • Attached Volumes - Name, Type, size (GB), Mount Point & Restore Size

    • Misc - Original ID of the VM

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  • Click "Delete Snapshot"

  • Confirm by clicking "Delete"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshots in the Snapshot list

  • Check the checkbox for each Snapshot that shall be deleted

  • Click "Delete Snapshots"

  • Confirm by clicking "Delete"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click "Cancel" on the same line as the identified Snapshot

  • Confirm by clicking "Cancel"

  • Restores

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project the Snapshot is located in

    snapshot_id

    string

    ID of the Snapshot to mount

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant to search for mounted Snapshots

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgr

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant to search for mounted Snapshots

    workload_id

    string

    ID of the Workload to search for mounted Snapshots

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgr

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project the Snapshot is located in

    snapshot_id

    string

    ID of the Snapshot to dismount

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:29:03 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-9d779802-9c65-463a-973c-39cdffcba82e
    curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    chmod +x nova_userid.sh
    vi nova_userid.sh  # change nova user_id and group_id to uid & gid present on compute nodes. 
    ./nova_userid.sh
    id nova
    git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/
    cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
    cp ansible/main-install.yml   /opt/openstack-ansible/playbooks/os-tvault-install.yml
    cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml
    cp ansible/tvault_pre_install.yml /opt/openstack-ansible/playbooks/
    - import_playbook: os-tvault-install.yml
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cat > /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml 
    component_skel:
      dmapi_api:
        belongs_to:
          - dmapi_all
    
    container_skel:
      dmapi_container:
        belongs_to:
          - tvault-dmapi_containers
        contains:
          - dmapi_api
    
    physical_skel:
      tvault-dmapi_containers:
        belongs_to:
          - all_containers
      tvault-dmapi_hosts:
        belongs_to:
          - hosts
    #tvault-dmapi
    tvault-dmapi_hosts:   # Add controller details in this section as tvault DMAPI resides on controller nodes.
      infra-1:            # controller host name. 
        ip: 172.26.0.3    # Ip address of controller
      infra-2:            # If we have multiple controllers add controllers details in same manner as shown in Infra-2  
        ip: 172.26.0.4    
        
    #tvault-datamover
    tvault_compute_hosts: # Add compute details in this section as tvault datamover resides on compute nodes.
      infra-1:            # compute host name. 
        ip: 172.26.0.7    # Ip address of compute node
      infra-2:            # If we have multiple compute nodes add compute details in same manner as shown in Infra-2
        ip: 172.26.0.8
    ##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
    #ip address of TVM
    IP_ADDRESS: sample_tvault_ip_address
    
    ##Time Zone
    TIME_ZONE: "Etc/UTC"
    
    ## Don't update or modify the value of TVAULT_PACKAGE_VERSION
    ## The default value of is '4.2.64'
    TVAULT_PACKAGE_VERSION: 4.2.64
    
    # Update Openstack dist code name like ussuri etc.
    OPENSTACK_DIST: ussuri
    
    #Need to add the following statement in nova sudoers file
    #nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
    #These changes require for Datamover, Otherwise Datamover will not work
    #Are you sure? Please set variable to 
    #  UPDATE_NOVA_SUDOERS_FILE: proceed
    #other wise ansible tvault-contego installation will exit
    UPDATE_NOVA_SUDOERS_FILE: proceed
    
    ##### Select snapshot storage type #####
    #Details for NFS as snapshot storage , NFS_SHARES should begin with "-".
    ##True/False
    NFS: False
    NFS_SHARES: 
              - sample_nfs_server_ip1:sample_share_path
              - sample_nfs_server_ip2:sample_share_path
    
    #if NFS_OPTS is empty then default value will be "nolock,soft,timeo=180,intr,lookupcache=none"
    NFS_OPTS: ""
    
    ## Valid for 'nfs' backup target only.
    ## If backup target NFS share supports multiple endpoints/ips but in backend it's a single share then
    ## set 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'
    multi_ip_nfs_enabled: False
    
    #### Details for S3 as snapshot storage
    ##True/False
    S3: False
    VAULT_S3_ACCESS_KEY: sample_s3_access_key
    VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
    VAULT_S3_REGION_NAME: sample_s3_region_name
    VAULT_S3_BUCKET: sample_s3_bucket
    VAULT_S3_SIGNATURE_VERSION: default
    #### S3 Specific Backend Configurations
    #### Provide one of the following two values in the s3_type variable; the string's case should match
    #Amazon/Other_S3_Compatible
    s3_type: sample_s3_type
    #### Required field(s) for all S3 backends except Amazon
    VAULT_S3_ENDPOINT_URL: ""
    #True/False
    VAULT_S3_SECURE: True
    VAULT_S3_SSL_CERT: ""
    
    ###details of datamover API
    ##If SSL is enabled "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
    #DMAPI_ENABLED_SSL_APIS: dmapi
    ##If SSL is disabled "DMAPI_ENABLED_SSL_APIS" value should be empty.
    DMAPI_ENABLED_SSL_APIS: ""
    DMAPI_SSL_CERT: ""
    DMAPI_SSL_KEY: ""
    
    ## Trilio dmapi_workers count
    ## Default value of dmapi_workers is 16
    dmapi_workers: 16
    
    #### Any service is using Ceph Backend then set ceph_backend_enabled value to True
    #True/False
    ceph_backend_enabled: False
    
    ## Provide Horizon Virtual Env path from Horizon_container
    ## e.g. '/openstack/venvs/horizon-23.1.0'
    horizon_virtual_env: '/openstack/venvs/horizon*'
    
    ## When More Than One Nova Virtual Env. On Compute Node(s) and
    ## User Wants To Specify Specific Nova Virtual Env. From Existing
    ## Then Only Uncomment the var nova_virtual_env and pass value like 'openstack/venvs/nova-23.2.0'
    
    #nova_virtual_env: 'openstack/venvs/nova-23.2.0'
    
    #Set verbosity level and run playbooks with -vvv option to display custom debug messages
    verbosity_level: 3
    
    #******************************************************************************************************************************************************************
    ###static fields for tvault contego extension ,Please Do not Edit Below Variables
    #******************************************************************************************************************************************************************
    #SSL path
    DMAPI_SSL_CERT_DIR: /opt/config-certs/dmapi
    VAULT_S3_SSL_CERT_DIR: /opt/config-certs/s3
    RABBITMQ_SSL_DIR: /opt/config-certs/rabbitmq
    DMAPI_SSL_CERT_PATH: /opt/config-certs/dmapi/dmapi-ca.pem
    DMAPI_SSL_KEY_PATH: /opt/config-certs/dmapi/dmapi.key
    VAULT_S3_SSL_CERT_PATH: /opt/config-certs/s3/ca_cert.pem
    RABBITMQ_SSL_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq.pem
    RABBITMQ_SSL_KEY_PATH: /opt/config-certs/rabbitmq/rabbitmq.key
    RABBITMQ_SSL_CA_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq-ca.pem
    
    PORT_NO: 8085
    PYPI_PORT: 8081
    DMAPI_USR: dmapi
    DMAPI_GRP: dmapi
    #tvault contego file path
    TVAULT_CONTEGO_CONF: /etc/tvault-contego/tvault-contego.conf
    TVAULT_OBJECT_STORE_CONF: /etc/tvault-object-store/tvault-object-store.conf
    NOVA_CONF_FILE: /etc/nova/nova.conf
    #Nova distribution specific configuration file path
    NOVA_DIST_CONF_FILE: /usr/share/nova/nova-dist.conf
    TVAULT_CONTEGO_EXT_USER: nova
    TVAULT_CONTEGO_EXT_GROUP: nova
    TVAULT_DATA_DIR_MODE: 0775
    TVAULT_DATA_DIR_OLD: /var/triliovault
    TVAULT_DATA_DIR: /var/triliovault-mounts
    TVAULT_CONTEGO_VIRTENV: /home/tvault
    TVAULT_CONTEGO_VIRTENV_PATH: "{{TVAULT_CONTEGO_VIRTENV}}/.virtenv"
    TVAULT_CONTEGO_EXT_BIN: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/tvault-contego"
    TVAULT_CONTEGO_EXT_PYTHON: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/python"
    TVAULT_CONTEGO_EXT_OBJECT_STORE: ""
    TVAULT_CONTEGO_EXT_BACKEND_TYPE: ""
    TVAULT_CONTEGO_EXT_S3: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/lib/python2.7/site-packages/contego/nova/extension/driver/s3vaultfuse.py"
    privsep_helper_file: /home/tvault/.virtenv/bin/privsep-helper
    pip_version: 7.1.2
    virsh_version: "1.2.8"
    contego_service_file_path: /etc/systemd/system/tvault-contego.service
    contego_service_ulimits_count: 65536
    contego_service_debian_path: /etc/init/tvault-contego.conf
    objstore_service_file_path:  /etc/systemd/system/tvault-object-store.service
    objstore_service_debian_path: /etc/init/tvault-object-store.conf
    ubuntu: "Ubuntu"
    centos: "CentOS"
    redhat: "RedHat"
    Amazon: "Amazon"
    Other_S3_Compatible: "Other_S3_Compatible"
    tvault_datamover_api: tvault-datamover-api
    datamover_service_file_path: /etc/systemd/system/tvault-datamover-api.service
    datamover_service_debian_path: /etc/init/tvault-datamover.conf
    datamover_log_dir: /var/log/dmapi
    trilio_yum_repo_file_path: /etc/yum.repos.d/trilio.repo
    
    
    verbosity_level: 3
    ## Valid for 'nfs' backup target only.
    ## If backup target NFS share supports multiple endpoints/ips but in backend it's a single share then 
    ## set 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'
    multi_ip_nfs_enabled: False
    cd triliovault-cfg-scripts/common/
    vi triliovault_nfs_map_input.yml
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    vi triliovault_nfs_map_output.yml
    cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml
    cd /opt/openstack-ansible/playbooks
    
    ## Run tvault_pre_install.yml to install lxc packages
    ansible-playbook tvault_pre_install.yml
    
    # To create Dmapi container
    openstack-ansible lxc-containers-create.yml 
    
    #To Deploy Trilio Components
    openstack-ansible os-tvault-install.yml
    
    #To configure Haproxy for Dmapi
    openstack-ansible haproxy-install.yml
    openstack-ansible setup-infrastructure.yml --syntax-check
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml
    lxc-ls                                           # Check the dmapi container is present on controller node.
    lxc-info -s controller_dmapi_container-a11984bf  # Confirm running status of the container
    systemctl status tvault-contego.service
    systemctl status tvault-object-store  # If Storage backend is S3
    df -h                                 # Verify the mount point is mounted on compute node(s)
    lxc-attach -n controller_horizon_container-1d9c055c                                   # To login on horizon container
    apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'              # For ubuntu based container
    dnf list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'     # For CentOS based container
     haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.
     workloadmgr snapshot-list [--workload_id <workload_id>]
                               [--tvault_node <host>]
                               [--date_from <date_from>]
                               [--date_to <date_to>]
                               [--all {True,False}]
    workloadmgr workload-snapshot [--full] [--display-name <display-name>]
                                  [--display-description <display-description>]
                                  <workload_id>
    workloadmgr snapshot-show [--output <output>] <snapshot_id>
    workloadmgr snapshot-delete <snapshot_id>
    workloadmgr snapshot-cancel <snapshot_id>
    {
       "mount":{
          "mount_vm_id":"15185195-cd8d-4f6f-95ca-25983a34ed92",
          "options":{
             
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 15:44:42 GMT
    Content-Type: application/json
    Content-Length: 228
    Connection: keep-alive
    X-Compute-Request-Id: req-04c6ef90-125c-4a36-9603-af1af001006a
    
    {
       "mounted_snapshots":[
          {
             "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
             "snapshot_name":"snapshot",
             "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
             "mounturl":"[\"http://192.168.100.87\"]",
             "status":"mounted"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 11 Nov 2020 16:03:49 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-abf69be3-474d-4cf3-ab41-caa56bb611e4
    {
       "mount": 
          {
              "options": null
          }
    }

    Snapshot Mount

    Definition

    Trilio allows you to view or download a file from the snapshot. Any changes to the files or directories while the snapshot is mounted are temporary and are discarded when the snapshot is unmounted. Mounting is a faster way to restore a single file or multiple files. To mount a snapshot follow these steps.

    Supported File Recovery Manager Image

    Create a File Recovery Manager Instance

    It is recommended to apply these steps once to the chosen cloud image and then upload the modified cloud image to Glance.

    • Create an Openstack image using a Linux based cloud-image like Ubuntu, CentOS or RHEL with the following metadata parameters.

    • Spin up an instance from that image. It is recommended to have at least 8GB RAM for the mount operation. Bigger Snapshots can require more RAM.

    Steps to apply on CentOS and RHEL cloud-images

    • install and activate qemu-guest-agent

    • Edit /etc/sysconfig/qemu-ga and remove the following from BLACKLIST_RPC section

    • Disable SELINUX in /etc/sysconfig/selinux

    • Install python3 and lvm2

    • Reboot the Instance
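    A minimal sketch of the CentOS/RHEL preparation steps above, assuming yum and systemd are available on the cloud image:

    yum install -y qemu-guest-agent python3 lvm2
    systemctl enable --now qemu-guest-agent
    vi /etc/sysconfig/qemu-ga        # remove guest-file-read/write/open/close from the BLACKLIST_RPC section
    vi /etc/sysconfig/selinux        # set SELINUX=disabled
    reboot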

    Steps to apply on Ubuntu cloud-images

    • install and activate qemu-guest-agent

    • Verify the loaded path of qemu-guest-agent

    Loaded path init.d (Ubuntu 18.04)

    Follow this path when systemctl returns the following loaded path

    Edit /etc/init.d/qemu-guest-agent and add Freeze-Hook file path in daemon args

    Loaded path systemd (Ubuntu 20.04)

    Follow this path when systemctl returns the following loaded path

    Edit qemu-guest-agent systemd file

    Add the following lines

    Finalize the FRM on Ubuntu

    • Restart qemu-guest-agent service

    • Install Python3

    • Reboot the VM
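    A minimal sketch of the Ubuntu preparation steps above, assuming apt and systemd:

    apt-get install -y qemu-guest-agent python3
    systemctl status qemu-guest-agent    # check the loaded path to decide between the init.d and systemd variant
    systemctl restart qemu-guest-agent
    reboot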

    Mounting a Snapshot

    Mounting a Snapshot to a File Recovery Manager provides read access to all data that is located in the mounted Snapshot.

    It is possible to run the mounting process against any OpenStack instance. During this process the instance will be rebooted.

    Always mount Snapshots to File Recovery Manager instances only.

    To be able to successfully mount Windows (NTFS) Snapshots the ntfs filesystem support is required on the File Recovery Manager instance.

    Unmount any mounted Snapshot once there is no further need to keep it mounted. Mounted Snapshots will not be purged by the Retention policy.

    Using Horizon

    There are 2 possibilities to mount a Snapshot in Horizon.

    Through the Snapshot list

    To mount a Snapshot through the Snapshot list follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    Through the File Search results

    To mount a Snapshot through the File Search results follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    Should all instances of the project be listed even though a File Recovery Manager instance exists, verify together with the administrator that the File Recovery Manager image has the following property set:

    tvault_recovery_manager=yes

    Using CLI

    • <snapshot_id> ➡️ ID of the Snapshot to be mounted

    • <mount_vm_id> ➡️ ID of the File Recovery Manager instance to mount the Snapshot to.
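    An illustrative invocation; the snapshot-mount subcommand name is assumed here to match the mount operation described above:

    workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>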

    Accessing the File Recovery Manager

    The File Recovery Manager is a normal Linux based Openstack instance.

    It can be accessed via SSH or SSH based tools like FileZilla or WinSCP.

    SSH login is often disabled by default in cloud-images. Enable SSH login if necessary.

    The mounted Snapshot can be found at the following path:

    /home/ubuntu/tvault-mounts/mounts/

    Each VM in the Snapshot has its own directory using the VM_ID as the identifier.

    Identifying mounted Snapshots

    Sometimes a Snapshot stays mounted for a longer time and it becomes necessary to identify which Snapshots are currently mounted.

    Using Horizon

    There are 2 possibilities to identify mounted Snapshots inside Horizon.

    From the File Recovery Manager instance Metadata

    1. Login to Horizon

    2. Navigate to Compute

    3. Navigate to Instances

    4. Identify the File Recovery Manager Instance

    The mounted_snapshot_url contains the Snapshot ID of the Snapshot that has been mounted last.

    This value only gets updated, when a new Snapshot is mounted.

    From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    Using CLI

    • --workloadid <workloadid> ➡️ Restrict the list to snapshots in the provided workload
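    An illustrative invocation; the snapshot-mounted-list subcommand name is an assumption derived from the option described above:

    workloadmgr snapshot-mounted-list --workloadid <workloadid>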

    Unmounting a Snapshot

    Once a mounted Snapshot is no longer needed it is possible and recommended to unmount the snapshot.

    Unmounting a Snapshot frees the File Recovery Manager instance to mount the next Snapshot and allows Trilio retention policy to purge the former mounted Snapshot.

    Deleting the File Recovery Manager instance will not update the Trilio appliance. The Snapshot will be considered mounted until an unmount command has been received.

    Using Horizon

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to mount

    Using the CLI

    • <snapshot_id> ➡️ ID of the snapshot to unmount.
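    An illustrative invocation; the snapshot-dismount subcommand name is an assumption matching the dismount API described earlier:

    workloadmgr snapshot-dismount <snapshot_id>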

    Healthcheck of Trilio

    Trilio is composed of multiple services, which can be checked in case of any errors.

    Verify the Trilio Appliance

    Verify the services are up

    Trilio is using 4 main services on the Trilio Appliance:

    • wlm-api

    • wlm-scheduler

    • wlm-workloads

    • wlm-cron

    Those can be verified to be up and running using the systemctl status command.
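    For example:

    systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron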

    Check the Trilio pacemaker and nginx cluster

    The second component to check the Trilio Appliance's health is the nginx and pacemaker cluster.
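    A minimal sketch, assuming the pcs and systemctl tools are available on the appliance:

    pcs status
    systemctl status nginx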

    Verify API connectivity of the Trilio Appliance

    Checking the availability of the Trilio API on the chosen endpoints is recommended.

    The following example curl command lists the available workload-types and verifies that the connection is available and working:
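    A rough sketch only; the workload-types path shown here is an assumption, so please confirm the exact endpoint and how to generate the token in the API guide:

    curl -k -H "X-Auth-Token: $token" -H "Accept: application/json" "https://<tvm_address>:8780/v1/<tenant_id>/workload_types/detail"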

    Please check the API guide for more commands and how to generate the X-Auth-Token.

    Verify Trilio components on OpenStack

    On OpenStack Ansible

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    In order to check the dmapi service, go to the dmapi container, which resides on the controller nodes, and run the below command
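    A rough sketch, reusing the container and service names shown elsewhere in this guide (the container name suffix will differ per environment):

    lxc-attach -n controller_dmapi_container-a11984bf
    systemctl status tvault-datamover-api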

    Datamover service (tvault-contego)

    The datamover service is running on each compute node. Log in to the compute node and run the below command

    On Kolla Ansible OpenStack

    Datamover-API service (dmapi)

    The dmapi service has its own Keystone endpoints, which should be checked in addition to the actual service status.

    Run the following command on “nova-api” nodes and make sure “triliovault_datamover_api” container is in started state.

    Datamover service (tvault-contego)

    Run the following command on "nova-compute" nodes and make sure the container is in a started state.

    Trilio Horizon integration

    Run the following command on horizon nodes and make sure the container is in a started state.
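    A rough sketch of the three checks above, assuming docker as the container runtime; only the triliovault_datamover_api name is stated explicitly in this section, the other container name patterns are assumptions:

    docker ps | grep triliovault_datamover_api   # on nova-api nodes
    docker ps | grep triliovault_datamover       # on nova-compute nodes (name assumed)
    docker ps | grep horizon                     # on horizon nodes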

    On Canonical OpenStack

    Run the following command on the MAAS nodes and make sure all trilio units like trilio-data-mover, trilio-dm-api, trilio-horizon-plugin, trilio-wlm are in active state
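    For example, from the MAAS node:

    juju status | grep trilio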

    On Red Hat OpenStack and TripleO

    On controller node

    Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and no other Trilio container is deployed on controller nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.

    On compute node

    Make sure the Trilio datamover container (shown below) is in a running state and no other Trilio container is deployed on compute nodes. If the containers are in restarting state or not listed by the following command then your deployment is not done correctly.

    On overcloud

    Please check dmapi endpoints on overcloud node.
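    A minimal sketch, run with admin credentials sourced on the overcloud:

    openstack endpoint list | grep dmapi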

    Upgrading on Canonical OpenStack

    Learn about upgrading Trilio on Canonical OpenStack

    Trilio supports the upgrade of charms and Trilio components from older releases (4.0 onwards) to the latest release. More information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found here.

    The following charms exist:

    • trilio-wlm ➡️ Installs and manages Trilio Controller services.

    • trilio-dm-api ➡️Installs and manages the Trilio Datamover API service.

    • trilio-data-mover ➡️ Installs and manages the Trilio Datamover service.

    • trilio-horizon-plugin ➡️ Installs and manages the Trilio Horizon Plugin.

    The documentation of the charms can be found here:

    The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.

    It is recommended to run each of these steps in dry-run first.

    When any other packages but Trilio packages are getting updated, stop the upgrade procedure and contact your Trilio customer success manager.

    Upgrading the Trilio Charms

    More detailed information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found .

    Upgrading the Trilio Charms up to OpenStack Wallaby release

    Follow the steps mentioned in this to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.

    Upgrading the Trilio Charms to OpenStack Yoga release onwards

    Follow the steps mentioned below to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.

    General Prerequisites

    1. No snapshot OR restore are to be running during this process.

    2. Global job scheduler should be disabled

    1. Upgrade juju charms

    juju [-m <model>] upgrade-charm trilio-wlm --switch trilio-charmers-trilio-wlm-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-horizon-plugin --switch trilio-charmers-trilio-horizon-plugin-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-dm-api --switch trilio-charmers-trilio-dm-api-focal --channel latest/edge

    juju [-m <model>] upgrade-charm trilio-data-mover --switch trilio-charmers-trilio-data-mover-focal --channel latest/edge

    2. Wait till all trilio units are active/idle then Restart all the trilio services

    juju [-m <model>] exec --application trilio-dm-api "sudo systemctl restart tvault-datamover-api"

    juju [-m <model>] exec --application trilio-data-mover "sudo systemctl restart tvault-contego"

    juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl restart apache2"

    3. Select Single Node or HA steps as appropriate to restart services

    For setup with single node wlm unit

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron"

    For setups with 3 node HA enabled wlm

    juju [-m <model>] exec --application trilio-wlm "systemctl restart wlm-api wlm-scheduler wlm-workloads"

    juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource restart res_trilio_wlm_wlm_cron"

    4. Ensure that wlm-cron is only running on a single Unit in the output of this command

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"

    If wlm-cron is running in more than one unit, or nowhere at all, this can be fixed by following the below steps:

    juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource stop res_trilio_wlm_wlm_cron"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl stop wlm-cron"

    juju [-m <model>] exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"

    No workloadmgr-cron process should be running. If it's still running somewhere, login to that unit and stop the service manually with 'sudo systemctl stop wlm-cron'

    juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource start res_trilio_wlm_wlm_cron"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"

    wlm-cron service should only be running in a single Juju Unit and not running in the other two.

    5. Check upgraded juju charm & trilio services

    Check the trilio unit's status in juju status [-m <model>] | grep trilio output.

    All the trilio units will be reporting the new juju charm version.

    trilio-data-mover      4.2.64.26  active  3  trilio-charmers-trilio-data-mover-jammy      latest/edge  4  no  Unit is ready
    trilio-dm-api          4.2.64.2   active  1  trilio-charmers-trilio-dm-api-jammy          latest/edge  2  no  Unit is ready
    trilio-horizon-plugin  4.2.64.5   active  1  trilio-charmers-trilio-horizon-plugin-jammy  latest/edge  4  no  Unit is ready
    trilio-wlm             4.2.64.20  active  1  trilio-charmers-trilio-wlm-jammy             latest/edge  3  no  Unit is ready

    6. Check that all Trilio services are running

    juju [-m <model>] exec --application trilio-dm-api "sudo systemctl status tvault-datamover-api"

    juju [-m <model>] exec --application trilio-data-mover "sudo systemctl status tvault-contego"

    juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl status apache2"

    juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron"

    Next, move on to updating the Trilio packages.

    Upgrading the Trilio packages

    Trilio releases hotfixes, which require updating the packages inside the containers. These hotfixes cannot be installed using the Juju charms, as they do not require an update to the charms themselves.

    Generic Prerequisites

    1. Trilio packages can be upgraded after deployment. Trilio supports upgrade to the latest 4.2 releases from as old as the Trilio 4.0 release.

    2. No snapshot OR restore are to be running during this process.

    3. Global job scheduler should be disabled.

    4. wlm-cron should be disabled (the following sets of commands are to be run on the MAAS node).

    1. Stop the wlm-cron service

    1. Ensure that no stale wlm-cron processes exist

    If any stale processes are found, they should be stopped manually.
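    A consolidated sketch of these two steps, reusing the juju commands shown in the charm upgrade section above:

    juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource stop res_trilio_wlm_wlm_cron"
    juju [-m <model>] exec --application trilio-wlm "sudo systemctl stop wlm-cron"
    juju [-m <model>] exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"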

    Upgrade package on trilio units

    The deployed Trilio version is controlled by the triliovault-pkg-source charm configuration option.

    The gemfury repository should be accessible from all Trilio units. For each trilio charm, it should be pointing to the following Gemfury repository:

    This can be checked via the juju [-m <model>] config trilio-wlm triliovault-pkg-source command output.

    The preferred, recommended, and tested method to update the packages is through the Juju command line.

    Run the below commands from MAAS node

    On Ubuntu Bionic environments

    On other (not Ubuntu Bionic) environments

    Check the trilio unit's status in the juju status [-m <model>] | grep trilio output. All the trilio units should now report the new packages.

    Update the DB schema

    Run the below command to update the schema

    Check the schema head with below command. It should point to latest schema head.

    Enable mount-bind for NFS

    T4O 4.2 changes how the mount point is calculated. It is required to set up a mount bind to make T4O 4.1 or older backups available.

    Please refer to this page for detailed steps to set up the mount bind.

    Post-Upgrade steps

    1. If the trilio-wlm nodes are HA enabled:

      1. Make sure the wlm-cron services are down after the pkg upgrade. Run the following command for the same:

    1. Unset the cluster maintenance mode

    1. Make sure the wlm-cron service is up and running on any one node.

    1. Set the Global Job Scheduler to the original state.

    Troubleshooting

    If any trilio unit gets into an error state with the message:

    hook failed: "update-status"

    One of the reasons could be that the package installation did not finish correctly. One way to verify that is by logging into that unit and following the below steps:

    Compatibility Matrix

    Learn about Trilio Support for OpenStack Distributions

    The CentOS community has moved over to the CentOS stream.

    The support for CentOS8 has ended on December 31st 2021. The official announcement can be found .

    CentOS7 is still supported and maintained until June 30th 2024.

    cd /home/stack
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    #Additionally keep the NFS share path noted
    #/var/lib/nova/triliovault-mounts/MTcyLjMwLjEuMzovcmhvc3BuZnM=
    
    ##Clone latest Trilio cfg scripts repository
    git clone --branch TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    Trilio Datamove container:        docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.2-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
    Here, 'undercloud.ctlplane.ooo.prod1' is undercloud registry hostname. Use it in our command like following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo                  |
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env 
    sudo pip3 install PyYAML==5.1 
    
    ## On Python2 env 
    sudo pip install PyYAML==5.1
    ## On Python3 env 
    python3 ./generate_nfs_map.py 
     
    ## On Python2 env 
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  prod1-compute1.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo         kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  prod1-controller1.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo      kolla_start           5 days ago  Up 5 days ago         horizon
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    => If the Trilio Datamover API container does not start properly or keeps restarting, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tail -f /var/log/containers/trilio-datamover-api/dmapi.log
    
     
    
    => If the Trilio Datamover container does not start properly or keeps restarting, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tail -f /var/log/containers/trilio-datamover/tvault-contego.log

    ✔️

    RHEL | RHEL7 | ✔️
    RHEL | RHEL8 | ✔️
    RHEL | RHEL9 | ✔️

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  • Click "Mount Snapshot"

  • Choose the File Recovery Manager instance to mount to

  • Confirm by clicking "Mount"

  • Click the workload name to enter the Workload overview

  • Navigate to the File Search tab

  • Do a File Search

  • Identify the Snapshot to be mounted

  • Click "Mount Snapshot" for the chosen Snapshot

  • Choose the File Recovery Manager instance to mount to

  • Confirm by clicking "Mount"

  • Click on the Name of the File Recovery Manager Instance to bring up its details

  • On the Overview tab look for Metadata

  • Identify the value for mounted_snapshot_url

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Search for the Snapshot that has the option "Unmount Snapshot"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Search for the Snapshot that has the option "Unmount Snapshot"

  • Click "Unmount Snapshot"

    Cloud Image Name | Version | Supported
    Ubuntu | Bionic (18.04) | ✔️
    Ubuntu | Focal (20.04) | ✔️
    Centos | Centos8 | ✔️
    Centos | Centos8 stream |

    wlm-cron
    openstack image create \
    --file <File Manager Image Path> \
    --container-format bare \
    --disk-format qcow2 \
    --public \
    --property hw_qemu_guest_agent=yes \
    --property tvault_recovery_manager=yes \
    --property hw_disk_bus=virtio \
    tvault-file-manager
    guest-file-read
    guest-file-write
    guest-file-open
    guest-file-close
    SELINUX=disabled
    yum install python3 lvm2
    apt-get update
    apt-get install qemu-guest-agent
    systemctl enable qemu-guest-agent
    Loaded: loaded (/etc/init.d/qemu-guest-agent; generated)
    DAEMON_ARGS="-F/etc/qemu/fsfreeze-hook"
    Loaded: loaded (/usr/lib/systemd/system/qemu-guest-agent.service; disabled; vendor preset: enabled)
    systemctl edit qemu-guest-agent
    [Service]
    ExecStart=
    ExecStart=/usr/sbin/qemu-ga -F/etc/qemu/fsfreeze-hook
    systemctl restart qemu-guest-agent
    apt-get install python3
    workloadmgr snapshot-mount <snapshot_id> <mount_vm_id>
    workloadmgr snapshot-mounted-list [--workloadid <workloadid>]
    workloadmgr snapshot-dismount <snapshot_id>
    systemctl | grep wlm
      wlm-api.service          loaded active running   workloadmanager api service
      wlm-cron.service         loaded active running   Cluster Controlled wlm-cron
      wlm-scheduler.service    loaded active running   Cluster Controlled wlm-scheduler
      wlm-workloads.service    loaded active running   workloadmanager workloads service
    systemctl status wlm-api
    ######
    ● wlm-api.service - workloadmanager api service
       Loaded: loaded (/etc/systemd/system/wlm-api.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:41:19 UTC; 2 months 21 days ago
     Main PID: 4688 (workloadmgr-api)
       CGroup: /system.slice/wlm-api.service
               ├─4688 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9342 (workloadmgr-sch)
       CGroup: /system.slice/wlm-scheduler.service
               └─9342 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Tue 2021-11-02 19:51:05 UTC; 2 months 21 days ago
     Main PID: 606 (workloadmgr-wor)
       CGroup: /system.slice/wlm-workloads.service
               ├─ 606 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-cron
    ######
    ● wlm-cron.service - Cluster Controlled wlm-cron
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-cron.service.d
               └─50-pacemaker.conf
       Active: active (running) since Sat 2022-01-22 13:49:28 UTC; 1 day 23h ago
     Main PID: 9209 (workloadmgr-cro)
       CGroup: /system.slice/wlm-cron.service
               ├─9209 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    
    Stack: corosync
    Current DC: TVM1 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum
    Last updated: Mon Jan 24 13:42:01 2022
    Last change: Tue Nov  2 19:07:04 2021 by root via crm_resource on TVM2
    
    3 nodes configured
    9 resources configured
    
    Online: [ TVM1 TVM2 TVM3 ]
    
    Full list of resources:
    
     virtual_ip     (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_public      (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_admin       (ocf::heartbeat:IPaddr2):       Started TVM2
     virtual_ip_internal    (ocf::heartbeat:IPaddr2):       Started TVM2
     wlm-cron       (systemd:wlm-cron):     Started TVM2
     wlm-scheduler  (systemd:wlm-scheduler):        Started TVM2
     Clone Set: lb_nginx-clone [lb_nginx]
         Started: [ TVM2 ]
         Stopped: [ TVM1 TVM3 ]
    
    Daemon Status:
      corosync: active/enabled
      pacemaker: active/enabled
      pcsd: active/enabled
    curl http://10.10.2.34:8780/v1/8e16700ae3614da4ba80a4e57d60cdb9/workload_types/detail -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-workloadmgrclient" -H "Accept: application/json" -H "X-Auth-Token: gAAAAABe40NVFEtJeePpk1F9QGGh1LiGnHJVLlgZx9t0HRrK9rC5vqKZJRkpAcW1oPH6Q9K9peuHiQrBHEs1-g75Na4xOEESR0LmQJUZP6n37fLfDL_D-hlnjHJZ68iNisIP1fkm9FGSyoyt6IqjO9E7_YVRCTCqNLJ67ZkqHuJh1CXwShvjvjw
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    lxc-attach -n <dmapi-container-name>  (go to dmapi container)
    root@controller-dmapi-container-08df1e06:~# systemctl status tvault-datamover-api.service
    ● tvault-datamover-api.service - TrilioData DataMover API service
         Loaded: loaded (/lib/systemd/system/tvault-datamover-api.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 11:53:39 UTC; 1 day 17h ago
       Main PID: 23888 (dmapi-api)
          Tasks: 289 (limit: 57729)
         Memory: 607.7M
         CGroup: /system.slice/tvault-datamover-api.service
                 ├─23888 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23893 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23894 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23895 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23896 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23897 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23898 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23899 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23900 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23901 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23902 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23903 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23904 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23905 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23906 /usr/bin/python3 /usr/bin/dmapi-api
                 ├─23907 /usr/bin/python3 /usr/bin/dmapi-api
                 └─23908 /usr/bin/python3 /usr/bin/dmapi-api
    
    Jan 12 11:53:39 controller-dmapi-container-08df1e06 systemd[1]: Started TrilioData DataMover API service.
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    Jan 12 11:53:40 controller-dmapi-container-08df1e06 dmapi-api[23888]: Could not load
    
    root@compute:~# systemctl status tvault-contego
    ● tvault-contego.service - Tvault contego
         Loaded: loaded (/etc/systemd/system/tvault-contego.service; enabled; vendor preset: enabled)
         Active: active (running) since Fri 2022-01-14 05:45:19 UTC; 2s ago
       Main PID: 1489651 (python3)
          Tasks: 19 (limit: 67404)
         Memory: 6.7G (max: 10.0G)
         CGroup: /system.slice/tvault-contego.service
                 ├─ 998543 /bin/qemu-nbd -c /dev/nbd45 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998772 /bin/qemu-nbd -c /dev/nbd73 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─ 998931 /bin/qemu-nbd -c /dev/nbd100 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─ 999147 /bin/qemu-nbd -c /dev/nbd35 --object secret,id=sec0,data=payload-1234 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,file>
                 ├─1371322 /bin/qemu-nbd -c /dev/nbd63 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 ├─1371524 /bin/qemu-nbd -c /dev/nbd91 --object secret,id=sec0,data=payload-test1 --image-opts driver=qcow2,encrypt.format=luks,encrypt.key-secret=sec0,fil>
                 └─1489651 /openstack/venvs/nova-22.3.1/bin/python3 /usr/bin/tvault-contego --config-file=/etc/nova/nova.conf --config-file=/etc/tvault-contego/tvault-cont>
    
    Jan 14 05:45:19 compute systemd[1]: Started Tvault contego.
    Jan 14 05:45:20 compute sudo[1489653]:     nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/nova-22.3.1/bin/nova-rootwrap /etc/nova/rootwrap.conf umou>
    Jan 14 05:45:20 compute sudo[1489653]: pam_unix(sudo:session): session opened for user root by (uid=0)
    Jan 14 05:45:21 compute python3[1489655]: umount: /var/triliovault-mounts/VHJpbGlvVmF1bHQ=: no mount point specified.
    Jan 14 05:45:21 compute sudo[1489653]: pam_unix(sudo:session): session closed for user root
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] CPU Control group m>
    Jan 14 05:45:21 compute tvault-contego[1489651]: 2022-01-14 05:45:21.499 1489651 INFO __main__ [req-48c32a39-38d0-45b9-9852-931e989133c6 - - - - -] I/O Control Group m>
    lines 1-22/22 (END)
    root@ansible:~# openstack endpoint list | grep dmapi
    | 190db2ce033e44f89de73abcbf12804e | US-WEST-2 | dmapi          | datamover    | True    | public    | https://osa-victoria-ubuntu20-2.triliodata.demo:8784/v2                    |
    | dec1a323791b49f0ac7901a2dc806ee2 | US-WEST-2 | dmapi          | datamover    | True    | admin     | http://10.10.10.154:8784/v2                                                |
    | f8c4162c9c1246ffb0190d0d093c48af | US-WEST-2 | dmapi          | datamover    | True    | internal  | http://10.10.10.154:8784/v2                                                |
    
    root@ansible:~# curl http://10.10.10.154:8784
    {"versions": [{"id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [{"rel": "self", "href": "h
    [root@controller ~]# docker ps | grep triliovault_datamover_api
    3f979c15cedc   trilio/centos-binary-trilio-datamover-api:4.2.50-victoria                     "dumb-init --single-…"   3 days ago    Up 3 days                         triliovault_datamover_api
    [root@compute1 ~]# docker ps | grep triliovault_datamover
    2f1ece820a59   trilio/centos-binary-trilio-datamover:4.2.50-victoria                        "dumb-init --single-…"   3 days ago    Up 3 days                        triliovault_datamover
    [root@controller ~]# docker ps | grep horizon
    4a004c786d47   trilio/centos-binary-trilio-horizon-plugin:4.2.50-victoria                    "dumb-init --single-…"   3 days ago    Up 3 days (unhealthy)             horizon
    root@jujumaas:~# juju status | grep trilio
    trilio-data-mover       4.2.51   active       3  trilio-data-mover       jujucharms    9  ubuntu
    trilio-dm-api           4.2.51   active       1  trilio-dm-api           jujucharms    7  ubuntu
    trilio-horizon-plugin   4.2.51   active       1  trilio-horizon-plugin   jujucharms    6  ubuntu
    trilio-wlm              4.2.51   active       1  trilio-wlm              jujucharms    9  ubuntu
      trilio-data-mover/8        active    idle            172.17.1.5                         Unit is ready
      trilio-data-mover/6        active    idle            172.17.1.6                         Unit is ready
      trilio-data-mover/7*       active    idle            172.17.1.7                         Unit is ready
      trilio-horizon-plugin/2*   active    idle            172.17.1.16                        Unit is ready
    trilio-dm-api/2*             active    idle   1/lxd/4  172.17.1.27     8784/tcp           Unit is ready
    trilio-wlm/2*                active    idle   7        172.17.1.28     8780/tcp           Unit is ready
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp16 onwards/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain1-controller-0 heat-admin]# podman ps | grep trilio-
    e3530d6f7bec  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:4.2.47-rhosp16.1           kolla_start           2 weeks ago   Up 2 weeks ago          trilio_dmapi
    f93f7019f934  ucqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:4.2.47-rhosp16.1          kolla_start           2 weeks ago   Up 2 weeks ago          horizon
    On rhosp13 OS:	docker ps | grep trilio-
    On other (rhosp/tripleo) :	podman ps | grep trilio-
    
    [root@overcloudtrain3-novacompute-1 heat-admin]# podman ps | grep trilio-
    4419b02e075c  undercloud162.ctlplane.trilio.local:8787/trilio/trilio-datamover:dev-osp16.2-1-rhosp16.2       kolla_start  2 days ago   Up 27 seconds ago          trilio_datamover
     (overcloudtrain1) [stack@ucqa161 ~]$ openstack endpoint list | grep datamover
    | 218b2f92569a4d259839fa3ea4d6103a | regionOne | dmapi          | datamover      | True    | internal  | https://overcloudtrain1internalapi.trilio.local:8784/v2                    |
    | 4702c51aa5c24bed853e736499e194e2 | regionOne | dmapi          | datamover      | True    | public    | https://overcloudtrain1.trilio.local:13784/v2                              |
    | c8169025eb1e4954ab98c7abdb0f53f6 | regionOne | dmapi          | datamover      | True    | admin     | https://overcloudtrain1internalapi.trilio.local:8784/v2    

    If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode (this command will fail for single node deployments).

    trilio-data-mover
    trilio-horizon-plugin
    TrilioVault data protection — charm-guide documentation (docs.openstack.org)

    Kolla Ansible environments running on CentOS8 receive continued, limited support. This means that future Trilio updates for Kolla Ansible environments on CentOS8 will use the latest available CentOS8 base containers, and only the Trilio for OpenStack code gets updated. Once the Kolla Ansible community provides CentOS Stream based containers, Trilio will provide CentOS Stream based containers as well.

    OpenStack-Ansible reaches its End of Support from Trilio starting 2nd March 2023.

    Trilio for OpenStack Compatibility Matrix

    Trilio Release | RHOSP | Canonical | Kolla | TripleO
    4.2.8 | 17.0, 16.2, 16.1, 13 | Zed, Yoga, Wallaby, Victoria, Ussuri, Train, Stein, Queens | Victoria, Wallaby, Yoga, Zed | Train
    4.2.7 | 17.0, 16.2, 16.1, 13 | Zed, Yoga, Wallaby, Victoria, Ussuri, Train, Stein, Queens | Victoria, Wallaby, Yoga | Train

    NFS & S3 Support:

    All versions of Trilio for OpenStack support NFSv3 and S3 as backup targets.

    Barbican Support:

    Supported in RHOSP 16.1, 16.2, 17.0, Canonical Ussuri, Victoria, Wallaby, Yoga, Zed, Kolla Victoria, Wallaby, Yoga.

    Not Supported in RHOSP 13, Canonical Queens, Stein, Train, TripleO Train.

    Supported OS:

    RHEL7, RHEL8, RHEL9, Ubuntu 18.04, Ubuntu 20.04, Ubuntu 20.04 source image, Ubuntu 22.04, CentOS7, CentOS Stream 8, CentOS Linux 8, Ubuntu 20.04 binary image, CentOS Stream 8 binary image, CentOS Stream 8 source image.

    Deployment:

    Deploy using the distribution's corresponding deployment toolsets, including Red Hat Director, JuJu Charms, Ansible, and Director.

    Compatibility Matrix Detailed View

    Distribution/Version | Trilio 4.2.8 | Trilio 4.2.7 | Trilio 4.2 HF4 | Trilio 4.2 HF1+ | Trilio 4.2 GA | OS | Barbican Support | NFS Support | S3 Support | Deployment

    Upgrading on RHOSP

    0] Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • No Snapshot or Restore is running

    • Global job scheduler is disabled

    • wlm-cron is disabled on the Trilio Appliance

    Deactivating the wlm-cron service

    The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
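
    On the Trilio appliance, the wlm-cron service is controlled by pacemaker; disabling and verifying it typically looks like this:

    pcs resource disable wlm-cron
    systemctl status wlm-cron
    ps -ef | grep -i workloadmgr-cron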

    1] [On Undercloud node] Clone latest Trilio repository and upload Trilio puppet module

    All commands need to be run as user 'stack' on undercloud node

    1.1] Clone Trilio cfg scripts repository

    Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.

    Available RHOSP_RELEASE_DIRECTORY values are:

    rhosp13 rhosp16.1 rhosp16.2 rhosp17.0

    1.2] If backup target type is 'Ceph based S3' with SSL:

    If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain certificate file to 's3-cert.pem' and copy it into the puppet directory of the right release.

    2] Upload Trilio puppet module

    3] Update overcloud roles data file to include Trilio services

    Trilio has two services as explained below. You need to add these two services to your roles_data.yaml. If you do not have customized roles_data file, you can find your default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on undercloud.

    You need to find your roles_data file and edit it to add the following Trilio services.

    i) Trilio Datamover Api Service:

    Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi

    This service needs to be co-located with the database and keystone services, i.e. it must be added to the same role as the keystone and database services.

    Typically this service should be deployed on controller nodes, where keystone and the database run. If you are using RHOSP's pre-defined roles, add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.

    ii) Trilio Datamover Service: Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover This service should be deployed on role where nova-compute service is running.

    If you are using RHOSP's pre-defined roles, you need to add our OS::TripleO::Services::TrilioDatamover service to Compute role.

    If you have defined custom roles, identify the role on which the 'nova-compute' service is running and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.

    iii) Trilio Horizon Service:

    This service needs to share the same role as the OpenStack Horizon server. When using the pre-defined roles, the Horizon service runs on the Controller role. Add the following service to the identified role: OS::TripleO::Services::TrilioHorizon
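
    As an illustration only, assuming the default Controller and Compute roles are used, the additions to roles_data.yaml look roughly like this (existing service entries omitted):

    - name: Controller
      ServicesDefault:
        # ... existing Controller services ...
        - OS::TripleO::Services::TrilioDatamoverApi
        - OS::TripleO::Services::TrilioHorizon

    - name: Compute
      ServicesDefault:
        # ... existing Compute services ...
        - OS::TripleO::Services::TrilioDatamover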

    4] Prepare latest Trilio container images

    All commands need to be run as user 'stack'

    Trilio containers are pushed to the 'RedHat Container Registry'. The registry URL is 'registry.connect.redhat.com'. The Trilio container URLs are as follows:

    In the sections below, <HOTFIX-TAG-VERSION> refers to 4.2.8.

    4.1] Available container images

    RHOSP 13

    RHOSP 16.1

    RHOSP 16.2

    RHOSP 17.0

    There are three registry methods available in RedHat Openstack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    4.2] Remote Registry

    Please refer to the following overview to see which containers are available.

    Follow this section when 'Remote Registry' is used.

    For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments

    Example
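
    A sketch of the relevant parameter_defaults section, using the RHOSP 13 image parameter names shown elsewhere in this document; the parameter names in your freshly downloaded trilio_env.yaml may differ per release, and only the URLs and tags need to be adjusted:

    parameter_defaults:
      DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
      DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
      DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13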

    4.3] Local Registry

    Please refer to this overview to see which containers are available.

    Follow this section when 'local registry' is used on the undercloud.

    In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml accordingly.

    RHOSP13 example

    RHOSP 16.1, RHOSP16.2 and RHOSP17.0 example

    The changes can be verified using the following commands.
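
    For example (the second command applies to RHOSP 16.1 onwards):

    grep 'Image' /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml
    openstack tripleo container image list | grep trilio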

    4.4] Red Hat Satellite Server

    Please refer to the following overview to see which containers are available.

    Follow this section when a Satellite Server is used for the container registry.

    Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

    Populate the trilio_env.yaml with container urls.

    RHOSP 13 example

    RHOSP 16.1, RHOSP 16.2 and RHOSP 17.0

    5] Verify Trilio environment details

    It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.

    Locations of the trilio_env.yaml:

    For more details about the trilio_env.yaml please check here.

    6] Configure multi-IP NFS

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

    On Undercloud node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    Run this command on undercloud by sourcing 'stackrc'.

    Edit the input map file and fill in all the details. Refer to this page for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    If pip isn't available please install pip on the undercloud.
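
    For example:

    ## On Python3 env
    sudo pip3 install PyYAML==5.1

    ## On Python2 env
    sudo pip install PyYAML==5.1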

    Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate the output map file.

    Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

    vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

    7] Update Overcloud Trilio components

    Use the following heat environment file and roles data file in overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use correct Trilio endpoint map file as per available Keystone endpoint configuration

    To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
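
    A sketch, assuming default template locations and the public-DNS endpoint map; merge the Trilio '-e' entries and the '-r' option into your existing deploy command rather than replacing it:

    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml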

    8] Verify deployment

    If the containers are in a restarting state or are not listed by the following commands, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.

    8.1] On Controller node

    Make sure the Trilio dmapi and horizon containers are in a running state and that no other Trilio container is deployed on controller nodes. If the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.

    8.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    8.3] On the node with Horizon service

    Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus Trilio's Horizon plugin.

    If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4 then use the below workaround

    9] Enable mount-bind for NFS

    T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2.

    Please follow this documentation to set up the mount bind for RHOSP.

    E-Mail Notification Settings

    E-Mail Notification Settings are done through the settings API. Use the values from the following table to set Email Notifications up through API.

    Setting name
    Settings Type
    Value type
    example

    Workloads

    Definition

    A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed. But each VM can only be part of one Workload.

    Using an encrypted Workload will lead to longer backup times. The following timings have been observed in Trilio labs:

    Schedulers

    Disable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/pause

    Disables the scheduler of a given Workload
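
    A minimal example call with placeholder values for the address, tenant, workload and token, following the curl pattern used elsewhere in this guide:

    curl -X POST "https://<tvm_address>:8780/v1/<tenant_id>/workloads/<workload_id>/pause" \
         -H "X-Auth-Project-Id: admin" \
         -H "X-Auth-Token: <token>" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"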

    juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=true"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
    juju exec [-m <model>] --application trilio-wlm "sudo ps -ef | grep [w]orkloadmgr-cron"
    deb [trusted=yes] https://apt.fury.io/triliodata-4-2/ /
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-horizon-plugin python-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" tvault-contego s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-wlm 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" workloadmgr python3-workloadmgrclient python3-contegoclient python3-s3-fuse-plugin'
    juju exec [-m <model>] --application trilio-horizon-plugin 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-horizon-plugin python3-workloadmgrclient'
    juju exec [-m <model>] --application trilio-dm-api 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-dmapi'
    juju exec [-m <model>] --application trilio-data-mover 'sudo apt-get update && sudo apt-get install -y --only-upgrade -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" python3-tvault-contego python3-s3-fuse-plugin'
    trilio-data-mover      <package version>  active      3  trilio-data-mover      jujucharms    8  ubuntu
    trilio-dm-api          <package version>  active      1  trilio-dm-api          jujucharms    5  ubuntu
    trilio-horizon-plugin  <package version>  active      1  trilio-horizon-plugin  jujucharms    4  ubuntu
    trilio-wlm             <package version>  active      3  trilio-wlm             jujucharms    7  ubuntu
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini upgrade heads"
    juju exec [-m <model>] --unit trilio-wlm/leader "alembic -c /etc/workloadmgr/alembic.ini current"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
    juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=false"
    juju exec [-m <model>] --application trilio-wlm "sudo systemctl status wlm-cron"
    Log in to the Trilio unit and run "sudo dpkg --configure -a"
    When it asks for user input, hit enter and log out from the unit.
    From the MAAS node run the command "juju resolve <trilio unit name>"

    Trilio Release | RHOSP | Canonical | Kolla | TripleO
    4.2 HF4 | 16.2, 16.1, 13 | Yoga, Wallaby, Victoria, Ussuri, Train, Stein, Queens | Victoria, Wallaby, Yoga | Train
    4.2 HF1+ | 16.2, 16.1, 13 | Wallaby, Victoria, Ussuri, Train, Stein, Queens | Victoria, Wallaby | Train
    4.2 GA | 16.2, 16.1, 13 | Victoria, Ussuri, Train, Stein, Queens | Victoria | Train

    Distribution/Version | Trilio 4.2.8 | Trilio 4.2.7 | Trilio 4.2 HF4 | Trilio 4.2 HF1+ | Trilio 4.2 GA | OS | Barbican Support | NFS Support | S3 Support | Deployment
    RHOSP 13 | Yes | Yes | Yes | Yes | Yes | RHEL7 | Not supported | NFSv3 | AWS S3 compatible | Red Hat Director
    RHOSP 16.1 | Yes | Yes | Yes | Yes | Yes | RHEL8 | Supported | NFSv3 | AWS S3 compatible | Red Hat Director
    RHOSP 16.2 | Yes | Yes | Yes | Yes | Yes | RHEL8 | Supported | NFSv3 | AWS S3 compatible | Red Hat Director
    RHOSP 17.0 | Yes | Yes | | | | RHEL9 | Supported | NFSv3 | AWS S3 compatible | Red Hat Director
    Canonical Queens | Yes | Yes | Yes | Yes | Yes | Ubuntu 18.04 | Not supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Stein | Yes | Yes | Yes | Yes | Yes | Ubuntu 18.04 | Not supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Train | Yes | Yes | Yes | Yes | Yes | Ubuntu 18.04 | Not supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Ussuri | Yes | Yes | Yes | Yes | Yes | Ubuntu 18.04/20.04 | Supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Victoria | Yes | Yes | Yes | Yes | Yes | Ubuntu 20.04 | Supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Wallaby | Yes | Yes | Yes | Yes | | Ubuntu 20.04 | Supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Yoga | Yes | Yes | Yes | | | Ubuntu 20.04/22.04 | Supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Canonical Zed | Yes | Yes | | | | Ubuntu 22.04 | Supported | NFSv3 | AWS S3 compatible | JuJu Charms
    Kolla Victoria | Yes | Yes | Yes | Yes | Yes | Ubuntu 20.04, CentOS Linux 8 | Supported | NFSv3 | AWS S3 compatible | Ansible
    Kolla Wallaby | Yes | Yes | Yes | Yes | | Ubuntu 20.04, CentOS Stream 8 | Supported | NFSv3 | AWS S3 compatible | Ansible
    Kolla Yoga | Yes | Yes | Yes | | | Ubuntu 20.04, CentOS Stream 8 | Supported | NFSv3 | AWS S3 compatible | Ansible
    Kolla Zed | Yes | | | | | Ubuntu 22.04, Rocky9 | Supported | NFSv3 | AWS S3 compatible | Ansible
    TripleO Train | Yes | Yes | Yes | Yes | Yes | CentOS7 | Not Supported | NFSv3 | AWS S3 compatible | Director

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to disable the Scheduler in

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 11:52:56 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-99f51825-9b47-41ea-814f-8f8141157fc7

    Enable Workload Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/resume

    Enables the scheduler of a given Workload

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to enable the Scheduler in

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient

    Scheduler Trust Status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Scheduler trust for a given Workload

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to validate the Scheduler trust for

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient
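
    An example request with placeholder values:

    curl -X GET "https://<tvm_address>:8780/v1/<tenant_id>/trusts/validate/<workload_id>" -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: <token>" -H "Accept: application/json" -H "User-Agent: python-workloadmgrclient"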

    All following API commands require an authentication token from a user with the admin role in the authentication project.

    Global Job Scheduler status

    GET https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler

    Requests the status of the Global Job Scheduler

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to disable the Scheduler in

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient
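
    An example status request with placeholder values:

    curl -X GET "https://<tvm_address>:8780/v1/<tenant_id>/global_job_scheduler" -H "X-Auth-Project-Id: admin" -H "X-Auth-Token: <token>" -H "Accept: application/json" -H "User-Agent: python-workloadmgrclient"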

    Disable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/disable

    Requests disabling the Global Job Scheduler

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to disable the Scheduler in

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient

    Enable Global Job Scheduler

    POST https://$(tvm_address):8780/v1/$(tenant_id)/global_job_scheduler/enable

    Requests enabling the Global Job Scheduler

    Path Parameters

    Name | Type | Description
    tvm_address | string | IP or FQDN of Trilio service
    tenant_id | string | ID of Tenant/Project the Workload is located in
    workload_id | string | ID of the Workload to disable the Scheduler in

    Headers

    Name | Type | Description
    X-Auth-Project-Id | string | Project to authenticate against
    X-Auth-Token | string | Authentication token to use
    Accept | string | application/json
    User-Agent | string | python-workloadmgrclient

  • Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

  • Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

  • Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml


    Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB

    1. For unencrypted WL : 62 min

    2. For encrypted WL : 82 min

    Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB

    1. For unencrypted WL : 10 min

    2. For encrypted WL : 18 min

    List of Workloads

    Using Horizon

    To view all available workloads of a project inside Horizon do:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    The overview in Horizon lists all workloads with the following additional information:

    • Creation time

    • Workload Name

    • Workload description

    • Total amount of Snapshots inside this workload

      • Total amount of succeeded Snapshots

      • Total amount of failed Snapshots

    • Workload Type

    • Status of the Workload

    Using CLI

    • --all {True,False}➡️List all workloads of all projects (valid for admin user only)

    • --nfsshare <nfsshare>➡️List all workloads of nfsshare (valid for admin user only)
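
    For example, an admin listing the workloads of all projects:

    workloadmgr workload-list --all True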

    Workload Create

    The encryption options of the workload creation process are only available when the Barbican service is installed and available.

    Using Horizon

    To create a workload inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Click "Create Workload"

    5. Provide Workload Name and Workload Description on the first tab "Details"

    6. Choose between Serial or Parallel workload on the first tab "Details"

    7. Choose the Policy if available to use on the first tab "Details"

    8. Choose if the Workload is encrypted on the first tab "Details"

    9. Provide the secret UUID if Workload is encrypted on the first tab "Details"

    10. Choose the VMs to protect on the second Tab "Workload Members"

    11. Decide for the schedule of the workload on the Tab "Schedule"

    12. Provide the Retention policy on the Tab "Policy"

    13. Choose the Full Backup Interval on the Tab "Policy"

    14. If required check "Pause VM" on the Tab "Options"

    15. Click create

    The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

    Using CLI

    • --display-name➡️Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --workload-type-id➡️Workload Type ID is required

  • --source-platform➡️Workload source platform is required. The supported platform is 'openstack'

  • --instance➡️Specify an instance to include in the workload. Specify the option multiple times to include multiple instances. instance-id: include the instance with this UUID

  • --jobschedule➡️Specify the following key-value pairs for jobschedule. Specify the option multiple times to include multiple keys. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'snapshots_to_retain' : '2'

  • --metadata➡️Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value

    • --policy-id <policy_id>➡️ID of the policy to assign to the workload

    • --encryption <True/False> ➡️Enable/Disable encryption for this workload

    • --secret-uuid <secret_uuid> ➡️UUID of the Barbican secret to be used for the workload
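
    An illustrative invocation combining the options above, with placeholder UUIDs; adjust the jobschedule keys as listed:

    workloadmgr workload-create --display-name "my-workload" \
      --workload-type-id <workload_type_id> \
      --source-platform openstack \
      --instance instance-id=<vm1_uuid> \
      --instance instance-id=<vm2_uuid> \
      --jobschedule start_time='2:30 PM' \
      --jobschedule interval='24 hr' \
      --policy-id <policy_id>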

    Workload Overview

    A workload contains a lot of information, which can be seen in the workload overview.

    Using Horizon

    To enter the workload overview inside Horizon do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to show the details on

    5. Click the workload name to enter the Workload overview

    Details Tab

    The Workload Details tab provides the most important general information about the workload:

    • Name

    • Description

    • Availability Zone

    • List of protected VMs including the information of qemu guest agent availability

    The status of the qemu-guest-agent only shows whether the necessary OpenStack configuration has been done for this VM to provide qemu guest agent integration. It does not check whether the qemu guest agent is installed and configured on the VM.

    It is possible to navigate to the protected VM directly from the list of protected VMs.

    Snapshots Tab

    The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

    From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

    Please refer to the Snapshot and Restore User Guide to learn more about those.

    Policy Tab

    The Workload Policy Tab gives an overview of the current configured scheduler and retention policy. The following elements are shown:

    • Scheduler Enabled / Disabled

    • Start Date / Time

    • End Date / Time

    • RPO

    • Time till next Snapshot run

    • Retention Policy and Value

    • Full Backup Interval policy and value

    Filesearch Tab

    The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need for a restore.

    Please refer to the File Search User Guide to learn more about this feature.

    Misc. Tab

    The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

    • Creation time

    • last update time

    • Workload ID

    • Workload Type

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to show

    • --verbose➡️option to show additional information about the workload

    Edit a Workload

    Workloads can be modified in all components to match changing needs.

    Editing a Workload will set the user who edits the Workload as the new owner.

    Using Horizon

    To edit a workload in Horizon do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to be modified

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Edit Workload"

    7. Modify the workload as desired - All parameters except workload type can be changed

    8. Click "Update"

    Using CLI

    • --display-name ➡️ Optional workload name. (Default=None)

    • --display-description➡️Optional workload description. (Default=None)

    • --instance <instance-id=instance-uuid>➡️Specify an instance to include in the workload. Specify option multiple times to include multiple instances. instance-id: include the instance with this UUID

  • --jobschedule <key=key-name>➡️Specify the following key-value pairs for jobschedule. Specify the option multiple times to include multiple keys. If no timezone is specified, the local machine's timezone is used by default. 'start_date' : '06/05/2014' 'end_date' : '07/15/2014' 'start_time' : '2:30 PM' 'interval' : '1 hr' 'retention_policy_type' : 'Number of Snapshots to Keep' or 'Number of days to retain Snapshots' 'retention_policy_value' : '30'

  • --metadata <key=key-name>➡️Specify key-value pairs to include in the workload_type metadata. Specify the option multiple times to include multiple keys. key=value

    • --policy-id <policy_id>➡️ID of the policy to assign

    • <workload_id> ➡️ID of the workload to edit

    Delete a Workload

    Once a workload is no longer needed it can be safely deleted.

    All Snapshots need to be deleted before the workload gets deleted. Please refer to the Snapshots User Guide to learn how to delete Snapshots.

    Using Horizon

    To delete a workload do the following steps:

    1. Login to Horizon

    2. Navigate to the Backups

    3. Navigate to Workloads

    4. Identify the workload to be deleted

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Delete Workload"

    7. Confirm by clicking "Delete Workload" yet again

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to delete

  • --database_only <True/False>➡️Set to True to delete the workload from the database only. (Default=False)
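
    For example, deleting a workload by its ID:

    workloadmgr workload-delete <workload_id>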

    Unlock a Workload

    Workloads that are actively taking backups or restores are locked for further tasks. It is possible to unlock a workload by force if necessary.

    It is highly recommended to use this feature only as a last resort, when backups or restores are stuck without failing, or when a restore is required while a backup is running.

    Using Horizon

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to unlock

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Unlock Workload"

    7. Confirm by clicking "Unlock Workload" yet again

    Using CLI

    • <workload_id> ➡️ ID of the workload to unlock

    Reset a Workload

    In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. In such cases it is possible to reset a Workload instead of recreating it.

    The Workload reset will:

    • Cancel all ongoing tasks

    • Delete all existing Openstack Trilio Snapshots from the protected VMs

    • recalculate the next Snapshot time

    • take a full backup at the next Snapshot

    Using Horizon

    To reset a Workload do the following steps:

    1. Login to the Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload to reset

    5. Click the small arrow next to "Create Snapshot" to open the sub-menu

    6. Click "Reset Workload"

    7. Confirm by clicking "Reset Workload" yet again

    Using CLI

    • <workload_id> ➡️ ID/name of the workload to reset

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:06:01 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-4eb1863e-3afa-4a2c-b8e6-91a41fe37f78
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:31:49 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-c6f826a9-fff7-442b-8886-0770bb97c491
    
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2020-10-23T14:35:11.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "value":"871ca24f38454b14b867338cb0e9b46c",
          "description":"token id for user ccddc7e7a015487fa02920f4d4979779 project c76b3355a164498aa95ddbc960adc238",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-10-23T14:35:11.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"a3cc9a01-3d49-4ff8-ad8e-b12a7b3c68b0",
                "settings_name":"trust-002bcbaf-c16b-44e6-a9ef-9c1efbfa2e2c",
                "settings_project_id":"c76b3355a164498aa95ddbc960adc238",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"4bafaa03-f69a-45d5-a6fc-ae0119c77974",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_domain_id":"default",
          "user":"ccddc7e7a015487fa02920f4d4979779",
          "tenant":"c76b3355a164498aa95ddbc960adc238"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:45:27 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-cd447ce0-7bd3-4a60-aa92-35fc43b4729b
    
    {"global_job_scheduler": true}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:49:29 GMT
    Content-Type: application/json
    Content-Length: 31
    Connection: keep-alive
    X-Compute-Request-Id: req-6f49179a-737a-48ab-91b7-7e7c460f5af0
    
    {"global_job_scheduler": false}
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 12:50:11 GMT
    Content-Type: application/json
    Content-Length: 30
    Connection: keep-alive
    X-Compute-Request-Id: req-ed279acc-9805-4443-af91-44a4420559bc
    
    {"global_job_scheduler": true}
    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
       Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
      Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monito                                                                                                                                                                                                                                                                                                                                                           r-interval-30s)
                  start interval=0s on-fail=restart timeout=300s (wlm-cron-start-int                                                                                                                                                                                                                                                                                                                                                           erval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    cd /home/stack
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP13, RHOSP16.1 and RHOSP16.2
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Output of above command looks like following for RHOSP17.0
    Creating tarball...
    Tarball created.
    renamed '/tmp/puppet-modules-P3duCg9/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover container:       registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    # For RHOSP13
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
       
    # For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    # For RHOSP17.0
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    
    ./prepare_trilio_images.sh <undercloud_ip> <container_tag>
    
    # Example:
    ./prepare_trilio_images.sh 192.168.13.34 <HOTFIX-TAG-VERSION>-rhosp13
    
    ## Verify changes
    # For RHOSP13
$ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
    
    sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER_TAG> 
    
    ## Run following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'. 
    -- In the below example 'trilio-undercloud.ctlplane.localdomain' is <UNDERCLOUD_REGISTRY_HOSTNAME>
    $ openstack tripleo container image list | grep keystone
    | docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.0-82                       |
    | docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.0-84   
    
    ## 'CONTAINER_TAG' format for RHOSP16.1: <<HOTFIX-TAG-VERSION>>-rhosp16.1
    ## 'CONTAINER_TAG' format for RHOSP16.2: <<HOTFIX-TAG-VERSION>>-rhosp16.2
    ## 'CONTAINER_TAG' format for RHOSP17.0: <<HOTFIX-TAG-VERSION>>-rhosp17.0
    
    ## Example
    sudo ./prepare_trilio_images.sh trilio-undercloud.ctlplane.localdomain <HOTFIX-TAG-VERSION>-rhosp16.1
    (undercloud) [stack@undercloud redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1 |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1 |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1 |
    
    -----------------------------------------------------------------------------------------------------
    
    (undercloud) [stack@undercloud redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    ## For RHOSP16.1
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    ## For RHOSP16.2
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
## For RHOSP17.0
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    RHOSP13: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/environments/trilio_env.yaml
    RHOSP16.1: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml
    RHOSP16.2: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/environments/trilio_env.yaml
    RHOSP17.0: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/environments/trilio_env.yaml
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env 
sudo pip3 install PyYAML==5.1
    
    ## On Python2 env 
    sudo pip install PyYAML==5.1
    ## On Python3 env 
    python3 ./generate_nfs_map.py 
     
    ## On Python2 env 
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Either of the below workarounds should be performed on all the controller nodes where issue occurs for horizon pod.
    
    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
    
    option-2: Restart the memcached pod (command: podman restart memcached)
    workloadmgr workload-list [--all {True,False}] [--nfsshare <nfsshare>]
    workloadmgr workload-create --instance <instance-id=instance-uuid>
                                [--display-name <display-name>]
                                [--display-description <display-description>]
                                [--workload-type-id <workload-type-id>]
                                [--source-platform <source-platform>]
                                [--jobschedule <key=key-name>]
                                [--metadata <key=key-name>]
                                [--policy-id <policy_id>]
                                [--encryption <True/False>]
                                [--secret-uuid <secret_uuid>]
    workloadmgr workload-show <workload_id> [--verbose <verbose>]
    usage: workloadmgr workload-modify [--display-name <display-name>]
                                       [--display-description <display-description>]
                                       [--instance <instance-id=instance-uuid>]
                                       [--jobschedule <key=key-name>]
                                       [--metadata <key=key-name>]
                                       [--policy-id <policy_id>]
                                       <workload_id>
    workloadmgr workload-delete [--database_only <True/False>] <workload_id>
    workloadmgr workload-unlock <workload_id>
    workloadmgr workload-reset <workload_id>

    String

    [email protected]

    smtp_port

    email_settings

    Integer

    587

    smtp_server_name

    email_settings

    String

    Mailserver_A

    smtp_server_username

    email_settings

    String

    admin

    smtp_server_password

    email_settings

    String

    password

    smtp_timeout

    email_settings

    Integer

    10

    smtp_email_enable

    email_settings

    Boolean

    True

    Create Setting

    POST https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Creates a Trilio setting.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work with

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Setting create requires a Body in json format, to provide the requested information.
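
    As a hedged illustration, such a request could be issued with curl using the body format shown in this document; the smtp_port values and the shell variables (tvm_address, tenant_id, project_name, token) are placeholders to be replaced with real values.

    curl -X POST "https://${tvm_address}:8780/v1/${tenant_id}/settings" \
         -H "X-Auth-Project-Id: ${project_name}" \
         -H "X-Auth-Token: ${token}" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d '{"settings":[{"category":null,"name":"smtp_port","is_public":false,"is_hidden":false,"metadata":{},"type":"email_settings","value":"8080","description":null}]}'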

    Show Setting

    GET https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

    Shows all details of a specified setting

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    setting_name

    string

    Name of the setting to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Modify Setting

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/settings

    Modifies the provided setting with the given details.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

ID of the Tenant/Project to work with

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Workload modify requires a Body in json format, to provide the information about the values to modify.

    Delete Setting

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/settings/<setting_name>

Deletes the specified setting.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    setting_name

    string

    Name of the setting to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    smtp_default___recipient

    email_settings

    String

    [email protected]

    smtp_default___sender

    email_settings

solutions/openstack/backing-file-update/backing_file_update.sh at master · trilioData/solutions (GitHub)
Rebase script for T4O backups
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:55:43 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-ac16c258-7890-4ae7-b7f4-015b5aa4eb99
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.890855",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:01:27 GMT
    Content-Type: application/json
    Content-Length: 380
    Connection: keep-alive
    X-Compute-Request-Id: req-404f2808-7276-4c2b-8870-8368a048c28c
    
    {
       "setting":{
          "created_at":"2021-02-04T11:55:43.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"smtp_port",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":null,
          "value":"8080",
          "description":null,
          "category":null,
          "type":"email_settings",
          "public":false,
          "hidden":false,
          "status":"available",
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 12:05:59 GMT
    Content-Type: application/json
    Content-Length: 403
    Connection: keep-alive
    X-Compute-Request-Id: req-e92e2c38-b43a-4046-984e-64cea3a0281f
    
    {
       "settings":[
          {
             "created_at":"2021-02-04T11:55:43.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"smtp_port",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":null,
             "value":"8080",
             "description":null,
             "category":null,
             "type":"email_settings",
             "public":false,
             "hidden":0,
             "status":"available",
             "is_public":false,
             "is_hidden":false
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 04 Feb 2021 11:49:17 GMT
    Content-Type: application/json
    Content-Length: 1223
    Connection: keep-alive
    X-Compute-Request-Id: req-5a8303aa-6c90-4cd9-9b6a-8c200f9c2473
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }
    {
       "settings":[
          {
             "category":null,
             "name":<String Setting_name>,
             "is_public":false,
             "is_hidden":false,
             "metadata":{
                
             },
             "type":<String Setting type>,
             "value":<String Setting Value>,
             "description":null
          }
       ]
    }

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    Upgrading on Kolla OpenStack

Trilio supports the upgrade of Trilio-Openstack components from any of the older releases (4.1 onwards) to the latest 4.2 hotfix releases without tearing down the older deployments.

Refer to the below-mentioned acceptable values for the placeholders kolla_base_distro and triliovault_tag in this document, as per the Openstack environment:

    Openstack Version
    triliovault_tag
    kolla_base_distro

    1] Pre-requisites

    Please ensure the following points are met before starting the upgrade process:

    • Either 4.1 or 4.2 GA OR any hotfix patch against 4.1/4.2 should be already deployed

    • No Snapshot OR Restore is running

    • Global job scheduler should be disabled

    • wlm-cron is disabled on the primary Trilio Appliance

    1.1] Deactivating the wlm-cron service

The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
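
    A minimal command sketch, based on the verification output shown in this document (to be run on the primary Trilio Appliance node):

    pcs resource disable wlm-cron
    systemctl status wlm-cron
    pcs resource show wlm-cron
    ps -ef | grep -i workloadmgr-cron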

    2] Clone latest configuration scripts

    Before the latest configuration script is loaded it is recommended to take a backup of the existing config scripts' folder & Trilio ansible roles. The following command can be used for this purpose:

    Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack.

    Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.
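
    A sketch of these steps, assuming the scripts were originally cloned under the current directory and using /opt/triliovault-role-backup as an example backup location; the branch name follows the clone command used elsewhere in this document:

    # Back up the existing scripts folder and the Trilio ansible role
    mv triliovault-cfg-scripts triliovault-cfg-scripts-old
    cp -R /usr/local/share/kolla-ansible/ansible/roles/triliovault /opt/triliovault-role-backup

    # Clone the latest configuration scripts and copy the Trilio role into Kolla-Ansible
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/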

    3] Append Trilio variables

    3.1] Clean old Trilio variables and append new Trilio variables

This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.

    In case of no changes, this step can be skipped.

This is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; those changes need to be reflected in the /etc/kolla/globals.yml file, as sketched below.
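
    A sketch of the compare-and-append flow, assuming the outdated Trilio entries are removed from /etc/kolla/globals.yml by hand before appending:

    cp /etc/kolla/globals.yml /opt/                  # backup
    grep -i trilio /etc/kolla/globals.yml            # review (and clean) the old Trilio entries
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml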

    3.2] Clean old Trilio passwords and append new Trilio password variables

This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.

    In case of no changes, this step can be skipped.

This step is required when some password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml file needs to be updated, as sketched below.
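
    A sketch of the same flow for the password file; set the actual password values manually afterwards:

    cp /etc/kolla/passwords.yml /opt/                # backup
    grep -i trilio /etc/kolla/passwords.yml          # review (and clean) the old Trilio password entries
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    # edit /etc/kolla/passwords.yml and set the Trilio passwords at the end of the file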

    3.3] Append triliovault_site.yml content to kolla ansible's site.yml

This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.

    In case of no changes, this step can be skipped.

This is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml; those changes need to be reflected in the /usr/local/share/kolla-ansible/ansible/site.yml file, as sketched below.
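
    A sketch of the append step, following the commands used in the installation section of this document:

    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/     # backup
    # OpenStack Yoga
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
    # Releases other than Yoga
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml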

    3.4] Append triliovault_inventory.txt to the kolla-ansible inventory file

This step is not always required. It is recommended to compare triliovault_inventory.txt with the Trilio entries in the /root/multinode file.

    In case of no changes, this step can be skipped.

    By default, the triliovault-datamover-api service gets installed on ‘control' hosts and the trilio-datamover service gets installed on 'compute’ hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.

    T4O group names are ‘triliovault-datamover-api’ and ‘triliovault-datamover’
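
    A sketch of the append step, assuming the inventory file is located at /root/multinode:

    cat ansible/triliovault_inventory.txt >> /root/multinode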

    4] Configure multi-IP NFS as Trilio backup target

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    On kolla-ansible server node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file. If hostnames are used in the inventory file, then use the same hostnames in the NFS map input file.

The compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.

    vi triliovault_nfs_map_input.yml

The triliovault_nfs_map_input.yml file is explained here.

    Update PyYAML on the kolla-ansible server node only

    Expand the map file to create a one-to-one mapping of compute and NFS share.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

Open the file 'triliovault_nfs_map_output.yml' available in the current directory:

vi triliovault_nfs_map_output.yml

Validate that all compute nodes are covered with all necessary NFS shares.

Append this output map file to triliovault_globals.yml. File path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml

Ensure that multi_ip_nfs_enabled in the triliovault_globals.yml file is set to yes.

• A new parameter multi_ip_nfs_enabled has been added to triliovault_globals.yml. Set this parameter to 'yes' if the backup target NFS supports multiple endpoints/IPs. File path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml

• Later, append the triliovault_globals.yml file to /etc/kolla/globals.yml, as sketched below.
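
    A sketch of the map generation flow, following the commands used in the installation section of this document (run from the cloned triliovault-cfg-scripts directory on the kolla-ansible server node):

    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml                      # update PyYAML on the kolla-ansible server node only
    python ./generate_nfs_map.py                # expands triliovault_nfs_map_input.yml
    vi triliovault_nfs_map_output.yml           # validate the generated one-to-one mapping
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml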

    5] Edit globals.yml to set T4O parameters

    Edit /etc/kolla/globals.yml file to fill triliovault backup target and build details. You will find the triliovault related parameters at the end of globals.yml. The user needs to fill in details like triliovault build version, backup target type, backup target details, etc.

    Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments
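
    As a hedged illustration of an NFS backup target, the Trilio block at the end of /etc/kolla/globals.yml could look like the following; every value shown is a placeholder, and S3 targets use the triliovault_s3_* parameters from the table instead:

    triliovault_tag: "<triliovault_tag>"
    triliovault_docker_username: "<dockerhub-login-username>"
    triliovault_docker_password: "<dockerhub-login-password>"
    triliovault_docker_registry: "docker.io"
    triliovault_backup_target: "nfs"
    multi_ip_nfs_enabled: "no"
    triliovault_nfs_shares: "192.168.122.101:/nfs/tvault"
    triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
    # horizon_image_full: uncomment the pre-populated entry to deploy the Trilio Horizon container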

    6] Enable T4O Snapshot mount feature

    This step is already part of the 4.2 GA installation procedure and should only be verified.

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

    Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

For a default Kolla installation, the variable will look as follows afterward:

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

After the change, the variable will look as follows for a default Kolla installation:

    In the case of using Ironic compute nodes one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append trilio mount /var/trilio:/var/trilio:shared to the list.

    After the changes the variable will look like the following:

    7] Pull containers in case of private repository

In case the user doesn't want to use the Docker Hub registry for triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.

    Following are the triliovault container image URLs for 4.2 releases. Replace kolla_base_distro and triliovault_tag variables with their values.

The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro. The {{ triliovault_tag }} value is mentioned at the start of this document.

Trilio supports the Source-based containers from the OpenStack Yoga release onwards.

    Below are the Source-based OpenStack deployment images

    Below are the Binary-based OpenStack deployment images

    8] Pull T4O container images

    Activate the login into dockerhub for Trilio tagged containers.

    Please get the Dockerhub login credentials from Trilio Sales/Support team

Run the below command from the directory with the multinode file to pull the required images.
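
    The commands are the same as in the installation section of this document; the inventory file name multinode is an example:

    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault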

    9] Run Kolla-Ansible upgrade command

    Run the below command from the directory with the multinode file to start the upgrade process.
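
    A minimal sketch, assuming the standard kolla-ansible upgrade subcommand and an inventory file named multinode; use the upgrade command that matches your cloud deployment:

    kolla-ansible -i multinode upgrade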

    10] Verify Trilio deployment

    Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.

    11] Advance settings/configuration for Trilio services

    11.1] Customize HAproxy configuration parameters for Trilio datamover api service

    Following are the default haproxy conf parameters set against triliovault datamover api service.

These values work best for the triliovault dmapi service. It is not recommended to change these parameter values. However, in some exceptional cases, if any of the above parameter values need to be changed, the same can be done on the kolla-ansible server in the following file.

After editing, run the kolla-ansible deploy command again to push these changes to the OpenStack cloud.

Post kolla-ansible deploy, to verify the changes, please check the following file, available on all controller/haproxy nodes.

12] Enable mount-bind for NFS

    T4O 4.2 is changing the calculation for the mount point hash value when using NFS backups.

Please follow the linked procedure to ensure that backups taken from T4O 4.1 or older can be used with T4O 4.2.

    Installing on Kolla Openstack

    This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.

    1] Plan for Deployment

    Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.

Refer to the below-mentioned acceptable values for the placeholders triliovault_tag and kolla_base_distro in this document, as per the Openstack environment:

    Openstack Version
    triliovault_tag
    kolla_base_distro

    1.1] Select backup target type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    Following backup target types are supported by Trilio. Select one of them and get it ready before proceeding to the next step.

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key - Secret Key - Region - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name

    2] Clone Trilio Deployment Scripts

Clone the triliovault-cfg-scripts GitHub repository on the Kolla ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.

    3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts

    3.1] Add Trilio global variables to globals.yml

    3.2] Add Trilio passwords to kolla passwords.yaml

    Append triliovault_passwords.yml to /etc/kolla/passwords.yml. Passwords are empty. Set these passwords manually in the /etc/kolla/passwords.yml.

    3.3] Append Trilio site.yml content to kolla ansible’s site.yml

    3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.

    3.5] Configure multi-IP NFS

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    On kolla-ansible server node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file. If hostnames are used in the inventory file, then use the same hostnames in the NFS map input file.

The compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.

    vi triliovault_nfs_map_input.yml

The triliovault_nfs_map_input.yml file is explained here.

    Update PyYAML on the kolla-ansible server node only

    Expand the map file to create one to one mapping of compute and nfs share.

    Result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

Open the file 'triliovault_nfs_map_output.yml' available in the current directory:

vi triliovault_nfs_map_output.yml

Validate that all compute nodes are covered with all necessary NFS shares.

Append this output map file to 'triliovault_globals.yml'. File path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml

Ensure that multi_ip_nfs_enabled in the triliovault_globals.yml file is set to yes.

    4] Edit globals.yml to set Trilio parameters

    Edit /etc/kolla/globals.yml file to fill Trilio backup target and build details. You will find the Trilio related parameters at the end of globals.yml file. Details like Trilio build version, backup target type, backup target details, etc need to be filled out.

Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments

    In the case of a different registry than docker hub, Trilio containers need to be pulled from docker.io and pushed to preferred registries.

Following are the triliovault container image URLs for the 4.2 releases. Replace the kolla_base_distro and triliovault_tag variables with their values.

The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.

Trilio supports the Source-based containers from the OpenStack Yoga release onwards.

    Below are the Source-based OpenStack deployment images

    Below are the Binary-based OpenStack deployment images

    5] Enable Trilio Snapshot mount feature

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

For a default Kolla installation, the variable will look as follows afterward:

    Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

After the change, the variable will look as follows for a default Kolla installation:

In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the trilio mount /var/trilio:/var/trilio:shared to the list.

After the changes the variable will look like the following:

    6] Pull Trilio container images

    Activate the login into dockerhub for Trilio tagged containers.

    Please get the Dockerhub login credentials from Trilio Sales/Support team

Pull the Trilio container images from the dockerhub based on the existing inventory file. In the example, the inventory file is named multinode.

    7] Deploy Trilio

All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.

    This is just an example command. You need to use your cloud deploy command.

Post deployment, for a multipath-enabled environment, log into each datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, then restart the datamover container, as sketched below.
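
    A sketch of this step; the container name triliovault_datamover matches the verification output in this document, and placing the option in the defaults section of /etc/multipath.conf is an assumption about your multipath configuration layout:

    docker exec -it triliovault_datamover bash
    vi /etc/multipath.conf          # add: uxsock_timeout 60000   (inside the defaults section)
    exit
    docker restart triliovault_datamover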

    8] Verify Trilio deployment

    Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.

    The example is shown for 4.2.7 maintenance release from Kolla Victoria CentOS binary based setup.

    The example is shown for 4.2.7 maintenance release from Kolla Yoga Ubuntu source based setup.

    The example is shown for 4.2.7 maintenance release from Kolla Yoga Ubuntu binary based setup.

    9] Troubleshooting Tips

    9.1 ] Check Trilio containers and their startup logs

To see all TrilioVault containers running on a specific node use the docker ps command.

    To check the startup logs use the docker logs <container name> command.

    9.2] Trilio Horizon tabs are not visible in Openstack

    Verify that the Trilio Appliance is configured. The Horizon tabs are only shown, when a configured Trilio appliance is available.

    Verify that the Trilio horizon container is installed and in a running state.

    9.3] Trilio Service logs

    • Trilio datamover api service logs on datamover api node

    • Trilio datamover service logs on datamover node

    10. Change the nova user id on the Trilio Nodes

    Note: This step needs to be done on Trilio Appliance node. Not on OpenStack node.

    Pre-requisite: You should have already launched Trilio appliance VM

In the Kolla openstack distribution, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    4. Verify that 'nova' user and group id has changed to '42436'
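
    A verification sketch for step 4; the script itself and its download location are provided by Trilio and are not reproduced here:

    id nova
    # expected output: uid=42436(nova) gid=42436(nova) groups=42436(nova)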

    11. Advanced configurations - [Optional]

11.1] Trilio uses cinder's ceph user for interacting with the Ceph cinder storage. This user name is defined using the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yaml'.

If the user wants to edit this parameter value, they can do so. The impact is that cinder's ceph user and the triliovault datamover's ceph user will be updated upon the next kolla-ansible deploy command.

    Managing Trusts

    Openstack Administrators should never have the need to directly work with the trusts created.

    The cloud-trust is created during the Trilio configuration and further trusts are created as necessary upon creating or modifying a workload.

    List Trusts

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts

Provides the list of trusts for the given Tenant.

    Path Parameters

    Name
    Type
    Description

    Query Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description
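
    As a hedged illustration, the list call could be issued with curl; the shell variables and the is_cloud_admin value are placeholders:

    curl "https://${tvm_address}:8780/v1/${tenant_id}/trusts?is_cloud_admin=false" \
         -H "X-Auth-Project-Id: ${project_name}" \
         -H "X-Auth-Token: ${token}" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"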

    Create Trust

    POST https://$(tvm_address):8780/v1/$(tenant_id)/trusts

Creates a trust in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format

    Show Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Shows all details of a specified trust

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Delete Trust

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/trusts/<trust_id>

    Deletes the specified trust.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Validate Scheduler Trust

    GET https://$(tvm_address):8780/v1/$(tenant_id)/trusts/validate/<workload_id>

    Validates the Trust of a given Workload.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    T4O 4.2 HF4 Release Notes

    Release Versions

    Packages

    Deliverables against TVO-4.2.HF4

    Following packages changed/added in the current release

    Juju charms for Openstack Yoga release

    Containers and Gitbranch

    Changelog

    New Qualifications

    This Hotfix extends the Support Matrix of T4O 4.2 as follows:

    • Fresh 4.2HF4 Trilio appliance build with wlm services on Python3.8 and multiple vulnerability fixes

    • Kolla Ansible Openstack Yoga CentOS Stream 8, Ubuntu 20.04

    • Canonical Yoga Ubuntu 20.04, Ubuntu 22.04

    Fixed Bugs and issues

    1. Irods nfs T4O not compatible

    2. Backups failing with "Unable to call vast_instance"

    3. Backup/restore does not work when NFS has access to Nova user just NOVA

    4. Failed ansible-tvault-contego-extension : create trilio.filters for mount and unmount task


    4.2.64

    puppet-triliovault

    rpm

    4.2.64-4.2

    python3-contegoclient

    deb

    4.2.64

    python3-contegoclient-el8

    rpm

    4.2.64-4.2

    python3-s3-fuse-plugin

    deb

    4.2.64

    python3-s3fuse-plugin

    rpm

    4.2.64-4.2

    python3-trilio-fusepy

    rpm

    3.0.1-1

    python-s3fuse-plugin-cent7

    rpm

    4.2.64-4.2

    s3-fuse-plugin

    deb

    4.2.64

    trilio-fusepy

    rpm

    3.0.1-1

    python3-workloadmgrclient

    deb

    4.2.64.1

    python3-workloadmgrclient-el8

    rpm

    4.2.64.1-4.2

    python-workloadmgrclient

    deb

    4.2.64.1

    workloadmgrclient

    python

    4.2.64.1

    workloadmgrclient

    rpm

    4.2.64.1-4.2

    4.2.64.8-4.2

    tvault-contego

    python

    4.2.64.1

    workloadmgr

    deb

    4.2.64.10

    workloadmgr

    python

    4.2.64.10

    tvault_configurator

    python

    4.2.64.10

    tvault-horizon-plugin

    deb

    4.2.64.2

    tvault-horizon-plugin

    rpm

    4.2.64.2-4.2

    tvault-horizon-plugin

    python

    4.2.64.1

    python3-tvault-horizon-plugin

    deb

    4.2.64.2

    python3-tvault-horizon-plugin-el8

    rpm

    4.2.64.2-4.2

    dmapi

    python

    4.2.64.1

    dmapi

    rpm

    4.2.64.1-4.2

    dmapi

    deb

    4.2.64.1

    python3-dmapi

    deb

    4.2.64.1

    python3-dmapi

    rpm

    4.2.64.1-4.2

    s3fuse

    python

    4.2.64.1

    Jammy (Ubuntu 22.04)

    trilio-charmers-trilio-wlm-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-dm-api-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-data-mover-focal

    latest/edge

    Focal (Ubuntu 20.04)

    trilio-charmers-trilio-horizon-plugin-focal

    latest/edge

    Focal (Ubuntu 20.04)

    4.2.64-hotfix-4-wallaby

    Kolla Yoga Containers

    4.2.64-hotfix-4-yoga

    TripleO Containers

    4.2.63-hotfix-4-tripleo

    lxc packages not installed when using bare metal install

  • dmapi_all also includes bare metal hosts on non LXC deployments

  • Config failed with "iptables: Nothing to save" when TVM utilizing IPv6 | Tmobile/Red Hat | SFDC#2881

  • Package/Container Names

    Package Kind

    Package Version/Container Tags

    contego

    deb

    4.2.64

    contegoclient

    rpm

    4.2.64-4.2

    contegoclient

    deb

    4.2.64

    contegoclient

    Package/Container Names

    Package Kind

    Package/Container Version/Tags

    python3-tvault-contego

    deb

    4.2.64.8

    tvault-contego

    deb

    4.2.64.8

    python3-tvault-contego

    rpm

    4.2.64.8-4.2

    tvault-contego

    Charm name

    Channel

    Supported release

    trilio-charmers-trilio-wlm-jammy

    latest/edge

    Jammy (Ubuntu 22.04)

    trilio-charmers-trilio-dm-api-jammy

    latest/edge

    Jammy (Ubuntu 22.04)

    trilio-charmers-trilio-data-mover-jammy

    latest/edge

    Jammy (Ubuntu 22.04)

    trilio-charmers-trilio-horizon-plugin-jammy

    Name

    Tag

    Gitbranch

    hotfix-4-TVO/4.2

    RHOSP13 containers

    4.2.64-hotfix-4-rhosp13

    RHOSP16.1 containers

    4.2.64-hotfix-4-rhosp16.1

    RHOSP16.2 containers

    4.2.64-hotfix-4-rhosp16.2

    Kolla Ansible Victoria containers

    4.2.64-hotfix-4-victoria

    python

    rpm

    latest/edge

    Kolla Ansible Wallaby containers

    4.2.8-zed

    ubuntu rocky

    <dockerhub-login-password>

Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team.

    triliovault_docker_registry

    Default value: docker.io

    Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to chosen registry first.

    triliovault_backup_target

    • nfs

    • amazon_s3

    • other_s3_compatible

    nfs if the backup target is NFS

    amazon_s3 if the backup target is Amazon S3

    other_s3_compatible if the backup target type is S3 but not amazon S3.

    multi_ip_nfs_enabled

    yes no default: no

This parameter is valid only if you want to use multiple IP/endpoint based NFS shares as the backup target for TrilioVault.

    triliovault_nfs_shares

    <NFS-IP/FQDN>:/<NFS path>

    NFS share path example: ‘192.168.122.101:/nfs/tvault’

    triliovault_nfs_options

'nolock,soft,timeo=180,intr,lookupcache=none'. For Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'

These parameters set the NFS mount options. Keep the default values unless a special requirement exists.

    triliovault_s3_access_key

    S3 Access Key

Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_secret_key

    S3 Secret Key

    Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_region_name

    • Default value: us-east-1

    • S3 Region name

    Valid for amazon_s3 and other_s3_compatible

    If s3 storage doesn't have region parameter keep default

    triliovault_s3_bucket_name

    S3 Bucket name

    Valid for amazon_s3 and other_s3_compatible

    triliovault_s3_endpoint_url

    S3 Endpoint URL

    Valid for other_s3_compatible only

    triliovault_s3_ssl_enabled

    • True

    • False

    Valid for other_s3_compatible only

    Set true for SSL enabled S3 endpoint URL

    triliovault_s3_ssl_cert_file_name

    s3-cert.pem

    Valid for other_s3_compatible only with SSL enabled and self signed certificates

OR issued by a private authority. In this case, copy the ceph s3 ca chain file to /etc/kolla/config/triliovault/

    directory on ansible server. Create this directory if it does not exist already.

    triliovault_copy_ceph_s3_ssl_cert

    • True

    • False

    Valid for other_s3_compatible only

    Set to True when: SSL enabled with self-signed certificates or issued by a private authority.

    After this step, you can proceed to 'Configuring Trilio' section.

    Victoria

    4.2.8-victoria

    ubuntu centos

    Wallaby

    4.2.8-wallaby

    ubuntu centos

    Yoga

    4.2.8-yoga

    ubuntu centos

    triliovault_tag

    <triliovault_tag >

    Use the triliovault tag as per your Kolla openstack version. Exact tag is mentioned in the 1st step

    horizon_image_full

    Uncomment

    By default, Trilio Horizon container would not get deployed.

    Uncomment this parameter to deploy Trilio Horizon container instead of Openstack Horizon container.

    triliovault_docker_username

    <dockerhub-login-username>

    Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from Trilio Sales/Support team

    here

    Zed

    triliovault_docker_password

    User-Agent

    string

    python-workloadmgrclient

    tvm_name

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant / Project to fetch the trusts from

    is_cloud_admin

    boolean

    true/false

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to create the Trust for

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to show

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Trust in

    trust_id

    string

    ID of the Trust to delete

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to validate the Trust of

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:21:57 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-fa48f0ad-aa76-42fa-85ea-1e5461889fb3
    
    {
       "trust":[
          {
             "created_at":"2020-11-26T13:10:53.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"cloud_admin",
             "value":"dbe2e160d4c44d7894836a6029644ea0",
             "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
             "category":"identity",
             "type":"trust_id",
             "public":false,
             "hidden":true,
             "status":"available",
             "metadata":[
                {
                   "created_at":"2020-11-26T13:10:54.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"e9ec386e-79cf-4f6b-8201-093315648afe",
                   "settings_name":"trust-6e290937-de9b-446a-a406-eb3944e5a034",
                   "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                   "key":"role_name",
                   "value":"admin"
                }
             ]
          }
       ]
    }
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    
    # For Centos and Ubuntu
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
    ## For Centos and Ubuntu
    - Take backup of globals.yml
    cp /etc/kolla/globals.yml /opt/
    
    - Append Trilio global variables to globals.yml 
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
    ## For Centos and Ubuntu
    - Take backup of passwords.yml
    cp /etc/kolla/passwords.yml /opt/
    
    - Append Trilio global variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    - Edit '/etc/kolla/passwords.yml', go to end of the file and set trilio passwords.
    # For Centos and Ubuntu
    - Take backup of site.yml
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
    
    # If the OpenStack release is ‘yoga' append below Trilio code to site.yml  
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml    
    
    # If the OpenStack release is other than 'yoga' append below Trilio code to site.yml 
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml                                                            
    For example:
    If your inventory file name path '/root/multinode' then use following command.
    
    cat ansible/triliovault_inventory.txt >> /root/multinode
    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
    
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu source based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu binary based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "
    {% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    
    
    
    "
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "
    {% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    
    
    "
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode deploy
    [root@controller ~]# docker ps | grep datamover-api
    cd9d0ccc19b6   trilio/kolla-centos-trilio-datamover-api:4.2.7-victoria     "dumb-init --single-…"   21 hours ago   Up 21 hours                         triliovault_datamover_api
    
    [root@compute ~]# docker ps | grep datamover
    fae5e4f2e04a   trilio/kolla-centos-trilio-datamover:4.2.7-victoria   "dumb-init --single-…"   21 hours ago   Up 21 hours                       triliovault_datamover
    
    [root@controller ~]# docker ps | grep horizon
    f019ef071d3c   trilio/centos-binary-trilio-horizon-plugin:4.2.7-victoria   "dumb-init --single-…"   21 hours ago   Up 21 hours (unhealthy)             horizon
    
    root@controller:~# docker ps | grep triliovault_datamover_api
    5e9f87240a25   trilio/kolla-ubuntu-trilio-datamover-api:4.2.7-yoga           "dumb-init --single-…"   23 hours ago   Up 23 hours                       triliovault_datamover_api
    
    root@controller:~# docker ps | grep horizon
    4cd644f0486c   trilio/kolla-ubuntu-trilio-horizon-plugin:4.2.7-yoga          "dumb-init --single-…"   23 hours ago   Up 23 hours (healthy)             horizon
    
    root@compute1:~# docker ps | grep triliovault_datamover
    7b6001ef43b9   trilio/kolla-ubuntu-trilio-datamover:4.2.7-yoga              "dumb-init --single-…"   23 hours ago   Up 23 hours                       triliovault_datamover
    root@compute1:~#
    
    root@controller:~# docker ps | grep triliovault_datamover_api
    686b1aff0165   trilio/kolla-ubuntu-trilio-datamover-api:4.2.7-yoga           "dumb-init --single-…"   3 hours ago    Up 3 hours                        triliovault_datamover_api
    
    root@controller:~# docker ps | grep horizon
    d49ac6f52af4   trilio/ubuntu-binary-trilio-horizon-plugin:4.2.7-yoga         "dumb-init --single-…"   3 hours ago    Up 3 hours (healthy)              horizon
    
    root@compute:~# docker ps | grep triliovault_datamover
    c5a01651ddc7   trilio/kolla-ubuntu-trilio-datamover:4.2.7-yoga              "dumb-init --single-…"   3 hours ago    Up 3 hours                        triliovault_datamover
    root@compute:~#
    
    docker ps -a | grep trilio
docker logs triliovault_datamover_api
docker logs triliovault_datamover
    docker ps | grep horizon
    /var/log/kolla/triliovault-datamover-api/dmapi.log
    /var/log/kolla/triliovault-datamover/tvault-contego.log
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:43:36 GMT
    Content-Type: application/json
    Content-Length: 868
    Connection: keep-alive
    X-Compute-Request-Id: req-2151b327-ea74-4eec-b606-f0df358bc2a0
    
    {
       "trust":[
          {
             "created_at":"2021-01-21T11:43:36.140407",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "value":"1c981a15e7a54242ae54eee6f8d32e6a",
             "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
             "category":"identity",
             "type":"trust_id",
             "public":false,
             "hidden":1,
             "status":"available",
             "is_public":false,
             "is_hidden":true,
             "metadata":[
                
             ]
          }
       ]
    }
    {
       "trusts":{
          "role_name":"member",
          "is_cloud_trust":false
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:39:12 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-3c2f6acb-9973-4805-bae3-cd8dbcdc2cb4
    
    {
       "trust":{
          "created_at":"2020-11-26T13:15:29.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"703dfabb4c5942f7a1960736dd84f4d4",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2020-11-26T13:15:29.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"86aceea1-9121-43f9-b55c-f862052374ab",
                "settings_name":"trust-54e24d8d-6bcf-449e-8021-708b4ebc65e1",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 21 Jan 2021 11:41:51 GMT
    Content-Type: application/json
    Content-Length: 888
    Connection: keep-alive
    X-Compute-Request-Id: req-d838a475-f4d3-44e9-8807-81a9c32ea2a8
    {
       "scheduler_enabled":true,
       "trust":{
          "created_at":"2021-01-21T11:43:36.000000",
          "updated_at":null,
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "value":"1c981a15e7a54242ae54eee6f8d32e6a",
          "description":"token id for user adfa32d7746a4341b27377d6f7c61adb project 4dfe98a43bfa404785a812020066b4d6",
          "category":"identity",
          "type":"trust_id",
          "public":false,
          "hidden":true,
          "status":"available",
          "metadata":[
             {
                "created_at":"2021-01-21T11:43:36.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d98d283a-b096-4a68-826a-36f99781787d",
                "settings_name":"trust-b03daf38-1615-48d6-88f9-a807c728e786",
                "settings_project_id":"4dfe98a43bfa404785a812020066b4d6",
                "key":"role_name",
                "value":"member"
             }
          ]
       },
       "is_valid":true,
       "scheduler_obj":{
          "workload_id":"209c13fa-e743-4ccd-81f7-efdaff277a1f",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "user_domain_id":"default",
          "user":"adfa32d7746a4341b27377d6f7c61adb",
          "tenant":"4dfe98a43bfa404785a812020066b4d6"
       }
    }

    Access to the gemfury repository to fetch new packages

    <dockerhub-login-password>

Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team.

triliovault_docker_registry

    Default: docker.io

If users want to use a different container registry for the triliovault containers, they can edit this value. In that case, the user first needs to manually pull the triliovault containers from docker.io and push them to the other registry, as shown in the sketch below.
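Below is a minimal sketch of that mirroring step for one image, assuming a hypothetical private registry registry.example.com and the Yoga tag; repeat it for the datamover-api and horizon-plugin images and adjust image names and tags to your environment.

## Hypothetical example: mirror a triliovault container to a private registry
docker pull docker.io/trilio/kolla-ubuntu-trilio-datamover:4.2.8-yoga
docker tag docker.io/trilio/kolla-ubuntu-trilio-datamover:4.2.8-yoga registry.example.com/trilio/kolla-ubuntu-trilio-datamover:4.2.8-yoga
docker push registry.example.com/trilio/kolla-ubuntu-trilio-datamover:4.2.8-yoga
## Then point triliovault_docker_registry in /etc/kolla/globals.yml at the new registry.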

triliovault_backup_target

    nfs

    amazon_s3

    ceph_s3

    'nfs': If the backup target is NFS

    'amazon_s3': If the backup target is Amazon S3

    'ceph_s3': If the backup target type is S3 but not amazon S3.

    multi_ip_nfs_enabled

    yes no Default: no

This parameter is valid only if you want to use one or more NFS shares with multiple IPs/endpoints as the backup target for TrilioVault.

    dmapi_workers

    Default: 16

If the dmapi_workers field is not present in the config file, the default value will be equal to the number of cores present on the node.

triliovault_nfs_shares

    <NFS-IP/FQDN>:/<NFS path>

    Only with nfs for triliovault_backup_target

    User needs to provide NFS share path, e.g.: 192.168.122.101:/opt/tvault

triliovault_nfs_options

Default: 'nolock,soft,timeo=180,intr,lookupcache=none'. For Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'

Only with nfs for triliovault_backup_target. Keep the default values if unclear.

triliovault_s3_access_key

S3 Access Key

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 access key

triliovault_s3_secret_key

S3 Secret Key

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 secret key

triliovault_s3_region_name

S3 Region Name

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 region or keep default if no region required

triliovault_s3_bucket_name

S3 Bucket name

Only with amazon_s3 or ceph_s3 for triliovault_backup_target

    Provide S3 bucket

triliovault_s3_endpoint_url

    S3 Endpoint URL

    Valid for other_s3_compatible only

triliovault_s3_ssl_enabled

    True

    False

    Only with ceph_s3 for triliovault_backup_target

    Set to true if endpoint is on HTTPS

triliovault_s3_ssl_cert_file_name

    s3-cert-pem

Only with ceph_s3 for triliovault_backup_target.

If SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority, the user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not exist already.

triliovault_copy_ceph_s3_ssl_cert

    True

    False

Set to true if ceph_s3 is used for triliovault_backup_target and SSL is enabled on the S3 endpoint URL with certificates that are self-signed or issued by a private authority.

Victoria: 4.2.8-victoria (ubuntu, centos)

Wallaby: 4.2.8-wallaby (ubuntu, centos)

Yoga: 4.2.8-yoga (ubuntu, centos)

Zed: 4.2.8-zed (ubuntu, rocky)

    triliovault_tag

<triliovault_tag>

    Use the triliovault tag as per your Kolla openstack version. Exact tag is mentioned at the start of this document

horizon_image_full

    Uncomment

By default, the Trilio Horizon container will not get deployed.

Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container.

triliovault_docker_username

    <dockerhub-login-username>

Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team.


triliovault_docker_password
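To illustrate how these variables fit together, below is a hypothetical snippet of the Trilio section appended to /etc/kolla/globals.yml for an NFS backup target with the Yoga tag; all values are placeholders and must be replaced to match your environment, and the S3 variables are only needed for S3 backup targets.

# Hypothetical example values for the appended Trilio variables in /etc/kolla/globals.yml
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
triliovault_docker_registry: "docker.io"
triliovault_tag: "4.2.8-yoga"
triliovault_backup_target: "nfs"
multi_ip_nfs_enabled: "no"
triliovault_nfs_shares: "192.168.122.101:/opt/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
dmapi_workers: 16
# To deploy the Trilio Horizon container, uncomment the horizon_image_full line
# that ships in triliovault_globals.yml instead of adding a new value here.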

    Workload Quotas

    List Quota Types

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types

    Lists all available Quota Types

    [root@TVM2 ~]# pcs resource disable wlm-cron
    [root@TVM2 ~]# systemctl status wlm-cron
    ● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
       Active: inactive (dead)
    
    Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
    Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
    Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
    Hint: Some lines were ellipsized, use -l to show in full.
    [root@TVM2 ~]# pcs resource show wlm-cron
     Resource: wlm-cron (class=systemd type=wlm-cron)
      Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
                  stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
    [root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
    
    mv triliovault-cfg-scripts triliovault-cfg-scripts_old
    mv /usr/local/share/kolla-ansible/ansible/roles/triliovault /opt/triliovault_old
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
#Copy the backed-up original globals.yml (which does not contain the triliovault variables) over the current globals.yml
    cp /opt/globals.yml /etc/kolla/globals.yml
    
    #Append Trilio global variables to globals.yml
    cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
#Take backup of current password file
cp /etc/kolla/passwords.yml /opt/password-<CURRENT-RELEASE>.yml

#Reset the passwords file to the default one by reverting the backed-up original passwords.yml. This backup would have been taken during the previous install/upgrade.
cp /opt/passwords.yml /etc/kolla/passwords.yml
    
    #Append Trilio password variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    #File /etc/kolla/passwords.yml to be edited to set passwords.
    #To set the passwords, it's recommended to use the same passwords as done during previous T4O deployment, as present in the password file backup (/opt/password-<CURRENT-RELEASE>.yml). 
    #Any additional passwords (in triliovault_passwords.yml), should be set by the user in /etc/kolla/passwords.yml.
    #Take backup of current site.yml file
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/site-<CURRENT-RELEASE>.yml
    
    #Reset the site.yml to default one by reverting the backed-up original site.yml inside current site.yml. This backup would have been taken during previous install/upgrade.
    cp /opt/site.yml /usr/local/share/kolla-ansible/ansible/site.yml
    
# If the OpenStack release is 'yoga' append below Trilio code to site.yml
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml    
    
    # If the OpenStack release is other than 'yoga' append below Trilio code to site.yml 
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml                               
For example:
If your inventory file path is '/root/multinode', then use the following commands.

#Cleanup old T4O groups from /root/multinode and copy the latest triliovault inventory file
    cat ansible/triliovault_inventory.txt >> /root/multinode
    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "
    {% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    
    
    
    
    "
      - "nova_libvirt_qemu:/etc/libvirt/qemu"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
      - "
    {% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
      - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}
    
    
    "
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu source based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu binary based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode upgrade
    root@controller:~# docker ps | grep triliovault_datamover_api
    686b1aff0165   trilio/kolla-ubuntu-trilio-datamover-api:4.2.7-yoga           "dumb-init --single-…"   3 hours ago    Up 3 hours                        triliovault_datamover_api
    
    root@controller:~# docker ps | grep horizon
    d49ac6f52af4   trilio/ubuntu-binary-trilio-horizon-plugin:4.2.7-yoga         "dumb-init --single-…"   3 hours ago    Up 3 hours (healthy)              horizon
    
    root@compute:~# docker ps | grep triliovault_datamover
    c5a01651ddc7   trilio/kolla-ubuntu-trilio-datamover:4.2.7-yoga              "dumb-init --single-…"   3 hours ago    Up 3 hours                        triliovault_datamover
    
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /usr/local/share/kolla-ansible/ansible/roles/triliovault/defaults/main.yml
    /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
    docker.io
    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
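As a hedged illustration only, the request can be issued with curl as sketched below; $tvm_address, $tenant_id, $project_name and $token are shell variables the caller is assumed to have set, and a sample response follows.

curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/projects_quota_types" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"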

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:40:56 GMT
    Content-Type: application/json
    Content-Length: 1625
    Connection: keep-alive
    X-Compute-Request-Id: req-2ad95c02-54c6-4908-887b-c16c5e2f20fe
    
    {
       "quota_types":[
          {
             "created_at":"2020-10-19T10:05:52.000000",
             "updated_at":"2020-10-19T10:07:32.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
             "display_name":"Workloads",
    
    

    Show Quota Type

    GET https://$(tvm_address):8780/v1/$(tenant_id)/projects_quota_types/<quota_type_id>

    Requests the details of a Quota Type

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project to work in

    quota_type_id

    string

    ID of the Quota Type to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
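A minimal curl sketch of this request, assuming the same shell variables as in the earlier example plus $quota_type_id:

curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/projects_quota_types/$quota_type_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"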

    Create allowed Quota

    POST https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Creates an allowed Quota with the given parameters

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to create the allowed Quota in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format
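The JSON body format for this request is listed further down this page; the following is a hedged curl sketch with placeholder IDs and illustrative quota values (10 and 8):

curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/$project_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas": [{"project_id": "<project_id>", "quota_type_id": "<quota_type_id>", "allowed_value": 10, "high_watermark": 8}]}'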

    List allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<project_id>

    Lists all allowed Quotas for a given project.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    ID of the Tenant/Project to list allowed Quotas from

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
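A minimal curl sketch of this request, using the same assumed shell variables as in the earlier examples:

curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/$project_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"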

    Show allowed Quota

    GET https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quota/<allowed_quota_id>

    Shows details for a given allowed Quota

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
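A minimal curl sketch of this request, assuming $allowed_quota_id holds the ID of the allowed Quota:

curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quota/$allowed_quota_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"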

    Update allowed Quota

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/update_allowed_quota/<allowed_quota_id>

    Updates an allowed Quota with the given parameters

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to update

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format
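The JSON body format for this request is listed further down this page; a hedged curl sketch using the documented example values:

curl -k -X PUT "https://$tvm_address:8780/v1/$tenant_id/update_allowed_quota/$allowed_quota_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"allowed_quotas": {"project_id": "<project_id>", "allowed_value": "20000", "high_watermark": "18000"}}'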

    Delete allowed Quota

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/project_allowed_quotas/<allowed_quota_id>

    Deletes a given allowed Quota

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to work in

    <allowed_quota_id>

    string

    ID of the allowed Quota to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
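A minimal curl sketch of this request, assuming the same shell variables as above:

curl -k -X DELETE "https://$tvm_address:8780/v1/$tenant_id/project_allowed_quotas/$allowed_quota_id" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"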

    Workload Import and Migration

    import Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/get_list/import_workloads

    Provides the list of all importable workloads

    Path Parameters

    Name
    Type
    Description

    Query Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description
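The path, query and header parameters for this call are described further down this page; below is a hedged curl sketch that includes the optional migrate_cloud query parameter:

curl -k -X GET "https://$tvm_address:8780/v1/$tenant_id/workloads/get_list/import_workloads?migrate_cloud=True" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Accept: application/json" \
     -H "User-Agent: python-workloadmgrclient"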

    orphaned Workload list

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/orphan_workloads

    Provides the list of all orphaned workloads

    Path Parameters

    Name
    Type
    Description

    Query Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Import Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/import_workloads

    Imports all or the provided workloads

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body format
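The body format for this request is shown further down this page; a hedged curl sketch importing a single workload:

curl -k -X POST "https://$tvm_address:8780/v1/$tenant_id/workloads/import_workloads" \
     -H "X-Auth-Project-Id: $project_name" \
     -H "X-Auth-Token: $token" \
     -H "Content-Type: application/json" \
     -H "Accept: application/json" \
     -d '{"workload_ids": ["<workload_id>"], "upgrade": true}'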

    T4O 4.2 HF3 Release Notes

    Prerequisites

To use this hotfix (4.2.HF3), the following prerequisites apply:

1. Customers (except Canonical Openstack) running Openstack Ussuri or Openstack Victoria need to have an already deployed and working TVO-4.2 GA.

    Backups-Admin Area

Trilio provides Backup-as-a-Service, which allows Openstack Users to manage and control their backups themselves. This doesn't eliminate the need for a Backup Administrator, who has an overview of the complete Backup Solution.

To provide Backup Administrators with the tools they need, Trilio for Openstack provides a Backups-Admin area in Horizon in addition to the API and CLI.

    Access the Backups-Admin area

    To access the Backups-Admin area follow these steps:

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:44:43 GMT
    Content-Type: application/json
    Content-Length: 342
    Connection: keep-alive
    X-Compute-Request-Id: req-5bf629fe-ffa2-4c90-b704-5178ba2ab09b
    
    {
       "quota_type":{
          "created_at":"2020-10-19T10:05:52.000000",
          "updated_at":"2020-10-19T10:07:32.000000",
          "deleted_at":null,
          "deleted":false,
          "version":"4.0.115",
          "id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "display_name":"Workloads",
          "display_description":"Total number of workload creation allowed per project",
          "status":"available"
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 15:51:51 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-08c8cdb6-b249-4650-91fb-79a6f7497927
    
    {
       "allowed_quotas":[
          {
             
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:01:39 GMT
    Content-Type: application/json
    Content-Length: 766
    Connection: keep-alive
    X-Compute-Request-Id: req-e570ce15-de0d-48ac-a9e8-60af429aebc0
    
    {
       "allowed_quotas":[
          {
             "id":"262b117d-e406-4209-8964-004b19a8d422",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":5,
             "high_watermark":4,
             "version":"4.0.115",
             "quota_type_name":"Workloads"
          },
          {
             "id":"68e7203d-8a38-4776-ba58-051e6d289ee0",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":-1,
             "high_watermark":-1,
             "version":"4.0.115",
             "quota_type_name":"Storage"
          },
          {
             "id":"ed67765b-aea8-4898-bb1c-7c01ecb897d2",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "quota_type_id":"be323f58-2e08-11ea-889c-7440bb00b67d",
             "allowed_value":50,
             "high_watermark":25,
             "version":"4.0.115",
             "quota_type_name":"VMs"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:15:07 GMT
    Content-Type: application/json
    Content-Length: 268
    Connection: keep-alive
    X-Compute-Request-Id: req-d87a57cd-c14c-44dd-931e-363158376cb7
    
    {
       "allowed_quotas":{
          "id":"262b117d-e406-4209-8964-004b19a8d422",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "quota_type_id":"1c5d4290-2e08-11ea-889c-7440bb00b67d",
          "allowed_value":5,
          "high_watermark":4,
          "version":"4.0.115",
          "quota_type_name":"Workloads"
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:24:04 GMT
    Content-Type: application/json
    Content-Length: 24
    Connection: keep-alive
    X-Compute-Request-Id: req-a4c02ee5-b86e-4808-92ba-c363b287f1a2
    
    {"allowed_quotas": [{}]}
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Wed, 18 Nov 2020 16:33:09 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    {
       "allowed_quotas":[
          {
             "project_id":"<project_id>",
             "quota_type_id":"<quota_type_id>",
             "allowed_value":"<integer>",
             "high_watermark":"<Integer>"
          }
       ]
    }
    {
       "allowed_quotas":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "allowed_value":"20000",
          "high_watermark":"18000"
       }
    }
    "display_description":"Total number of workload creation allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"b7273a06-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Snapshots",
    "display_description":"Total number of snapshot creation allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"be323f58-2e08-11ea-889c-7440bb00b67d",
    "display_name":"VMs",
    "display_description":"Total number of VMs allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"c61324d0-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Volumes",
    "display_description":"Total number of volume attachments allowed per project",
    "status":"available"
    },
    {
    "created_at":"2020-10-19T10:05:52.000000",
    "updated_at":"2020-10-19T10:07:32.000000",
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"f02dd7a6-2e08-11ea-889c-7440bb00b67d",
    "display_name":"Storage",
    "display_description":"Total storage (in Bytes) allowed per project",
    "status":"available"
    }
    ]
    }

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    project_id

    string

    restricts the output to the given project

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to work in

    migrate_cloud

    boolean

    True also shows Workloads from different clouds

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:34:10 GMT
    Content-Type: application/json
    Content-Length: 7888
    Connection: keep-alive
    X-Compute-Request-Id: req-9d73e5e6-ca5a-4c07-bdf2-ec2e688fc339
    
    {
       "workloads":[
          {
             "created_at":"2020-11-02T13:40:06.000000",
             "updated_at":"2020-11-09T09:53:30.000000",
             "id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "name":"Workload_1",
             "description":"no-description",
             "interval":null,
             "storage_usage":null,
             "instances":null,
             "metadata":[
                {
                   "created_at":"2020-11-09T09:57:23.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"ee27bf14-e460-454b-abf5-c17e3d484ec2",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"63cd8d96-1c4a-4e61-b1e0-3ae6a17bf533",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-05T10:27:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"22d3e3d6-5a37-48e9-82a1-af2dda11f476",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                   "value":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2"
                },
                {
                   "created_at":"2020-11-09T09:37:20.000000",
                   "updated_at":"2020-11-09T09:57:23.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"61615532-6165-45a2-91e2-fbad9eb0b284",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"b083bb70-e384-4107-b951-8e9e7bbac380",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-02T13:40:24.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"5a53c8ee-4482-4d6a-86f2-654d2b06e28c",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"backup_media_target",
                   "value":"10.10.2.20:/upstream"
                },
                {
                   "created_at":"2020-11-05T10:27:14.000000",
                   "updated_at":"2020-11-09T09:57:23.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"5cb4dc86-a232-4916-86bf-42a0d17f1439",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"e33c1eea-c533-4945-864d-0da1fc002070",
                   "value":"c8468146-8117-48a4-bfd7-49381938f636"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":"2020-11-02T14:10:30.000000",
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"506cd466-1e15-416f-9f8e-b9bdb942f3e1",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"hostnames",
                   "value":"[\"cirros-1\", \"cirros-2\"]"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"093a1221-edb6-4957-8923-cf271f7e43ce",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"pause_at_snapshot",
                   "value":"0"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"79baaba8-857e-410f-9d2a-8b14670c4722",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"policy_id",
                   "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
                },
                {
                   "created_at":"2020-11-02T13:40:06.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"4e23fa3d-1a79-4dc8-86cb-dc1ecbd7008e",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"preferredgroup",
                   "value":"[]"
                },
                {
                   "created_at":"2020-11-02T14:10:30.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"ed06cca6-83d8-4d4c-913b-30c8b8418b80",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"topology",
                   "value":"\"\\\"\\\"\""
                },
                {
                   "created_at":"2020-11-02T13:40:23.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"4b6a80f7-b011-48d4-b5fd-f705448de076",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "key":"workload_approx_backup_size",
                   "value":"6"
                }
             ],
             "jobschedule":"(dp0\nVfullbackup_interval\np1\nV-1\np2\nsVretention_policy_type\np3\nVNumber of Snapshots to Keep\np4\nsVend_date\np5\nVNo End\np6\nsVstart_time\np7\nV01:45 PM\np8\nsVinterval\np9\nV5\np10\nsVenabled\np11\nI00\nsVretention_policy_value\np12\nV10\np13\nsVtimezone\np14\nVUTC\np15\nsVstart_date\np16\nV11/02/2020\np17\nsVappliance_timezone\np18\nVUTC\np19\ns.",
             "status":"locked",
             "error_msg":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/18b809de-d7c8-41e2-867d-4a306407fb11"
                }
             ],
             "scheduler_trust":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 10:42:01 GMT
    Content-Type: application/json
    Content-Length: 120143
    Connection: keep-alive
    X-Compute-Request-Id: req-b443f6e7-8d8e-413f-8d91-7c30ba166e8c
    
    {
       "workloads":[
          {
             "created_at":"2019-04-24T14:09:20.000000",
             "updated_at":"2019-05-16T09:10:17.000000",
             "id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
             "user_id":"6ef8135faedc4259baac5871e09f0044",
             "project_id":"863b6e2a8e4747f8ba80fdce1ccf332e",
             "availability_zone":"nova",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
             "name":"comdirect_test",
             "description":"Daily UNIX Backup 03:15 PM Full 7D Keep 8",
             "interval":null,
             "storage_usage":null,
             "instances":null,
             "metadata":[
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:54.000000",
                   "updated_at":null,
                   "value":"ca544215-1182-4a8f-bf81-910f5470887a",
                   "version":"3.2.46",
                   "key":"40965cbb-d352-4618-b8b0-ea064b4819bb",
                   "deleted_at":null,
                   "id":"5184260e-8bb3-4c52-abfa-1adc05fe6997"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"10.10.2.20:/upstream",
                   "version":"3.2.46",
                   "key":"backup_media_target",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"02dd0630-7118-485c-9e42-b01d23aa882c"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-05-16T09:13:51.000000",
                   "updated_at":null,
                   "value":"51693eca-8714-49be-b409-f1f1709db595",
                   "version":"3.2.46",
                   "key":"eb7d6b13-21e4-45d1-b888-d3978ab37216",
                   "deleted_at":null,
                   "id":"4b79a4ef-83d6-4e5a-afb3-f4e160c5f257"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[\"Comdirect_test-2\", \"Comdirect_test-1\"]",
                   "version":"3.2.46",
                   "key":"hostnames",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"0cb6a870-8f30-4325-a4ce-e9604370198e"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":false,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"0",
                   "version":"3.2.46",
                   "key":"pause_at_snapshot",
                   "deleted_at":null,
                   "id":"5d4f109c-9dc2-48f3-a12a-e8b8fa4f5be9"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:20.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"[]",
                   "version":"3.2.46",
                   "key":"preferredgroup",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"9a223fbc-7cad-4c2c-ae8a-75e6ee8a6efc"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:11:49.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"\"\\\"\\\"\"",
                   "version":"3.2.46",
                   "key":"topology",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"77e436c0-0921-4919-97f4-feb58fb19e06"
                },
                {
                   "workload_id":"0ed39f25-5df2-4cc5-820f-2af2cde6aa67",
                   "deleted":true,
                   "created_at":"2019-04-24T14:09:30.000000",
                   "updated_at":"2019-05-16T09:01:23.000000",
                   "value":"121",
                   "version":"3.2.46",
                   "key":"workload_approx_backup_size",
                   "deleted_at":"2019-05-16T09:01:23.000000",
                   "id":"79aa04dd-a102-4bd8-b672-5b7a6ce9e125"
                }
             ],
             "jobschedule":"(dp1\nVfullbackup_interval\np2\nV7\nsVretention_policy_type\np3\nVNumber of days to retain Snapshots\np4\nsVend_date\np5\nV05/31/2019\np6\nsVstart_time\np7\nS'02:15 PM'\np8\nsVinterval\np9\nV24 hrs\np10\nsVenabled\np11\nI01\nsVretention_policy_value\np12\nI8\nsS'appliance_timezone'\np13\nS'UTC'\np14\nsVtimezone\np15\nVAfrica/Porto-Novo\np16\nsVstart_date\np17\nS'04/24/2019'\np18\ns.",
             "status":"locked",
             "error_msg":null,
             "links":[
                {
                   "rel":"self",
                   "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                },
                {
                   "rel":"bookmark",
                   "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/orphan_workloads/4dfe98a43bfa404785a812020066b4d6/workloads/0ed39f25-5df2-4cc5-820f-2af2cde6aa67"
                }
             ],
             "scheduler_trust":null
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 11:03:55 GMT
    Content-Type: application/json
    Content-Length: 100
    Connection: keep-alive
    X-Compute-Request-Id: req-0e58b419-f64c-47e1-adb9-21ea2a255839
    
    {
       "workloads":{
          "imported_workloads":[
             "faa03-f69a-45d5-a6fc-ae0119c77974"        
          ],
          "failed_workloads":[
     
          ]
       }
    }
    {
       "workload_ids":[
          "<workload_id>"
       ],
       "upgrade":true
    }

Customers (except Canonical Openstack) running Openstack Wallaby need to follow the T4O-4.2 GA deployment process and directly upgrade to 4.2.HF3 containers/packages. The high-level flow is below:

1. Deploy the T4O-4.2 GA appliance.

    2. Upgrade to 4.2.HF3 packages on the appliance.

    3. Kolla

      1. Deploy Trilio components via 4.2.HF3 containers/packages on Openstack Wallaby.

    4. Openstack Ansible

  1. Deploy Trilio components on Openstack Wallaby [This will deploy 4.2 GA packages]

      2. Upgrade TrilioVault packages to 4.2.HF3 on Openstack Wallaby.

    5. Configure the Trilio appliance.

  • Canonical users having Openstack Ussuri or Openstack Victoria can either upgrade (on top of 4.2 GA) using Trilio upgrade documents OR do a new deployment using 4.2 Deployment documents.

  • Canonical users having Openstack Wallaby need to do a new deployment using 4.2 Deployment documents.

  • Release Scope

    Current Hotfix release targets the following:

    1. High-level Qualification (via Sanity & Functional suites' execution) of T4O with Ussuri, Victoria & Wallaby Openstack.

2. Verification of Jira issues targeted for the 4.2 release.

    3. As part of the new process, the delivery will be via packages; end users would need to do the rolling upgrade on top of 4.2 GA.

    Release Artifacts

    Artifacts

    Reference

    1

    Release Date

    Aug 25, 2022

    2

    Debian URL

    deb [trusted=yes] /

    3

    RPM URL

    4

    Branch Tag and Containers Tags

    Note : Container images with tag 4.2.64-hotfix-3-rhosp16.1 are not available for download from RedHat registry due to technical issues. Hence, it is recommended to use the latest tag, i.e. 4.2.64-hotfix-4-rhosp16.1.

Ref link for 4.2.64-hotfix-4-rhosp16.1: T4O 4.2 HF4 Release Notes


    Tag Reference in Install/Upgrade Docs

    Value

    Comments

    1

    4.2 Hotfix triliovault-cfg-scripts branch name

    hotfix-3-TVO/4.2

    Label against the Trilio repositories from where required code to be pulled for upgrades.

    2

    4.2 Hotfix RHOSP13 Container tag

    4.2.64-hotfix-3-rhosp13

    RHOSP13 Container tag for 4.2.HF3

    3

    Resolved Issues

    Summary

    Restore failing while creating security group

    Tvault configuration failing with build 4.1.19

    Configuration fails with pcs auth

    privsep Unhandled error: ConnectionRefusedError

    Tvault configuration failing with build 4.1.19

    Datamover container restarting

    While deploying trilio-wlm 4.2 directly on the machine is getting stuck at workloadmgr package installation.

    trilio data mover pods stuck in reboot loop post stack update on RHOSP 13

    Reassigning a workload from a deleted project fails

    Deliverables

    Package/Container Names

    Package Kind

    Package Version/Container Tags

    1

    contego

    deb

    4.2.64

    2

    contegoclient

    rpm

    4.2.64-4.2

    3

    Package Added/Changed

    Package/Container Names

    Package Kind

    Package/Container Version/Tags

    1

    python3-tvault-contego

    deb

    4.2.64.7

    2

    tvault-contego

    deb

    4.2.64.7

    3

    Known Issues

    Summary

    Workaround/Comments (if any)

    1

    encrypted volume backup fails with SSO user

Follow the below steps if T4O is reconfigured with the 'creator' role:

1. Log in to any T4O node.

2. Source the particular user's rc file.

3. Run the below command to get the trust id: workloadmgr trust-list

4. In order to create an encrypted workload, the user needs to delete the existing trust which was created using a role other than 'creator'.

    2

    additional security rule is getting added in shared security group after restore

It will go as a known issue in 4.2 HF3 and will be targeted in 4.2 HF4.

    3

    4

• Log in to Horizon using an admin user.

  • Click on Admin Tab.

  • Navigate to Backups-Admin Tab.

  • Navigate to Trilio page.

  • The Backups-Admin area provides the following features.

It is possible to reduce the shown information down to a single tenant, to see the exact impact the chosen Tenant has.

    Status overview

The status overview is always visible in the Backups-Admin area. It provides the most needed information at a glance, including:

    • Storage Usage (nfs only)

    • Number of protected VMs compared to number of existing VMs

    • Number of currently running Snapshots

    • Status of TVault Nodes

    • Status of Contego Nodes

The status of the nodes is reported as good when the services are running and healthy.

    Workloads tab

    This tab provides information about all currently existing Workloads. It is the most important overview tab for every Backup Administrator and therefore the default tab shown when opening the Backups-Admin area.

    The following information is shown:

    • User-ID that owns the Workload

    • Project that contains the Workload

    • Workload name

    • Workload Type

    • Availability Zone

    • Amount of protected VMs

    • Performance information about the last 30 backups

      • How much data was backed up (green bars)

      • How long the Backup took (red line)

    • Pie chart showing the number of Full (Blue) Backups compared to Incremental (Red) Backups

    • Number of successful Backups

    • Number of failed Backups

    • Storage used by that Workload

    • Which Backup target is used

    • When the next Snapshot will run

    • The general interval of the Workload

    • Scheduler Status including a Switch to deactivate/activate the Workload

    Usage tab

    Administrators often need to figure out where a lot of resources are used up, or they need to quickly provide usage information to a billing system. This tab helps with these tasks by providing the following information:

    • Storage used by a Tenant

    • VMs protected by a Tenant

    It is possible to drill down to see the same information per workload and finally per protected VM.

    The Usage tab includes workloads and VMs that are no longer actively used by a Tenant, but exist on the backup target.

    Nodes tab

    This tab displays information about the Trilio cluster nodes. The following information is shown:

    • Node name

    • Node ID

    • Trilio Version of the node

    • IP Address

    • Active Controller Node (True/False)

    • Status of the Node

    The Virtual IP is shown as its own node. It is typically shown directly below the current active Controller Node.

    Data Movers tab (Trilio Data Mover Service)

    This tab displays information about the Trilio contego service. The following information is shown:

    • Service-Name

    • Compute Node the service is running on

    • Zone

    • Service Status from Openstack perspective (enabled/disabled)

    • Version of the Service

    • General Status

    • last time the Status was updated

    Storage tab

    This tab displays information about the backup target storage. It contains the following information:

    • Storage Name

    Clicking on the Storage name provides an overview of all workloads stored on that storage.

    • Capacity of the storage

    • Total utilization of the storage

    • Status of the storage

    • Statistic information

      • Percentage of all storage used

      • Percentage of storage used for full backups

      • Number of Full backups versus Incremental backups

    Audit tab

    Audit logs provide the sequence of workload-related activities done by users, like workload creation, snapshot creation, etc. The following information is shown:

    • Time of the entry

    • What task has been done

    • Project the task has performed in

    • User that performed the task

    The Audit log can be searched for strings, for example to find only entries done by a specific user.

    Additionally, the shown timeframe can be changed as necessary.

    License tab

    The license tab provides an overview of the current license and allows uploading new licenses or validating the current license.

    A license validation is automatically done when opening the tab.

    The following information about an active license is shown:

    • Organization (License name)

    • License ID

    • Purchase date - when the license was created

    • License Expiry Date

    • Maintenance Expiry Date

    • License value

    • License Edition

    • License Version

    • License Type

    • Description of the License

    • Evaluation (True/False)

    Trilio will stop all activities once a license is no longer valid or expired.

    Policy tab

    The policy tab gives Administrators the possibility to work with workload policies.

    Please use Workload Policies in the Admin guide to learn more about how to create and use Workload Policies.

    Settings tab

    This tab manages all global settings for the whole cloud. Trilio has two types of settings:

    1. Email settings

    2. Job scheduler settings.

    Email Settings

    These settings will be used by Trilio to send email reports of snapshots and restores to users.

    Configuring the Email settings is required to provide Email notifications to Openstack users.

    The following information is required to configure the email settings:

    • SMTP Server

    • SMTP username

    • SMTP password

    • SMTP port

    • SMTP timeout

    • Sender email address

    A test email can be sent directly from the configuration page.

    To work with email settings through CLI use the following commands:

    To set an email setting for the first time or after deletion use the following options (an example call is shown after this list):

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    • --type➡️settings type ➡️ set to email_settings

    • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

    • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

    • --metadata➡️optional metadata for the setting ➡️ Not required for email settings

    • <name>➡️name of the setting ➡️ Take from the list below

    • <value>➡️value of the setting ➡️ Take value type from the list below
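
    For example, to define the SMTP port and the mail server for the first time, the calls could look like this (illustrative values; the setting names and value types are listed in the table below):

    workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_port 587
    workloadmgr setting-create --type email_settings --is-public False --is-hidden False smtp_server_name Mailserver_A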

    To update an already set email setting through CLI use:

    • --description➡️Optional description (Default=None) ➡️ Not required for email settings

    • --category➡️Optional setting category (Default=None) ➡️ Not required for email settings

    • --type➡️settings type ➡️ set to email_settings

    • --is-public➡️sets if the setting can be seen publicly ➡️ set to False

    • --is-hidden➡️sets if the setting will always be hidden ➡️ set to False

    • --metadata➡️optional metadata for the setting ➡️ Not required for email settings

    • <name>➡️name of the setting ➡️ Take from the list below

    • <value>➡️value of the setting ➡️ Take value type from the list below

    To show an already set email setting use:

    • --get_hidden➡️show hidden settings (True) or not (False) ➡️ Not required for email settings, use False if set

    • <setting_name>➡️name of the setting to show➡️ Take from the list below

    To delete a set email setting use:

    • <setting_name>➡️name of the setting to delete ➡️ Take from the list below

    Setting name
    Value type
    example

    smtp_default___recipient

    String

    [email protected]

    smtp_default___sender

    String

    [email protected]

    smtp_port

    Integer

    587

    Disable/Enable Job Scheduler

    The Global Job Scheduler can be used to deactivate all scheduled workloads without modifying each one of them.

    To activate/deactivate the Global Job Scheduler through the Backups-Admin area:

    1. Login to Horizon using admin user.

    2. Click on Admin Tab.

    3. Navigate to Backups-Admin Tab.

    4. Navigate to Trilio page.

    5. Navigate to the Settings tab

    6. Click "Disable/Enable Job Scheduler"

    7. Check or Uncheck the box for "Job Scheduler Enabled"

    8. Confirm by clicking on "Change"

    The Global Job Scheduler can be controlled through CLI as well.

    To get the status of the Global Job Scheduler use:

    To deactivate the Global Job Scheduler use:

    To activate the Global Job Scheduler use:

    workloadmgr setting-create [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-update [--description <description>]
                               [--category <category>]
                               [--type <type>]
                               [--is-public {True,False}]
                               [--is-hidden {True,False}]
                               [--metadata <key=value>]
                               <name> <value>
    workloadmgr setting-show [--get_hidden {True,False}] <setting_name>
    workloadmgr setting-delete <setting_name>
    workloadmgr get-global-job-scheduler
    workloadmgr disable-global-job-scheduler
    workloadmgr enable-global-job-scheduler

    smtp_server_name

    String

    Mailserver_A

    smtp_server_username

    String

    admin

    smtp_server_password

    String

    password

    smtp_timeout

    Integer

    10

    smtp_email_enable

    Boolean

    True

    workloadmgr trust-delete <TrustID>

    5. create a new trust with ‘creator’ role

    workloadmgr trust-create creator

    6.now create encrypted workload

    PIP URL

    https://pypi.fury.io/triliodata-4-2/

    4.2 Hotfix RHOSP16.1 Container tag

    4.2.64-hotfix-3-rhosp16.1

    RHOSP16.1 Container tag for 4.2.HF3

    4

    4.2 Hotfix RHOSP16.2 Container tag

    4.2.64-hotfix-3-rhosp16.2

    RHOSP16.2 Container tag for 4.2.HF3

    5

    4.2 Hotfix Kolla Victoria Container tag

    4.2.64-hotfix-3-victoria

    Kolla Container tag against 4.2.HF3

    6

    4.2 Hotfix Kolla Wallaby Container tag

    4.2.64-hotfix-3-wallaby

    Kolla Container tag against 4.2.HF3

    7

    4.2 Hotfix TripleO Container tag

    4.2.63-hotfix-3-tripleo

    TripleO Train CentOS 7 Container tag for 4.2.HF3

    Reassign of Workload from Deleted Project Fails SFDC #2821

    default_tvault_dashboard_tvo-tvm not available after yum update

    workload policy shows incorrect start time

    tvault-config service is in the crash loop on 2 out of 3 nodes T4O cluster

    Trilio core functionality operations do not perform as expected when the master T4O node is powered off.

    backup stuck in uploading phase

    Backup failed at snapshot_network_topology

    contegoclient

    deb

    4.2.64

    4

    contegoclient

    python

    4.2.64

    5

    dmapi

    rpm

    4.2.64-4.2

    6

    dmapi

    deb

    4.2.64

    7

    puppet-triliovault

    rpm

    4.2.64-4.2

    8

    python3-contegoclient

    deb

    4.2.64

    9

    python3-contegoclient-el8

    rpm

    4.2.64-4.2

    10

    python3-dmapi

    deb

    4.2.64

    11

    python3-dmapi

    rpm

    4.2.64-4.2

    12

    python3-s3-fuse-plugin

    deb

    4.2.64

    13

    python3-s3fuse-plugin

    rpm

    4.2.64-4.2

    14

    python3-trilio-fusepy

    rpm

    3.0.1-1

    15

    python-s3fuse-plugin-cent7

    rpm

    4.2.64-4.2

    16

    s3fuse

    python

    4.2.64

    17

    s3-fuse-plugin

    deb

    4.2.64

    18

    trilio-fusepy

    rpm

    3.0.1-1

    19

    4.2-RHOSP13-CONTAINER

    Containers

    4.2.64-hotfix-3-rhosp13

    20

    4.2-RHOSP16.1-CONTAINER

    Containers

    4.2.64-hotfix-3-rhosp16.1

    21

    4.2-RHOSP16.2-CONTAINER

    Containers

    4.2.64-hotfix-3-rhosp16.2

    22

    4.2-KOLLA-CONTAINER Victoria

    Containers

    4.2.64-hotfix-3-victoria

    23

    4.2-KOLLA-CONTAINER Wallaby

    Containers

    4.2.64-hotfix-3-wallaby

    24

    4.2-TRIPLEO-CONTAINER

    Containers

    4.2.64-hotfix-3-tripleo

    python3-tvault-contego

    rpm

    4.2.64.7-4.2

    4

    tvault-contego

    rpm

    4.2.64.7-4.2

    5

    workloadmgr

    deb

    4.2.64.6

    6

    workloadmgr

    python

    4.2.64.6

    7

    tvault_configurator

    python

    4.2.64.6

    8

    tvault-horizon-plugin

    deb

    4.2.64.1

    9

    tvault-horizon-plugin

    rpm

    4.2.64.1-4.2

    10

    python3-tvault-horizon-plugin

    deb

    4.2.64.1

    11

    python3-tvault-horizon-plugin-el8

    rpm

    4.2.64.1-4.2

    12

    python3-workloadmgrclient

    deb

    4.2.64.1

    13

    python3-workloadmgrclient-el8

    rpm

    4.2.64.1-4.2

    14

    python-workloadmgrclient

    deb

    4.2.64.1

    15

    workloadmgrclient

    python

    4.2.64.1

    16

    workloadmgrclient

    rpm

    4.2.64.1-4.2

    [encrypted] Post restore of incremental snapshots centos instance is not getting booted

    There is no workaround as such, but the customer can restore the already taken full snapshot.

    5

    [Intermittent] In-place restore doesn't work for ext3 & ext4 file system in canonical bionic-queens

    In-place restore doesn't work well for ext3 & ext4 file system in canonical bionic-queens.

    After in-place restore instance has data from the latest snapshot for ext3 & ext4 file system, however In-place restore was done for previous full/incremental snapshot.

    6

    Performance difference between encrypted & unencrypted WL/snapshot

    With encryption in place, the user will see some performance degradation across all operations performed by Trilio.

    Stats below as per trials in Trilio Lab

    Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : ~108GB

    1. For unencrypted WL : 62 min

    2. For encrypted WL : 82 min

    Snapshot time for Windows Image booted VM. No additional data except OS. : ~12 GB

    1. For unencrypted WL : 10 min

    2. For encrypted WL : 18 min

    7

    get-importworkload-list and get-orphaned-workloads-list are showing the wrong list of WLs

    The customer needs to use the --project option with the importworkload-list CLI to get the list of WLs that can be imported for a particular OpenStack project.

    workloadmgr workload-get-importworkloads-list --project_id <project_id>

    8

    File-search not displaying files present in logical vol on volume group (LVM)

    If we create lvm partition using fdisk utility, then the file search will work.

    9

    Retention not working post snapshot mount/unmount operation

    1. List the Workload id for which Retention is failing due to the ownership change issue.
    2. Run the command: chown -R nova:nova <WL_DIR>
    3. After running the above command, the snapshot ownership should show as nova:nova.

    10

    [Barbican]File search on encrypted workload returns empty data

    By default, if the root directory does not have read permissions for the group, then file search will also fail, as it runs as the nova user.

    11

    Single corrupted snapshot impacts import of all other valid snapshots causing file search failure

    As per the current import design flow, if any single WL is corrupted (in the current case a few DB files were missing), then other good workloads get impacted during import, but the import operation does not stop or fail. The respective wlm-api logs will show the error.

    To mitigate the impact, the identified corrupted WL should be manually removed from the target backend, followed by a reinitialize and import.

    12

    Test email error message should be in readable and understandable format

    NA

    13

    File search will not work on Canonical if wlm is running on container (lxc container in this case)

    NA

    LP Bug : https://bugs.launchpad.net/trilio/+bug/1961149

    14

    Unable to create encrypted workload if T4O reconfigured with creator trustee role.

    If T4O is initially configured with member as the trustee role and the user then reconfigures it with creator as the trustee role, this failure will occur. Workaround: Follow the below steps if T4O is reconfigured with the ‘creator’ role

    1. login to any T4O node

    2. source particular user rc file

    3. run the command to get the trust id (workloadmgr trust-list)

    4. In order to create an encrypted workload, the user needs to delete the existing trust which was created using a role other than ‘creator’ (workloadmgr trust-delete <TrustID>)

    5. create a new trust with ‘creator’ role (workloadmgr trust-create creator)

    6. now create encrypted workload

    https://apt.fury.io/triliodata-4-2/
    baseurl=http://trilio:[email protected]:8283/triliodata-4-2/yum/

    Workloads

    List Workloads

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Provides the list of all workloads for the given tenant/project id

    Snapshots

    List Snapshots

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots

    Lists all Snapshots.

    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to fetch the workloads from

    Query Parameters

    Name
    Type
    Description

    nfs_share

    string

    lists workloads located on a specific nfs-share

    all_workloads

    boolean

    admin role required - True lists workloads of all tenants/projects

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
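
    For illustration, this request can be issued with curl as sketched below (assuming a valid Keystone token in $token; header values follow the tables above):

    curl -X GET "https://$(tvm_address):8780/v1/$(tenant_id)/workloads" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"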

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 14:55:40 GMT
    Content-Type: application/json
    Content-Length: 3480
    Connection: keep-alive
    X-Compute-Request-Id: req-a2e49b7e-ce0f-4dcb-9e61-c5a4756d9948
    
    {
       "workloads":[
          {
             "project_id":"4dfe98a43bfa404785a812020066b4d6",
             "user_id":"adfa32d7746a4341b27377d6f7c61adb",
             "id":"8ee7a61d-a051-44a7-b633-b495e6f8fc1d",
             "name":"worklaod1",
             "snapshots_info":"",
             "description":"no-description",
             "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
    

    Create Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads

    Creates a workload in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to create the workload in

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Workload create requires a Body in json format, to provide the requested information.

    Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.
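
    As a sketch, a workload create call with a minimal JSON body could look like the following (the body fields follow the Body format shown later in this document; all IDs and the token are placeholders):

    curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/workloads" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d '{"workload": {"name": "API created", "description": "API description", "workload_type_id": "<workload_type_id>", "source_platform": "openstack", "instances": [{"instance-id": "<instance_id>"}]}}'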

    Show Workload

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Shows all details of a specified workload

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Project/Tenant where to find the Workload

    workload_id

    string

    ID of the Workload to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Modify Workload

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Modifies a workload in the provided Tenant/Project with the given details.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant/Project where to find the workload in

    workload_id

    string

    ID of the Workload to modify

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    Workload modify requires a Body in json format, to provide the information about the values to modify.

    All values in the body are optional.

    Using a policy-id will pull the following information from the policy. Values provided in the Body will be overwritten with the values from the Policy.

    Delete Workload

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Deletes the specified Workload.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to delete

    Query Parameters

    Name
    Type
    Description

    database_only

    boolean

    True leaves the Workload data on the Backup Target

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
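
    A sketch of the corresponding curl call (token and IDs are placeholders; set database_only=True to keep the Workload data on the Backup Target):

    curl -X DELETE "https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>?database_only=False" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"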

    Unlock Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/unlock

    Unlocks the specified Workload

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to unlock

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Reset Workload

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>/reset

    Resets the defined workload

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio Service

    tenant_id

    string

    ID of the Tenant where to find the Workload in

    workload_id

    string

    ID of the Workload to reset

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication Token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Projects to fetch the Snapshots from

    Query Parameters

    Name
    Type
    Description

    host

    string

    host name of the TVM that took the Snapshot

    workload_id

    string

    ID of the Workload to list the Snapshots of

    date_from

    string

    starting date of Snapshots to show

    Format: YYYY-MM-DDTHH:MM:SS

    date_to

    string

    ending date of Snapshots to show

    Format: YYYY-MM-DDTHH:MM:SS

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Take Snapshot

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot in

    workload_id

    string

    ID of the Workload to take the Snapshot in

    Query Parameters

    Name
    Type
    Description

    full

    boolean

    True creates a full Snapshot

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body format

    When creating a Snapshot it is possible to provide additional information

    This Body is completely optional
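
    A sketch of a full-snapshot request including the optional body (token and IDs are placeholders; the body fields follow the format shown later in this document):

    curl -X POST "https://$(tvm_address):8780/v1/$(tenant_id)/workloads/<workload_id>?full=True" \
         -H "X-Auth-Project-Id: <project_name>" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d '{"snapshot": {"name": "API taken", "description": "API taken description", "is_scheduled": false}}'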

    Show Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Shows the details of a specified Snapshot

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio Service

    tenant_id

    string

    ID of the Tenant/Project to take the Snapshot from

    snapshot_id

    string

    ID of the Snapshot to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Delete Snapshot

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Deletes a specified Snapshot

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to delete

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    Cancel Snapshot

    GET https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>/cancel

    Cancels the Snapshot process of a given Snapshot

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to find the Snapshot in

    snapshot_id

    string

    ID of the Snapshot to cancel

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 29 Oct 2020 15:42:02 GMT
    Content-Type: application/json
    Content-Length: 703
    Connection: keep-alive
    X-Compute-Request-Id: req-443b9dea-36e6-4721-a11b-4dce3c651ede
    
    {
       "workload":{
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "name":"API created",
          "snapshots_info":"",
          "description":"API description",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "status":"creating",
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:01.000000",
          "scheduler_trust":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ]
       }
    }
    retention_policy_type
    retention_policy_value
    interval
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:08:42 GMT
    Content-Type: application/json
    Content-Length: 1536
    Connection: keep-alive
    X-Compute-Request-Id: req-afb76abb-aa33-427e-8219-04fc2b91bce0
    
    {
       "workload":{
          "created_at":"2020-10-29T15:42:01.000000",
          "updated_at":"2020-10-29T15:42:18.000000",
          "id":"c4e3aeeb-7d87-4c49-99ed-677e51ba715e",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "availability_zone":"nova",
          "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
          "name":"API created",
          "description":"API description",
          "interval":null,
          "storage_usage":{
             "usage":0,
             "full":{
                "snap_count":0,
                "usage":0
             },
             "incremental":{
                "snap_count":0,
                "usage":0
             }
          },
          "instances":[
             {
                "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                "name":"cirros-4",
                "metadata":{
                   
                }
             },
             {
                "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                "name":"cirros-3",
                "metadata":{
                   
                }
             }
          ],
          "metadata":{
             "hostnames":"[]",
             "meta":"data",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "preferredgroup":"[]",
             "workload_approx_backup_size":"6"
          },
          "jobschedule":{
             "retention_policy_type":"Number of Snapshots to Keep",
             "end_date":"15/27/2020",
             "start_time":"3:00 PM",
             "interval":"5",
             "enabled":false,
             "retention_policy_value":"10",
             "timezone":"UTC+2",
             "start_date":"10/27/2020",
             "fullbackup_interval":"-1",
             "appliance_timezone":"UTC",
             "global_jobscheduler":true
          },
          "status":"available",
          "error_msg":null,
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/workloads/c4e3aeeb-7d87-4c49-99ed-677e51ba715e"
             }
          ],
          "scheduler_trust":null
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 12:31:42 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-674a5d71-4aeb-4f99-90ce-7e8d3158d137
    retention_policy_type
    retention_policy_value
    interval
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:31:00 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:41:55 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 02 Nov 2020 13:52:30 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive
    {
       "workload":{
          "name":"<name of the Workload>",
          "description":"<description of workload>",
          "workload_type_id":"<ID of the chosen Workload Type",
          "source_platform":"openstack",
          "instances":[
             {
                "instance-id":"<Instance ID>"
             },
             {
                "instance-id":"<Instance ID>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>"
             "timezone":"<timezone>",
             "start_date":"<Date format: MM/DD/YYYY>"
             "end_date":"<Date format MM/DD/YYYY>",
             "start_time":"<Time format: HH:MM AM/PM>",
             "interval":"<Format: Integer hr",
             "enabled":"<True/False>"
          },
          "metadata":{
             <key>:<value>,
             "policy_id":"<policy_id>"
          }
       }
    }
    {
       "workload":{
          "name":"<name>",
          "description":"<description>"
          "instances":[
             {
                "instance-id":"<instance_id>"
             },
             {
                "instance-id":"<instance_id>"
             }
          ],
          "jobschedule":{
             "retention_policy_type":"<'Number of Snapshots to Keep'/'Number of days to retain Snapshots'>",
             "retention_policy_value":"<Integer>",
             "timezone":"<timezone>",
             "start_time":"<HH:MM AM/PM>",
             "end_date":"<MM/DD/YYYY>",
             "interval":"<Integer hr>",
             "enabled":"<True/False>"
          },
          "metadata":{
             "meta":"data",
             "policy_id":"<policy_id>"
       }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 12:58:38 GMT
    Content-Type: application/json
    Content-Length: 266
    Connection: keep-alive
    X-Compute-Request-Id: req-ed391cf9-aa56-4c53-8153-fd7fb238c4b9
    
    {
       "snapshots":[
          {
             "id":"1ff16412-a0cd-4e6a-9b4a-b5d4440fffc4",
             "created_at":"2020-11-02T14:03:18.000000",
             "status":"available",
             "snapshot_type":"full",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "name":"snapshot",
             "description":"-",
             "host":"TVM1"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 13:58:38 GMT
    Content-Type: application/json
    Content-Length: 283
    Connection: keep-alive
    X-Compute-Request-Id: req-fb8dc382-e5de-4665-8d88-c75b2e473f5c
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.694637",
          "status":"creating",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:07:18 GMT
    Content-Type: application/json
    Content-Length: 6609
    Connection: keep-alive
    X-Compute-Request-Id: req-f88fb28f-f4ce-4585-9c3c-ebe08a3f60cd
    
    {
       "snapshot":{
          "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "created_at":"2020-11-04T13:58:37.000000",
          "updated_at":"2020-11-04T14:06:03.000000",
          "finished_at":"2020-11-04T14:06:03.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "snapshot_type":"full",
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.80",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.80\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:10:91",
                      "ip_address":"192.168.100.80",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"fa888089-5715-4228-9e5a-699f8f9d59ba",
                      "restore_size":1073741824,
                      "vm_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "volume_id":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_name":"51491d30-9818-4332-b056-1f174e65d3e3",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             },
             {
                "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "availability_zone":"nova",
                   "config_drive":"",
                   "data_transfer_time":"0",
                   "object_store_transfer_time":"0",
                   "root_partition_type":"Linux",
                   "trilio_ordered_interfaces":"192.168.100.176",
                   "vm_metadata":"{\"workload_name\": \"Workload_1\", \"workload_id\": \"18b809de-d7c8-41e2-867d-4a306407fb11\", \"trilio_ordered_interfaces\": \"192.168.100.176\", \"config_drive\": \"\"}",
                   "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
                   "workload_name":"Workload_1"
                },
                "flavor":{
                   "vcpus":"1",
                   "ram":"512",
                   "disk":"1",
                   "ephemeral":"0"
                },
                "security_group":[
                   {
                      "name":"default",
                      "security_group_type":"neutron"
                   }
                ],
                "nics":[
                   {
                      "mac_address":"fa:16:3e:cf:4d:27",
                      "ip_address":"192.168.100.176",
                      "network":{
                         "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                         "name":"robert_internal",
                         "cidr":null,
                         "network_type":"neutron",
                         "subnet":{
                            "id":"b7b54304-aa82-4d50-91e6-66445ab56db4",
                            "name":"robert_internal",
                            "cidr":"192.168.100.0/24",
                            "ip_version":4,
                            "gateway_ip":"192.168.100.1"
                         }
                      }
                   }
                ],
                "vdisks":[
                   {
                      "label":null,
                      "resource_id":"c8293bb0-031a-4d33-92ee-188380211483",
                      "restore_size":1073741824,
                      "vm_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "volume_id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_name":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                      "volume_size":"1",
                      "volume_type":"iscsi",
                      "volume_mountpoint":"/dev/vda",
                      "availability_zone":"nova",
                      "metadata":{
                         "readonly":"False",
                         "attached_mode":"rw"
                      }
                   }
                ]
             }
          ],
          "name":"API taken 2",
          "description":"API taken description 2",
          "host":"TVM1",
          "size":44171264,
          "restore_size":2147483648,
          "uploaded_size":44171264,
          "progress_percent":100,
          "progress_msg":"Snapshot of workload is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":428,
          "pinned":false,
          "metadata":[
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"16fc1ce5-81b2-4c07-ac63-6c9232e0418f",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"backup_media_target",
                "value":"10.10.2.20:/upstream"
             },
             {
                "created_at":"2020-11-04T13:58:37.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"5a56bbad-9957-4fb3-9bbc-469ec571b549",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"cancel_requested",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"d36abef7-9663-4d88-8f2e-ef914f068fb4",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"data_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"c75f9151-ef87-4a74-acf1-42bd2588ee64",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"hostnames",
                "value":"[\"cirros-1\", \"cirros-2\"]"
             },
             {
                "created_at":"2020-11-04T14:05:29.000000",
                "updated_at":"2020-11-04T14:05:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"02916cce-79a2-4ad9-a7f6-9d9f59aa8424",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"object_store_transfer_time",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"96efad2f-a24f-4cde-8e21-9cd78f78381b",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"pause_at_snapshot",
                "value":"0"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"572a0b21-a415-498f-b7fa-6144d850ef56",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"policy_id",
                "value":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"dfd7314d-8443-4a95-8e2a-7aad35ef97ea",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"preferredgroup",
                "value":"[]"
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"2e17e1e4-4bb1-48a9-8f11-c4cd2cfca2a9",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"topology",
                "value":"\"\\\"\\\"\""
             },
             {
                "created_at":"2020-11-04T14:05:57.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"33762790-8743-4e20-9f50-3505a00dbe76",
                "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
                "key":"workload_approx_backup_size",
                "value":"6"
             }
          ],
          "restores_info":""
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:18:36 GMT
    Content-Type: application/json
    Content-Length: 56
    Connection: keep-alive
    X-Compute-Request-Id: req-82ffb2b6-b28e-4c73-89a4-310890960dbc
    
    {"task": {"id": "a73de236-6379-424a-abc7-33d553e050b7"}}
    
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Wed, 04 Nov 2020 14:26:44 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-47a5a426-c241-429e-9d69-d40aed0dd68d
    {
       "snapshot":{
          "is_scheduled":<true/false>,
          "name":"<name>",
          "description":"<description>"
       }
    }
    "status":"available",
    "created_at":"2020-10-26T12:07:01.000000",
    "updated_at":"2020-10-29T12:22:26.000000",
    "scheduler_trust":null,
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/8ee7a61d-a051-44a7-b633-b495e6f8fc1d"
    }
    ]
    },
    {
    "project_id":"4dfe98a43bfa404785a812020066b4d6",
    "user_id":"adfa32d7746a4341b27377d6f7c61adb",
    "id":"a90d002a-85e4-44d1-96ac-7ffc5d0a5a84",
    "name":"workload2",
    "snapshots_info":"",
    "description":"no-description",
    "workload_type_id":"f82ce76f-17fe-438b-aa37-7a023058e50d",
    "status":"available",
    "created_at":"2020-10-20T09:51:15.000000",
    "updated_at":"2020-10-29T10:03:33.000000",
    "scheduler_trust":null,
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/4dfe98a43bfa404785a812020066b4d6/workloads/a90d002a-85e4-44d1-96ac-7ffc5d0a5a84"
    }
    ]
    }
    ]
    }

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    all

    boolean

    admin role required - True lists all Snapshots of all Workloads

    User-Agent

    string

    python-workloadmgrclient

    Installing on RHOSP

    The Red Hat Openstack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

    Trilio is integrating natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.

    1. Prepare for deployment

    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    Following backup target types are supported by Trilio

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key - Secret Key - Region - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name

    1.2] Clone triliovault-cfg-scripts repository

    The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud-deploy command has to have been run successfully already and the overcloud should be available.

    All commands need to be run as user 'stack' on undercloud node

    The following command clones the triliovault-cfg-scripts github repository.

    Next access the Red Hat Director scripts according to the used RHOSP version.

    RHOSP 13

    RHOSP 16.1

    RHOSP 16.2

    RHOSP 17.0

    The remaining documentation will use the following path for examples:
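
    Based on the placeholder used elsewhere in this guide, the examples assume a working directory of the following form (replace <RHOSP_RELEASE_DIRECTORY> with the directory matching your RHOSP version):

    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>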

    1.3] If backup target type is 'Ceph based S3' with SSL:

    If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'
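
    A sketch of the copy step, assuming the CA chain file is currently named ca-chain.pem in the working directory:

    cp ca-chain.pem triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/s3-cert.pem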

    2] Upload trilio puppet module

    The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.

    The Trilio puppet module is uploaded to the overcloud as a swift deploy artifact with the heat resource name 'DeployArtifactURLs'. If you check trilio's puppet module artifact file, it looks like the following.

    Note: If your overcloud deploy command is using any other deploy artifact through an environment file, then you need to merge the trilio deploy artifact URL and your URL into a single file.

    • How to check if your overcloud deploy environment files are using deploy artifacts? Check for the string 'DeployArtifactURLs' in your environment files (only those mentioned in the overcloud deploy command with the -e option). If you find it in any such environment file, then your deploy command is using a deploy artifact.

    • In that case you need to merge all deploy artifacts into a single file. Refer to the following steps.

    Let's say your artifact file path is "/home/stack/templates/user-artifacts.yaml"; then refer to the following steps to merge both URLs into a single file and pass that new file to the overcloud deploy command with the '-e' option.
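
    A sketch of what the merged artifact file could contain, with both URLs listed under the single DeployArtifactURLs parameter (both URLs here are placeholders):

    parameter_defaults:
      DeployArtifactURLs:
        - "<your existing artifact URL>"
        - "<Trilio puppet module artifact URL>"

    The merged file is then passed once to the overcloud deploy command with the '-e' option instead of the two separate artifact files.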

    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

    In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    All commands need to be run as user 'stack'

    3.1] Add Trilio Datamover Api Service to role data file

    This service needs to share the same role as the keystone and database service. In case of the pre-defined roles, these services run on the role Controller. In case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.

    Add the following line to the identified role:
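
    A sketch of the entry, assuming the service name used by Trilio's TripleO integration (verify against the templates shipped in triliovault-cfg-scripts):

    - OS::TripleO::Services::TrilioDatamoverApi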

    3.2] Add Trilio Datamover Service to role data file

    This service needs to share the same role as the nova-compute service. In case of the pre-defined roles, the nova-compute service runs on the role Compute. In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.

    Add the following line to the identified role:
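
    Sketch of the entry, under the same assumption about the service naming:

    - OS::TripleO::Services::TrilioDatamover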

    3.3] Add Trilio Horizon Service role data file

    This service needs to share the same role as the OpenStack Horizon server. In case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:
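
    Sketch of the entry, under the same assumption about the service naming:

    - OS::TripleO::Services::TrilioHorizon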

    4] Prepare Trilio container images

    All commands need to be run as user 'stack'

    Trilio containers are pushed to 'RedHat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.

    Please note that using the hotfix containers requires that the Trilio Appliance is upgraded to the desired hotfix level as well.

    Refer to the word <HOTFIX-TAG-VERSION> as 4.2.8 in the below sections

    RHOSP 13

    RHOSP 16.1

    RHOSP 16.2

    RHOSP 17.0

    There are three registry methods available in RedHat Openstack Platform.

    1. Remote Registry

    2. Local Registry

    3. Satellite Server

    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

    In this method, container images get downloaded directly on the overcloud nodes during overcloud deploy/update command execution. The user can set the remote registry to the redhat registry or any other private registry they want to use. The user needs to provide the credentials of the registry in the 'containers-prepare-parameter.yaml' file.

    1. Make sure other openstack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.

    2. Populate 'containers-prepare-parameter.yaml' with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true' and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials of the registry 'registry.redhat.io' will work for the 'registry.connect.redhat.com' registry too.

    Note: This file -'containers-prepare-parameter.yaml'

    Redhat document for remote registry method:

    Note: File 'containers-prepare-parameter.yaml' gets created as output of command 'openstack tripleo container image prepare'. Refer above document by RedHat

    3. Make sure you have network connectivity to above registries from all overcloud nodes. Otherwise image pull operation will fail.

    4. The user needs to manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:

    trilio_env.yaml file path:

    At this step, you have configured Trilio image urls in the necessary environment file.

    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml.

    RHOSP13

    RHOSP 16.1

    RHOSP16.2

    RHOSP17.0

    At this step, you have downloaded triliovault container images and configured triliovault image urls in necessary environment file.

    4.3] Red Hat Satellite Server

    Follow this section when a Satellite Server is used for the container registry.

    Pull the Trilio containers on the Red Hat Satellite using the given container pull URLs.

    Populate the trilio_env.yaml with container urls.

    RHOSP 13

    RHOSP 16.1

    RHOSP16.2

    RHOSP17.0

    At this step, you have downloaded the triliovault container images into the RedHat Satellite server and configured the triliovault image urls in the necessary environment file.

    5] Configure multi-IP NFS

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP used to access the NFS Volume per datamover instead of globally.

    On Undercloud node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    Run this command on undercloud by sourcing 'stackrc'.
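
    A typical way to list the overcloud node names, assuming the standard tripleo client on the undercloud:

    source ~/stackrc
    openstack server list -c Name -c Networks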

    Edit the input map file and fill in all the details. Refer to the documentation for details about the structure of this file.

    vi triliovault_nfs_map_input.yml

    Update pyyaml on the undercloud node only

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate output map file

    Open the file 'triliovault_nfs_map_output.yml'

    vi triliovault_nfs_map_output.yml

    available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
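
    As a quick check (a simple grep sketch; replace <RHOSP_RELEASE_DIRECTORY> with your release directory), both settings can be verified in trilio_env.yaml before deploying:

    ## Verify the multi-IP NFS related settings in trilio_env.yaml
    grep -E 'MultiIPNfsEnabled|BackupTargetType|NfsShares' /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml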

    6] Provide environment details in trilio_env.yaml

    Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated during the preparation of the container images. It is still recommended to verify the container URLs.

    The following additional information is required:

    • Network for the datamover api

    • datamover password

    • Backup target type {nfs/s3}

    • In case of NFS

    NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10 (a mount sanity check sketch follows after this list)

    • In case of S3

      • S3 type {amazon_s3/ceph_s3}

      • S3 Access key

      • S3 Secret key

    Use ceph_s3 for any non-aws S3 backup targets.
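
    Before running the overcloud deployment, it can help to verify from an overcloud node that the NFS share is mountable with the chosen options. A minimal sketch, assuming a standard NFS client and using the example share and options from trilio_env.yaml:

    ## Manual NFS mount test with the options that will be configured for Trilio
    sudo mkdir -p /mnt/tvault-test
    sudo mount -t nfs -o nolock,soft,timeo=180,intr,lookupcache=none 192.168.122.101:/opt/tvault /mnt/tvault-test
    df -h /mnt/tvault-test
    sudo umount /mnt/tvault-test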

    7] Advanced Settings/Configuration

    7.1] Haproxy customized configuration for Trilio dmapi service

    The existing default haproxy configuration works fine with most environments. Only change the configuration as described here when timeout issues with the dmapi are observed or other reasons are known.

    The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file.

    Trilio datamover haproxy default configuration from the above file looks as follows:

    The user can change the following configuration parameter values.

    To change these default values, the following steps need to be done. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release; valid values are rhosp13, rhosp16.1, rhosp16.2 and rhosp17.0)

    For RHOSP13

    For RHOSP16.1

    For RHOSP16.2

    For RHOSP17.0

    ii) Search for the following entries and edit them as required

    iii) Save the changes.
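
    After the overcloud is redeployed, the effective values can be checked in the rendered haproxy configuration on a haproxy node (a simple grep sketch using the file location given above):

    ## Verify the customized trilio_datamover_api settings on a haproxy/controller node
    sudo grep -A 20 'listen trilio_datamover_api' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg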

    7.2] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

    • If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover Service container, they can use the following heat environment file. A variable named 'TrilioDatamoverOptVolumes' is available in this file.

    • This variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts.

    • The user needs to edit this file and add the volume mounts in the below format.

    • For example:

    • In this volume mount "/mnt/dir2:/var/dir2", "/mnt/dir2" is a directory available on the compute host and "/var/dir2" is the mount point inside the datamover container.

    • Next, the user needs to pass this file to the overcloud deploy command with the '-e' option, like below.
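
    After the overcloud deployment, the extra mounts can be verified inside the running container. A minimal sketch using podman (RHOSP 16.x/17.x; RHOSP13 uses docker instead), run on a compute node:

    ## Confirm the extra bind mounts are present in the trilio_datamover container
    sudo podman inspect trilio_datamover --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'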

    8] Deploy overcloud with trilio environment

    Use the following heat environment file and roles data file in overcloud deploy command:

    1. trilio_env.yaml

    2. roles_data.yaml

    3. Use correct Trilio endpoint map file as per available Keystone endpoint configuration

    To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:

    Post deployment, for multipath enabled environments, log into the respective datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf and restart the datamover container.
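
    An illustrative sketch of this workaround on a compute node (podman based releases; the multipath.conf change itself is done inside the container):

    ## Multipath workaround sketch: set uxsock_timeout inside the datamover container, then restart it
    sudo podman exec -it trilio_datamover bash
    # inside the container, add the following line to /etc/multipath.conf and exit:
    #   uxsock_timeout 60000
    sudo podman restart trilio_datamover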

    9] Verify deployment

    If the containers are in a restarting state or not listed by the following command, then the deployment was not done correctly. Please recheck that you followed the complete documentation.

    9.1] On Controller node

    Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. If the role for these containers is not "controller", check on the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    9.2] On Compute node

    Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.

    9.3] On the node with Horizon service

    Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container has the latest OpenStack Horizon plus Trilio's Horizon plugin.

    If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4 then use the below workaround.

    10] Troubleshooting for overcloud deployment failures

    Trilio components will be deployed using puppet scripts.

    In case the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights:

    list of NFS Shares

  • NFS options

  • MultiIPNfsEnabled

  • S3 Region name

  • S3 Bucket

  • S3 Endpoint URL

  • S3 Signature Version

  • S3 Auth Version

  • S3 SSL Enabled {true/false}

  • S3 SSL Cert

  • Instead of tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

  • Instead of tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

  • Instead of tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml

  • If activated use the correct trilio_nfs_map.yaml file

  • Here
    Red Hat registry URLs.
    this page
    https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
    cd /home/stack
    git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp13/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP13, RHOSP16.1 and RHOSP16.2
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Command is same for RHOSP17.0 but the command output and file content would be different
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP17.0
    Creating tarball...
    Tarball created.
    renamed '/tmp/puppet-modules-P3duCg9/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## For RHOSP13, RHOSP16.1 and RHOSP16.2
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    ## For RHOSP17.0
    (undercloud) [stack@undercloud17-3 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    parameter_defaults:
      DeployArtifactFILEs:
      - /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz
    
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml | grep http >> /home/stack/templates/user-artifacts.yaml
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/templates/user-artifacts.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_57ba596219d143c8b076e9fcc4139f3g/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=dc972b7ce75226c278ab3fa8237d31cc1f2115sc&temp_url_expires=3446738365'
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    OS::TripleO::Services::TrilioHorizon
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    File Name: containers-prepare-parameter.yaml
    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          namespace: registry.redhat.io/...
          ...
      ...
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          myuser: 'p@55w0rd!'
        registry.connect.redhat.com:
          myuser: 'p@55w0rd!'
      ContainerImageRegistryLogin: true
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    # For RHOSP13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    # For RHOSP16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    # For RHOSP17.0
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/scripts/
    
    ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp13
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    $ docker image list | grep <HOTFIX-TAG-VERSION>-rhosp13
    172.30.5.101:8787/trilio/trilio-datamover                  <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    registry.connect.redhat.com/trilio/trilio-datamover        <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    172.30.5.101:8787/trilio/trilio-datamover-api              <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-datamover-api    <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-horizon-plugin   <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    172.30.5.101:8787/trilio/trilio-horizon-plugin             <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.1
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp16.1
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1  |
    -----------------------------------------------------------------------------------------------------
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.2
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp16.2
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2  |
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp17.0
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp17.0
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0  |
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    resource_registry:
      OS::TripleO::Services::TrilioDatamover: ../services/trilio-datamover.yaml
      OS::TripleO::Services::TrilioDatamoverApi: ../services/trilio-datamover-api.yaml
      OS::TripleO::Services::TrilioHorizon: ../services/trilio-horizon.yaml
    
      # NOTE: If there are additional customizations to the endpoint map (e.g. for
      # other integrations), this will need to be regenerated.
      OS::TripleO::EndpointMap: endpoint_map.yaml
    
    parameter_defaults:
    
       ## Enable Trilio's quota functionality on horizon
       ExtraConfig:
         horizon::customization_module: 'dashboards.overrides'
    
       ## Define network map for trilio datamover api service
       ServiceNetMap:
           TrilioDatamoverApiNetwork: internal_api
    
       ## Trilio Datamover Password for keystone and database
       TrilioDatamoverPassword: "test1234"
    
       ## Trilio container pull urls
       DockerTrilioDatamoverImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## If you do not want Trilio's horizon plugin to replace your horizon container, just comment following line.
       ContainerHorizonImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## Backup target type nfs/s3, used to store snapshots taken by triliovault
       BackupTargetType: 'nfs'
    
       ## If the backup target NFS share supports multiple IPs and you want to use more than one of those IPs, then
       ## set this parameter to True. Otherwise keep it False.
       MultiIPNfsEnabled: False
    
       ## For backup target 'nfs'
       NfsShares: '192.168.122.101:/opt/tvault'
       NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
    
       ## For backup target 's3'
       ## S3 type: amazon_s3/ceph_s3
       S3Type: 'amazon_s3'
    
       ## S3 access key
       S3AccessKey: ''
    
       ## S3 secret key
       S3SecretKey: ''
    
       ## S3 region, if your s3 does not have any region, just keep the parameter as it is
       S3RegionName: ''
    
       ## S3 bucket name
       S3Bucket: ''
    
       ## S3 endpoint url, not required for Amazon S3, keep it as it is
       S3EndpointUrl: ''
    
       ## S3 signature version
       S3SignatureVersion: 'default'
    
       ## S3 Auth version
       S3AuthVersion: 'DEFAULT'
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint url then change it to 'True', otherwise keep it as 'False'
       S3SslEnabled: False
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint URL and SSL certificates are self signed, then
       ## user need to set this parameter value to: '/etc/tvault-contego/s3-cert.pem', otherwise keep it's value as empty string.
       S3SslCert: ''
    
       ## Configure 'dmapi_workers' parameter of '/etc/dmapi/dmapi.conf' file
       ## This parameter value used to spawn the number of dmapi processes to handle the incoming api requests.
       ## If your dmapi node has 'n' cpu cores, it is recommended to set this parameter to '4*n'.
       ## If the dmapi_workers field is not present in the config file, the default value will be equal to the number of cores present on the node
       DmApiWorkers: 16
    
       ## Don't edit following parameter
       EnablePackageInstall: True
    
    
       ## Load 'rbd' kernel module on all compute nodes
       ComputeParameters:
         ExtraKernelModules:
           rbd: {}
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    listen trilio_datamover_api
      bind 172.25.0.107:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.0.107:8784 transparent
      balance roundrobin
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      maxconn 50000
      option httpchk
      option httplog
      retries 5
      timeout check 10m
      timeout client 10m
      timeout connect 10m
      timeout http-request 10m
      timeout queue 10m
      timeout server 10m
      server overcloud-controller-0.internalapi.localdomain 172.25.0.106:8784 check fall 5 inter 2000 rise 2
    
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/services/trilio-datamover-api.yaml
              tripleo::haproxy::trilio_datamover_api::options:
                 'retries': '5'
                 'maxconn': '50000'
                 'balance': 'roundrobin'
                 'timeout http-request': '10m'
                 'timeout queue': '10m'
                 'timeout connect': '10m'
                 'timeout client': '10m'
                 'timeout server': '10m'
                 'timeout check': '10m'
    triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE>/environments/trilio_datamover_opt_volumes.yaml
    parameter_defaults:
      TrilioDatamoverOptVolumes:
        - /opt/dir1:/opt/dir1
        - /mnt/dir2:/var/dir2
    openstack overcloud deploy --templates \
    -e <> \
    .
    .
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_datamover_opt_volumes.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Either of the below workarounds should be performed on all the controller nodes where issue occurs for horizon pod.
    
    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
    
    option-2: Restart the memcached pod (command: podman restart memcached)
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    => If the trilio datamover api container does not start well or is in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
    
    
    => If the trilio datamover container does not start well or is in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tailf /var/log/containers/trilio-datamover/tvault-contego.log

    Restores

    Definition

    A Restore is the workflow to bring back the backed up VMs from a Trilio Snapshot.

    List of Restores

    Using Horizon

    To reach the list of Restores for a Snapshot follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Using CLI

    • --snapshot_id <snapshot_id> ➡️ ID of the Snapshot to show the restores of
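
    A minimal CLI sketch (command name as provided by the python-workloadmgrclient; confirm with 'workloadmgr help' on your installation):

    ## List all Restores of a given Snapshot
    workloadmgr restore-list --snapshot_id <snapshot_id>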

    Restores overview

    Using Horizon

    To reach the detailed Restore overview follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Details Tab

    The Restore Details Tab shows the most important information about the Restore.

    • Name

    • Description

    • Restore Type

    • Status

    The Restore Options are the restore.json provided to Trilio.

    • List of VMs restored

      • restored VM Name

      • restored VM Status

      • restored VM ID

    Misc Tab

    The Misc tab provides additional Metadata information.

    • Creation Time

    • Restore ID

    • Snapshot ID containing the Restore

    • Workload

    Using CLI

    • <restore_id> ➡️ ID of the restore to be shown

    • --output <output> ➡️ Option to get additional restore details. Specify --output metadata for restore metadata, --output networks, --output subnets, --output routers or --output flavors
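
    A minimal CLI sketch (command name as provided by the python-workloadmgrclient; confirm with 'workloadmgr help' on your installation):

    ## Show a single Restore, optionally with additional output sections
    workloadmgr restore-show <restore_id>
    workloadmgr restore-show <restore_id> --output metadata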

    Delete a Restore

    Once a Restore is no longer needed, it can be safely deleted from a Workload.

    Deleting a Restore will only delete the Trilio information about this Restore. No OpenStack resources get deleted.

    Using Horizon

    There are 2 possibilities to delete a Restore.

    Possibility 1: Single Restore deletion through the submenu

    To delete a single Restore through the submenu follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    Possibility 2: Multiple Restore deletion through a checkbox in Snapshot overview

    To delete one or more Restores through the Restore list do the following:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to show

    Using CLI

    • <restore_id> ➡️ ID of the restore to be deleted

    Cancel a Restore

    Ongoing Restores can be canceled.

    Using Horizon

    To cancel a Restore in Horizon follow these steps:

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to delete

    Using CLI

    • <restore_id> ➡️ ID of the restore to be deleted

    One Click Restore

    The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

    • be located in the same cluster in the same datacenter

    • use the same storage domain

    • connect to the same network

    • have the same flavor

    The user can't change any Metadata.

    The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one of these VMs still exists, the One Click Restore will fail.

    The One Click Restore will automatically update the Workload to protect the restored VMs.

    Using Horizon

    There are 2 possibilities to start a One Click Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
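
    A minimal CLI sketch (the command name is assumed from the python-workloadmgrclient; confirm with 'workloadmgr help' on your installation):

    ## Trigger a One Click Restore of a Snapshot
    workloadmgr snapshot-oneclick-restore --display-name "<display-name>" --display-description "<display-description>" <snapshot_id>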

    Selective Restore

    The Selective Restore is the most complex restore Trilio has to offer. It allows adapting the restored VMs to the exact needs of the user.

    With the selective restore the following things can be changed:

    • Which VMs are getting restored

    • Name of the restored VMs

    • Which networks to connect with

    • Which Storage domain to use

    The Selective Restore is always available and doesn't have any prerequisites.

    The Selective Restore will automatically update the Workload to protect the created instance in case the original instance no longer exists.

    Using Horizon

    There are 2 possibilities to start a Selective Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
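
    A minimal CLI sketch (the command name and the '--filename' option are assumed from the python-workloadmgrclient; confirm with 'workloadmgr help' on your installation). The restore.json file is described in the section 'required restore.json for CLI' below:

    ## Start a Selective Restore using a prepared restore.json
    workloadmgr snapshot-selective-restore --filename restore.json --display-name "<display-name>" <snapshot_id>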

    Inplace Restore

    The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs a rollback for other reasons.

    It allows the user to restore only the data of a selected Volume, which is part of a backup.

    The Inplace Restore only works when the original VM and the original Volume are still available and connected. Trilio checks this via the saved Object-ID.

    The Inplace Restore will not create any new OpenStack resources. Please use one of the other restore options if new Volumes or VMs are required.

    Using Horizon

    There are 2 possibilities to start an Inplace Restore.

    Possibility 1: From the Snapshot list

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Possibility 2: From the Snapshot overview

    1. Login to Horizon

    2. Navigate to Backups

    3. Navigate to Workloads

    4. Identify the workload that contains the Snapshot to be restored

    Using CLI

    • <snapshot_id> ➡️ ID of the snapshot to restore.

    • --display-name <display-name> ➡️ Optional name for the restore.

    • --display-description <display-description> ➡️
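
    A minimal CLI sketch (the command name and the '--filename' option are assumed from the python-workloadmgrclient; confirm with 'workloadmgr help' on your installation):

    ## Start an Inplace Restore using a prepared restore.json
    workloadmgr snapshot-inplace-restore --filename restore.json <snapshot_id>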

    required restore.json for CLI

    The workloadmgr client CLI uses a restore.json file to define the restore parameters for the selective and the inplace restore.

    An example of this restore.json for a selective restore is shown below. A detailed analysis and explanation is given afterwards.

    The restore.json requires a lot of information about the backed up resources. All required information can be gathered in the .

    General required information

    Before the exact details of the restore are provided, it is necessary to provide the general metadata for the restore.

    • name➡️the name of the restore

    • description➡️the description of the restore

    • oneclickrestore <True/False>➡️

    Selective Restore required information

    The Selective Restore requires a lot of information to be able to execute the restore as desired.

    This information is divided into 3 components:

    • instances

    • restore_topology

    • networks_mapping

    Information required in instances

    This part contains all information about all instances that are part of the Snapshot to restore and how they are to be restored.

    Even VMs that are not to be restored need to be included in the restore.json to allow a clean execution of the restore.

    Each instance requires the following information

    • id ➡️ original id of the instance

    • include <True/False> ➡️ Set True when the instance shall be restored

    All further information is only required when the instance is part of the restore.

    • name ➡️ new name of the instance

    • availability_zone ➡️ Nova Availability Zone the instance shall be restored into. Leave empty for "Any Availability Zone"

    • Nics ➡️

    To use the next free IP available in the network, set Nics to an empty list [ ].

    Using an empty list for Nics combined with the Network Topology Restore will make the restore automatically recover the original IP address of the instance.

    • vdisks ➡️ List of all Volumes that are part of the instance. Each Volume requires the following information:

      • id ➡️ Original ID of the Volume

      • new_volume_type

    The root disk needs to be at least as big as the root disk of the backed up instance was.

    The following example describes a single instance with all values.
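
    Since the original example is not reproduced here, the following is only an illustrative sketch assembled from the fields described above; all IDs and values are placeholders and the exact schema should be taken from the full selective restore example:

    {
       "id": "<original instance ID>",
       "include": true,
       "name": "restored-vm-1",
       "availability_zone": "",
       "nics": [],
       "vdisks": [
          {
             "id": "<original volume ID>",
             "new_volume_type": "<cinder volume type>"
          }
       ]
    }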

    Information required in network topology restore or network mapping

    Do not mix network topology restore together with network mapping.

    To activate a network topology restore set:

    To activate network mapping set:

    When network mapping is activated, it is necessary to provide the mapping details, which are part of the networks_mapping block:

    • networks ➡️ list of snapshot_network and target_network pairs

      • snapshot_network ➡️ the network backed up in the snapshot, contains the following:

    Full selective restore example

    Inplace Restore required information

    The Inplace Restore requires less information than a selective restore. It only requires the base file with some information about the Instances and Volumes to be restored.

    Information required in instances

    • id ➡️ ID of the instance inside the Snapshot

    • restore_boot_disk ➡️ Set to True if the boot disk of that VM shall be restored.

    When the boot disk is also a Cinder Disk, both values need to be set to True.

    • include ➡️ Set to True if at least one Volume from this instance shall be restored

    • vdisks ➡️ List of disks, that are connected to the instance. Each disk contains:

      • id

    Network mapping information required

    No network information is required, but the fields have to exist with empty values for the restore to work.

    Full Inplace restore example

    Workload Policies

    List Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Requests the list of available Workload Policies

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description
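
    As an illustration, the same request can be issued with curl using the headers listed above (address, tenant and token are placeholders):

    ## List all Workload Policies (placeholders for address, tenant and token)
    curl -s -X GET "https://$tvm_address:8780/v1/$tenant_id/workload_policy" \
      -H "X-Auth-Token: $token" \
      -H "X-Auth-Project-Id: $project_name" \
      -H "Accept: application/json" \
      -H "User-Agent: python-workloadmgrclient"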

    Show Policy

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Requests the details of a given policy

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    list assigned Policies

    GET https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/assigned/<project_id>

    Requests the lists of Policies assigned to a Project.

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Create Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy

    Creates a Policy with the given parameters

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format
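
    As an illustration, the policy creation can be issued with curl; the request body follows the 'workload_policy' Body Format shown with the examples below, all values being placeholders:

    ## Create a Workload Policy (placeholders throughout)
    curl -s -X POST "https://$tvm_address:8780/v1/$tenant_id/workload_policy" \
      -H "X-Auth-Token: $token" \
      -H "Content-Type: application/json" \
      -H "Accept: application/json" \
      -d '{"workload_policy": {"display_name": "Gold", "display_description": "example policy", "metadata": {}, "field_values": {"interval": "4 hr", "retention_policy_type": "Number of Snapshots to Keep", "retention_policy_value": "10", "fullbackup_interval": "-1"}}}'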

    Update Policy

    PUT https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Updates a Policy with the given information

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format

    Assign Policy

    POST https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy-id>

    Assigns the given Policy to Projects

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    Body Format

    Delete Policy

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/workload_policy/<policy_id>

    Deletes a given Policy

    Path Parameters

    Name
    Type
    Description

    Headers

    Name
    Type
    Description

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    policy_id

    string

    ID of the Policy to show

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    project_id

    string

    ID of the Project to fetch assigned Policies from

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    policy_id

    string

    ID of the Policy to update

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    policy_id

    string

    ID of the Policy to assign

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of Tenant/Project

    policy_id

    string

    ID of the Policy to delete

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 13:56:08 GMT
    Content-Type: application/json
    Content-Length: 1399
    Connection: keep-alive
    X-Compute-Request-Id: req-4618161e-64e4-489a-b8fc-f3cb21d94096
    
    {
       "policy_list":[
          {
             "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "created_at":"2020-10-26T12:52:22.000000",
             "updated_at":"2020-10-26T12:52:22.000000",
             "status":"available",
             "name":"Gold",
             "description":"",
             "metadata":[
                
             ],
             "field_values":[
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"retention_policy_value",
                   "value":"10"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"interval",
                   "value":"5"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"79070c67-9021-4220-8a79-648ffeebc144",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"retention_policy_type",
                   "value":"Number of Snapshots to Keep"
                },
                {
                   "created_at":"2020-10-26T12:52:22.000000",
                   "updated_at":null,
                   "deleted_at":null,
                   "deleted":false,
                   "version":"4.0.115",
                   "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
                   "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                   "policy_field_name":"fullbackup_interval",
                   "value":"-1"
                }
             ]
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Fri, 13 Nov 2020 14:18:42 GMT
    Content-Type: application/json
    Content-Length: 2160
    Connection: keep-alive
    X-Compute-Request-Id: req-0583fc35-0f80-4746-b280-c17b32cc4b25
    
    {
       "policy":{
          "id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
          "created_at":"2020-10-26T12:52:22.000000",
          "updated_at":"2020-10-26T12:52:22.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"Gold",
          "description":"",
          "field_values":[
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"0201f8b4-482d-4ec1-9b92-8cf3092abcc2",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"48cc7007-e221-44de-bd4e-6a66841bdee0",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"interval",
                "value":"5"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"79070c67-9021-4220-8a79-648ffeebc144",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-10-26T12:52:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"9fec205a-9528-45ea-a118-ffb64d8c7d9d",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-10-26T12:53:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"3e3f1b12-1b1f-452b-a9d2-b6e5fbf2ab18",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"4dfe98a43bfa404785a812020066b4d6",
                "policy_name":"Gold",
                "project_name":"admin"
             },
             {
                "created_at":"2020-10-29T15:39:13.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
                "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"Gold",
                "project_name":"robert"
             }
          ]
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:14:01 GMT
    Content-Type: application/json
    Content-Length: 338
    Connection: keep-alive
    X-Compute-Request-Id: req-57175488-d267-4dcb-90b5-f239d8b02fe2
    
    {
       "policies":[
          {
             "created_at":"2020-10-29T15:39:13.000000",
             "updated_at":null,
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"8b4a6236-63f1-4e2d-b8d1-23b37f4b4346",
             "policy_id":"b79aa5f3-405b-4da4-96e2-893abf7cb5fd",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "policy_name":"Gold",
             "project_name":"robert"
          }
       ]
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:24:03 GMT
    Content-Type: application/json
    Content-Length: 1413
    Connection: keep-alive
    X-Compute-Request-Id: req-05e05333-b967-4d4e-9c9b-561f1a7add5a
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"CLI created",
          "description":"CLI created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"4 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"10"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of Snapshots to Keep"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"-1"
             }
          ]
       }
    }
    {
       "workload_policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"<String>",
          "display_description":"<String>",
          "metadata":{
             <key>:<value>
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:32:13 GMT
    Content-Type: application/json
    Content-Length: 1515
    Connection: keep-alive
    X-Compute-Request-Id: req-9104cf1c-4025-48f5-be92-1a6b7117bf95
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "metadata":[
             
          ],
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ]
       }
    }
    {
       "policy":{
          "field_values":{
             "fullbackup_interval":"<-1 for never / 0 for always / Integer>",
             "retention_policy_type":"<Number of Snapshots to Keep/Number of days to retain Snapshots>",
             "interval":"<Integer hr>",
             "retention_policy_value":"<Integer>"
          },
          "display_name":"String",
          "display_description":"String",
          "metadata":{
             <key>:<value>
          }
       }
    }
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:46:23 GMT
    Content-Type: application/json
    Content-Length: 2318
    Connection: keep-alive
    X-Compute-Request-Id: req-169a53e4-b1c9-4bd1-bf68-3416d177d868
    
    {
       "policy":{
          "id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
          "created_at":"2020-11-17T09:24:01.000000",
          "updated_at":"2020-11-17T09:24:01.000000",
          "user_id":"adfa32d7746a4341b27377d6f7c61adb",
          "project_id":"4dfe98a43bfa404785a812020066b4d6",
          "status":"available",
          "name":"API created",
          "description":"API created",
          "field_values":[
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"767ae42d-caf0-4d36-963c-9b0e50991711",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"interval",
                "value":"8 hr"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"7e34ce5c-3de0-408e-8294-cc091bee281f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_value",
                "value":"20"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"95537f7c-e59a-4365-b1e9-7fa2ed49c677",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"retention_policy_type",
                "value":"Number of days to retain Snapshots"
             },
             {
                "created_at":"2020-11-17T09:24:01.000000",
                "updated_at":"2020-11-17T09:31:45.000000",
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"f635bece-be61-4e72-bce4-bc72a6f549e3",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "policy_field_name":"fullbackup_interval",
                "value":"7"
             }
          ],
          "metadata":[
             
          ],
          "policy_assignments":[
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"4794ed95-d8d1-4572-93e8-cebd6d4df48f",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"cbad43105e404c86a1cd07c48a737f9c",
                "policy_name":"API created",
                "project_name":"services"
             },
             {
                "created_at":"2020-11-17T09:46:22.000000",
                "updated_at":null,
                "deleted_at":null,
                "deleted":false,
                "version":"4.0.115",
                "id":"68f187a6-3526-4a35-8b2d-cb0e9f497dd8",
                "policy_id":"23176f20-9e9d-4fc3-9d3d-f10d2b184163",
                "project_id":"c76b3355a164498aa95ddbc960adc238",
                "policy_name":"API created",
                "project_name":"robert"
             }
          ]
       },
       "failed_ids":[
          
       ]
    }
    {
       "policy":{
          "remove_projects":[
             "<project_id>"
          ],
          "add_projects":[
             "<project_id>",
          ]
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Tue, 17 Nov 2020 09:56:03 GMT
    Content-Type: text/html; charset=UTF-8
    Content-Length: 0
    Connection: keep-alive

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restores tab

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restores tab

  • Identify the restore to show

  • Click the restore name

  • Time taken

  • Size

  • Progress Message

  • Progress

  • Host

  • Restore Options

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restore tab

  • Click "Delete Restore" in the line of the restore in question

  • Confirm by clicking "Delete Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshots in the Snapshot list

  • Enter the Snapshot by clicking the Snapshot name

  • Navigate to the Restore tab

  • Check the checkbox for each Restore that shall be deleted

  • Click "Delete Restore" in the menu above

  • Confirm by clicking "Delete Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the searched Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Navigate to the Restore tab

  • Identify the ongoing Restore

  • Click "Cancel Restore" in the line of the restore in question

  • Confirm by clicking "Cancel Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click "One Click Restore" in the same line as the identified Snapshot

  • (Optional) Provide a name / description

  • Click "Create"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "One Click Restore"

  • (Optional) Provide a name / description

  • Click "Create"

  • Optional description for restore.

  • Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  • Click on "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Optional description for restore.
  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in it.

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  • Click on "Inplace Restore"

  • Configure the Inplace Restore as desired

  • Click "Restore"

  • Click the workload name to enter the Workload overview

  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "Inplace Restore"

  • Configure the Inplace Restore as desired

  • Click "Restore"

  • Optional description for restore.
  • --filename <filename> ➡️ Provide the file path (relative or absolute) including the file name. By default it will read the file /usr/lib/python2.7/site-packages/workloadmgrclient/input-files/restore.json. You can use this file for reference or replace the values in it.

  • oneclickrestore ➡️ defines if the restore is a One Click restore. Setting this to True will override all other settings and start a One Click Restore.
  • restore_type <oneclick/selective/inplace> ➡️ defines the restore type that is intended

  • type openstack ➡️ defines that the restore is done into an openstack cloud

  • openstack ➡️ starts the exact definition of the restore

  • nics ➡️ list of openstack Neutron ports that shall be attached to the instance. Each Neutron Port consists of:
    • id ➡️ ID of the Neutron port to use

    • mac_address ➡️ Mac Address of the Neutron port

    • ip_address ➡️ IP Address of the Neutron port

    • network ➡️ network the port is assigned to. Contains the following information:

      • id ➡️ ID of the network the Neutron port is part of

      • subnet ➡️ subnet the port is assigned to

  • new_volume_type ➡️ The Volume Type to use for the restored Volume. Leave empty for Volume Type None
  • availability_zone ➡️ The Cinder Availability Zone to use for the Volume. The default Availability Zone of Cinder is Nova

  • flavor➡️Defines the Flavor to use for the restored instance. Contains the following information:

    • ram➡️How much RAM the restored instance will have (in MB)

    • ephemeral➡️How big the ephemeral disk of the instance will be (in GB)

    • vcpus➡️How many vcpus the restored instance will have available

    • swap➡️How big the Swap of the restored instance will be (in MB). Leave empty for none.

    • disk➡️Size of the root disk the instance will boot with

    • id➡️ID of the flavor that matches the provided information

  • id ➡️ Original ID of the network backed up
  • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

    • id ➡️ Original ID of the subnet backed up

  • target_network ➡️ the existing network to map to, contains the following

    • id ➡️ ID of the network to map to

    • subnet ➡️ the subnet of the network backed up in the snapshot, contains the following:

      • id ➡️ ID of the subnet to map to

  • id ➡️ Original ID of the Volume
  • restore_cinder_volume ➡️ set to true if the Volume shall be restored

  • Snapshot overview
    workloadmgr restore-list [--snapshot_id <snapshot_id>]
    workloadmgr restore-show [--output <output>] <restore_id>
    workloadmgr restore-delete <restores_id>
    workloadmgr restore-cancel <restore_id>
    workloadmgr snapshot-oneclick-restore [--display-name <display-name>]
                                          [--display-description <display-description>]
                                          <snapshot_id>
    workloadmgr snapshot-selective-restore [--display-name <display-name>]
                                           [--display-description <display-description>]
                                           [--filename <filename>]
                                           <snapshot_id>
    workloadmgr snapshot-inplace-restore [--display-name <display-name>]
                                         [--display-description <display-description>]
                                         [--filename <filename>]
                                         <snapshot_id>
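    The synopses above can be combined into a full command. Below is a minimal sketch of starting a selective restore from the CLI; the snapshot ID and the restore.json path are example values only and must be replaced with your own.

    # Example values only: the snapshot ID is taken from a snapshot-list output
    workloadmgr snapshot-selective-restore --display-name "DR-selective-restore" \
                                           --display-description "Selective restore into the target project" \
                                           --filename ./restore.json \
                                           7e39e544-537d-4417-853d-11463e7396f9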
    {
        oneclickrestore: False,
        restore_type: selective, 
        type: openstack, 
        openstack: 
            {
                instances: 
                    [
                        {
                            include: True, 
                            id: 890888bc-a001-4b62-a25b-484b34ac6e7e,                        
                            name: cdcentOS-1, 
                            availability_zone:, 
                            nics: [], 
                            vdisks: 
                                [
                                    {
                                        id: 4cc2b474-1f1b-4054-a922-497ef5564624, 
                                        new_volume_type:, 
                                        availability_zone: nova
                                    }
                                ], 
                            flavor: 
                                {
                                    ram: 512, 
                                    ephemeral: 0, 
                                    vcpus: 1,
                                    swap:,
                                    disk: 1, 
                                    id: 1
                                }                         
                        }
                    ], 
                restore_topology: True, 
                networks_mapping: 
                    {
                        networks: []
                    }
            }
    }
    'instances':[
      {
         'name':'cdcentOS-1-selective',
         'availability_zone':'US-East',
         'nics':[
           {
              'mac_address':'fa:16:3e:00:bd:60',
              'ip_address':'192.168.0.100',
              'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
              'network':{
                 'subnet':{
                    'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                 },
                 'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
              }
           }
         ],
         'vdisks':[
           {
              'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
              'new_volume_type':'ceph',
              'availability_zone':'nova'
           }
         ],
         'flavor':{
            'ram':2048,
            'ephemeral':0,
            'vcpus':1,
            'swap':'',
            'disk':20,
            'id':'2'
         },
         'include':True,
         'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
      }
    ]
    restore_topology:True
    restore_topology:False
    {
       'oneclickrestore':False,
       'openstack':{
          'instances':[
             {
                'name':'cdcentOS-1-selective',
                'availability_zone':'US-East',
                'nics':[
                   {
                      'mac_address':'fa:16:3e:00:bd:60',
                      'ip_address':'192.168.0.100',
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c',
                      'network':{
                         'subnet':{
                            'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                         },
                         'id':'d5047e84-077e-4b38-bc43-e3360b0ad174'
                      }
                   }
                ],
                'vdisks':[
                   {
                      'id':'4cc2b474-1f1b-4054-a922-497ef5564624',
                      'new_volume_type':'ceph',
                      'availability_zone':'nova'
                   }
                ],
                'flavor':{
                   'ram':2048,
                   'ephemeral':0,
                   'vcpus':1,
                   'swap':'',
                   'disk':20,
                   'id':'2'
                },
                'include':True,
                'id':'890888bc-a001-4b62-a25b-484b34ac6e7e'
             }
          ],
          'restore_topology':False,
          'networks_mapping':{
             'networks':[
                {
                   'snapshot_network':{
                      'subnet':{
                         'id':'8b609440-4abf-4acf-a36b-9a0fa70c383c'
                      },
                      'id':'8b871820-f92e-41f6-80b4-00555a649b4c'
                   },
                   'target_network':{
                      'subnet':{
                         'id':'2b1506f4-2a7a-4602-a8b9-b7e8a49f95b8'
                      },
                      'id':'d5047e84-077e-4b38-bc43-e3360b0ad174',
                      'name':'internal'
                   }
                }
             ]
          }
       },
       'restore_type':'selective',
       'type':'openstack'
    }
    {
       'oneclickrestore':False,
       'restore_type':'inplace',
       'type':'openstack',   
       'openstack':{
          'instances':[
             {
                'restore_boot_disk':True,
                'include':True,
                'id':'ba8c27ab-06ed-4451-9922-d919171078de',
                'vdisks':[
                   {
                      'restore_cinder_volume':True,
                      'id':'04d66b70-6d7c-4d1b-98e0-11059b89cba6',
                   }
                ]
             }
          ]
       }
    }

    Example runbook for Disaster Recovery using NFS

    This runbook will demonstrate how to set up Disaster Recovery with Trilio for a given scenario.

    The chosen scenario follows an actively used Trilio customer environment.

    Scenario

    There are two Openstack clouds available, "Openstack Cloud A" and "Openstack Cloud B". "Openstack Cloud B" is the Disaster Recovery restore point of "Openstack Cloud A" and vice versa. Both clouds have an independent Trilio installation integrated. These Trilio installations are writing their Backups to NFS targets. "Trilio A" is writing to "NFS A1" and "Trilio B" is writing to "NFS B1". The NFS Volumes used are getting synced against another NFS Volume on the other side. "NFS A1" is syncing with "NFS B2" and "NFS B1" is syncing with "NFS A2". The syncing process is set up independently from Trilio and will always favor the newer dataset.

    This scenario will cover the Disaster Recovery of a single Workload and a complete Cloud. All processes are done by the Openstack administrator.

    Prerequisites for the Disaster Recovery process

    This runbook will assume that the following is true:

    • "Openstack Cloud A" and "Openstack Cloud B" both have an active Trilio installation with a valid license

    • "Openstack Cloud A" and "Openstack Cloud B" have free resources to host additional VMs

    • "Openstack Cloud A" and "Openstack Cloud B" have Tenants/Projects available that are the designated restore points for Tenant/Projects of the other side

    For ease of writing, this runbook will assume from here on that "Openstack Cloud A" is down and the Workloads are getting restored into "Openstack Cloud B".

    When shared Tenant networks are used, the following additional requirement applies beyond the floating IP: all Tenant Networks, Routers, Ports, Floating IPs, and DNS Zones must already be created.

    Disaster Recovery of a single Workload

    In this scenario, a Disaster Recovery of a single Workload can be performed while both Clouds are still active. To do so, the following high-level process needs to be followed:

    1. Copy the Workload directories to the configured NFS Volume

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    4. Restore the Workload

    Copy the Workload directories to the configured NFS Volume

    This process only shows how to get a Workload from "Openstack Cloud A" to "Openstack Cloud B". The vice versa process is similar.

    As only a single Workload is to be recovered it is more efficient to copy the data of that single Workload over to the "NFS B1" Volume, which is used by "Trilio B".

    Mount "NFS B2" Volume to a Trilio VM

    It is recommended to use the Trilio VM as a connector between both NFS Volumes, as the nova user is available on the Trilio VM.

    Identify the Workload on the "NFS B2" Volume

    Trilio Workloads are identified by their ID, under which they are stored on the Backup Target. See the example below:

    In case the Workload ID is not known, the Metadata available inside the Workload directories can be used to identify the correct Workload.

    Copy the Workload

    The identified workload needs to be copied with all subdirectories and files. Afterward, it is necessary to adjust the ownership to nova:nova with the right permissions.

    Make the Mount-Paths available

    Trilio backups use qcow2 backing files, which make every incremental backup a synthetic full backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character in the NFS-Share path is different, a completely different hash value is generated.
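    As a quick sanity check, the hash can be reproduced from the NFS-Share path. A minimal sketch for the share 10.10.2.20:/upstream referenced by the backing file shown above:

    # Encode the NFS-Share path to get the mount directory name used by Trilio
    # echo -n 10.10.2.20:/upstream | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW0=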

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards, a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.

    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

    In the scenario of this runbook, the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

    To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.

    Reassign the workload

    Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.

    Add admin-user to required domains and projects

    To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

    Discover orphaned Workloads from NFS-Storage of Target Cloud

    Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.

    List available projects on Target Cloud in the Target Domain

    The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.

    List available users on the Target Cloud in the Target Project that have the right backup trustee role

    To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

    Reassign the workload to the target project

    Now that all information has been gathered, the workload can be reassigned to the target project.

    Verify the workload is available at the desired target_project

    After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.

    Restore the workload

    The reassigned workload can be restored using Horizon, following the procedure described here.

    This runbook will continue on the CLI only path.

    Prepare the selective restore by getting the snapshot information

    To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

    List all Snapshots of the workload to restore to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot

    Prepare the selective restore by creating the restore.json file

    The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

    Run the selective restore

    To do the actual restore use the following command:

    Verify the restore

    To verify the success of the restore from a Trilio perspective the restore status is checked.

    Clean up

    After the Disaster Recovery Process has been successfully completed, it is recommended to bring the TVM installation back into its original state to be ready for the next DR process.

    Delete the workload

    Delete the workload that got restored.

    Remove the database entry

    The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.

    To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.

    Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.

    This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.

    Disaster Recovery of a complete cloud

    This Scenario will cover the Disaster Recovery of a full cloud. It is assumed that the source cloud is down or lost completely. To do the disaster recovery the following high-level process needs to be followed:

    1. Reconfigure the Target Trilio installation

    2. Make the right Mount-Paths available

    3. Reassign the Workload

    4. Restore the Workload

    Reconfigure the Target Trilio installation

    Before the Disaster Recovery Process can start, it is necessary to make the backups that are to be restored available to the Trilio installation. The following steps need to be done to completely reconfigure the Trilio installation.

    During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    Add NFS B2 to the Trilio Appliance Cluster

    To add NFS B2 to the Trilio Appliance cluster, the Trilio Appliance can either be fully reconfigured to use both NFS Volumes, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that line as a comma-separated list. A space after the comma is not necessary, but can be set.

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service
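    A minimal sketch of this change, assuming the NFS shares are listed in the vault_storage_nfs_export option (verify the exact option name against the existing line in your workloadmgr.conf, typically found under /etc/workloadmgr/) and a systemd-managed appliance; the share paths are placeholders:

    # vi /etc/workloadmgr/workloadmgr.conf
    # assumed option name and placeholder shares - keep the existing share and append NFS B2
    vault_storage_nfs_export = <NFS-B1-IP>:/<NFS-B1-path>,<NFS-B2-IP>:/<NFS-B2-path>

    # systemctl restart wlm-workloads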

    Add NFS B2 to the Trilio Datamovers

    Trilio integrates natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms, it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

    To add the NFS B2 to the Trilio Datamovers manually the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

    Add NFS B2 to that line as a comma-separated list. A space after the comma is not necessary, but can be set.

    Write and close the tvault-contego.conf

    Restart the tvault-contego service
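    The corresponding datamover change follows the same pattern; a minimal sketch, again assuming the vault_storage_nfs_export option carries the NFS shares and a systemd-managed service (the location of tvault-contego.conf depends on the distribution):

    # vi /etc/tvault-contego/tvault-contego.conf
    # assumed option name and placeholder shares
    vault_storage_nfs_export = <NFS-B1-IP>:/<NFS-B1-path>,<NFS-B2-IP>:/<NFS-B2-path>

    # systemctl restart tvault-contego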

    Make the Mount-Paths available

    Trilio backups use qcow2 backing files, which make every incremental backup a synthetic full backup. These backing files can be made visible using the qemu-img tool.

    The MTAuMTAuMi4yMDovdXBzdHJlYW0= part of the backing file path is the base64 hash value, which will be calculated upon the configuration of a Trilio installation for each provided NFS-Share.

    This hash value is calculated based on the provided NFS-Share path: <NFS_IP>/<path>. If even one character in the NFS-Share path is different, a completely different hash value is generated.

    Workloads that have been moved between NFS-Shares require that their incremental backups can follow the same path as on their original Source Cloud. To achieve this, it is necessary to create the mount path on all compute nodes of the Target Cloud.

    Afterwards, a mount bind is used to make the workload's data accessible over both the old and the new mount path. The following example shows how to identify the necessary mount points and create the mount bind.

    Identify the base64 hash values

    The used hash values can be calculated using the base64 tool in any Linux distribution.

    Create and bind the paths

    Based on the identified base64 hash values the following paths are required on each Compute node.

    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl

    and

    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0

    In the scenario of this runbook, the workload is coming from the NFS_A1 NFS-Share, which means the mount path of that NFS-Share needs to be created and bound on the Target Cloud.

    To keep the desired mount past a reboot it is recommended to edit the fstab of all compute nodes accordingly.

    Reassign the workload

    Trilio workloads have clear ownership. When a workload is moved to a different cloud it is necessary to change the ownership. The ownership can only be changed by Openstack administrators.

    Add admin-user to required domains and projects

    To fulfill the required tasks an admin role user is used. This user will be used until the workload has been restored. Therefore, it is necessary to provide this user access to the desired Target Project on the Target Cloud.

    Discover orphaned Workloads from NFS-Storage of Target Cloud

    Each Trilio installation maintains a database of workloads that are known to the Trilio installation. Workloads that are not maintained by a specific Trilio installation are, from the perspective of that installation, orphaned workloads. An orphaned workload is a workload accessible on the NFS-Share that is not assigned to any existing project in the Cloud the Trilio installation is protecting.

    List available projects on Target Cloud in the Target Domain

    The identified orphaned workloads need to be assigned to their new projects. The following provides the list of all available projects viewable by the used admin-user in the target_domain.

    List available users on the Target Cloud in the Target Project that have the right backup trustee role

    To allow project owners to work with the workloads as well, the workloads get assigned to a user with the backup trustee role that exists in the target project.

    Reassign the workload to the target project

    Now that all information has been gathered, the workload can be reassigned to the target project.

    Verify the workload is available at the desired target_project

    After the workload has been assigned to the new project it is recommended to verify the workload is managed by the Target Trilio and is assigned to the right project and user.

    Restore the workload

    The reassigned workload can be restored using Horizon, following the procedure described here.

    This runbook will continue on the CLI only path.

    Prepare the selective restore by getting the snapshot information

    To be able to do the necessary selective restore a few pieces of information about the snapshot to be restored are required. The following process will provide all necessary information.

    List all Snapshots of the workload to restore to identify the snapshot to restore

    Get Snapshot Details with network details for the desired snapshot

    Get Snapshot Details with disk details for the desired Snapshot

    Prepare the selective restore by creating the restore.json file

    The selective restore is using a restore.json file for the CLI command. This restore.json file needs to be adjusted according to the desired restore.

    Run the selective restore

    To do the actual restore use the following command:

    Verify the restore

    To verify the success of the restore from a Trilio perspective the restore status is checked.

    Reconfigure the Target Trilio installation back to the original one

    After the Disaster Recovery Process has finished, it is necessary to return the Trilio installation to its original configuration. The following steps need to be done to completely reconfigure the Trilio installation.

    During the reconfiguration process, all backups of the Target Region will be on hold. It is not recommended to create new Backup Jobs until the Disaster Recovery Process has finished and the original Trilio configuration has been restored.

    Delete NFS B2 from the Trilio Appliance Cluster

    To remove NFS B2 from the Trilio Appliance cluster, the Trilio Appliance can either be fully reconfigured to use only the original NFS Volume, or the configuration file can be edited and all services restarted. This procedure describes how to edit the conf file and restart the services. This needs to be repeated on every Trilio Appliance.

    Edit the workloadmgr.conf

    Look for the line defining the NFS mounts

    Delete NFS B2 from the comma-separated list

    Write and close the workloadmgr.conf

    Restart the wlm-workloads service

    Delete NFS B2 from the Trilio Datamovers

    Trilio is integrating natively into the Openstack deployment tools. When using the Red Hat director or JuJu charms it is recommended to adapt the environment files for these orchestrators and update the Datamovers through them.

    To remove NFS B2 from the Trilio Datamovers manually, the tvault-contego.conf file needs to be edited and the service restarted.

    Edit the tvault-contego.conf

    Look for the line defining the NFS mounts

    Delete NFS B2 from the comma-separated list

    Write and close the tvault-contego.conf

    Restart the tvault-contego service

    Clean up

    After the Disaster Recovery Process has been successfully completed and the Trilio installation has been reconfigured to its original state, it is recommended to do the following additional steps to be ready for the next Disaster Recovery process.

    Remove the database entry

    The Trilio database is following the Openstack standard of not deleting any database entries upon deletion of the cloud object. Any Workload, Snapshot or Restore, which gets deleted, is marked as deleted only.

    To allow the Trilio installation to be ready for another disaster recovery it is necessary to completely delete the entries of the Workloads, which have been restored.

    Trilio does provide and maintain a script to safely delete workload entries and all connected entities from the Trilio database.

    This script can be found here: https://github.com/trilioData/solutions/tree/master/openstack/CleanWlmDatabase

    Remove the admin user from the project

    After all restores for the target project have been achieved it is recommended to remove the used admin user from the project again.

    Restores

    List Restores

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/detail

    Lists Restores with details


    Path Parameters
    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to fetch the Restores from

    Query Parameters

    Name
    Type
    Description

    snapshot_id

    string

    ID of the Snapshot to fetch the Restores from

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run the authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
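    A minimal curl sketch of this request, assuming $tvm_address, $tenant_id, $project_name and $token hold the values described in the tables above; append ?snapshot_id=<snapshot_id> to filter by Snapshot:

    curl -X GET "https://$tvm_address:8780/v1/$tenant_id/restores/detail" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"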

    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 11:28:43 GMT
    Content-Type: application/json
    Content-Length: 4308
    Connection: keep-alive
    X-Compute-Request-Id: req-0bc531b6-be6e-43b4-90bd-39ef26ef1463
    
    {
       "restores":[
          {
             "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
             "created_at":"2020-11-05T10:17:40.000000",
             "updated_at":"2020-11-05T10:17:40.000000",
             "finished_at":"2020-11-05T10:27:20.000000",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "status":"available",
    

    Get Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Provides all details about the specified Restore

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to fetch the restore from

    restore_id

    string

    ID of the restore to show

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
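    A minimal curl sketch of fetching a single Restore, assuming the same variables as above plus $restore_id:

    curl -X GET "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"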

    Delete Restore

    DELETE https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>

    Deletes the specified Restore

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to fetch the Restore from

    restore_id

    string

    ID of the Restore to be deleted

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to run authentication against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
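    A minimal curl sketch of deleting a Restore, assuming the same variables as above:

    curl -X DELETE "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"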

    Cancel Restore

    GET https://$(tvm_address):8780/v1/$(tenant_id)/restores/<restore_id>/cancel

    Cancels an ongoing Restore

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of the Trilio service

    tenant_id

    string

    ID of the Tenant/Project to fetch the Restore from

    restore_id

    string

    ID of the Restore to cancel

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Accept

    string

    application/json

    User-Agent

    string

    python-workloadmgrclient
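    A minimal curl sketch of cancelling an ongoing Restore, assuming the same variables as above:

    curl -X GET "https://$tvm_address:8780/v1/$tenant_id/restores/$restore_id/cancel" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Accept: application/json" \
         -H "User-Agent: python-workloadmgrclient"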

    One Click Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    snapshot_id

    string

    ID of the snapshot to restore

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format

    The One-Click restore requires a body to provide all necessary information in json format.
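    A minimal curl sketch of this request; oneclick_restore.json is a hypothetical file containing the body described above:

    curl -X POST "https://$tvm_address:8780/v1/$tenant_id/snapshots/$snapshot_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @oneclick_restore.json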

    Selective Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information.

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    snapshot_id

    string

    ID of the snapshot to restore

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format

    The Selective restore requires a body to provide all necessary information in json format.
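    The call itself is identical to the One Click Restore; only the body differs. A minimal sketch, with selective_restore.json as a hypothetical file containing the selective restore body:

    curl -X POST "https://$tvm_address:8780/v1/$tenant_id/snapshots/$snapshot_id" \
         -H "X-Auth-Project-Id: $project_name" \
         -H "X-Auth-Token: $token" \
         -H "Content-Type: application/json" \
         -H "Accept: application/json" \
         -d @selective_restore.json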

    Inplace Restore

    POST https://$(tvm_address):8780/v1/$(tenant_id)/snapshots/<snapshot_id>

    Starts a restore according to the provided information

    Path Parameters

    Name
    Type
    Description

    tvm_address

    string

    IP or FQDN of Trilio service

    tenant_id

    string

    ID of the Tenant/Project to do the restore in

    snapshot_id

    string

    ID of the snapshot to restore

    Headers

    Name
    Type
    Description

    X-Auth-Project-Id

    string

    Project to authenticate against

    X-Auth-Token

    string

    Authentication token to use

    Content-Type

    string

    application/json

    Accept

    string

    application/json

    Body Format

    The In-Place restore requires a body to provide all necessary information in json format.

    # mount <NFS B2-IP/NFS B2-FQDN>:/<VOL-Path> /mnt
    workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    /…/workload_<id>/workload_db <<< Contains User ID and Project ID of Workload owner
    /…/workload_<id>/workload_vms_db <<< Contains VM IDs and VM Names of all VMs actively protected be the Workload
    # cp /mnt/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chown -R nova:nova /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    # chmod -R 644 /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105
    #qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
    cluster_size: 65536
    
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
    
    # echo -n 10.10.2.20:/upstream_source | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    
    # echo -n 10.20.3.22:/upstream_target | base64
    MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
    # mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    # mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    #vi /etc/fstab
    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/	/var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl	none	bind	0 0
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
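
For orientation, the sketch below fills the template in for the snapshot shown above: only Test-Linux-1 is restored, its instance and volume IDs are taken from the snapshot-show output, and the network topology is recreated (empty nics and networks lists with restore_topology set to True). The volume type 'ceph' and the flavor id '1' are assumptions and must be replaced with values that exist in the target cloud. Saved as restore.json, the file is passed to the selective restore command as shown at the end.

    {
       u'description':u'Restore of Test-Linux-1 only',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'Selective restore Test-Linux-1',
       u'openstack':{
          u'instances':[
             {
                u'name':u'Test-Linux-1-restored',
                u'availability_zone':u'nova',
                u'nics':[ #####Empty, together with restore_topology True, for a network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'0027b140-a427-46cb-9ccf-7895c7624493',
                      u'new_volume_type':u'ceph', #####Assumption - use a volume type that exists in the target cloud
                      u'availability_zone':u'nova'
                   }
                ],
                u'flavor':{
                   u'ram':512,
                   u'ephemeral':0,
                   u'vcpus':1,
                   u'swap':u'',
                   u'disk':1,
                   u'id':u'1' #####Assumption - use a flavor id that exists in the target cloud
                },
                u'include':True,
                u'id':u'38b620f1-24ae-41d7-b0ab-85ffc2d7958b'
             },
             {
                u'include':False, #####Test-Linux-2 is left out of this restore
                u'id':u'3fd869b2-16bd-4423-b389-18d19d37c8e0'
             }
          ],
          u'restore_topology':True,
          u'networks_mapping':{
             u'networks':[
             ]
          }
       }
    }

    # workloadmgr snapshot-selective-restore --filename restore.json 7e39e544-537d-4417-853d-11463e7396f9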
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr workload-delete <workload_id>
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    
    # vi /etc/workloadmgr/workloadmgr.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>,<NFS-IP/NFS-FQDN>:/<VOL-2-Path>
    # systemctl restart wlm-workloads
    # vi /etc/tvault-contego/tvault-contego.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    # systemctl restart tvault-contego
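
To verify that the datamover picked up the new export list after the restart, check the mounts on the compute node; a quick sketch (the base64-encoded directory name differs per share):

    # df -h | grep triliovault-mounts
    # mount | grep triliovault-mounts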
    # qemu-img info bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    image: bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778
    file format: qcow2
    virtual size: 1.0G (1073741824 bytes)
    disk size: 516K
    cluster_size: 65536
    
    backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_ac9cae9b-5e1b-4899-930c-6aa0600a2105/snapshot_1415095d-c047-400b-8b05-c88e57011263/vm_id_38b620f1-24ae-41d7-b0ab-85ffc2d7958b/vm_res_id_d4ab3431-5ce3-4a8f-a90b-07606e2ffa33_vda/7c39eb6a-6e42-418e-8690-b6368ecaa7bb
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false
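
The backing file shown above points at the preceding image in the backup chain. To follow an incremental all the way back to its full snapshot, qemu-img can print the entire chain in one call; a short sketch using the same image:

    # qemu-img info --backing-chain bd57ec9b-c4ac-4a37-a4fd-5c9aa002c778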
    
    # echo -n 10.10.2.20:/upstream_source | base64
    MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    
    # echo -n 10.20.3.22:/upstream_target | base64
    MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0
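
The reverse direction works as well: decoding a mount directory name shows which NFS share it belongs to. For example, the directory from the backing file path above decodes back to its original share:

    # echo -n MTAuMTAuMi4yMDovdXBzdHJlYW0= | base64 -d
    10.10.2.20:/upstream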
    # mkdir /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    # mount --bind /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/ /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl
    # vi /etc/fstab
    /var/triliovault-mounts/MTAuMjAuMy4yMjovdXBzdHJlYW1fdGFyZ2V0/    /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl    none    bind    0 0
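
After adding the fstab entry, the bind mount can be activated and checked without a reboot; a minimal sketch:

    # mount -a
    # findmnt /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW1fc291cmNl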
    # source {customer admin rc file}  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role add Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role add <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    # workloadmgr workload-get-orphaned-workloads-list --migrate_cloud True    
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    |     Name   |                  ID                  |            Project ID            |  User ID                         |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+  
    | Workload_1 | 6639525d-736a-40c5-8133-5caaddaaa8e9 | 4224d3acfd394cc08228cc8072861a35 |  329880dedb4cd357579a3279835f392 |  
    | Workload_2 | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 637a9ce3fd0d404cabf1a776696c9c04 |  329880dedb4cd357579a3279835f392 |  
    +------------+--------------------------------------+----------------------------------+----------------------------------+
    # openstack project list --domain <target_domain>  
    +----------------------------------+----------+  
    | ID                               | Name     |  
    +----------------------------------+----------+  
    | 01fca51462a44bfa821130dce9baac1a | project1 |  
    | 33b4db1099ff4a65a4c1f69a14f932ee | project2 |  
    | 9139e694eb984a4a979b5ae8feb955af | project3 |  
    +----------------------------------+----------+ 
    # openstack role assignment list --project <target_project> --project-domain <target_domain> --role <backup_trustee_role>
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | Role                             | User                             | Group | Project                          | Domain | Inherited |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    | 9fe2ff9ee4384b1894a90878d3e92bab | 72e65c264a694272928f5d84b73fe9ce |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | d5fbd79f4e834f51bfec08be6d3b2ff2 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    | 9fe2ff9ee4384b1894a90878d3e92bab | f5b1d071816742fba6287d2c8ffcd6c4 |       | 8e16700ae3614da4ba80a4e57d60cdb9 |        | False     |
    +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+
    # workloadmgr workload-reassign-workloads --new_tenant_id {target_project_id} --user_id {target_user_id} --workload_ids {workload_id} --migrate_cloud True    
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    |    Name   |                  ID                  |            Project ID            |  User ID                         |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+  
    | project1  | 904e72f7-27bb-4235-9b31-13a636eb9c95 | 4f2a91274ce9491481db795dcb10b04f | 3e05cac47338425d827193ba374749cc |  
    +-----------+--------------------------------------+----------------------------------+----------------------------------+ 
    # workloadmgr workload-show ac9cae9b-5e1b-4899-930c-6aa0600a2105
    +-------------------+------------------------------------------------------------------------------------------------------+
    | Property          | Value                                                                                                |
    +-------------------+------------------------------------------------------------------------------------------------------+
    | availability_zone | nova                                                                                                 |
    | created_at        | 2019-04-18T02:19:39.000000                                                                           |
    | description       | Test Linux VMs                                                                                       |
    | error_msg         | None                                                                                                 |
    | id                | ac9cae9b-5e1b-4899-930c-6aa0600a2105                                                                 |
    | instances         | [{"id": "38b620f1-24ae-41d7-b0ab-85ffc2d7958b", "name": "Test-Linux-1"}, {"id":                      |
    |                   | "3fd869b2-16bd-4423-b389-18d19d37c8e0", "name": "Test-Linux-2"}]                                     |
    | interval          | None                                                                                                 |
    | jobschedule       | True                                                                                                 |
    | name              | Test Linux                                                                                           |
    | project_id        | 2fc4e2180c2745629753305591aeb93b                                                                     |
    | scheduler_trust   | None                                                                                                 |
    | status            | available                                                                                            |
    | storage_usage     | {"usage": 60555264, "full": {"usage": 44695552, "snap_count": 1}, "incremental": {"usage": 15859712, |
    |                   | "snap_count": 13}}                                                                                   |
    | updated_at        | 2019-11-15T02:32:43.000000                                                                           |
    | user_id           | 72e65c264a694272928f5d84b73fe9ce                                                                     |
    | workload_type_id  | f82ce76f-17fe-438b-aa37-7a023058e50d                                                                 |
    +-------------------+------------------------------------------------------------------------------------------------------+
    # workloadmgr snapshot-list --workload_id ac9cae9b-5e1b-4899-930c-6aa0600a2105 --all True
    
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    |         Created At         |     Name     |                  ID                  |             Workload ID              | Snapshot Type |   Status  |    Host   |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    | 2019-11-02T02:30:02.000000 | jobscheduler | f5b8c3fd-c289-487d-9d50-fe27a6561d78 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |      full     | available | Upstream2 |
    | 2019-11-03T02:30:02.000000 | jobscheduler | 7e39e544-537d-4417-853d-11463e7396f9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    | 2019-11-04T02:30:02.000000 | jobscheduler | 0c086f3f-fa5d-425f-b07e-a1adcdcafea9 | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |  incremental  | available | Upstream2 |
    +----------------------------+--------------+--------------------------------------+--------------------------------------+---------------+-----------+-----------+
    # workloadmgr snapshot-show --output networks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |   Networks  | Value                                                                                                                                        |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    |  ip_address | 172.20.20.20                                                                                                                                 |
    |    vm_id    | 38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44', 
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:74:58:bb                                                                                                                            |
    |             |                                                                                                                                              |
    |  ip_address | 172.20.20.13                                                                                                                                 |
    |    vm_id    | 3fd869b2-16bd-4423-b389-18d19d37c8e0                                                                                                         |
    |   network   | {u'subnet': {u'ip_version': 4, u'cidr': u'172.20.20.0/24', u'gateway_ip': u'172.20.20.1', u'id': u'3a756a89-d979-4cda-a7f3-dacad8594e44',
    u'name': u'Trilio Test'}, u'cidr': None, u'id': u'5f0e5d34-569d-42c9-97c2-df944f3924b1', u'name': u'Trilio_Test_Internal', u'network_type': u'neutron'}      |
    | mac_address | fa:16:3e:6b:46:ae                                                                                                                            |
    +-------------+----------------------------------------------------------------------------------------------------------------------------------------------+
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr snapshot-show --output disks 7e39e544-537d-4417-853d-11463e7396f9
    
    +-------------------+--------------------------------------+
    | Snapshot property | Value                                |
    +-------------------+--------------------------------------+
    | description       | None                                 |
    | host              | Upstream2                            |
    | id                | 7e39e544-537d-4417-853d-11463e7396f9 |
    | name              | jobscheduler                         |
    | progress_percent  | 100                                  |
    | restore_size      | 44040192 Bytes or Approx (42.0MB)    |
    | restores_info     |                                      |
    | size              | 1310720 Bytes or Approx (1.2MB)      |
    | snapshot_type     | incremental                          |
    | status            | available                            |
    | time_taken        | 154 Seconds                          |
    | uploaded_size     | 1310720                              |
    | workload_id       | ac9cae9b-5e1b-4899-930c-6aa0600a2105 |
    +-------------------+--------------------------------------+
    
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |   Instances    |                                                        Value                                                        |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-1                                                    |
    |       ID       |                                         38b620f1-24ae-41d7-b0ab-85ffc2d7958b                                        |
    |                |                                                                                                                     |
    |     Status     |                                                      available                                                      |
    | Security Group | [{u'name': u'Test', u'security_group_type': u'neutron'}, {u'name': u'default', u'security_group_type': u'neutron'}] |
    |     Flavor     |                         {u'ephemeral': u'0', u'vcpus': u'1', u'disk': u'1', u'ram': u'512'}                         |
    |      Name      |                                                     Test-Linux-2                                                    |
    |       ID       |                                         3fd869b2-16bd-4423-b389-18d19d37c8e0                                        |
    |                |                                                                                                                     |
    +----------------+---------------------------------------------------------------------------------------------------------------------+
    
    +-------------------+--------------------------------------------------+
    |       Vdisks      |                      Value                       |
    +-------------------+--------------------------------------------------+
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       ebc2fdd0-3c4d-4548-b92d-0e16734b5d9a       |
    |    volume_name    |       0027b140-a427-46cb-9ccf-7895c7624493       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       0027b140-a427-46cb-9ccf-7895c7624493       |
    | availability_zone |                       nova                       |
    |       vm_id       |       38b620f1-24ae-41d7-b0ab-85ffc2d7958b       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    | volume_mountpoint |                     /dev/vda                     |
    |    restore_size   |                     22020096                     |
    |    resource_id    |       8007ed89-6a86-447e-badb-e49f1e92f57a       |
    |    volume_name    |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    |    volume_type    |                       None                       |
    |       label       |                       None                       |
    |    volume_size    |                        1                         |
    |     volume_id     |       2a7f9e78-7778-4452-af5b-8e2fa43853bd       |
    | availability_zone |                       nova                       |
    |       vm_id       |       3fd869b2-16bd-4423-b389-18d19d37c8e0       |
    |      metadata     | {u'readonly': u'False', u'attached_mode': u'rw'} |
    |                   |                                                  |
    +-------------------+--------------------------------------------------+
    {
       u'description':u'<description of the restore>',
       u'oneclickrestore':False,
       u'restore_type':u'selective',
       u'type':u'openstack',
       u'name':u'<name of the restore>',
       u'openstack':{
          u'instances':[
             {
                u'name':u'<name instance 1>',
                u'availability_zone':u'<AZ instance 1>',
                u'nics':[ #####Leave empty for network topology restore
                ],
                u'vdisks':[
                   {
                      u'id':u'<old disk id>',
                      u'new_volume_type':u'<new volume type name>',
                      u'availability_zone':u'<new cinder volume AZ>'
                   }
                ],
                u'flavor':{
                   u'ram':<RAM in MB>,
                   u'ephemeral':<GB of ephemeral disk>,
                   u'vcpus':<# vCPUs>,
                   u'swap':u'<GB of Swap disk>',
                   u'disk':<GB of boot disk>,
                   u'id':u'<id of the flavor to use>'
                },
                u'include':<True/False>,
                u'id':u'<old id of the instance>'
             } #####Repeat for each instance in the snapshot
          ],
          u'restore_topology':<True/False>,
          u'networks_mapping':{
             u'networks':[ #####Leave empty for network topology restore
                
             ]
          }
       }
    }
    
    # workloadmgr snapshot-selective-restore --filename restore.json {snapshot id}
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-list --snapshot_id 5928554d-a882-4881-9a5c-90e834c071af
    
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    |         Created At         |       Name       |                  ID                  |             Snapshot ID              |   Size   |   Status  |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    | 2019-09-24T12:44:38.000000 | OneClick Restore | 5b4216d0-4bed-460f-8501-1589e7b45e01 | 5928554d-a882-4881-9a5c-90e834c071af | 41126400 | available |
    +----------------------------+------------------+--------------------------------------+--------------------------------------+----------+-----------+
    
    [root@upstreamcontroller ~(keystone_admin)]# workloadmgr restore-show 5b4216d0-4bed-460f-8501-1589e7b45e01
    +------------------+------------------------------------------------------------------------------------------------------+
    | Property         | Value                                                                                                |
    +------------------+------------------------------------------------------------------------------------------------------+
    | created_at       | 2019-09-24T12:44:38.000000                                                                           |
    | description      | -                                                                                                    |
    | error_msg        | None                                                                                                 |
    | finished_at      | 2019-09-24T12:46:07.000000                                                                           |
    | host             | Upstream2                                                                                            |
    | id               | 5b4216d0-4bed-460f-8501-1589e7b45e01                                                                 |
    | instances        | [{"status": "available", "id": "b8506f04-1b99-4ca8-839b-6f5d2c20d9aa", "name": "temp", "metadata":   |
    |                  | {"instance_id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "production": "1", "config_drive": ""}}]     |
    | name             | OneClick Restore                                                                                     |
    | progress_msg     | Restore from snapshot is complete                                                                    |
    | progress_percent | 100                                                                                                  |
    | project_id       | 8e16700ae3614da4ba80a4e57d60cdb9                                                                     |
    | restore_options  | {"description": "-", "oneclickrestore": true, "restore_type": "oneclick", "openstack": {"instances": |
    |                  | [{"availability_zone": "US-West", "id": "c014a938-903d-43db-bfbb-ea4998ff1a0f", "name": "temp"}]},   |
    |                  | "type": "openstack", "name": "OneClick Restore"}                                                     |
    | restore_type     | restore                                                                                              |
    | size             | 41126400                                                                                             |
    | snapshot_id      | 5928554d-a882-4881-9a5c-90e834c071af                                                                 |
    | status           | available                                                                                            |
    | time_taken       | 89                                                                                                   |
    | updated_at       | 2019-09-24T12:44:38.000000                                                                           |
    | uploaded_size    | 41126400                                                                                             |
    | user_id          | d5fbd79f4e834f51bfec08be6d3b2ff2                                                                     |
    | warning_msg      | None                                                                                                 |
    | workload_id      | 02b1aca2-c51a-454b-8c0f-99966314165e                                                                 |
    +------------------+------------------------------------------------------------------------------------------------------+
    # vi /etc/workloadmgr/workloadmgr.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>
    # systemctl restart wlm-workloads
    # vi /etc/tvault-contego/tvault-contego.conf
    vault_storage_nfs_export = <NFS_B1-IP/NFS_B1-FQDN>:/<VOL-B1-Path>,<NFS_B2-IP/NFS_B2-FQDN>:/<VOL-B2-Path>
    vault_storage_nfs_export = <NFS-IP/NFS-FQDN>:/<VOL-1-Path>
    # systemctl restart tvault-contego
    # source {customer admin rc file}  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --domain <target_domain>  
    # openstack role remove Admin --user <my_admin_user> --user-domain <admin_domain> --project <target_project> --project-domain <target_domain>  
    # openstack role remove <Backup Trustee Role> --user <my_admin_user> --user-domain <admin_domain> --project <destination_project> --project-domain <target_domain>
    
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:04:45 GMT
    Content-Type: application/json
    Content-Length: 2639
    Connection: keep-alive
    X-Compute-Request-Id: req-30640219-e94e-4651-9b9e-49f5574e2a7f
    
    {
       "restore":{
          "id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
          "created_at":"2020-11-05T10:17:40.000000",
          "updated_at":"2020-11-05T10:17:40.000000",
          "finished_at":"2020-11-05T10:27:20.000000",
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"available",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "snapshot_details":{
             "created_at":"2020-11-04T13:58:37.000000",
             "updated_at":"2020-11-05T10:27:22.000000",
             "deleted_at":null,
             "deleted":false,
             "version":"4.0.115",
             "id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
             "user_id":"ccddc7e7a015487fa02920f4d4979779",
             "project_id":"c76b3355a164498aa95ddbc960adc238",
             "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
             "snapshot_type":"full",
             "display_name":"API taken 2",
             "display_description":"API taken description 2",
             "size":44171264,
             "restore_size":2147483648,
             "uploaded_size":44171264,
             "progress_percent":100,
             "progress_msg":"Creating Instance: cirros-2",
             "warning_msg":null,
             "error_msg":null,
             "host":"TVM1",
             "finished_at":"2020-11-04T14:06:03.000000",
             "data_deleted":false,
             "pinned":false,
             "time_taken":428,
             "vault_storage_id":null,
             "status":"available"
          },
          "workload_id":"18b809de-d7c8-41e2-867d-4a306407fb11",
          "instances":[
             {
                "id":"1fb104bf-7e2b-4cb6-84f6-96aabc8f1dd2",
                "name":"cirros-2",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                   "production":"1"
                }
             },
             {
                "id":"b083bb70-e384-4107-b951-8e9e7bbac380",
                "name":"cirros-1",
                "status":"available",
                "metadata":{
                   "config_drive":"",
                   "instance_id":"e33c1eea-c533-4945-864d-0da1fc002070",
                   "production":"1"
                }
             }
          ],
          "networks":[
             
          ],
          "subnets":[
             
          ],
          "routers":[
             
          ],
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
             }
          ],
          "name":"OneClick Restore",
          "description":"-",
          "host":"TVM2",
          "size":2147483648,
          "uploaded_size":2147483648,
          "progress_percent":100,
          "progress_msg":"Restore from snapshot is complete",
          "warning_msg":null,
          "error_msg":null,
          "time_taken":580,
          "restore_options":{
             "name":"OneClick Restore",
             "oneclickrestore":true,
             "restore_type":"oneclick",
             "openstack":{
                "instances":[
                   {
                      "name":"cirros-2",
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
                      "availability_zone":"nova"
                   },
                   {
                      "name":"cirros-1",
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070",
                      "availability_zone":"nova"
                   }
                ]
             },
             "type":"openstack",
             "description":"-"
          },
          "metadata":[
             
          ]
       }
    }
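
The restore record can be fetched again at any time through the self link contained in the response. A minimal sketch, assuming a valid Keystone token is stored in $TOKEN and the workloadmgr endpoint is reachable under the host shown in that link:

    curl -i -X GET \
         -H "X-Auth-Token: $TOKEN" \
         -H "Accept: application/json" \
         http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc
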
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:21:07 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-0e155b21-8931-480a-a749-6d8764666e4d
    HTTP/1.1 200 OK
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 15:13:30 GMT
    Content-Type: application/json
    Content-Length: 0
    Connection: keep-alive
    X-Compute-Request-Id: req-98d4853c-314c-4f27-bd3f-f81bda1a2840
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Thu, 05 Nov 2020 14:30:56 GMT
    Content-Type: application/json
    Content-Length: 992
    Connection: keep-alive
    X-Compute-Request-Id: req-7e18c309-19e5-49cb-a07e-90dd368fddae
    
    {
       "restore":{
          "id":"3df1d432-2f76-4ebd-8f89-1275428842ff",
          "created_at":"2020-11-05T14:30:56.048656",
          "updated_at":"2020-11-05T14:30:56.048656",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/3df1d432-2f76-4ebd-8f89-1275428842ff"
             }
          ],
          "name":"One Click Restore",
          "description":"One Click Restore",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{
                
             },
             "restore_type":"oneclick"
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 09:53:31 GMT
    Content-Type: application/json
    Content-Length: 1713
    Connection: keep-alive
    X-Compute-Request-Id: req-84f00d6f-1b12-47ec-b556-7b3ed4c2f1d7
    
    {
       "restore":{
          "id":"778baae0-6c64-4eb1-8fa3-29324215c43c",
          "created_at":"2020-11-09T09:53:31.037588",
          "updated_at":"2020-11-09T09:53:31.037588",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/778baae0-6c64-4eb1-8fa3-29324215c43c"
             }
          ],
          "name":"API",
          "description":"API Created",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "openstack":{
                "instances":[
                   {
                      "vdisks":[
                         {
                            "new_volume_type":"iscsi",
                            "id":"365ad75b-ca76-46cb-8eea-435535fd2e22",
                            "availability_zone":"nova"
                         }
                      ],
                      "name":"cirros-1-selective",
                      "availability_zone":"nova",
                      "nics":[
                         
                      ],
                      "flavor":{
                         "vcpus":1,
                         "disk":1,
                         "swap":"",
                         "ram":512,
                         "ephemeral":0,
                         "id":"1"
                      },
                      "include":true,
                      "id":"e33c1eea-c533-4945-864d-0da1fc002070"
                   },
                   {
                      "include":false,
                      "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe"
                   }
                ],
                "restore_topology":false,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"b7b54304-aa82-4d50-91e6-66445ab56db4"
                            },
                            "id":"5fb7027d-a2ac-4a21-9ee1-438c281d2b26",
                            "name":"internal"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          },
          "metadata":[
             
          ]
       }
    }
    HTTP/1.1 202 Accepted
    Server: nginx/1.16.1
    Date: Mon, 09 Nov 2020 12:53:03 GMT
    Content-Type: application/json
    Content-Length: 1341
    Connection: keep-alive
    X-Compute-Request-Id: req-311fa97e-0fd7-41ed-873b-482c149ee743
    
    {
       "restore":{
          "id":"0bf96f46-b27b-425c-a10f-a861cc18b82a",
          "created_at":"2020-11-09T12:53:02.726757",
          "updated_at":"2020-11-09T12:53:02.726757",
          "finished_at":null,
          "user_id":"ccddc7e7a015487fa02920f4d4979779",
          "project_id":"c76b3355a164498aa95ddbc960adc238",
          "status":"restoring",
          "restore_type":"restore",
          "snapshot_id":"ed4f29e8-7544-4e1c-af8a-a76031211926",
          "links":[
             {
                "rel":"self",
                "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             },
             {
                "rel":"bookmark",
                "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/0bf96f46-b27b-425c-a10f-a861cc18b82a"
             }
          ],
          "name":"API",
          "description":"API description",
          "host":"",
          "size":0,
          "uploaded_size":0,
          "progress_percent":0,
          "progress_msg":null,
          "warning_msg":null,
          "error_msg":null,
          "time_taken":0,
          "restore_options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"7c1bb5d2-aa5a-44f7-abcd-2d76b819b4c8",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"f6b3fef6-4b0e-487e-84b5-47a14da716ca"
                         }
                      ]
                   },
                   {
                      "restore_boot_disk":true,
                      "include":true,
                      "id":"08dab61c-6efd-44d3-a9ed-8e789d338c1b",
                      "vdisks":[
                         {
                            "restore_cinder_volume":true,
                            "id":"53204f34-019d-4ba8-ada1-e6ab7b8e5b43"
                         }
                      ]
                   }
                ]
             }
          },
          "metadata":[
             
          ]
       }
    }
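
The `202 Accepted` response only confirms that the restore request has been accepted; the restore itself starts in the `restoring` status. The `self` link in the response can be polled until the restore leaves that state. The following is a minimal polling sketch, assuming the Python `requests` library and a valid Keystone token (neither is mandated by the API):

    # Minimal polling sketch; "self_link" is the "self" href from the 202 response
    # and "token" is a valid Keystone token (both assumptions for illustration).
    import time
    import requests

    def wait_for_restore(self_link, token, interval=30):
        headers = {"X-Auth-Token": token, "Accept": "application/json"}
        while True:
            restore = requests.get(self_link, headers=headers).json()["restore"]
            if restore["status"] != "restoring":
                return restore
            time.sleep(interval)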
    {
       "restore":{
          "options":{
             "openstack":{
                
             },
             "type":"openstack",
             "oneclickrestore":true,
             "vmware":{},
             "restore_type":"oneclick"
          },
          "name":"One Click Restore",
          "description":"One Click Restore"
       }
    }
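
Body for a One Click Restore. The sketch below shows one way such a body could be submitted with Python `requests`; the URL pattern, the `wlm_url` base address, and the token handling are assumptions for illustration and should be replaced with the exact request definition given elsewhere in this guide:

    # Illustrative submission of the One Click Restore body above.
    # The URL pattern "<wlm_url>/v1/<tenant_id>/snapshots/<snapshot_id>" is an
    # assumption; take the exact endpoint from the request definition in this guide.
    import requests

    def one_click_restore(wlm_url, tenant_id, snapshot_id, token):
        body = {
            "restore": {
                "name": "One Click Restore",
                "description": "One Click Restore",
                "options": {
                    "openstack": {},
                    "type": "openstack",
                    "oneclickrestore": True,
                    "vmware": {},
                    "restore_type": "oneclick",
                },
            }
        }
        resp = requests.post(
            f"{wlm_url}/v1/{tenant_id}/snapshots/{snapshot_id}",
            json=body,
            headers={"X-Auth-Token": token},
        )
        resp.raise_for_status()
        return resp.json()["restore"]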
    {
       "restore":{
          "name":"<restore name>",
          "description":"<restore description>",
          "options":{
             "openstack":{
                "instances":[
                   {
                      "name":"<new name of instance>",
                      "include":<true/false>,
                      "id":"<original id of instance to be restored>",
                      "availability_zone":"<availability zone>",
                      "vdisks":[
                         {
                            "id":"<original ID of Volume>",
                            "new_volume_type":"<new volume type>",
                            "availability_zone":"<Volume availability zone>"
                         }
                      ],
                      "nics":[
                         {
                            "mac_address":"<mac address of the pre-created port>",
                            "ip_address":"<IP of the pre-created port>",
                            "id":"<ID of the pre-created port>",
                            "network":{
                               "subnet":{
                                  "id":"<ID of the subnet of the pre-created port>"
                               },
                               "id":"<ID of the network of the pre-created port>"
                            }
                         }
                      ],
                      "flavor":{
                         "vcpus":<Integer>,
                         "disk":<Integer>,
                         "swap":<Integer>,
                         "ram":<Integer>,
                         "ephemeral":<Integer>,
                         "id":<Integer>
                      }
                   }
                ],
                "restore_topology":<true/false>,
                "networks_mapping":{
                   "networks":[
                      {
                         "snapshot_network":{
                            "subnet":{
                               "id":"<ID of the original Subnet ID>"
                            },
                            "id":"<ID of the original Network ID>"
                         },
                         "target_network":{
                            "subnet":{
                               "id":"<ID of the target Subnet ID>"
                            },
                            "id":"<ID of the target Network ID>",
                            "name":"<name of the target network>"
                         }
                      }
                   ]
                }
             },
             "restore_type":"selective",
             "type":"openstack",
             "oneclickrestore":false
          }
       }
    }
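
Body for a Selective Restore. Because a selective restore body can become large, it is often easier to assemble it programmatically. The sketch below covers only the most commonly used fields of the template above; all names and IDs are placeholders:

    # Illustrative builder for a selective restore body (subset of the template above).
    def build_selective_restore(name, description, instance_id, new_name, net_map):
        return {
            "restore": {
                "name": name,
                "description": description,
                "options": {
                    "openstack": {
                        "instances": [
                            {"include": True, "id": instance_id, "name": new_name}
                        ],
                        "restore_topology": False,
                        "networks_mapping": {"networks": [net_map]},
                    },
                    "restore_type": "selective",
                    "type": "openstack",
                    "oneclickrestore": False,
                },
            }
        }

    # "net_map" pairs a network/subnet from the snapshot with a target network/subnet:
    # net_map = {
    #     "snapshot_network": {"id": "<original network>", "subnet": {"id": "<original subnet>"}},
    #     "target_network": {"id": "<target network>", "subnet": {"id": "<target subnet>"}, "name": "internal"},
    # }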
    {
       "restore":{
          "name":"<restore-name>",
          "description":"<restore-description>",
          "options":{
             "restore_type":"inplace",
             "type":"openstack",
             "oneclickrestore":false,
             "openstack":{
                "instances":[
                   {
                      "restore_boot_disk":<Boolean>,
                      "include":<Boolean>,
                      "id":"<ID of the instance the volumes are attached to>",
                      "vdisks":[
                         {
                            "restore_cinder_volume":<boolean>,
                            "id":"<ID of the Volume to restore>"
                         }
                      ]
                   }
                ]
             }
          }
       }
    }
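
Body for an In-Place Restore. The sketch below builds such a body from a simple mapping of instance IDs to the Cinder volume IDs that should be overwritten in place; the helper and its arguments are illustrative only:

    # Illustrative builder for an in-place restore body.
    # "volumes_by_instance" maps an instance ID to the volume IDs to restore in place.
    def build_inplace_restore(name, description, volumes_by_instance, restore_boot_disk=True):
        instances = [
            {
                "include": True,
                "id": instance_id,
                "restore_boot_disk": restore_boot_disk,
                "vdisks": [
                    {"restore_cinder_volume": True, "id": volume_id}
                    for volume_id in volume_ids
                ],
            }
            for instance_id, volume_ids in volumes_by_instance.items()
        ]
        return {
            "restore": {
                "name": name,
                "description": description,
                "options": {
                    "restore_type": "inplace",
                    "type": "openstack",
                    "oneclickrestore": False,
                    "openstack": {"instances": instances},
                },
            }
        }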
    "restore_type":"restore",
    "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/29fdc1f8-1d53-4a10-bb45-e539a64cdbfc"
    }
    ],
    "name":"OneClick Restore",
    "description":"-",
    "host":"TVM2",
    "size":2147483648,
    "uploaded_size":2147483648,
    "progress_percent":100,
    "progress_msg":"Restore from snapshot is complete",
    "warning_msg":null,
    "error_msg":null,
    "time_taken":580,
    "restore_options":{
    "name":"OneClick Restore",
    "oneclickrestore":true,
    "restore_type":"oneclick",
    "openstack":{
    "instances":[
    {
    "name":"cirros-2",
    "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
    "availability_zone":"nova"
    },
    {
    "name":"cirros-1",
    "id":"e33c1eea-c533-4945-864d-0da1fc002070",
    "availability_zone":"nova"
    }
    ]
    },
    "type":"openstack",
    "description":"-"
    },
    "metadata":[
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"91ab2495-1903-4d75-982b-08a4e480835b",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"data_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"e0e01eec-24e0-4abd-9b8c-19993a320e9f",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"object_store_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-05T10:27:20.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"eb909267-ba9b-41d1-8861-a9ec22d6fd84",
    "restore_id":"29fdc1f8-1d53-4a10-bb45-e539a64cdbfc",
    "key":"restore_user_selected_value",
    "value":"Oneclick Restore"
    }
    ]
    },
    {
    "id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "created_at":"2020-11-04T14:37:31.000000",
    "updated_at":"2020-11-04T14:37:31.000000",
    "finished_at":"2020-11-04T14:45:27.000000",
    "user_id":"ccddc7e7a015487fa02920f4d4979779",
    "project_id":"c76b3355a164498aa95ddbc960adc238",
    "status":"error",
    "restore_type":"restore",
    "snapshot_id":"2e56d167-bad7-43c7-8ede-a613c3fe7844",
    "links":[
    {
    "rel":"self",
    "href":"http://wlm_backend/v1/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
    },
    {
    "rel":"bookmark",
    "href":"http://wlm_backend/c76b3355a164498aa95ddbc960adc238/restores/4673d962-f6a5-4209-8d3e-b9f2e9115f07"
    }
    ],
    "name":"OneClick Restore",
    "description":"-",
    "host":"TVM2",
    "size":2147483648,
    "uploaded_size":2147483648,
    "progress_percent":100,
    "progress_msg":"",
    "warning_msg":null,
    "error_msg":"Failed restoring snapshot: Error creating instance e271bd6e-f53e-4ebc-875a-5787cc4dddf7",
    "time_taken":476,
    "restore_options":{
    "name":"OneClick Restore",
    "oneclickrestore":true,
    "restore_type":"oneclick",
    "openstack":{
    "instances":[
    {
    "name":"cirros-2",
    "id":"67d6a100-fee6-4aa5-83a1-66b070d2eabe",
    "availability_zone":"nova"
    },
    {
    "name":"cirros-1",
    "id":"e33c1eea-c533-4945-864d-0da1fc002070",
    "availability_zone":"nova"
    }
    ]
    },
    "type":"openstack",
    "description":"-"
    },
    "metadata":[
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"be6dc7e2-1be2-476b-9338-aed986be3b55",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"data_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"2e4330b7-6389-4e21-b31b-2503b5441c3e",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"object_store_transfer_time",
    "value":"0"
    },
    {
    "created_at":"2020-11-04T14:45:27.000000",
    "updated_at":null,
    "deleted_at":null,
    "deleted":false,
    "version":"4.0.115",
    "id":"561c806b-e38a-496c-a8de-dfe96cb3e956",
    "restore_id":"4673d962-f6a5-4209-8d3e-b9f2e9115f07",
    "key":"restore_user_selected_value",
    "value":"Oneclick Restore"
    }
    ]
    }
    ]
    }
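
A restore list response like the one above can be post-processed to spot failed restores and to read the timing metadata attached to each restore. A small sketch, assuming the response has already been decoded from JSON into `payload`:

    # Summarize a decoded restore-list response ("payload" is assumed to be the
    # already-parsed JSON dictionary containing the "restores" list shown above).
    def summarize_restores(payload):
        for restore in payload.get("restores", []):
            meta = {m["key"]: m["value"] for m in restore.get("metadata", [])}
            print(f'{restore["id"]}: {restore["status"]} in {restore["time_taken"]}s '
                  f'({meta.get("restore_user_selected_value", "-")})')
            if restore["status"] == "error":
                print(f'  error: {restore["error_msg"]}')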

| Header | Type | Value |
| --- | --- | --- |
| User-Agent | string | python-workloadmgrclient |
