About TrilioVault for RHV

TrilioVault for RHV, by Trilio Data, is a native RHV service that provides policy-based comprehensive backup and recovery for RHV workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. A variety of storage environments can hold these Snapshots, including NFS and soon AWS S3 compatible storage. With TrilioVault and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). TrilioVault enables IT departments to fully deploy RHV solutions and provide business assurance through enhanced data retention, protection, and integrity.

With TrilioVault’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can deploy backup and disaster recovery as a service, preventing data loss or corruption through point-in-time snapshots and seamless one-click recovery. TrilioVault takes a point-in-time backup of the entire workload, consisting of compute resources, network configurations, and storage data, as one unit. It also takes incremental backups that capture only the changes made since the last backup, saving time and storage space. The summarized benefits of using VAST for backup and restore are:

  1. Efficient capture and storage of snapshots. Since our full backups only include data committed to the storage volume and our incremental backups include only the blocks changed since the last backup, the backup process is efficient and stores backup images efficiently on the backup media.

  2. Faster and more reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process brings your application from zero to operational with the click of a button.

  3. Reliable and smooth migration of workloads between environments. TrilioVault captures all the details of your application, and hence our migration includes your entire application stack without leaving anything to guesswork.

  4. Through policy and automation, a lower Total Cost of Ownership. Our role-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.
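The full-plus-incremental scheme described above can be pictured with a small Python sketch. This is an illustrative toy model, not TrilioVault code; the block size and volume contents are invented for the example:

```python
# Toy model of changed-block incremental backups (hypothetical, for illustration).
BLOCK_SIZE = 4

def split_blocks(data: bytes):
    """Split a volume image into fixed-size blocks."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def full_backup(data: bytes):
    """A full backup stores every block of the volume."""
    return {i: b for i, b in enumerate(split_blocks(data))}

def incremental_backup(prev_blocks: dict, data: bytes):
    """An incremental backup stores only blocks changed since the last backup."""
    return {i: b for i, b in enumerate(split_blocks(data))
            if prev_blocks.get(i) != b}

def restore(full: dict, increments: list):
    """Restore applies the full backup, then each increment in order."""
    blocks = dict(full)
    for inc in increments:
        blocks.update(inc)
    return b"".join(blocks[i] for i in sorted(blocks))

v1 = b"AAAABBBBCCCC"
v2 = b"AAAAXXXXCCCC"              # only the middle block changed
full = full_backup(v1)
inc = incremental_backup(full, v2)
assert len(inc) == 1              # one changed block stored, not three
assert restore(full, [inc]) == v2
```

The point of the sketch is the space saving: the increment holds one block instead of the whole volume, which is what makes scheduled incremental snapshots cheap to keep.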

TrilioVault 3.5 Release Notes

TrilioVault 3.5 is the first release of TrilioVault for Red Hat Virtualization.

It aims to provide basic Backup and Recovery for Red Hat Virtualization 4.2.8. The full requirements can be found here.

The following functionalities are included in TrilioVault 3.5:

Backup

  • Image based VMs (iSCSI and NFS)

  • Template based VMs (iSCSI)

  • Scheduled based Backup

  • OnDemand Backup

Recovery

  • OneClick Restore

  • Selective Restore

Additional functions

  • File Search

  • Workload import

  • Workload reset

TrilioVault for RHV Architecture

TrilioVault follows modern principles to provide a native, user-friendly, and powerful Backup & Recovery solution for RHV. It does not require any large media servers or clients running on the VMs that are being protected.

TrilioVault architecture reflects these principles.

TrilioVault for RHV Architecture overview

TrilioVault Appliance

The TrilioVault Appliance, called TVM, is the controller of TrilioVault.

The TVM runs and manages all Backup and Recovery Jobs.

During a Backup Job the TVM is:

  • gathering the Metadata information generated by the VMs that are being protected

  • writing the Metadata information to the Backup Target

  • generating the RHV Snapshot

The TVM is available as a qcow2 image and runs as a VM on top of a KVM hypervisor.

It is supported and recommended to run the TVM in the same RHV environment that the TVM protects.

RHV-Manager GUI integration

TrilioVault integrates natively into the available RHV-M Web GUI, providing a new tab "Backup".

All functionalities of TrilioVault are accessible through the Web GUI.

The RHV-Manager GUI integration is installed using Ansible playbooks, together with the ovirt-imageio-proxy extension.

Ovirt-imageio extensions

Ovirt-imageio is an RHV-internal Python service that allows the upload and download of disks into and out of RHV.

The default ovirt-imageio services only allow moving the disks through the RHV-M via HTTPS.

TrilioVault extends the ovirt-imageio functionality to move the disk data through NFS over the RHV Hosts themselves.

The ovirt-imageio extensions are installed using Ansible playbooks.

Backup Target

TrilioVault writes all backups over the network to a provided Backup Target using the NFS protocol.

Any system providing the NFSv3 protocol is usable.

The TVM sends the data copy commands to the ovirt-imageio services, which write the backup data to the Backup Target.
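As an illustration, a Backup Target can be any NFSv3 server; a minimal hypothetical export entry (the export path and client subnet are examples, not Trilio defaults) might look like:

```
# Hypothetical /etc/exports entry on the NFS server providing the Backup Target
# (path and subnet are illustrative examples)
/mnt/tvault-backups  30.30.0.0/16(rw,sync,no_root_squash)
```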

TrilioVault 3.5 Support Matrix

RHV Version: 4.2.8
ovirt-imageio version: 1.4.5, 1.4.8
Storage Domain: NFSv3, iSCSI

Requirements

TrilioVault is a pure software solution and is composed of 4 elements:

  1. TrilioVault Appliance (Virtual Machine)

  2. TrilioVault RHV-M Web-GUI extension

  3. TrilioVault ovirt-imageio-proxy extension

  4. TrilioVault ovirt-imageio-daemon extension

Uninstall TrilioVault

Uninstalling TrilioVault is done in 2 easy steps, which leaves only the already created backups behind.

Step 1: Uninstall the RHV ovirt-imageio extension

To uninstall the ovirt-imageio extension, do the following:

Global Job Scheduler

Definition

The Global Job Scheduler controls whether Workloads will automatically take backups according to their schedule or not. It is used to prevent automated backups during maintenance or troubleshooting.


    System requirements TrilioVault Appliance

    The TrilioVault Appliance is delivered as a qcow2 image, which gets attached to a virtual machine.

    Trilio supports only KVM-based hypervisors and recommends using the RHV Cluster as the host for the TrilioVault Appliance.

    The recommended size of the VM for the TrilioVault Appliance is:

    vCPU: 4
    RAM: 24 GB

    The qcow2 image itself defines the 40 GB disk size of the VM.

    Note: In the rare case of the TrilioVault Appliance database or log files growing larger than the 40 GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the TrilioVault Appliance.

    System requirements TrilioVault ovirt-imageio extension

    TrilioVault extends the ovirt-imageio-proxy service running on the RHV-Manager and the ovirt-imageio-daemon running on the RHV-Hosts.

    These extensions do not have any hardware-related requirements, but they require specific versions of the ovirt-imageio services.

    Please check the Support Matrix for further information.

    Note: The installed versions of the ovirt-imageio-proxy and the ovirt-imageio-daemon need to be the same.

    1. Log in to the TrilioVault Appliance CLI

    2. Verify the inventory files are still correct:

       /opt/stack/imageio-ansible/inventories/production/daemon
       /opt/stack/imageio-ansible/inventories/production/proxy

    3. Run the Ansible playbooks with the clean tags:

       cd /opt/stack/imageio-ansible/
       ansible-playbook site.yml -i inventories/production/daemon --tags clean-daemon
       ansible-playbook site.yml -i inventories/production/proxy --tags clean-proxy

    Step 2: Destroy the TrilioVault Appliance

    This guide assumes you are running the TrilioVault Appliance in an RHV environment.

    To destroy the TrilioVault Appliance do the following:

    1. Log in to the RHV-Manager

    2. Navigate to Compute → Virtual Machines

    3. Mark the TrilioVault Appliance in the list of VMs

    4. Click "Shutdown" or "Power Off"

    5. Wait until the shutdown procedure finishes

    6. Click "Remove" to destroy the TrilioVault Appliance

    Disabling and Enabling the Global Job Scheduler

    To disable or enable the Global Job Scheduler follow these steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Click "Scheduler Settings"

    4. Choose between "Enable Scheduler" or "Disable Scheduler"

    5. Click "Submit"

    Note: Backups that have already been started will finish their backup process even if the Global Job Scheduler is deactivated while they run.


    Installation of ovirt-imageio extensions

    TrilioVault extends the ovirt-imageio services running on the RHV-Manager and the RHV hosts, to provide the parallel download of disks from multiple RHV hosts.

    The imageio extensions are installed using Ansible playbooks provided on the TrilioVault Appliance.

    Preparing the inventory files

    Ansible playbooks work with inventory files. These inventory files contain the list of RHV-Hosts and RHV-Managers and how to access them.

    To edit the inventory files, open the following files for the server type to add.

    For the RHV Hosts open:
    /opt/stack/imageio-ansible/inventories/production/daemon

    For the RHV Manager open:
    /opt/stack/imageio-ansible/inventories/production/proxy

    Using password authentication

    The first supported method to allow Ansible to access the RHV hosts and the RHV Manager is the classic user/password authentication.

    To use password authentication, edit the files using the following format:

    <Server_IP> ansible_user=root password=xxxxx

    One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

    Using passwordless authentication (SSH keys)

    The second supported method to allow Ansible to access the RHV hosts and the RHV Manager is utilizing SSH keys for passwordless authentication.

    For this method, it is necessary to prepare the TrilioVault Appliance, the RHV Cluster nodes, and the RHV Manager.

    The method recommended by Trilio is:

    1. Use ssh-keygen to generate a key pair

    2. Add the private key to /root/.ssh/ on the TrilioVault Appliance

    3. Add the public key to the /root/.ssh/authorized_keys file on each RHV host and the RHV Manager

    Once the TrilioVault Appliance can access the nodes without a password, edit the inventory files using the following format:

    <Server_IP> ansible_user=root

    One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

    Starting the installation

    To install the ovirt-imageio extensions go to:

    /opt/stack/imageio-ansible

    Depending on the method of authentication prepared in the inventory files, different commands need to be used to start the Ansible playbooks.

    Using password authentication

    To call the Ansible playbooks when the inventory files use password authentication, run:

    For the RHV Hosts:
    ansible-playbook site.yml -i inventories/production/daemon --tags daemon

    For the RHV Manager:
    ansible-playbook site.yml -i inventories/production/proxy --tags proxy

    Using passwordless authentication (SSH keys)

    To call the Ansible playbooks when the inventory files use passwordless authentication, run:

    For the RHV Hosts:
    ansible-playbook site.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon

    For the RHV Manager:
    ansible-playbook site.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

    Note: Ansible shows the output of the running playbook. Do not intervene until the playbook has finished.

    Important TVM Logs

    TrilioVault Appliance logs used during configuration

    The following logs contain all information gathered during the configuration of the TrilioVault Appliance.

    /var/log/workloadmgr/tvault-config.log

    This log contains all information about the pre-checks done when filling out the configurator form.

    /var/log/workloadmgr/ansible-playbook.log

    This log contains the complete Ansible output from the playbooks that run when the configurator is started.

    Note: With each configuration attempt a new ansible-playbook.log is created. Old ansible-playbook.logs are renamed according to their creation time.

    TrilioVault Appliance logs during any task after configuration

    /var/log/workloadmgr/workloadmgr-api.log

    This log tracks all API requests that have been received on the wlm-api service.

    This log is helpful to verify that the TrilioVault VM is reachable from the RHV-M and authentication is working as expected.

    /var/log/workloadmgr/workloadmgr-scheduler.log

    This log tracks all jobs the wlm-scheduler is receiving from the wlm-api and sends them to the chosen wlm-workloads service.

    This log is helpful when the wlm-api does not throw any error, but no task like a backup or restore gets started.

    /var/log/workloadmgr/workloadmgr-workloads.log

    This log contains the complete output from the wlm-workloads service, which is controlling the actual backup and restore tasks.

    This log is helpful to identify any errors happening on the TVM itself, including RESTful API responses from the RHV-M.

    Important RHV-Host Logs

    TrilioVault data transfer related logs

    /var/log/ovirt_celery/worker<x>.log

    The worker logs contain the status of the disk transfer from the RHV Host to the backup target. Useful if the data transfer process gets stuck or errors out in between.

    /var/log/ovirt-imageio-daemon/daemon.log

    The daemon.log contains all information about the actual connection between the RHV Host and the backup target. Useful to identify potential connection issues between the RHV Host and the Backup Target.

    Important RHV-Manager Logs

    TrilioVault data transfer related logs

    /var/log/ovirt-imageio-proxy/image-proxy.log

    This log provides the used ticket number and RHV-Host for a backup transfer. It is helpful to identify the ticket numbers that are used in RHV to track a specific data transfer. It also shows any connection errors between the RHV-M and the RHV-Host.

    Generally helpful RHV-Manager logs

    TrilioVault calls many RHV APIs to read metadata, create RHV Snapshots, and restore VMs. These tasks are executed by RHV independently from TrilioVault and are logged in the RHV logs.

    /var/log/ovirt-engine/engine.log

    This log is hard to read, but it contains all tasks that the RHV-M performs, including all Snapshot-related tasks.

    Additional logs that can be helpful during troubleshooting are:

    /var/log/ovirt-engine/boot.log
    /var/log/ovirt-engine/console.log
    /var/log/ovirt-engine/ui.log

    Spinning up the TrilioVault VM

    The TrilioVault Appliance is delivered as a qcow2 image and runs as a VM on top of a KVM hypervisor.

    Important: The TrilioVault VM qcow2 image must be an available disk on the RHV Storage before the TrilioVault Appliance can be created.

    Note: This guide shows the tested way to spin up the TrilioVault Appliance on an RHV Cluster. Please contact an RHV Administrator and a Trilio Customer Success Agent in case of incompatibility with company standards.

    Creating the TrilioVault VM

    The creation of the TrilioVault VM works like that of any other Virtual Machine inside RHV.

    To create a new Virtual Machine, go to Compute → Virtual Machines.

    The button "New" opens the window to define the VM.

    The following instructions show the tested configuration for the TrilioVault Appliance.

    After configuration, use the OK button to create the TrilioVault Appliance.

    Important: It is required to activate the Advanced Options.

    General Tab

    Fill out the following details as necessary on the General tab:

    • Cluster - Choose the RHV Cluster to host the TrilioVault VM

    • Template - Blank

    • Operating System - The TrilioVault VM runs CentOS 7. Red Hat Enterprise 7.x x64 is a valid option.

    • Instance Type - Custom

    • Optimized for - Server

    • Name - Provide a RHV internal name for the TrilioVault VM

    • Description - Provide a RHV internal description for the TrilioVault VM (optional)

    • Comment - Provide a RHV internal comment for the TrilioVault VM (optional)

    • Activate Delete Protection

    • NICs - Choose the network the TrilioVault VM connects with. The plus and minus symbols add/delete NICs as necessary.

    • Leave custom serial policy unchecked

    Before moving to the next tab, attach the TrilioVault qcow2 image to the VM definition.

    • Click Attach under Instance Images.

    • Choose the TrilioVault qcow2 image

    • Check the box for OS

    Warning: Without checking the box for OS, the TrilioVault Appliance will not boot, as the RHV VM will not utilize the disk as the boot disk.

    System Tab

    Under the System tab set the following:

    • Memory size - 24576 MB / 24 GB

    • Maximum memory - 24576 MB / 24 GB (RHV automatically first sets four times the Memory size)

    • Physical Memory Guaranteed - 24576 MB / 24 GB (RHV automatically first sets the same value as Memory size)

    Note: It is possible to set the initial Memory size to 8 GB. RHV automatically sets the Maximum Memory to 4 times the Memory size value. The actual Memory size can be adjusted later as needed.

    Warning: Do not set the Physical Memory Guaranteed below 8 GB.

    • Total virtual CPUs - 4

    • Nothing to set at the Advanced Parameters

    • Leave Hardware Clock Timer Offset at 0

    Further Tabs

    There are no TrilioVault specific configurations necessary in any further tab.

    Starting the TrilioVault Appliance

    After its creation, the TrilioVault Appliance VM is in a shutdown state.

    Go to the overview of VMs in the RHV Manager (Compute → Virtual Machines), identify the TrilioVault Appliance VM in the list, mark it, and click the Run button to start it.

    Configuring the TrilioVault Network

    Once the TrilioVault Appliance VM is running, an initial network configuration is needed, which requires a login to the Operating System of the TrilioVault Appliance.

    Note: Please request the initial password of the TrilioVault Appliance operating system root user from Trilio Customer Success.

    The TrilioVault VM uses a standard CentOS 7 as its Operating System. Configuration of the network works as usual.

    Use ip a to get a list of all available network interfaces.

    Edit the interface config files according to the desired network connection. The following command shows the example for the interface eth0.

    vi /etc/sysconfig/network-scripts/ifcfg-eth0

    Fill out the network configuration lines, following the example below:

    BOOTPROTO=none
    DEVICE=eth0
    ONBOOT=yes
    TYPE=Ethernet
    IPADDR=30.30.1.10
    NETMASK=255.255.0.0
    GATEWAY=30.30.1.1
    DNS1=30.30.1.1

    Write and close the interface configuration file.

    The configured interface needs to be restarted using the following commands:

    ifdown eth0
    ifup eth0

    File Search

    Definition

    The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

    Navigating to the file search tab

    The file search tab is part of every workload overview. To reach it follow these steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload a file search shall be done in

    4. Click the workload name to enter the Workload overview

    5. Click File Search to enter the file search tab

    Configuring and starting a file search

    A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

    To run a file search, the following elements need to be decided and configured:

    Choose the VM the file search shall run against

    Under VM Name/ID choose the VM that the search is done upon. The drop-down menu provides a list of all VMs that are part of any Snapshot in the Workload.

    Note: VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

    Set the File Path

    The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

    Important: The File Path has to start with a '/'.

    Note: Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'.

    Note: The file search does not descend into deeper directories and always searches only the directory provided in the File Path.

    Example File Path for all files inside /etc: /etc/*
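A way to picture the single-directory matching described above is component-by-component globbing, so a `*` can never cross a `/`. This is a hypothetical Python sketch, not the actual search implementation:

```python
# Sketch of File Path semantics: match the pattern one path component at a
# time, so the search stays in the directory given in the File Path.
from fnmatch import fnmatch

def file_path_matches(pattern: str, path: str) -> bool:
    """Return True if path matches pattern without descending deeper."""
    pat_parts = pattern.strip("/").split("/")
    parts = path.strip("/").split("/")
    if len(pat_parts) != len(parts):      # a deeper path never matches
        return False
    return all(fnmatch(p, pat) for pat, p in zip(pat_parts, parts))

assert file_path_matches("/etc/*", "/etc/hosts")                 # direct child: found
assert not file_path_matches("/etc/*", "/etc/ssh/sshd_config")   # deeper level: not searched
assert file_path_matches("/Windows/*", "/Windows/System32")      # Windows volume uses '/' roots
```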

    Define the Snapshots to search in

    Filter Snapshots by is the third and last component that needs to be set. It defines which Snapshots are going to be searched.

    There are 3 possibilities for pre-filtering:

    1. All Snapshots - Lists all available Snapshots that contain the chosen VM

    2. Last Snapshots - Choose between the last 10, 25, 50, or a custom number of Snapshots and click Apply to get the list of available Snapshots for the chosen VM that match the criteria.

    3. Date Range - Set a start and end date and click Apply to get the list of all available Snapshots for the chosen VM within the set dates.

    After the pre-filtering is done, choose the Snapshots that shall be searched by clicking their checkboxes or the global checkbox.

    Note: When no Snapshot is chosen, the file search will not start.
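The "Last Snapshots" and "Date Range" pre-filters amount to simple list filtering. A hypothetical Python sketch (the snapshot records and dates are invented for the example):

```python
# Sketch of the two date-based pre-filter options (illustrative only).
from datetime import date

snapshots = [
    {"id": "snap-1", "taken": date(2019, 1, 10)},
    {"id": "snap-2", "taken": date(2019, 2, 5)},
    {"id": "snap-3", "taken": date(2019, 3, 1)},
]

def last_snapshots(snaps, n):
    """'Last Snapshots': the n most recent snapshots."""
    return sorted(snaps, key=lambda s: s["taken"], reverse=True)[:n]

def date_range(snaps, start, end):
    """'Date Range': snapshots taken between start and end (inclusive)."""
    return [s for s in snaps if start <= s["taken"] <= end]

assert [s["id"] for s in last_snapshots(snapshots, 2)] == ["snap-3", "snap-2"]
assert [s["id"] for s in date_range(snapshots, date(2019, 1, 1), date(2019, 2, 28))] == ["snap-1", "snap-2"]
```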

    Start the File Search and retrieve the results

    To start a File Search, the following elements need to be set:

    • A VM to search in has to be chosen

    • A valid File Path has to be provided

    • At least one Snapshot to search in has to be selected

    Once those have been set click "Search" to start the file search.

    Warning: Do not navigate to any other RHV tab or website after starting the File Search. The results will be lost and the search has to be repeated to regain them.

    After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.

    For each found file or folder the following information is provided:

    • POSIX permissions

    • Amount of links pointing to the file or folder

    • User ID who owns the file or folder

    • Group ID assigned to the file or folder

    • Actual size in Bytes of the file or folder

    • Time of creation

    • Time of last modification

    • Time of last access

    • Full path to the found file or folder

    • Download File button (for files only)

    Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option.

    Download a file from File Search

    It is possible to download files directly from the results of a file search.

    Note: Downloading files through the File Search results is limited to files smaller than 2 GiB.

    To download a file, run a File Search and wait for the results.

    On the results overview, use the download button next to the full path of the found file to start the download.

    Preparing the Installation

    TrilioVault for RHV integrates tightly into the RHV environment itself. This integration requires preparation before the installation starts.

    Installing Redis

    TrilioVault extends the ovirt-imageio services running on the RHV Manager and the RHV Hosts to allow parallel disk transfers from multiple RHV-Hosts at the same time.

    This extension uses a task queue system, Python Celery.



    Python Celery requires a message broker system like RabbitMQ or Redis. TrilioVault uses the Redis message broker.

    RHV does not include Redis, so installation is necessary.

    Note: Redis is not available from a Red Hat repository yet. The Fedora EPEL repository provides the needed packages.

    The following steps install Redis:

    1. Add the Fedora EPEL Repository:
       # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

    2. Install Redis:
       # yum install redis

    3. Start Redis:
       # systemctl start redis.service

    4. Enable Redis to start on boot:
       # systemctl enable redis

    5. Check the Redis status:
       # systemctl status redis.service

    Enabling Disk Upload through RHV Manager

    Trilio delivers the TrilioVault Appliance as a qcow2 image.

    Trilio supports running the TrilioVault Appliance on the same RHV Cluster it protects.

    Uploading the qcow2 to the RHV Datastore is easy, but depending on the RHV usage so far, it might require the installation of additional certificates.

    Verify the connection to the ovirt-imageio-proxy

    RHV is using the ovirt-imageio-proxy service to upload and download images, snapshots, and disks through the RHV Manager.

    The following steps verify the connection to the ovirt-imageio-proxy service.

    1. Log in to the administrative portal of the RHV Manager

    2. Go to Storage → Disks

    3. Go to Upload → Start

    4. Click Test Connection

    Warning: When the connection test is unsuccessful, please proceed with the necessary steps to install the ovirt-engine certificates.

    Note: When the connection test is successful, no further steps are required to upload the image.

    Install the ovirt-engine certificate

    When the Test Connection to the ovirt-imageio-proxy fails, a usual reason is the client system mistrusting the RHV-M due to a missing certificate.

    The RHV-M has two certificates, both of which can be required to access the ovirt-imageio-proxy.

    The first certificate is directly available for download from the error message in the window. The below URL shows the general path to the certificate. The downloaded certificate is the root certificate for the certificates used by the ovirt-imageio-proxy.

    https://<RHV-M>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

    Download and install this certificate according to the client's operating system and browser used.

    Test the connection to the ovirt-imageio-proxy again after installation.

    Warning: Proceed to the second certificate only in case the connection still fails.

    The second certificate is the actual certificate of the ovirt-imageio-proxy shown to the client system upon connection. The download is only possible from the RHV-M host system directly. The usual location of the certificate is:

    /etc/pki/ovirt-engine/certs/imageio-proxy.cer

    Note: Please check /etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf in case the certificate is located elsewhere.

    Install the certificate according to the client's operating system and browser used.

    Test the connection to the ovirt-imageio-proxy again after installation.

    Warning: Please contact your administrator when the connection still fails.

    Uploading the TrilioVault VM qcow2 disk to RHV

    The TrilioVault VM qcow2 image is a full operating system disk that gets attached to a VM running on the RHV-Cluster.

    To be able to spin up the TrilioVault VM, upload the qcow2 disk into the RHV datadomain.

    The following procedure uploads the qcow2 disk:

    1. Go to Storage → Disks

    2. Go to Upload → Start

    3. Fill out the presented form and choose the path to the qcow2 image on the client system

    4. Click OK to start the upload

    Warning: In case the upload does not start after several minutes, verify that the connection to the ovirt-imageio-proxy is possible.

    Snapshots

    Definition

    A Snapshot is a single TrilioVault backup of a workload, including all data and metadata. It contains the information of all VMs that are protected by the workload.

    Creating a Snapshot

    Snapshots are automatically created by the TrilioVault scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.

    There are 2 possibilities to create a Snapshot on demand.

    Possibility 1: From the Backup overview

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that shall create a Snapshot

    Possibility 2: From the Workload Snapshot list

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that shall create a Snapshot

    Snapshot overview

    Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

    To reach the Snapshot Overview follow these steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to show

    Details Tab

    The Snapshot Details Tab shows the most important information about the Snapshot.

    • Snapshot Name / Description

    • Snapshot Type

    • Time Taken

    Restores Tab

    The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

    Note: Please refer to the User Guide to learn more about Restores.

    Misc. Tab

    The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

    • Creation Time

    • Last Update time

    • Snapshot ID

    Delete Snapshots

    Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

    Note: The retention policy will automatically delete the oldest Snapshots according to the configured policy.

    Note: You have to delete all Snapshots to be able to delete a Workload.

    Note: Deleting a TrilioVault Snapshot will not delete any RHV Snapshots. Those need to be deleted separately if desired.
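The retention behavior mentioned above can be pictured as keeping only the newest N snapshots. A hypothetical sketch, not TrilioVault's actual policy engine:

```python
# Sketch of a "keep the newest N" retention policy (illustrative only).
def apply_retention(snapshot_times, keep):
    """Return (kept, deleted): the newest `keep` snapshots survive,
    the oldest ones are scheduled for deletion."""
    ordered = sorted(snapshot_times, reverse=True)
    return ordered[:keep], ordered[keep:]

kept, deleted = apply_retention([1, 5, 3, 4, 2], keep=3)
assert kept == [5, 4, 3]      # newest three survive
assert deleted == [2, 1]      # oldest two are removed
```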

    There are 2 possibilities to delete a Snapshot.

    Possibility 1: Single Snapshot deletion through the submenu

    To delete a single Snapshot through the submenu follow these steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to delete

    Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

    To delete one or more Snapshots through the Snapshot overview do the following:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to delete

    Restores

    Definition

    A Restore is the workflow to bring back the backed-up VMs from a TrilioVault Snapshot.

    TrilioVault offers 2 types of restores:

    • One Click restore

    • Selective restore

    One Click Restore

    The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

    • be located in the same cluster in the same datacenter

    • use the same storage domain

    • connect to the same network

    The user can't change any Metadata.

    circle-info

    The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one of these VMs still exists, the One Click Restore will fail.

    circle-info

    The One Click Restore will automatically update the Workload to protect the restored VMs.

    There are 2 possibilities to start a One Click Restore.

    hashtag
    Possibility 1: From the Snapshot list

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to be restored

    hashtag
    Possibility 2: From the Snapshot overview

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to be restored

    hashtag
    Selective Restore

    The Selective Restore is the most complex restore TrilioVault has to offer. It allows adapting the restored VMs to the exact needs of the user.

    With the selective restore the following things can be changed:

    • Which VMs are getting restored

    • Name of the restored VMs

    • Which networks to connect with

    circle-info

    The Selective Restore is always available and doesn't have any prerequisites.

    There are 2 possibilities to start a Selective Restore.

    hashtag
    Possibility 1: From the Snapshot list

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to be restored

    hashtag
    Possibility 2: From the Snapshot overview

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload that contains the Snapshot to be restored

    have the same flavor
    Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click "One Click Restore" in the same line as the identified Snapshot

  • (Optional) Provide a name / description

  • Click "Create"

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "One Click Restore"

  • (Optional) Provide a name / description

  • Click "Create"

  • Which Storage domain to use
  • Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  • Click on "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshot to be restored

  • Click the Snapshot Name

  • Navigate to the "Restores" tab

  • Click "Selective Restore"

  • Configure the Selective Restore as desired

  • Click "Restore"

  • Click "Create Snapshot"
  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Click "Create Snapshot"

  • Provide a name and description for the Snapshot

  • Decide between Full and Incremental Snapshot

  • Click "Create"

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the desired Snapshot in the Snapshot list

  • Click the Snapshot Name

  • Size
  • Which VMs are part of the Snapshot

  • for each VM in the Snapshot

    • Instance Info - Name & Status

    • Instance Type - vCPUs, Disk & RAM

    • Attached Networks

    • Attached Volumes

    • Misc - Original ID of the VM

  • Workload ID of the Workload containing the Snapshot
    Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshot to be deleted in the Snapshot list

  • Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  • Click "Delete Snapshot"

  • Confirm by clicking "Delete"

  • Click the workload name to enter the Workload overview
  • Navigate to the Snapshots tab

  • Identify the Snapshots to be deleted in the Snapshot list

  • Check the checkbox for each Snapshot that shall be deleted

  • Click "Delete Snapshots"

  • Confirm by clicking "Delete"

  • Restores

    Configure TrilioVault VM

    The TrilioVault Appliance requires configuration to work with the chosen RHV environment. A Web-UI provides access to the TrilioVault Appliance dashboard and configurator.

    circle-info

    Recommended and tested browsers are: Chrome and Firefox.

    hashtag
    Accessing the TrilioVault Dashboard

    Enter the TrilioVault IP or FQDN into the browser to reach the TrilioVault Appliance landing page.

    The user is: admin
    The initial password is: password

    circle-info

    After the first login into the TrilioVault Dashboard, it is necessary to change the password.

    hashtag
    Details needed for the TrilioVault Appliance configurator

    Upon login to the TrilioVault Appliance, the page shown is the configurator. The configurator requires some information about the TrilioVault Appliance, RHV, and the Backup Storage.

    hashtag
    TrilioVault Nodes information

    The TrilioVault Appliance needs to be integrated into an existing environment to be able to operate correctly. This block asks for information about the TrilioVault Appliance's operating details.

    • Virtual IP Address

      • The TrilioVault Appliance uses this IP address for all communication with RHV.

      • Format: IP/Netmask

    circle-exclamation

    The TrilioVault Appliance for RHV does not yet support multi-node installations. This feature is actively being worked on and will be integrated step by step.

    • TVM Appliance IP

      • The first interface in the interface list of the TrilioVault Appliance will be assigned this IP address. In addition, the TrilioVault Appliance hostname is set.

      • Format: IP=hostname

    triangle-exclamation

    The Virtual IP and the TVM Appliance IP cannot be the same address. The configuration fails if the same IP is used for both values.
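    A quick pre-check of the two values can catch this before submitting the configurator. A minimal shell sketch using the example values from this guide (the variable names are illustrative, not part of the product):

```shell
# Compare the IP portions of the two configurator fields (example values).
VIRTUAL_IP="10.10.0.2/24"    # format: IP/Netmask
TVM_IP="10.10.0.1=rhv-tvm"   # format: IP=hostname

vip="${VIRTUAL_IP%%/*}"      # strip the netmask
tvm="${TVM_IP%%=*}"          # strip the hostname

if [ "$vip" = "$tvm" ]; then
  echo "ERROR: Virtual IP and TVM Appliance IP must differ"
else
  echo "OK: $vip and $tvm are different addresses"
fi
```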

    • Name Servers

      • The DNS server the TrilioVault appliance will use.

      • Format: Comma separated list of IPs

    hashtag
    RHV Credentials information

    The TrilioVault appliance integrates with one RHV environment. This block asks for the information required to access and connect with the RHV Cluster.

    • RHV Engine URL

      • URL of the RHV-Manager used to authenticate

      • Format: URL (FQDN and IP supported)

    circle-exclamation

    A preconfigured DNS Server is required when using an FQDN. The TrilioVault Appliance local host file gets overwritten during configuration. The configuration will fail if the FQDN is not resolvable by a DNS Server.

    • RHV Username

      • admin-user to authenticate against the RHV-Manager

      • Format: user@domain

    triangle-exclamation

    The configurator verifies the entered credentials. In case of any error, the shown error message is always Invalid Credentials.

    hashtag
    Backup Storage Configuration information

    This block asks for the necessary details to configure the Backup Storage.

    • Backup Storage

      • Predefined as NFS

    circle-info

    TrilioVault for RHV currently only supports NFS. Support for S3-compatible storage solutions will be delivered in a future version.

    • NFS Export

      • Full path to the NFS Volume used as Backup Storage

      • Format: Comma separated list of NFS paths

    hashtag
    TrilioVault Certificate information

    TrilioVault integrates into the RHV Cluster as an additional service, following the RHV communication paradigms. These require that the TrilioVault Appliance uses SSL and that the RHV-Manager trusts the TrilioVault Appliance.

    This block requests all information about the certificates to use.

    • FQDN

      • FQDN to reach the TrilioVault Appliance

      • Format: FQDN

    circle-info

    Please follow the linked TrilioVault KB article to learn how to create self-signed certificates for TrilioVault correctly.

    hashtag
    TrilioVault License

    It is possible to provide the license file that the TrilioVault Appliance is going to use directly through the configurator.

    circle-exclamation

    TrilioVault will not create any workloads or backups without a valid license file.

    It is not necessary to provide the License file directly through the configurator. It is also possible to provide the license afterwards through the TrilioVault License tab in the TrilioVault dashboard.

    The TrilioVault License tab can also be used to verify and update the currently installed license.

    hashtag
    Submit and Configuration

    After filling out every block of the configurator, hit the submit button to start the configuration.

    The configurator asks one more time for confirmation before starting.

    Stay patient during the configuration, as it can easily take several minutes.

    After the configurator has succeeded or failed, the output of the Ansible playbook is shown. Expand and collapse each task as needed to troubleshoot failed configurations.

    Workloads

    circle-exclamation

    Important note for VMs using iSCSI disks: RHV only creates the connection between a VM and its iSCSI disk while the VM is running, as this connection is achieved through a symlink on the RHV Host. Because of this, RHV can only take RHV Snapshots of such a VM while it is running. In consequence, TVR is only able to take backups while the VM is in a running state.

    hashtag
    Definition

    A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.

    hashtag
    Create a Workload

    To create a workload do the following steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Click "Create Workload"

    The created Workload will be available after a few seconds and will start taking backups according to the provided schedule and policy.

    hashtag
    Workload Overview

    A workload contains a lot of information, which can be seen in the workload overview.

    To enter the workload overview do the following steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload to view

    hashtag
    Details Tab

    The Workload Details Tab provides the most important general information about the workload:

    • Name

    • Description

    • List of protected VMs

    circle-info

    It is possible to navigate to the protected VM directly from the list of protected VMs.

    hashtag
    Snapshots Tab

    The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

    From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

    circle-info

    Please refer to the User Guide to learn more about those.

    hashtag
    Policy Tab

    The Workload Policy Tab gives an overview of the currently configured scheduler and retention policy. The following elements are shown:

    • Scheduler Enabled / Disabled

    • Start Date / Time

    • End Date / Time

    hashtag
    Filesearch Tab

    The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without needing a restore.

    circle-info

    Please refer to the File Search User Guide to learn more about this feature.

    hashtag
    Misc. Tab

    The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

    • Creation time

    • Last update time

    • Workload ID

    hashtag
    Edit a Workload

    Workloads can be modified in all components to match changing needs.

    To edit a workload do the following steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload to be modified

    hashtag
    Delete a Workload

    Once a workload is no longer needed it can be safely deleted.

    To delete a workload do the following steps:

    circle-info

    All Snapshots need to be deleted before the workload gets deleted. Please refer to the User Guide to learn how to delete Snapshots.

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload to be deleted

    hashtag
    Reset a Workload

    In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases, it is possible to reset it.

    The Workload reset will:

    • Cancel all ongoing tasks

    • Delete all existing RHV Snapshots from the protected VMs

    • Recalculate the next Snapshot time

    To reset a Workload do the following steps:

    1. Login to the RHV-Manager

    2. Navigate to the Backup Tab

    3. Identify the workload to be reset

    Post Installation Health-Check

    After the installation and configuration of TrilioVault for RHV have succeeded, the following steps can be taken to verify that the TrilioVault installation is healthy.

    hashtag
    Verify the TrilioVault Appliance services are up

    TrilioVault uses 3 main services on the TrilioVault Appliance:

    Example: 10.10.0.2/24

    Example: 10.10.0.1=rhv-tvm

    Example: 8.8.8.8,10.10.10.10

  • Domain Search Order

    • The domain the TrilioVault Appliance will use.

    • Format: Comma separated list of domain names

    • Example: trilio.demo,trilio.io

  • NTP Servers

    • NTP Servers the TrilioVault Appliance will use.

    • Format: Comma separated list of NTP Servers (FQDN and IP supported)

    • Example: 0.pool.ntp.org,10.10.10.10

  • Timezone

    • Timezone the TrilioVault will use.

    • Format: predefined list

    • Example: UTC

  • Example: https://rhv-manager.trilio.demo

    Example: admin@internal

  • Password

    • The password to validate the RHV Username against the RHV-Manager

    • Format: String

    • Example: password

  • Example: 10.10.100.20:/rhv_backup

  • NFS Options

    • Options used by the TrilioVault NFS client to connect to the NFS Volume

    • Format: NFS Options

    • Example: nolock,soft,timeo=180,intr

  • Example: rhv-tvm.trilio.demo

  • Certificate

    • Certificate provided by the TrilioVault appliance upon request

    • Format: Certificate file

    • Example: rhv-tvm.crt

  • Private Key

    • Private Key used to verify the provided certificate

    • Format: private key file

    • Example: rhv-tvm.key

  • TrilioVault KB article
    Provide Workload Name and Workload Description on the first tab "Details"
  • Choose between Serial or Parallel workload on the first tab "Details"

  • Choose the VMs to protect on the second Tab "Workload Members"

  • Decide for the schedule of the workload on the Tab "Schedule"

  • Provide the Retention policy on the Tab "Policy"

  • Choose the Full Backup Interval on the Tab "Policy"

  • Click create

  • Click the workload name to enter the Workload overview
    RPO
  • Time till next Snapshot run

  • Retention Policy and Value

  • Full Backup Interval policy and value

  • Workload Type
    Click the small arrow next to "Create Snapshot" to open the sub-menu
  • Click "Edit Workload"

  • Modify the workload as desired - All parameters can be changed

  • Click "Update"

  • Click the small arrow next to "Create Snapshot" to open the sub-menu
  • Click "Delete Workload"

  • Confirm by clicking "Delete Workload" yet again

  • take a full backup at the next Snapshot
    Click the small arrow next to "Create Snapshot" to open the sub-menu
  • Click "Reset Workload"

  • Confirm by clicking "Reset Workload" yet again

  • Snapshot
    Restore
    Snapshots

    wlm-api

  • wlm-scheduler

  • wlm-workloads

    Those can be verified to be up and running using the systemctl status command.

    hashtag
    Check the TrilioVault pacemaker and nginx cluster

    The second item to check for the TrilioVault Appliance's health is the nginx and pacemaker cluster.

    hashtag
    Verify API connectivity from the RHV-Manager

    The RHV-Manager makes all API calls towards the TrilioVault Appliance. Therefore, it is helpful to do a quick API connectivity check using curl.

    The following curl command lists the available workload-types and verifies that the connection is available and working:
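    The response is a JSON document listing each workload type. A minimal sketch for pulling just the type names out of such a response, using an illustrative saved sample (the file name and the trimmed-down content are assumptions, not the full API output):

```shell
# Write a trimmed sample of the workload-types response (illustrative content;
# in practice, redirect the curl output into this file instead).
cat > response.json <<'EOF'
{"workload_types": [{"name": "Parallel", "status": "available"},
                    {"name": "Serial", "status": "available"}]}
EOF

# Print the registered workload-type names; both "Serial" and "Parallel"
# should be present on a healthy installation.
python3 -c 'import json; print(*sorted(w["name"] for w in json.load(open("response.json"))["workload_types"]))'
# → Parallel Serial
```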

    hashtag
    Verify the ovirt-imageio services are up and running

    TrilioVault extends the already existing ovirt-imageio services. The installation of these extensions checks whether the ovirt services come up. Still, it is a good idea to verify again afterwards:

    On the RHV-Manager check the ovirt-imageio-proxy service:

    On the RHV-Host check the ovirt-imageio-daemon service:

    hashtag
    Verify the NFS Volume is correctly mounted

    TrilioVault mounts the NFS Backup Target to the TrilioVault Appliance and RHV-Hosts.

    To verify that these are correctly mounted, it is recommended to do the following checks.

    First, run df -h and look for /var/triliovault-mounts/<hash-value>
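    The <hash-value> directory name is not random: in the df output shown in this guide, it is the base64 encoding of the NFS export string. The expected directory name for a given export can therefore be computed up front:

```shell
# Compute the expected TrilioVault mount directory name for an NFS export.
# "30.30.1.4:/rhv_backup" is the export used in the examples in this guide.
echo -n "30.30.1.4:/rhv_backup" | base64
# → MzAuMzAuMS40Oi9yaHZfYmFja3Vw
```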

    Second, do a read/write/delete test as the user vdsm:kvm (uid = 36 / gid = 36) from the TrilioVault Appliance and the RHV-Host.

    systemctl status wlm-api
    ######
    ● wlm-api.service - Cluster Controlled wlm-api
       Loaded: loaded (/etc/systemd/system/wlm-api.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-api.service.d
               └─50-pacemaker.conf
       Active: active (running) since Wed 2020-04-22 09:17:05 UTC; 1 day 2h ago
     Main PID: 21265 (python)
        Tasks: 1
       CGroup: /system.slice/wlm-api.service
               └─21265 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-scheduler
    ######
    ● wlm-scheduler.service - Cluster Controlled wlm-scheduler
       Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
      Drop-In: /run/systemd/system/wlm-scheduler.service.d
               └─50-pacemaker.conf
       Active: active (running) since Wed 2020-04-22 09:17:17 UTC; 1 day 2h ago
     Main PID: 21512 (python)
        Tasks: 1
       CGroup: /system.slice/wlm-scheduler.service
               └─21512 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
    systemctl status wlm-workloads
    ######
    ● wlm-workloads.service - workloadmanager workloads service
       Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2020-04-22 09:15:43 UTC; 1 day 2h ago
     Main PID: 20079 (python)
        Tasks: 33
       CGroup: /system.slice/wlm-workloads.service
               β”œβ”€20079 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
               β”œβ”€20180 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
               [...]
               β”œβ”€20181 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
               β”œβ”€20233 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
               β”œβ”€20236 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
               └─20237 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
    pcs status
    ######
    Cluster name: triliovault
    
    WARNINGS:
    Corosync and pacemaker node names do not match (IPs used in setup?)
    Stack: corosync
    Current DC: om_tvm (version 1.1.19-8.el7_6.1-c3c624ea3d) -
    partition with quorum
    Last updated: Wed Dec 5 12:25:02 2018
    Last change: Wed Dec 5 09:20:08 2018 by root via cibadmin on om_tvm
    1 node configured
    4 resources configured
    
    Online: [ om_tvm ]
    Full list of resources:
    virtual_ip (ocf::heartbeat:IPaddr2): Started om_tvm
    wlm-api (systemd:wlm-api): Started om_tvm
    wlm-scheduler (systemd:wlm-scheduler): Started om_tvm
    Clone Set: lb_nginx-clone [lb_nginx]
    Started: [ om_tvm ]
    Daemon Status:
    corosync: active/enabled
    pacemaker: active/enabled
    pcsd: active/enabled
    curl -k -XGET https://30.30.1.11:8780/v1/admin/workload_types/detail -H "Content-Type: application/json" -H "X-OvirtAuth-User: admin@internal" -H "X-OvirtAuth-Password: password"
    ######
    {"workload_types": [{"status": "available", "user_id": "admin@internal", "name": "Parallel", "links": [{"href": "https://myapp/v1/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:51.000000", "updated_at": "2020-04-02T15:38:51.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "2ddd528d-c9b4-4d7e-8722-cc395140255a", "description": "Parallel workload that snapshots VM in the specified order"}, {"status": "available", "user_id": "admin@internal", "name": "Serial", "links": [{"href": "https://myapp/v1/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:47.000000", "updated_at": "2020-04-02T15:38:47.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "f82ce76f-17fe-438b-aa37-7a023058e50d", "description": "Serial workload that snapshots VM in the specified order"}]}
    systemctl status ovirt-imageio-proxy
    ######
    ● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
       Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-proxy.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2020-04-08 05:05:25 UTC; 2 weeks 1 days ago
     Main PID: 1834 (python)
       CGroup: /system.slice/ovirt-imageio-proxy.service
               └─1834 bin/python proxy/ovirt-imageio-proxy
    systemctl status ovirt-imageio-daemon
    ######
    ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
       Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2020-04-08 04:40:50 UTC; 2 weeks 1 days ago
     Main PID: 1442 (python)
        Tasks: 4
       CGroup: /system.slice/ovirt-imageio-daemon.service
               └─1442 /opt/ovirt-imageio/bin/python daemon/ovirt-imageio-daemon
    df -h
    ######
    Filesystem                                      Size  Used Avail Use% Mounted on
    devtmpfs                                         63G     0   63G   0% /dev
    tmpfs                                            63G   16K   63G   1% /dev/shm
    tmpfs                                            63G   35M   63G   1% /run
    tmpfs                                            63G     0   63G   0% /sys/fs/cgroup
    /dev/mapper/rhvh-rhvh--4.3.8.1--0.20200126.0+1  7.1T  3.7G  6.8T   1% /
    /dev/sda2                                       976M  198M  712M  22% /boot
    /dev/mapper/rhvh-var                             15G  1.9G   12G  14% /var
    /dev/mapper/rhvh-home                           976M  2.6M  907M   1% /home
    /dev/mapper/rhvh-tmp                            976M  2.6M  907M   1% /tmp
    /dev/mapper/rhvh-var_log                        7.8G  230M  7.2G   4% /var/log
    /dev/mapper/rhvh-var_log_audit                  2.0G   17M  1.8G   1% /var/log/audit
    /dev/mapper/rhvh-var_crash                      9.8G   37M  9.2G   1% /var/crash
    30.30.1.4:/rhv_backup                           2.0T  5.3G  1.9T   1% /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw
    30.30.1.4:/rhv_data                             2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__data
    tmpfs                                            13G     0   13G   0% /run/user/0
    30.30.1.4:/rhv_iso                              2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__iso
    su vdsm
    ######
    [vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ touch foo
    [vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
    total 24
    drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
    -rw-r--r--  1 vdsm kvm    0 Apr 23 12:25 foo
    drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
    drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
    drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
    drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
    drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
    [vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ rm foo
    [vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
    total 24
    drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
    drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
    drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
    drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
    drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
    drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
    [vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$