Trilio for RHV 4.0 SP1 Release Notes

TrilioVault for RHV 4.0 SP1 provides several bug fixes and support for a new platform, oVirt.

New supported platform: oVirt

oVirt is the upstream project of Red Hat Virtualization.

Trilio is happy to announce full support for oVirt starting with version 4.4.

The installation, configuration, and usage of TrilioVault inside oVirt are exactly the same as for RHV.

Upgrade Trilio

To upgrade Trilio, it is necessary to uninstall the old version of Trilio and install the new version.

This is done in a few easy steps.

  1. Uninstall the old ovirt-imageio extensions

  2. Delete or shut down the old Trilio Appliance

  3. Upload the qcow2 image of the new Trilio Appliance

    (It might be necessary to verify the connection of the ovirt-imageio-proxy again)

  4. Spin up the new Trilio Appliance

  5. Configure the Trilio Appliance

  6. Install the new ovirt-imageio extensions

  7. Import the workloads from the Web UI of the Appliance once the Trilio Appliance is configured

During configuration, check the box for workload import. This will automatically load all workloads hosted on the backup target into the freshly configured Trilio Appliance.

Trilio for RHV 4.0 SP2 Release Notes

TrilioVault for RHV 4.0 SP2 provides support for ovirt-imageio 2.1 and several bug fixes. Further, the playbooks that install the TrilioVault components on RHV-Hosts and RHV-Managers have been renamed to more reasonable names.

Important RHV-Manager Logs

TrilioVault data transfer related logs

/var/log/ovirt-imageio-proxy/image-proxy.log

This log provides the used ticket number and RHV-Host for a backup transfer. It is helpful to identify the ticket numbers that are used in RHV to track a specific data transfer. It also shows any connection errors between the RHV-M and the RHV-Host.

Generally helpful RHV-Manager logs

TrilioVault is calling a lot of RHV APIs to read metadata, create RHV Snapshots and restore VMs. These tasks are done by RHV independently from TrilioVault and are logged in RHV logs.

/var/log/ovirt-engine/engine.log

This log is hard to read, but contains all tasks that the RHV-M is doing, including all Snapshot related tasks.

Additional logs that can be helpful during troubleshooting are:

/var/log/ovirt-engine/boot.log
/var/log/ovirt-engine/console.log
/var/log/ovirt-engine/ui.log
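When tracing a specific backup, filtering these logs for snapshot activity is often enough. A generic grep sketch; the log line below is a fabricated sample, since the exact engine.log format varies:

```shell
# On a live RHV-M you would read the real log, for example:
#   tail -f /var/log/ovirt-engine/engine.log | grep -i snapshot
# Here a fabricated sample line stands in for engine.log content.
sample='2021-01-01 10:00:00 INFO ... snapshot created for VM ...'
printf '%s\n' "$sample" | grep -i snapshot
```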

About Trilio for RHV

Trilio for RHV is a native RHV service that provides policy-based, comprehensive backup and recovery for RHV workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. A variety of storage environments can hold these Snapshots, including NFS and soon AWS S3 compatible storage. With TrilioVault and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). TrilioVault enables IT departments to fully deploy RHV solutions and provide business assurance through enhanced data retention, protection, and integrity.

With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes a point-in-time backup of the entire workload, consisting of computing resources, network configurations, and storage data, as one unit. It also takes incremental backups that capture only the changes made since the last backup, saving time and storage space. The summarized benefits of using VAST for backup and restore are:

  1. Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume, and the incremental backups include only the blocks of data changed since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.

  2. Faster and more reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process brings your application from zero to operational with the click of a button.

  3. Reliable and smooth migration of workloads between environments. Trilio captures all the details of your application, and hence our migration includes your entire application stack without leaving anything for guesswork.

  4. Lower Total Cost of Ownership through policy and automation. Our role-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

Email Alerts

TrilioVault for RHV can send emails to a defined list of email addresses for any succeeded or failed backup or recovery process.

Enabling the email alert function

To enable the email alert function the following steps need to be done:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click Add/update email list

  6. Add at least 1 email to the list

  7. Click Save and enable alerts

Updating the email alert receiver list

To update the email alert receiver list the following steps need to be done:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click Add/update email list

  6. Update the email addresses in the list as needed

  7. Click Save and enable alerts

Disabling the email alert function

To disable the email alert function the following steps need to be done:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click the switch to disable email alerts

Trilio 4.0 Support Matrix

RHV 4.3.9 contains a bug which severely impacts TrilioVault, up to the point of being non-functional. A Red Hat hotfix is available from TrilioVault Customer Success. The patch will be officially included in RHV 4.3.10.

Certificates required for TVR

The certificates explained on this page are not the certificates provided when accessing the TrilioVault VM dashboard through HTTPS.

TrilioVault for RHV integrates into the RHV-Manager to provide a seamless experience for RHV Administrators and Users for all their Backup & Recovery needs inside RHV.

For this purpose, TrilioVault extends the RHV-Manager GUI with a new tab "Backup", which contains the sub-tabs Workloads, Admin Panel, and Reporting as shown in figure 1.

The integration of TrilioVault into the RHV-Manager contains the complete GUI. This GUI still requires the data that it displays.

The RHV-Manager gathers the data shown in the GUI from the client side. This means that next to the connection to the RHV-Manager there are also connections to the systems providing the data. For all normal RHV tabs and fields, this is the RHV-Manager itself.

When accessing the TrilioVault tabs, a connection is also established to the TrilioVault VM to gather the data about Workloads, Snapshots, Restores, etc. Figure 2 visualizes this connection.

As can be seen, the TrilioVault VM provides its own certificate to the Client Browser. This connection happens in the background of the browser, which means that untrusted certificates cannot be accepted through the browser upon opening the Backup tab in the RHV-M. The certificate for the GUI comes from the RHV-Manager and has already been accepted at this point. The certificate for the data coming from the TrilioVault VM needs to be accepted separately.

Before installing TrilioVault, it is therefore required to consider which certificates the TrilioVault VM will use and how they will be distributed to the Client Browser.

During configuration, the TrilioVault VM can either generate its own self-signed certificate, or a certificate and a private key can be provided.

When a self-signed certificate is chosen, the generated certificate can be downloaded from the TrilioVault VM dashboard and then added as a trusted certificate to the Client system. Alternatively, it can be accepted through the browser itself by calling the TrilioVault VM API directly.

When a certificate is provided, the private key used with that certificate is also required. This private key will be used to encrypt the communication between the TrilioVault VM and the Client Browser. The provided certificate still needs to be trusted by the Client system.
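As an illustration, a downloaded certificate can be inspected and added to the client's trust store from a shell. A minimal sketch, assuming a RHEL/CentOS client; the hostname tvm.example.com and file names are hypothetical, and a locally generated self-signed certificate stands in for the one downloaded from the TrilioVault VM dashboard:

```shell
# Generate a stand-in self-signed certificate (on a real deployment,
# download the certificate from the TrilioVault VM dashboard instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=tvm.example.com' -keyout tvm.key -out tvm.crt

# Inspect the subject before trusting the certificate
openssl x509 -in tvm.crt -noout -subject

# On a RHEL/CentOS client, trust it system-wide (requires root):
# cp tvm.crt /etc/pki/ca-trust/source/anchors/ && update-ca-trust
```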

Wildcard certificates can be used, but they are not recommended, to ensure that the communication between the TrilioVault VM and the Client Browser stays secure.

| RHV Version | ovirt-imageio version | Storage Domain |
| --- | --- | --- |
| 4.3.X | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI |
| 4.4.0 / 4.4.1 / 4.4.2 | 1.6.X / 2.0.X | NFSv3, iSCSI |

| RHHI-V Version | ovirt-imageio version | Storage Domain |
| --- | --- | --- |
| 1.6 | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI |
| 1.7 | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI |
| 1.8 | 1.6.X / 2.0.X | NFSv3, iSCSI |

Trilio 4.0 SP2 Support Matrix

| RHV Version | ovirt-imageio version | Storage Domain |
| --- | --- | --- |
| 4.3.X | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI |
| 4.4.0 / 4.4.1 / 4.4.2 | 1.6.X / 2.0.X / 2.1.X | NFSv3, iSCSI |

| RHHI-V Version | ovirt-imageio version | Storage Domain |
| --- | --- | --- |
| 1.6 | 1.4.5 / 1.4.8 | NFSv3, iSCSI |
| 1.7 | 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI |
| 1.8 | 1.6.X / 2.0.X / 2.1.X | NFSv3, iSCSI |

RHV 4.3.9 contains a bug which severely impacts Trilio, up to the point of being non-functional. A Red Hat hotfix is available from Trilio Customer Success. The patch will be officially included in RHV 4.3.10.

| oVirt Version | ovirt-imageio version | Storage Domain |
| --- | --- | --- |
| 4.4.0 / 4.4.1 / 4.4.2 | 1.6.X / 2.0.X | NFSv3, iSCSI |

Uninstall Trilio

Uninstalling TrilioVault is done in two easy steps, which leave only the already created backups behind.

Step 1: Uninstall RHV ovirt-imageio extension

To uninstall the ovirt-imageio extension do the following:

  1. Login into the Trilio Appliance CLI

  2. Verify the inventory files are still correct:

/opt/stack/imageio-ansible/inventories/production/daemon
/opt/stack/imageio-ansible/inventories/production/proxy

  3. Run the Ansible playbooks with the clean tags.

For RHV 4.3 run:

cd /opt/stack/imageio-ansible/
ansible-playbook site.yml -i inventories/production/daemon --tags clean-daemon
ansible-playbook site.yml -i inventories/production/proxy --tags clean-proxy

For RHV 4.4 run:

cd /opt/stack/imageio-ansible/
ansible-playbook test.yml -i inventories/production/daemon --tags clean-daemon
ansible-playbook test.yml -i inventories/production/proxy --tags clean-proxy

Step 2: Destroy the Trilio Appliance

This guide assumes you are running the Trilio Appliance in an RHV environment.

To destroy the Trilio Appliance do the following:

  1. Login into the RHV-Manager

  2. Navigate to Compute➡️Virtual Machines

  3. Mark the Trilio Appliance in the list of VMs

  4. Click "Shutdown" or "Power Off"

  5. Wait till the shutdown procedure finishes

  6. Click "Remove" to destroy the Trilio Appliance

General requirements

TrilioVault is a pure software solution and is composed of 4 elements:

  1. TrilioVault Appliance (Virtual Machine)

  2. TrilioVault RHV-M Web-GUI extension

  3. TrilioVault ovirt-imageio-proxy extension

  4. TrilioVault ovirt-imageio-daemon extension

System requirements TrilioVault Appliance

The TrilioVault Appliance is delivered as a qcow2 image, which gets attached to a virtual machine.

Trilio supports only KVM-based hypervisors and recommends using the RHV Cluster as the host for the TrilioVault Appliance.

The recommended size of the VM for the TrilioVault Appliance is:

| Resource | Value |
| --- | --- |
| vCPU | 4 |
| RAM | 24 GB |

The qcow2 image itself defines the 40GB disk size of the VM.

In the rare case of the TrilioVault Appliance database or log files getting larger than 40GB disk, contact or open a ticket with Trilio Customer Success to attach another drive to the TrilioVault Appliance.

System requirements TrilioVault ovirt-imageio extension

TrilioVault extends the ovirt-imageio-proxy service running on the RHV-Manager and the ovirt-imageio-daemon running on the RHV-Hosts.

These extensions do not have any hardware related requirements, but they require specific versions of the ovirt-imageio services.

Please check the Support Matrix for further information.

The installed versions of the ovirt-imageio-proxy and the ovirt-imageio-daemon need to be the same.
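A simple way to verify this is to compare the package versions reported by rpm. A sketch with placeholder version strings (the real values come from the RHV-Manager and each RHV-Host):

```shell
# On the RHV-Manager:  rpm -q --qf '%{VERSION}\n' ovirt-imageio-proxy
# On each RHV-Host:    rpm -q --qf '%{VERSION}\n' ovirt-imageio-daemon
proxy_ver="2.0.6"   # placeholder value
daemon_ver="2.0.6"  # placeholder value

if [ "$proxy_ver" = "$daemon_ver" ]; then
    echo "versions match"
else
    echo "version mismatch: proxy=$proxy_ver daemon=$daemon_ver" >&2
    exit 1
fi
```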

Important RHV-Host Logs

TrilioVault data transfer related logs

/var/log/ovirt_celery/worker<x>.log

The worker logs contain the status of the disk transfer from the RHV Host to the backup target. They are useful if the data transfer process gets stuck or errors out in between.

/var/log/ovirt-imageio-daemon/daemon.log

The daemon.log contains all information about the actual connection between the RHV Host and the backup target. It is useful to identify potential connection issues between the RHV Host and the backup target.

Figure 1: TrilioVault integration into RHV-M menu
Figure 2: Connection between Client Browser and RHV-Manager

Trilio for RHV Architecture

Trilio is built on the same architectural principles as modern analytical platforms such as Hadoop and other big data platforms. These platforms offer infinite scale without compromising on performance. Trilio's other attributes include being agentless, natively integrated with the RHV GUI, horizontally scalable, nondisruptive, and using an open universal backup schema.

Agentless

Trilio offers image-level backup, which backs up a given virtual machine's physical disks as one file. Irrespective of the complexity of a VM or the applications running inside the VM, the user does not require any custom code (called agents) inside the VM for TrilioVault to take VM backups. Agentless solutions are highly desirable because any solution that requires custom code to run in the VM creates an operational nightmare.

Self-Service/UI Integrated

Trilio comes with a GUI plugin for RHV Manager, which provides seamless integration of Trilio functionality adjacent to virtual machine management. The Trilio service authenticates users with OpenID tokens, so any user who is logged in to RHV can use Trilio functionality without any out-of-band user management.

Scalable, Linear, Infinite Scale

Most backup solutions are built on client/server architectures, and hence, they invariably create performance and scale bottlenecks when the RHV cluster grows. Traditional backup solutions require constant tweaking to keep up with RHV cluster growth. Trilio is built on the same architectural principles as the RHV platform; hence, it grows with the RHV cluster without introducing scale and performance bottlenecks.

Nondisruptive

Deploying Trilio is nondisruptive to the RHV cluster or the virtual machines. Similarly, uninstalling Trilio is nondisruptive as well.

Open Universal Backup Schema

In the current world of multi-cloud environments, the backup images must be platform and vendor-neutral, so they are easily portable between clouds. Trilio saves backup images as QCOW2 images. QCOW2 is a standard format in KVM/RHV environments for virtual disks, and Linux comes with numerous tools to create and manage them. A backup image stored in QCOW2 format gives the user enormous flexibility on how these are leveraged for various use cases, including restoring backup images without TVM.

QCOW2 images also come with two important attributes that make them ideal for storing backup images.

  1. QCOW2s are sparse friendly. As a regular practice, users overprovision virtual disks to VMs. These virtual disks may be thick or thin-provisioned, but at any given time, applications only use a fraction of the virtual disk capacity. When taking image-level backups of virtual disks, backup solutions should only save the blocks that are allocated, not the blocks that are unallocated or unused. For example, your virtual disk may be 1TB in capacity, but the applications have utilized only 10GB of disk space. Since QCOW2 images are sparse friendly, Trilio only stores the data; in this example, the size of the QCOW2 image is 10GB.

  2. KVM/RHV supports virtual disk snapshots through a construct called overlay files. When KVM/RHV creates a new disk snapshot, it creates a new qcow2 file, called an overlay file, which is overlaid on the original qcow2 file. Any new writes are applied to the overlay file, and any reads of old data are served from the older qcow2 file. Trilio leverages the same mechanism to store incremental backups. Trilio incremental backups are overlay files that include the data modified between the current backup process and the last good backup. Since the TrilioVault backup image structure on the backup media reflects what KVM/RHV natively represents, our process of creating backups and restores is highly efficient in terms of backup storage used and network bandwidth utilization.
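The sparse-friendly behavior described in point 1 can be illustrated with any sparse file; the following sketch uses a raw sparse file as a stand-in for a QCOW2 backup image:

```shell
# A file with 1 GB virtual size that occupies (almost) no disk space
# until data is written; QCOW2 backup images behave the same way.
truncate -s 1G sparse.img
apparent=$(du --apparent-size -k sparse.img | cut -f1)
actual=$(du -k sparse.img | cut -f1)
echo "apparent=${apparent}K actual=${actual}K"
```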

Trilio architecture reflects these principles.

Trilio for RHV Architecture overview

As you can see from the architecture diagram above, Trilio does not require any media servers. Traditionally, media servers perform numerous bookkeeping operations on backup images, including pruning older backups, synthesizing full backups from existing backups, cataloging backup images, and other operations. These are usually data-intensive operations, and as your RHV cluster grows, media servers need to scale in capacity to keep up with RHV growth. Scaling media servers may involve trial-and-error approaches and is very difficult to calibrate correctly. TrilioVault employs data movers that are deployed on each RHV host and can horizontally scale with RHV; hence, there is no tuning to do when the RHV cluster grows. Instead of centralizing media server functionality into one appliance, all bookkeeping operations are performed within the data mover in the context of the current backup job. TrilioVault enhances the operational efficiency of backups and recoveries, and by not tying users to hardware licenses, it also significantly improves the ROI and TCO of your investments.

Trilio Appliance

The Trilio Appliance is the controller of Trilio, called TVM.

The TVM is running and managing all backup and recovery jobs.

During a backup job, the TVM is:

  • Gathering the Metadata information generated by the VMs that are being protected

  • Writing the Metadata information onto the Backup Target

  • Generating the RHV Snapshot

  • Sending the data copy commands to the ovirt-imageio services

The TVM is available as qcow2 image and runs as VM on top of a KVM Hypervisor.

It is supported and recommended to run the TVM in the same RHV environment as a VM that the TVM protects.

RHV GUI integration

Trilio is natively integrated into the available RHV GUI, providing a new tab "Backup".

All functionalities of Trilio are accessible through the RHV GUI.

The RHV-Manager GUI integration is installed using Ansible playbooks together with the ovirt-imageio-proxy extension.

Ovirt-imageio extensions

Ovirt-imageio is an RHV-internal Python service that allows uploading and downloading disks into and out of RHV.

The default ovirt-imageio services only allow moving disks through the RHV-M via HTTPS.

Trilio extends the ovirt-imageio functionality to move the disk data through NFS over the RHV Hosts themselves.

The ovirt-imageio extensions are installed using Ansible playbooks.

Backup Target

Trilio writes all backups over the network to a provided Backup Target using the NFS protocol.

Any system providing the NFSv3 protocol is usable.
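For example, a hypothetical NFSv3 export entry in /etc/exports on the backup target could look like this (the path and options are illustrative, not Trilio requirements):

```
/mnt/tvault-backups *(rw,sync,no_root_squash)
```

After editing /etc/exports, run exportfs -ra on the NFS server to apply the change.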

Reset the Trilio GUI password

In case the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.

To reset the password to its default do the following:

[root@tvm ~]# source /home/rhv/myansible/bin/activate
(myansible) [root@tvm ~]# cd /opt/stack/workloadmgr/workloadmgr/tvault-config
(myansible) [root@tvm tvault-config]# python recreate_conf.py
(myansible) [root@tvm tvault-config]# systemctl restart tvault-config

The dashboard login will be reset to:

Username: admin
Password: password

Spinning up the Trilio VM

The Trilio Appliance is delivered as qcow2 image and runs as VM on top of a KVM Hypervisor.

The Trilio VM qcow2 image must be an available disk on the RHV Storage before the creation of the Trilio Appliance is possible.

This guide shows the tested way to spin up the Trilio Appliance on a RHV Cluster. Please contact a RHV Administrator and Trilio Customer Success Agent in case of incompatibility with company standards.

Creating the Trilio VM

The creation of the Trilio VM works like that of any other Virtual Machine inside RHV.

To create a new Virtual Machine, go to Compute ➡️ Virtual Machines.

The button "New" opens the window to define the VM.

The following instructions show the tested configuration for the Trilio Appliance.

After configuration, use the OK button to create the Trilio Appliance.

It is required to activate the Advanced Options.

General Tab

Fill out the following details as necessary on the General tab:

  • Cluster - Choose the RHV Cluster to host the Trilio VM

  • Template - Blank

  • Operating System - The Trilio VM runs CentOS 7. Red Hat Enterprise 7.x x64 is a valid option.

  • Instance Type - Custom

  • Optimized for - Server

  • Name - Provide a RHV internal name for the Trilio VM

  • Description - Provide a RHV internal description for the Trilio VM (optional)

  • Comment - Provide a RHV internal comment for the Trilio VM (Optional)

  • Activate Delete Protection

  • NICs - Choose the network the Trilio VM connects with. The plus and minus symbols add/delete NICs as necessary.

Before moving to the next tab, attach the Trilio qcow2 image to the VM definition.

  • Click Attach under Instance Images.

  • Choose the Trilio qcow2 image

  • Check the box for OS

Without checking the box for OS, the Trilio Appliance will not boot, as the RHV VM does not utilize the disk as the boot disk.

System Tab

Under the System tab set the following:

  • Memory size - 24576 MB / 24 GB

  • Maximum memory - 24576 MB / 24 GB (RHV automatically first sets four times the Memory size)

  • Physical Memory Guaranteed - 24576 MB / 24 GB (RHV automatically first sets the same value as Memory size)

It is possible to set the initial Memory size to 8GB. RHV automatically sets the Maximum Memory to 4 times the Memory size value. The actual Memory size can be adjusted later as needed.

Note: Do not set the Physical Memory Guaranteed below 8GB.

  • Total virtual CPUs - 4

  • Nothing to set at the Advanced Parameters

  • Leave Hardware Clock Timer Offset at 0

  • Leave custom serial policy unchecked

Initial Run

Under the Initial Run tab set the following:

  • Check "Use Cloud-Init/Sysprep"

  • (Optional) Set VM Hostname

  • (Optional) Check and set "Configure Time Zone"

  • Open Authentication

    • Set User Name

    • Set Password or SSH authentication

  • Open Networks

    • Set "Cloud-Init Network Protocol" to "Openstack Metadata"

    • (optional) Set DNS Servers and DNS Search Domains

    • Check In-guest Network Interface Name

    • Set IPv4 configuration as necessary

Further Tabs

There are no TrilioVault specific configurations necessary in any further tab.

Starting the TrilioVault Appliance

After the creation of the TrilioVault Appliance, the VM will automatically be in a shutdown state.

Go to the overview of VMs in the RHV Manager (Compute ➡️ Virtual Machines), identify the TrilioVault Appliance VM in the list, mark it, and click the Run button to start it.

Cloud-init will be disabled after the first boot.

File Search

Definition

The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

Navigating to the file search tab

The file search tab is part of every workload overview. To reach it follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload a file search shall be done in

  4. Click the workload name to enter the Workload overview

  5. Click File Search to enter the file search tab

Configuring and starting a file search

A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

To run a file search the following elements need to be decided and configured

Choose the VM the file search shall run against

Under VM Name/ID, choose the VM that the search is done upon. The drop-down menu provides a list of all VMs that are part of any Snapshot in the Workload.

VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

Set the File Path

The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

The File Path has to start with a '/'

Windows partitions are fully supported. Each partition is its own volume with its own root. Use '/Windows' instead of 'C:\Windows'

The file search does not descend into deeper directories and always searches only the directory provided in the File Path.

Example File Path for all files inside /etc : /etc/*

Define the Snapshots to search in

Filter Snapshots by is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

There are 3 possibilities for a pre-filtering:

  1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

  2. Last Snapshots - Choose between last 10, 25, 50, or custom Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

  3. Date Range - Set a start and end date and click apply to get the list of all available Snapshots for the chosen VM within the set dates.

After the pre-filtering is done choose the Snapshots that shall be searched by clicking their checkbox or by clicking the global checkbox.

When no Snapshot is chosen the file search will not start.

Start the File Search and retrieve the results

To start a File Search the following elements need to be set:

  • A VM to search in has to be chosen

  • A valid File Path provided

  • At least one Snapshot to search in selected

Once those have been set click "Search" to start the file search.

Do not navigate to any other RHV tab or website after starting the File Search. The results will be lost, and the search has to be repeated to regain them.

After a short time the results will be presented in a tabular format, grouped by Snapshots and the Volumes inside each Snapshot.

For each found file or folder the following information is provided:

  • POSIX permissions

  • Number of links pointing to the file or folder

  • User ID who owns the file or folder

  • Group ID assigned to the file or folder

  • Actual size in Bytes of the file or folder

  • Time of creation

  • Time of last modification

  • Time of last access

  • Full path to the found file or folder

Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option.

Preparing for Application Consistent backups

TrilioVault for RHV provides the capability to take application-consistent backups by utilizing the Qemu-Guest-Agent.

The Qemu-Guest-Agent

The Qemu-Guest-Agent is a component of the qemu hypervisor, which is used by RHV. RHV automatically builds all VMs to be prepared to use the Qemu-Guest-Agent.

The Qemu-Guest-Agent provides many capabilities, including the possibility to freeze and thaw Virtual Machines Filesystems.

The Qemu-Guest-Agent is not developed or maintained by Trilio. Trilio leverages standard capabilities of the Qemu-Guest-Agent to send freeze and thaw commands to the protected VMs during a backup process.

Installing the Qemu-Guest-Agent

The Qemu-Guest-Agent needs to be installed inside the VM.

The Qemu-Guest-Agent requires a special SCSI interface in the VM definition. This interface is automatically created by RHV upon spinning up the Virtual Machine.

The installation process depends on the Guest Operating System.

RPM-based Guests

yum install qemu-guest-agent
systemctl start qemu-guest-agent

Deb-based Guests

apt-get install qemu-guest-agent
systemctl start qemu-guest-agent

Windows Guests

Windows Guests require the installation of the VirtIO drivers and tools. These are provided by Red Hat in a prepared ISO-file. For RHV 4.3 please follow this documentation: RHV 4.3 Windows Guest Agents For RHV 4.4 please follow this documentation: RHV 4.4 Windows Guest Agents

Using the fsfreeze-hook.sh script

The Qemu-Guest-Agent calls the fsfreeze-hook.sh script with either the freeze or the thaw argument, depending on the current operation.

The fsfreeze-hook.sh script is a normal shell script. It is typically used to do all necessary steps to get an application into a consistent state for the freeze or to undo all freeze operations upon the thaw.

Location of the fsfreeze-hook.sh script

The default path of the fsfreeze-hook.sh script is:

/etc/qemu/fsfreeze-hook

Content of the fsfreeze-hook.sh script

The fsfreeze-hook.sh script does not require any specific content.

It is recommended to provide a case distinction for the freeze and thaw arguments. This can be achieved, for example, with the following bash code:

#!/bin/bash

case "$1" in
        freeze)
            #Commands for freeze
            ;;
         
        thaw)
            #Commands for thaw
            ;;
         
        *)
            echo $"Neither freeze nor thaw provided"
            exit 1
            ;;
 
esac

Example fsfreeze-hook.sh for MYSQL

This example flushes the MySQL tables to the disks and keeps a read lock to prevent further write access until the thaw has been done.

#!/bin/sh

MYSQL="/usr/bin/mysql"
MYSQL_OPTS="-uroot" #"-prootpassword"
FIFO=/var/run/mysql-flush.fifo
# Check mysql is installed and the server running
[ -x "$MYSQL" ] && "$MYSQL" $MYSQL_OPTS < /dev/null || exit 0
flush_and_wait() {
    printf "FLUSH TABLES WITH READ LOCK \\G\n"
    trap 'printf "$(date): $0 is killed\n">&2' HUP INT QUIT ALRM TERM
    read < $FIFO
    printf "UNLOCK TABLES \\G\n"
    rm -f $FIFO
}
case "$1" in
    freeze)
        mkfifo $FIFO || exit 1
        flush_and_wait | "$MYSQL" $MYSQL_OPTS &
        # wait until every block is flushed
        while [ "$(echo 'SHOW STATUS LIKE "Key_blocks_not_flushed"' |\
                 "$MYSQL" $MYSQL_OPTS | tail -1 | cut -f 2)" -gt 0 ]; do
            sleep 1
        done
        # for InnoDB, wait until every log is flushed
        INNODB_STATUS=$(mktemp /tmp/mysql-flush.XXXXXX)
        [ $? -ne 0 ] && exit 2
        trap "rm -f $INNODB_STATUS; exit 1" HUP INT QUIT ALRM TERM
        while :; do
            printf "SHOW ENGINE INNODB STATUS \\G" |\
                "$MYSQL" $MYSQL_OPTS > $INNODB_STATUS
            LOG_CURRENT=$(grep 'Log sequence number' $INNODB_STATUS |\
                          tr -s ' ' | cut -d' ' -f4)
            LOG_FLUSHED=$(grep 'Log flushed up to' $INNODB_STATUS |\
                          tr -s ' ' | cut -d' ' -f5)
            [ "$LOG_CURRENT" = "$LOG_FLUSHED" ] && break
            sleep 1
        done
        rm -f $INNODB_STATUS
        ;;
    thaw)
        [ ! -p $FIFO ] && exit 1
        echo > $FIFO
        ;;
    *)
        echo $"Neither freeze nor thaw provided"
        exit 1
        ;;
esac

Admin Panel

The Admin Panel gives Administrators a centralized overview of:

  • Snapshots

  • Storage usage

  • protected vs unprotected VMs

In addition, it contains the functionality to:

  • Enable/Disable Global Job Scheduler

  • Enable/Disable email alerts

  • Update email alert email list

Accessing the Admin Panel

To access the Admin Panel the following steps need to be followed:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

Admin Panel Overview tab

The Overview tab of the Admin Panel shows the following information:

  • Graphical overview of how many Snapshots are available, errored or canceled

  • Percentage of Snapshots that are Full Snapshots

  • Graphical overview of used versus free Storage capacity

  • Percentage of how much Storage is available

  • Graphical overview of how many VMs are protected or unprotected

  • Percentage of how many VMs are protected

Admin Panel Enable/Disable Global Job Scheduler

To enable or disable the Global Job Scheduler follow these steps:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to the Settings Tab

  5. Use the switch to enable or disable the Global Job Scheduler

Trilio 4.0 SP1 Support Matrix

RHV Version | ovirt-imageio version | Storage Domain
4.3.X | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI
4.4.0 / 4.4.1 / 4.4.2 | 1.6.X / 2.0.X | NFSv3, iSCSI

RHHI-V Version | ovirt-imageio version | Storage Domain
1.6 | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI
1.7 | 1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3 | NFSv3, iSCSI
1.8 | 1.6.X / 2.0.X | NFSv3, iSCSI

RHV 4.3.9 contains a bug which severely impacts TrilioVault, up to the point of it being non-functional. A Red Hat Hotfix is available from TrilioVault Customer Success. The patch will be included officially in RHV 4.3.10.

OVirt Version | ovirt-imageio version | Storage Domain
4.4.0 / 4.4.1 / 4.4.2 | 1.6.X / 2.0.X | NFSv3, iSCSI

Snapshot mount

TrilioVault allows mounting a Snapshot directly through the RHV-Manager.

This feature provides the capability to download any file from any Snapshot through the RHV-Manager independent of size.

It is not possible to download complete directories

Mounting a Snapshot

To mount a Snapshot follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the Snapshot Name

  8. Click File Manager

  9. Click the VM to be mounted (this might take a minute)

Only one VM can be mounted at a time for the complete RHV environment.

Navigating the mounted Snapshot

A mounted Snapshot can be navigated like in any file browser by clicking on files and folders.

Clicking on a directory will open that directory.

Clicking on a file will provide an overview about the metadata of this file including:

  • Name

  • Size

  • Last Modified

  • Last accessed

  • Owner

  • Owner Group

  • Permissions

It is further possible to get a preview of the file directly from this overview or to download the file through the RHV-Manager.

Installation of ovirt-imageio extensions

TrilioVault extends the ovirt-imageio services running on the RHV-Manager and the RHV hosts, to provide the parallel download of disks from multiple RHV hosts.

The imageio extensions are installed automatically using Ansible playbooks provided on the TrilioVault Appliance.

Every time the RHV environment is updated or a new RHV host is added to the RHV Cluster, the installation of the ovirt-imageio extensions needs to be rerun.

Preparing the inventory files

Ansible playbooks work with inventory files. These inventory files contain the list of RHV-Hosts and RHV-Managers and how to access them.

To edit the inventory files, open the following files for the server type to add.

For the RHV hosts open: /opt/stack/imageio-ansible/inventories/production/daemon
For the RHV Manager open: /opt/stack/imageio-ansible/inventories/production/proxy

Using password authentication

The first supported method to allow Ansible to access the RHV hosts and the RHV Manager is the classic user password authentication.

To use password authentication edit the files using the following format:

<Server_IP> ansible_user=root password=xxxxx

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.
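A daemon inventory file with two hosts might then look like this (the IP addresses are placeholders, and the password placeholder is kept as in the format above):

```shell
# /opt/stack/imageio-ansible/inventories/production/daemon (example entries)
192.168.10.11 ansible_user=root password=xxxxx
192.168.10.12 ansible_user=root password=xxxxx
```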

Using passwordless authentication (SSH keys)

The second supported method to allow Ansible to access the RHV hosts and the RHV Manager is utilizing SSH keys to provide passwordless authentication.

For this method, it is necessary to prepare the TrilioVault Appliance and the RHV Cluster Nodes as well as the RHV Manager.

The recommended method from Trilio is:

  1. Use ssh-keygen to generate a key pair

  2. Add the private key to /root/.ssh/ on the TrilioVault Appliance

  3. Add the public key to /root/.ssh/authorized_keys file on each RHV host and the RHV Manager
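The key-generation step above can be sketched as follows. This is a hedged example: the key pair is generated into a temporary directory for illustration, whereas on the appliance it would live under /root/.ssh/, and rhv-host1 / rhv-manager are placeholder host names:

```shell
# 1. Generate a passwordless RSA key pair (temporary directory for illustration;
#    on the TrilioVault Appliance the key pair belongs under /root/.ssh/).
KEYDIR=$(mktemp -d)
ssh-keygen -q -t rsa -f "$KEYDIR/id_rsa" -N ""

# 2./3. Distribute the public key, e.g. with ssh-copy-id (placeholder hosts):
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" root@rhv-host1
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" root@rhv-manager

ls "$KEYDIR"   # id_rsa  id_rsa.pub
```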

Once the TrilioVault Appliance can access the nodes without password, edit the inventory files using the following format:

<Server_IP> ansible_user=root

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

Starting the installation

To install the ovirt-imageio extensions go to:

cd /opt/stack/imageio-ansible

Depending on the method of authentication prepared in the inventory files, different commands need to be used to start the Ansible playbooks.

Using password authentication

To call the Ansible playbooks when the inventory files use password authentication, run the commands below.

On TrilioVault for RHV 4.0 and 4.0 SP1

For RHV 4.3 Hosts: ansible-playbook site.yml -i inventories/production/daemon --tags daemon
For RHV 4.3 Manager: ansible-playbook site.yml -i inventories/production/proxy --tags proxy

For RHV 4.4 Hosts: ansible-playbook test.yml -i inventories/production/daemon --tags daemon
For RHV 4.4 Manager: ansible-playbook test.yml -i inventories/production/proxy --tags proxy

On TrilioVault for RHV 4.0 SP2

For RHV 4.3 Hosts: ansible-playbook rhv_4_3.yml -i inventories/production/daemon --tags daemon
For RHV 4.3 Manager: ansible-playbook rhv_4_3.yml -i inventories/production/proxy --tags proxy

For RHV 4.4 Hosts: ansible-playbook rhv_4_4.yml -i inventories/production/daemon --tags daemon
For RHV 4.4 Manager: ansible-playbook rhv_4_4.yml -i inventories/production/proxy --tags proxy

Using passwordless authentication (SSH keys)

To call the Ansible playbooks when the inventory files use passwordless authentication, run the commands below.

On TrilioVault for RHV 4.0 and 4.0 SP1

For RHV 4.3 Hosts: ansible-playbook site.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon
For RHV 4.3 Manager: ansible-playbook site.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

For RHV 4.4 Hosts: ansible-playbook test.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon
For RHV 4.4 Manager: ansible-playbook test.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

On TrilioVault for RHV 4.0 SP2

For RHV 4.3 Hosts: ansible-playbook rhv_4_3.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon
For RHV 4.3 Manager: ansible-playbook rhv_4_3.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

For RHV 4.4 Hosts: ansible-playbook rhv_4_4.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon
For RHV 4.4 Manager: ansible-playbook rhv_4_4.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

Ansible shows the output of the running playbook. Do not intervene until the playbook has finished.

Trilio for RHV 4.0 Release Notes

Trilio 4.0 is the fourth release of Trilio for Red Hat Virtualization.

It is aimed to provide Backup and Recovery for Red Hat Virtualization 4.3.x & 4.4.x. The full requirements can be found in the Support Matrix.

Trilio for RHV 4.0 Features on a Glance

New Feature: Admin Panel

Administrators require a quick overview to see the status of their Backup and Recovery solution. They also want to quickly do global tasks like activating/deactivating the global job scheduler or configuring the email alerts.

TrilioVault for RHV 4.0 introduces the Admin Panel to provide Administrators the overview and control they require.

New Feature: Alerting

With hundreds of backup and recovery jobs, it is not always easy to keep track of every single one of them. Important questions, such as the success or failure of backups, are not easily answered.

TrilioVault for RHV 4.0 is introducing E-Mail alerts to help to keep track of all workloads and whether their backups and restores failed or succeeded.

New Feature: Reporting

It is often required to show the usage or health of the backup and recovery solution, including the possibility to look into the past to identify trends.

TrilioVault for RHV 4.0 introduces reporting capabilities to help Administrators report on the usage and health of TrilioVault for RHV.

Known limitations

Large Size backups failing with error - "Could not initialize session: Unable to verify proxy ticket"

RHV 4.3.X only!

RHV has a limitation of 10 hours for a disk's transfer session, which cannot be altered. RHV setups on slower networks will face this issue.

Ansible imageio daemon/proxy script execution interruption can leave imageio unstable

If the Ansible playbooks installing the ovirt-imageio extensions are interrupted, the ovirt-imageio services can become unstable and no longer usable.

Workaround: Uninstall the ovirt-imageio extension and install them again.

RHV Snapshot changing preallocated disktypes to thinprovisioned

RHV Snapshots are always thin-provisioned qcow2 images. These qcow2 images become the active images for the running VM and use the qcow2 backing-file capability to point to the original image. That is why the disktype changes from preallocated to thinprovisioned upon snapshotting.

Manual deletion or cancelling of any Snapshot from TVM promotes next snapshot to be Full

Manual deletion of Snapshots is only expected to happen when an error occurred that was not identified by TrilioVault during the backup process. To prevent possible data loss, the next Snapshot is upgraded to a Full Snapshot.

Workloads are getting created for unprotected hosts

The list of available VMs is taken from the RHV-Manager and every VM not part of a Snapshot can be included in a workload. It is not checked if the Virtual Machines are running on RHV-Hosts that do not have the ovirt-imageio-daemon extension installed.

TrilioVault requires all RHV-Hosts to have the ovirt-imageio-daemon extension to be installed.

File search is case sensitive

TrilioVault provides the file search through libguestfs and is therefore bound by libguestfs limitations.

Scheduled Snapshots for VM in power off state and having iSCSI disks attached will fail

Known issues

VMs being restored from RHV 4.2 to 4.3 or 4.4 might need changing the custom compatibility version

TrilioVault stores and restores the VM with all its configurations, including the compatibility version tied to a RHV release. The VM might therefore fail to power up after the restore. The restore itself succeeds.

Workaround: Edit the VM configuration and update the compatibility version.

Import Workloads gives Gateway Timeout error on UI

Importing a large number of Workloads with many Snapshots leads to the TrilioVault Dashboard refresh timing out before the import has finished. The import continues in the background as desired.

Recommendation: Verify from the RHV-M or via TrilioVault API the amount of available workloads.

Create Workload window does not disappear while creating large (>150 VMs) workloads

While the workload is being created on the TrilioVault Appliance, the RHV-M integration waits for the workload-creation-successful signal to close the create workload window. The workload creation continues as desired.

TVM Reconfiguration requires manual restart of services

During reconfiguration it is possible that the configurator fails during nginx service restart or wlm-api restart.

Workaround: Restart nginx and wlm-api services manually before retrying the configuration.

Snapshots fail with 500 error "Server failed to performed the request" after server restart

After restarting the RHV-Hosts it is possible that the communication between Redis and ovirt_celery is broken.

Rerunning the Ansible-Playbooks for all RHV-Hosts fixes this issue.

Restarting the TrilioVault during a running Snapshot leads to Snapshot getting stuck in execution state

Restarting the TrilioVault VM will stop any ongoing backup or restore processes on the TrilioVault Appliance. This can lead to the Snapshot status not being updated in the TrilioVault database, leaving Snapshots stuck in the execution state without any task connected to them.

Subsequent Snapshots are not affected. Please contact Customer Success for help in moving stuck Snapshots into the error state.

Backup | Recovery | Additional functions
Image based VMs (iSCSI and NFS) | OneClick Restore | File Search
Template based VMs (iSCSI) | Selective Restore | File Recovery
Scheduled based Backup | InPlace Restore | Workload import
OnDemand Backup | | Workload reset
RestFul API | | Reporting
 | | Alerting


Reporting

TrilioVault for RHV provides the capabilities to generate reports, to gain insights into the usage of TrilioVault over time and the stability of the service.

Reports are generated automatically every 24h at midnight.

The timezone used is the timezone the TrilioVault Appliance is configured in. Differences between the TrilioVault Appliance timezone and the browser timezone might lead to data not being shown on the correct day.

Accessing the Report page

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Reporting

Report Page Configuration

The Configuration Report page provides an overview of the TrilioVault installation as a whole.

The following elements are listed for the TrilioVault Appliance:

  • The version number of the installed TrilioVault Appliance

  • API status of the TrilioVault Appliance UP/DOWN

  • Scheduler status of the TrilioVault Appliance UP/DOWN

  • Workloadmgr status of the TrilioVault Appliance UP/DOWN

  • Configured Timezone of the TrilioVault Appliance

The following information is shown for the RHV environment:

  • The Version number of the RHV environment

  • List of all RHV Hosts

  • Status of the RHV Hosts being configured for TrilioVault

  • Address of the RHV Hosts

Report Page Data Protection

The Data Protection page provides an overview of the general protection status of the RHV environment.

The following elements are shown:

  • Graphical overview of protected versus unprotected VMs

  • List of all Unprotected VMs by name

Report Page Workloads

The Workloads page provides an overview of all existing TrilioVault workloads.

The following information is shown per workload:

  • Workload name and creation time

  • When the next backup will be run

  • RPO in hours

  • Retention policy

  • Number of available (successful) Snapshots

  • Date and time of the oldest Snapshot

  • Which VMs are protected by the Workload

Report Page Backups

The Backups page provides an overview of the backups taken over time.

The following information is shown:

  • Graphical overview of successful vs errored snapshots per time bucket over the given period of time

  • Graphical overview of successful vs errored snapshots total over the given period of time

  • Graphical overview of full vs incremental vs mixed snapshots total over the given period of time

Report Page Storage

The Storage page provides an overview over time of storage usage, data transferred, and averages.

The following information is shown:

  • Graphical overview for Storage usage at the end of the given period of time

  • Graphical overview of Data transferred per day for the given period of time

  • The average data for any workloads:

    • Average Data moved in a backup

    • Average Time needed to take the backup

    • Average transfer speed to the backup target

Report Page Restores

The Restores page provides an overview of all restores over the given period of time.

The following information is shown:

  • Amount of succeeded and failed restores per day

  • Types of restores used

Download Report

The TrilioVault report can be downloaded or printed directly from the Reporting page.

To download or print a report the following steps need to be done:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Reporting

  4. Configure the time frame the report is to be created for

  5. Click Download Report to see a preview of the report

  6. Click Download Report to open your browsers printing page

    1. Use Save as PDF to save the report as PDF

    2. Use any available printer to print the report directly

Snapshots

Definition

A Snapshot is a single TrilioVault backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.

Creating a Snapshot

Snapshots are automatically created by the TrilioVault scheduler. If necessary, or when the scheduler is deactivated, it is possible to create a Snapshot on demand.

There are 2 possibilities to create a snapshot on demand.

Possibility 1: From the Backup overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that shall create a Snapshot

  4. Click "Create Snapshot"

  5. Provide a name and description for the Snapshot

  6. Decide between Full and Incremental Snapshot

  7. Click "Create"

Possibility 2: From the Workload Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that shall create a Snapshot

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Click "Create Snapshot"

  7. Provide a name and description for the Snapshot

  8. Decide between Full and Incremental Snapshot

  9. Click "Create"

Snapshot overview

Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

To reach the Snapshot Overview follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the Snapshot Name

Details Tab

The Snapshot Details Tab shows the most important information about the Snapshot.

  • Snapshot Name / Description

  • Snapshot Type

  • Time Taken

  • Size

  • Which VMs are part of the Snapshot

  • for each VM in the Snapshot

    • Instance Info - Name & Status

    • Instance Type - vCPUs, Disk & RAM

    • Attached Networks

    • Attached Volumes

    • Misc - Original ID of the VM

Restores Tab

The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

Please refer to the Restores User Guide to learn more about Restores.

Misc. Tab

The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

  • Creation Time

  • Last Update time

  • Snapshot ID

  • Workload ID of the Workload containing the Snapshot

Delete Snapshots

Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

The retention policy will automatically delete the oldest Snapshots according to the configured policy.

You have to delete all Snapshots to be able to delete a Workload.

Deleting a TrilioVault Snapshot will not delete any RHV Snapshots. Those need to be deleted separately if desired.

There are 2 possibilities to delete a Snapshot.

Possibility 1: Single Snapshot deletion through the submenu

To delete a single Snapshot through the submenu follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  8. Click "Delete Snapshot"

  9. Confirm by clicking "Delete"

Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

To delete one or more Snapshots through the Snapshot overview do the following:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshots in the Snapshot list

  7. Check the checkbox for each Snapshot that shall be deleted

  8. Click "Delete Snapshots"

  9. Confirm by clicking "Delete"

Important TVM Logs

TrilioVault Appliance logs used during configuration

The following logs contain all information gathered during the configuration of the TrilioVault Appliance

/var/log/workloadmgr/tvault-config.log

This log contains all information of pre-checks done, when filling out the configurator form.

/var/log/workloadmgr/ansible-playbook.log

This log contains the complete Ansible output from the playbooks that run when the configurator is started.

With each configuration attempt a new ansible-playbook.log gets created. Old ansible-playbook.logs are renamed according to their creation time.

TrilioVault Appliance logs during any task after configuration

/var/log/workloadmgr/workloadmgr-api.log

This log tracks all API requests that have been received on the wlm-api service.

This log is helpful to verify that the TrilioVault VM is reachable from the RHV-M and authentication is working as expected.

/var/log/workloadmgr/workloadmgr-scheduler.log

This log tracks all jobs the wlm-scheduler is receiving from the wlm-api and sends them to the chosen wlm-workloads service.

This log is helpful, when the wlm-api doesn't throw any error, but no task like backup or restore is getting started.

/var/log/workloadmgr/workloadmgr-workloads.log

This log contains the complete output from the wlm-workloads service, which is controlling the actual backup and restore tasks.

This log is helpful to identify any errors that are happening on the TVM itself including RESTful api responses from the RHV-M.
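When hunting for a failure across these services, it can help to scan all three workloadmgr logs at once. A hedged example, run on the TrilioVault Appliance with the log paths documented above:

```shell
# List the 20 most recent lines mentioning errors across the workloadmgr logs.
# Missing log files are silently skipped.
grep -ih error /var/log/workloadmgr/workloadmgr-*.log 2>/dev/null | tail -n 20
```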

Workloads

Important note for VMs using iSCSI disks: RHV only creates the connection between a VM and its iSCSI disk while the VM is running, as this connection is achieved through a symlink on the RHV Host. This behavior means RHV can only take RHV Snapshots of the VM while the VM is running. Consequently, TrilioVault is only able to take backups while the VM is in a running state.

Definition

A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.

Create a Workload

To create a workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Click "Create Workload"

  4. Provide Workload Name and Workload Description on the first tab "Details"

  5. Choose between Serial or Parallel workload on the first tab "Details"

  6. Choose the VMs to protect on the second Tab "Workload Members"

  7. Decide for the schedule of the workload on the Tab "Schedule"

  8. Provide the Retention policy on the Tab "Policy"

  9. Choose the Full Backup Interval on the Tab "Policy"

  10. Click create

The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

Workload Overview

A workload contains a lot of information, which can be seen in the workload overview.

To enter the workload overview do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to view

  4. Click the workload name to enter the Workload overview

Details Tab

The Workload Details tab provides the most important general information about the workload:

  • Name

  • Description

  • List of protected VMs

It is possible to navigate to the protected VM directly from the list of protected VMs.

Snapshots Tab

The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

Please refer to the Snapshots and Restores User Guides to learn more about those.

Policy Tab

The Workload Policy Tab gives an overview of the currently configured scheduler and retention policy. The following elements are shown:

  • Scheduler Enabled / Disabled

  • Start Date / Time

  • End Date / Time

  • RPO

  • Time till next Snapshot run

  • Retention Policy and Value

  • Full Backup Interval policy and value

Filesearch Tab

The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need of a restore.

Please refer to the File Search User Guide to learn more about this feature.

Misc. Tab

The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

  • Creation time

  • last update time

  • Workload ID

  • Workload Type

Edit a Workload

Workloads can be modified in all components to match changing needs.

To edit a workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be modified

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Edit Workload"

  6. Modify the workload as desired - All parameters can be changed

  7. Click "Update"

Delete a Workload

Once a workload is no longer needed it can be safely deleted.

To delete a workload do the following steps:

All Snapshots need to be deleted before the workload gets deleted. Please refer to the Snapshots User Guide to learn how to delete Snapshots.

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be deleted

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Delete Workload"

  6. Confirm by clicking "Delete Workload" yet again

Reset a Workload

In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases, it is possible to reset it.

The Workload reset will:

  • Cancel all ongoing tasks

  • Delete all existing RHV Snapshots from the protected VMs

  • Recalculate the next Snapshot time

  • Take a full backup at the next Snapshot

To reset a Workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be reset

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Reset Workload"

  6. Confirm by clicking "Reset Workload" yet again


Restores

Definition

A Restore is the workflow to bring back the backed up VMs from a TrilioVault Snapshot.

TrilioVault offers 3 types of restores:

  • One Click restore

  • Selective restore

  • InPlace restore

One Click Restore

The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

  • be located in the same cluster in the same datacenter

  • use the same storage domain

  • connect to the same network

  • have the same flavor

The user can't change any Metadata.

The One Click Restore requires that the original VMs that were backed up are deleted or otherwise lost. If even one of these VMs still exists, the One Click Restore will fail.

The One Click Restore will automatically update the Workload to protect the restored VMs.

There are 2 possibilities to start a One Click Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click "One Click Restore" in the same line as the identified Snapshot

  8. (Optional) Provide a name / description

  9. Click "Create"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "One Click Restore"

  10. (Optional) Provide a name / description

  11. Click "Create"

Selective Restore

The Selective Restore is the most complex restore TrilioVault has to offer. It allows adapting the restored VMs to the exact needs of the user.

With the selective restore the following things can be changed:

  • Which VMs are getting restored

  • Name of the restored VMs

  • Which networks to connect with

  • Which Storage domain to use

  • Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

The Selective Restore is always available and does not have any prerequisites.

There are 2 possibilities to start a Selective Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  8. Click on "Selective Restore"

  9. Configure the Selective Restore as desired

  10. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "Selective Restore"

  10. Configure the Selective Restore as desired

  11. Click "Restore"

Inplace Restore

The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs to be rolled back for other reasons.

It allows the user to restore only the data of a selected Volume, which is part of a backup.

The Inplace Restore only works when the original VM and the original Volume are still available and connected. TrilioVault verifies this using the saved Object-ID.

The Inplace Restore will not create any new RHV resources. Please use one of the other restore options if new Volumes or VMs are required.

There are 2 possibilities to start an Inplace Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  8. Click on "Inplace Restore"

  9. Configure the Inplace Restore as desired

  10. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "Inplace Restore"

  10. Configure the Inplace Restore as desired

  11. Click "Restore"

Configure Trilio VM

The Trilio Appliance requires configuration to work with the chosen RHV environment. A Web-UI provides access to the Trilio Appliance dashboard and configurator.

Recommended and tested browsers are: Chrome and Firefox.

Accessing the Trilio Dashboard

Enter the Trilio IP or FQDN into the browser to reach the Trilio Appliance landing page.

User: admin Password: password

You will be prompted to change the Web-UI password on first login.

Details needed for the Trilio Appliance configurator

Upon login to the Trilio Appliance, the configurator page is shown. The configurator requires some information about the Trilio Appliance, RHV, and the Backup Storage.

Trilio Nodes information

The Trilio Appliance needs to be integrated into an existing environment to be able to operate correctly. This block asks for the Trilio Appliance's operating details.

  • Virtual IP Address

    • The Trilio Appliance uses this IP address for all communication with RHV.

    • Format: IP/Netmask

    • Example: 10.10.0.2/24

The Trilio Appliance for RHV does not yet support multi-node installations. This feature is actively being worked on and will be integrated step by step.

  • TVM Appliance IP

    • The first interface in the interface list of the Trilio Appliance is assigned this IP address. This entry also sets the TrilioVault Appliance hostname.

    • Format: IP=hostname

    • Example: 10.10.0.1=rhv-tvm

The Virtual IP and the TVM Appliance IP cannot be the same address. The configuration fails when the same IP is used for both values.

  • Name Servers

    • The DNS server the Trilio appliance will use.

    • Format: Comma separated list of IPs

    • Example: 8.8.8.8,10.10.10.10

  • Domain Search Order

    • The domain the Trilio Appliance will use.

    • Format: Comma separated list of domain names

    • Example: trilio.demo,trilio.io

  • NTP Servers

    • NTP Servers the Trilio Appliance will use.

    • Format: Comma separated list of NTP Servers (FQDN and IP supported)

    • Example: 0.pool.ntp.org,10.10.10.10

  • Timezone

    • Timezone the Trilio Appliance will use.

    • Format: predefined list

    • Example: UTC

RHV Credentials information

The Trilio appliance integrates with one RHV environment. This block asks for the information required to access and connect with the RHV Cluster.

  • RHV Engine URL

    • URL of the RHV-Manager used to authenticate

    • Format: URL (FQDN and IP supported)

    • Example: https://rhv-manager.trilio.demo

A preconfigured DNS server is required when using an FQDN. The Trilio Appliance local host file gets overwritten during configuration. The configuration will fail when the FQDN cannot be resolved by a DNS server.
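Whether the RHV Engine FQDN is resolvable can be checked from the Trilio Appliance before starting the configuration. A minimal pre-flight sketch using getent; the FQDN value is the example used in this guide:

```shell
# Hypothetical pre-flight check: confirm the RHV Engine FQDN resolves via DNS
ENGINE_FQDN="rhv-manager.trilio.demo"
if getent hosts "$ENGINE_FQDN" > /dev/null; then
    echo "resolvable"
else
    echo "not resolvable - fix DNS before configuring"
fi
```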

  • RHV Username

    • admin-user to authenticate against the RHV-Manager

    • Format: user@domain

    • Example: admin@internal

  • Password

    • The password to validate the RHV Username against the RHV-Manager

    • Format: String

    • Example: password

The "Invalid Credentials" error message is displayed when the Trilio Appliance cannot reach the RHV-Manager or when the given credentials are incorrect.

Backup Storage Configuration information

This block asks for the necessary details to configure the Backup Storage.

  • Backup Storage

    • Predefined as NFS

Trilio for RHV currently only supports NFS. Support for S3-compatible storage solutions will be delivered in a future version.

  • NFS Export

    • Full path to the NFS Volume used as Backup Storage

    • Format: Comma separated list of NFS paths

    • Example: 10.10.100.20:/rhv_backup

  • NFS Options

    • Options used by the Trilio NFS client to connect to the NFS Volume

    • Format: NFS Options

    • Example: nolock,soft,timeo=180,intr

    Note: Make sure the NFS server supports NFSv3, as Trilio mounts the NFS share explicitly with NFSv3.

Trilio Certificate information

Trilio integrates into the RHV Cluster as an additional service, following the RHV communication paradigms. These require that the Trilio Appliance uses SSL and that the RHV-Manager trusts the Trilio Appliance.

Trilio offers two possibilities to provide these required certificates: either Trilio generates a completely fresh self-signed certificate, or an existing certificate is provided.

In both cases, the FQDN to which the certificate points is required.

Please see the example below for the case of a provided certificate.

  • FQDN

    • FQDN to reach the Trilio Appliance

    • Format: FQDN

    • Example: rhv-tvm.trilio.demo

  • Certificate

    • Certificate provided by the Trilio appliance upon request

    • Format: Certificate file

    • Example: rhv-tvm.crt

  • Private Key

    • Private Key used to verify the provided certificate

    • Format: private key file

    • Example: rhv-tvm.key

Trilio License

It is possible to provide the Trilio Appliance directly with the license file it is going to use.

Trilio will not create any workloads or backups without a valid license file.

It is not necessary to provide the License file directly through the configurator. It is also possible to provide the license afterwards through the Trilio License tab in the Trilio dashboard.

The Trilio License tab can also be used to verify and update the currently installed license.

Submit and Configuration

After filling out every block of the configurator, hit the submit button to start the configuration.

The configurator asks one more time for confirmation before starting.

Be patient during the configuration, as it may take several minutes.

After the configurator has succeeded or failed, the Ansible playbook output is shown. Expand and collapse each task to troubleshoot failed configurations.

Post Installation Health-Check

After the installation and configuration of TrilioVault for RHV have succeeded, the following steps can be done to verify that the TrilioVault installation is healthy.

Verify the TrilioVault Appliance services are up

TrilioVault runs three main services on the TrilioVault Appliance:

  • wlm-api

  • wlm-scheduler

  • wlm-workloads

Those can be verified to be up and running using the systemctl status command.

systemctl status wlm-api
######
● wlm-api.service - Cluster Controlled wlm-api
   Loaded: loaded (/etc/systemd/system/wlm-api.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/wlm-api.service.d
           └─50-pacemaker.conf
   Active: active (running) since Wed 2020-04-22 09:17:05 UTC; 1 day 2h ago
 Main PID: 21265 (python)
    Tasks: 1
   CGroup: /system.slice/wlm-api.service
           └─21265 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-api --config-file=/etc/workloadmgr/workloadmgr.conf
systemctl status wlm-scheduler
######
● wlm-scheduler.service - Cluster Controlled wlm-scheduler
   Loaded: loaded (/etc/systemd/system/wlm-scheduler.service; disabled; vendor preset: disabled)
  Drop-In: /run/systemd/system/wlm-scheduler.service.d
           └─50-pacemaker.conf
   Active: active (running) since Wed 2020-04-22 09:17:17 UTC; 1 day 2h ago
 Main PID: 21512 (python)
    Tasks: 1
   CGroup: /system.slice/wlm-scheduler.service
           └─21512 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-scheduler --config-file=/etc/workloadmgr/workloadmgr.conf
systemctl status wlm-workloads
######
● wlm-workloads.service - workloadmanager workloads service
   Loaded: loaded (/etc/systemd/system/wlm-workloads.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-22 09:15:43 UTC; 1 day 2h ago
 Main PID: 20079 (python)
    Tasks: 33
   CGroup: /system.slice/wlm-workloads.service
           ├─20079 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─20180 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           [...]
           ├─20181 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─20233 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           ├─20236 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf
           └─20237 /home/rhv/myansible/bin/python /usr/bin/workloadmgr-workloads --config-file=/etc/workloadmgr/workloadmgr.conf

Check the TrilioVault pacemaker and nginx cluster

The second health check for the TrilioVault Appliance is the nginx and pacemaker cluster.

pcs status
######
Cluster name: triliovault

WARNINGS:
Corosync and pacemaker node names do not match (IPs used in setup?)
Stack: corosync
Current DC: om_tvm (version 1.1.19-8.el7_6.1-c3c624ea3d) -
partition with quorum
Last updated: Wed Dec 5 12:25:02 2018
Last change: Wed Dec 5 09:20:08 2018 by root via cibadmin on om_tvm
1 node configured
4 resources configured

Online: [ om_tvm ]
Full list of resources:
virtual_ip (ocf::heartbeat:IPaddr2): Started om_tvm
wlm-api (systemd:wlm-api): Started om_tvm
wlm-scheduler (systemd:wlm-scheduler): Started om_tvm
Clone Set: lb_nginx-clone [lb_nginx]
Started: [ om_tvm ]
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled

Verify API connectivity from the RHV-Manager

The RHV-Manager makes all API calls towards the TrilioVault Appliance. Therefore it is helpful to do a quick API connectivity check using curl.

The following curl command lists the available workload-types and verifies that the connection is available and working:

curl -k -XGET https://30.30.1.11:8780/v1/admin/workload_types/detail -H "Content-Type: application/json" -H "X-OvirtAuth-User: admin@internal" -H "X-OvirtAuth-Password: password"
######
{"workload_types": [{"status": "available", "user_id": "admin@internal", "name": "Parallel", "links": [{"href": "https://myapp/v1/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:51.000000", "updated_at": "2020-04-02T15:38:51.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "2ddd528d-c9b4-4d7e-8722-cc395140255a", "description": "Parallel workload that snapshots VM in the specified order"}, {"status": "available", "user_id": "admin@internal", "name": "Serial", "links": [{"href": "https://myapp/v1/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:47.000000", "updated_at": "2020-04-02T15:38:47.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "f82ce76f-17fe-438b-aa37-7a023058e50d", "description": "Serial workload that snapshots VM in the specified order"}]}
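The JSON response can be filtered on the command line; for example, listing just the workload type names from a response saved to response.json (a hypothetical file name, not produced by the command above):

```shell
# List the workload type names from a saved API response (response.json is hypothetical)
python3 -c 'import json, sys
for wt in json.load(sys.stdin)["workload_types"]:
    print(wt["name"])' < response.json
# prints: Parallel, then Serial (for the response shown above)
```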

Verify the ovirt-imageio services are up and running

TrilioVault extends the already existing ovirt-imageio services. The installation of these extensions checks whether the ovirt services come up. Still, it is good practice to verify again afterwards:

On the RHV-Manager check the ovirt-imageio-proxy service:

systemctl status ovirt-imageio-proxy
######
● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 05:05:25 UTC; 2 weeks 1 days ago
 Main PID: 1834 (python)
   CGroup: /system.slice/ovirt-imageio-proxy.service
           └─1834 bin/python proxy/ovirt-imageio-proxy

On the RHV-Host check the ovirt-imageio-daemon service:

systemctl status ovirt-imageio-daemon
######
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 04:40:50 UTC; 2 weeks 1 days ago
 Main PID: 1442 (python)
    Tasks: 4
   CGroup: /system.slice/ovirt-imageio-daemon.service
           └─1442 /opt/ovirt-imageio/bin/python daemon/ovirt-imageio-daemon

Verify the NFS Volume is correctly mounted

TrilioVault mounts the NFS Backup Target to the TrilioVault Appliance and RHV-Hosts.

To verify these are correctly mounted, it is recommended to run the following checks.

First, run df -h and look for /var/triliovault-mounts/<hash-value>:

df -h
######
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                         63G     0   63G   0% /dev
tmpfs                                            63G   16K   63G   1% /dev/shm
tmpfs                                            63G   35M   63G   1% /run
tmpfs                                            63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhvh-rhvh--4.3.8.1--0.20200126.0+1  7.1T  3.7G  6.8T   1% /
/dev/sda2                                       976M  198M  712M  22% /boot
/dev/mapper/rhvh-var                             15G  1.9G   12G  14% /var
/dev/mapper/rhvh-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh-tmp                            976M  2.6M  907M   1% /tmp
/dev/mapper/rhvh-var_log                        7.8G  230M  7.2G   4% /var/log
/dev/mapper/rhvh-var_log_audit                  2.0G   17M  1.8G   1% /var/log/audit
/dev/mapper/rhvh-var_crash                      9.8G   37M  9.2G   1% /var/crash
30.30.1.4:/rhv_backup                           2.0T  5.3G  1.9T   1% /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw
30.30.1.4:/rhv_data                             2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__data
tmpfs                                            13G     0   13G   0% /run/user/0
30.30.1.4:/rhv_iso                              2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__iso
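The <hash-value> in the mount path is the Base64 encoding of the NFS export path, as the mount in the output above shows. The expected mount point can therefore be computed ahead of time:

```shell
# Derive the TrilioVault mount directory name from the NFS export path
NFS_EXPORT="30.30.1.4:/rhv_backup"
HASH=$(echo -n "$NFS_EXPORT" | base64)
echo "/var/triliovault-mounts/$HASH"
# -> /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw
```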

Second, do a read/write/delete test as the user vdsm:kvm (uid 36 / gid 36) from the TrilioVault Appliance and the RHV-Host.

su vdsm
######
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ touch foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
-rw-r--r--  1 vdsm kvm    0 Apr 23 12:25 foo
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ rm foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$

Preparing the Installation

TrilioVault for RHV integrates tightly into the RHV environment itself. This integration requires preparation before the installation starts.

Installing Redis

TrilioVault is capable of parallel disk transfer from multiple RHV-Hosts at the same time.

This capability requires a task queue system: Python Celery.

Python Celery requires a message broker system like RabbitMQ or Redis. TrilioVault uses the Redis message broker.

RHV does not include Redis, so it must be installed separately.

Redis is not available from a Red Hat repository yet. The Fedora EPEL repository provides the needed packages.

The following steps install Redis:

  1. Add the Fedora EPEL Repository: # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

  2. Install Redis: # yum install redis

  3. Start Redis: # systemctl start redis.service

  4. Enable Redis to start on boot: # systemctl enable redis

  5. Check Redis status: # systemctl status redis.service

(CLI) Uploading the TrilioVault VM qcow2 disk to RHV

Trilio provides a Python script to upload the TrilioVault qcow2 image to RHV using the RHV APIs.

The provided script is an adaptation of the open-source upload_disk.py.

Trilio's changes are:

  • compatibility with RHV 4.3 and RHV 4.4

  • compatibility with Python 2.7 and Python 3.5+

The script can be downloaded here: https://github.com/trilioData/solutions/tree/master/rhv/disk-upload

The recommended process for using this script is:

  1. Upload the TrilioVault qcow2 image to the RHV-Manager VM

  2. Upload the script to the RHV-Manager VM

  3. Unpack the TrilioVault qcow2 image if necessary

  4. Run the script

Command to upload the disk using the script

For Python 2:

python disk_upload.py --url https://<manager_ip>/ovirt-engine/api --username admin@internal --password <admin_password> --sdname <storage_group_name> --filepath <file_location> --direct

For Python 3:

python3 disk_upload.py --url https://<manager_ip>/ovirt-engine/api --username admin@internal --password <admin_password> --sdname <storage_group_name> --filepath <file_location> --direct
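Since the script supports both interpreters, the choice can be automated. A sketch; the placeholders are the same as in the commands above and must be filled in before running:

```shell
# Pick python3 when available, otherwise fall back to python (2.7)
PY=$(command -v python3 || command -v python)
echo "Using interpreter: $PY"
# "$PY" disk_upload.py --url https://<manager_ip>/ovirt-engine/api \
#       --username admin@internal --password <admin_password> \
#       --sdname <storage_group_name> --filepath <file_location> --direct
```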

(GUI) Enabling Disk Upload through RHV Manager

Trilio delivers the TrilioVault Appliance as a qcow2 image.

Trilio supports running the TrilioVault Appliance on the same RHV Cluster it protects.

Uploading the qcow2 to the RHV Datastore is easy, but depending on how RHV has been used so far, it might require the installation of additional certificates.

TrilioVault for RHV (TVR) qcow2 appliance images are held in the Trilio Customer Portal. If you do not have credentials to the portal, please contact your Account Representative or Customer Success Manager. If you're not a current Trilio customer, we'd be happy to assist you if you send an email to [email protected].

To find the TVR version compatible with your environment, visit the TVR Support Matrix

Verify the connection to the ovirt-imageio-proxy

RHV is using the ovirt-imageio-proxy service to upload and download images, snapshots, and disks through the RHV Manager.

The following steps verify the connection to the ovirt-imageio-proxy service.

  1. Log in to the administrative portal of the RHV Manager

  2. Go to Storage ➡️ Disks

  3. Go to Upload ➡️ Start

  4. Click Test Connection

When the connection test is unsuccessful, please proceed with the necessary steps to install the ovirt-engine-certificates.

When the connection test is successful, no further steps are required to upload the image.

Install the ovirt-engine certificate

When the Test Connection to the ovirt-imageio-proxy fails, a common reason is that the client system does not trust the RHV-M due to a missing certificate.

The RHV-M has two certificates, both of which can be required to access the ovirt-imageio-proxy.

The first certificate is directly available for download from the error message in the window. The below URL shows the general path to the certificate. The downloaded certificate is the root certificate for the certificates used by the ovirt-imageio-proxy.

https://<RHV-Manager-FQDN>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA

Download and install this certificate according to the client's operating system and browser used.

Test the connection to the ovirt-imageio-proxy again after installation.

Proceed to the second certificate only in case of the connection still failing.

The second certificate is the actual certificate of the ovirt-imageio-proxy shown to the client system upon connection. The download is only possible from the RHV-M host system directly. The usual location of the certificate is:

/etc/pki/ovirt-engine/certs/imageio-proxy.cer

Check /etc/ovirt-imageio-proxy/ovirt-imageio-proxy.conf in case the certificate is located elsewhere.
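Before installing it, the certificate's subject and validity period can be inspected with standard openssl tooling (not Trilio-specific; the path is the usual location named above):

```shell
# Show the subject and validity window of the imageio-proxy certificate
openssl x509 -in /etc/pki/ovirt-engine/certs/imageio-proxy.cer -noout -subject -dates
```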

Install the certificate according to the client's operating system and browser used.

Test the connection to the ovirt-imageio-proxy again after installation.

Please contact your administrator when the connection still fails.

(GUI) Uploading the TrilioVault VM qcow2 disk to RHV

TrilioVault for RHV (TVR) qcow2 appliance images are held in the Trilio Customer Portal. If you do not have credentials for the portal, please contact your Account Representative or Customer Success Manager. If you're not a current Trilio customer, we'd be happy to assist you if you send an email to [email protected].

The TrilioVault VM qcow2 image is a full operating system disk that gets attached to a VM running on the RHV-Cluster.

To be able to spin up the TrilioVault VM, upload the qcow2 disk into the RHV data domain.

The following procedure uploads the qcow2 disk:

  1. Go to Storage➡️Disk

  2. Go to Upload➡️Start

  3. Fill out the presented form and choose the path to the qcow2 image on the client system

  4. Click OK to start the upload

If the upload does not start after several minutes, verify that the connection to the ovirt-imageio-proxy is working.