General requirements

Trilio is a pure software solution and is composed of 3 elements:

  1. Trilio Controller Cluster

  2. Trilio RHV-M Web-GUI extension

  3. Trilio Datamover

System requirements Trilio Controller Cluster

The Trilio Controller Cluster consists of multiple containers:

  • rhv-configurator - this container contains the configuration UI used to configure the Trilio solution after deployment

  • config-api - this container is responsible for deploying and configuring the actual Trilio solution after all values have been provided through the configuration UI

  • wlm-api - this container provides the API used by the RHV-M integration to work with workloads, backups, and restores

  • wlm-scheduler - this container takes jobs from the wlm-api and schedules them on a wlm-workloads container, choosing the wlm-workloads container with the lowest load

  • MariaDB - this container provides the Trilio database

  • RabbitMQ - this container provides the RabbitMQ service used internally by the Trilio solution

The Trilio Controller Cluster consists of 3 VMs fulfilling the following base requirements:

Resource            Value
vCPU                6
RAM                 16 GB
Disk                100 GB
Operating System    RHEL 8

System requirements Trilio datamover

Trilio installs the Trilio datamover service next to the ovirt-imageio-proxy service running on the RHV-Manager and the ovirt-imageio-daemon running on the RHV-Hosts.

The Trilio datamover does not have any hardware-related requirements, but it requires specific versions of the ovirt-imageio services.

The installed versions of the ovirt-imageio-proxy and the ovirt-imageio-daemon need to be the same.
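One way to verify the installed versions (shown here with the RHV 4.3 package names) is a simple rpm query; this is a generic check, not an official Trilio procedure:

# On the RHV-Manager
rpm -q ovirt-imageio-proxy

# On each RHV-Host
rpm -q ovirt-imageio-daemon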

Please check the Support Matrix for further information.

About Trilio for RHV

Trilio for RHV is a native RHV service that provides policy-based comprehensive backup and recovery for RHV workloads. The solution captures point-in-time workloads (Application, OS, Compute, Network, Configurations, Data, and Metadata of an environment) as full or incremental snapshots. A variety of storage environments can hold these Snapshots, including NFS and soon AWS S3 compatible storage. With Trilio and its single-click recovery, organizations can improve Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO). Trilio enables IT departments to fully deploy RHV solutions and provide business assurance through enhanced data retention, protection, and integrity.

With the use of Trilio’s VAST (Virtual Snapshot Technology), Enterprise IT and Cloud Service Providers can now deploy backup and disaster recovery as a service to prevent data loss or data corruption through point-in-time snapshots and seamless one-click recovery. Trilio takes point-in-time backup of the entire workload consisting of computing resources, network configurations, and storage data as one unit. It also takes incremental backups that only capture the changes made since the last backup. Incremental snapshots save time and storage space as the backup only includes changes since the last backup. The summarized benefits of using VAST for backup and restore are:

  1. Efficient capture and storage of snapshots. Since our full backups only include data that is committed to the storage volume and the incremental backups only include the blocks of data changed since the last backup, our backup processes are efficient and store backup images efficiently on the backup media.

  2. Fast and reliable recovery. When your applications become complex, spanning multiple VMs and storage volumes, our efficient recovery process brings your application from zero to operational with the click of a button.

  3. Reliable and smooth migration of workloads between environments. Trilio captures all the details of your application, and hence our migration includes your entire application stack without leaving anything for guesswork.

  4. Lower Total Cost of Ownership through policy and automation. Our role-driven backup process and automation eliminate the need for dedicated backup administrators, thereby improving your total cost of ownership.

Certificates required for TVR

The certificates explained on this page are not the certificates provided when accessing the TrilioVault VM dashboard through HTTPS.

TrilioVault for RHV is integrating into the RHV-Manager to provide a seamless experience for RHV Administrators and Users for all their Backup & Recovery needs inside RHV.

For this purpose, TrilioVault extends the RHV-Manager GUI with a new tab "Backup", which contains the sub-tabs Workloads, Admin Panel, and Reporting, as shown in figure 1.

The integration of TrilioVault into the RHV-Manager contains the complete GUI. This GUI still requires the data that it displays.

The RHV-Manager gathers the data shown in the GUI from the client side. This means that, next to the connection to the RHV-Manager, there are also connections to the systems providing the data. For all normal RHV tabs and fields, this is the RHV-Manager itself.

When accessing the TrilioVault tabs, a connection is also built up to the TrilioVault VM to gather the data about Workloads, Snapshots, Restores, etc. Figure 2 visualizes this connection.

As can be seen, the TrilioVault VM provides its own certificate to the client browser. This connection happens in the background of the browser, which means that untrusted certificates cannot be accepted through the browser upon opening the Backup tab in the RHV-M. The certificate for the GUI comes from the RHV-Manager and has already been accepted at this point. The certificate for the data coming from the TrilioVault VM needs to be accepted separately.

Before installing TrilioVault, it is therefore required to consider which certificates the TrilioVault VM will use and how they will be distributed to the client browser.

During configuration, the TrilioVault VM can either generate its own self-signed certificate, or a certificate and a private key can be provided.

When a self-signed certificate is chosen, the generated certificate can be downloaded from the TrilioVault VM dashboard and then added as a trusted certificate to the client system. Alternatively, it can be accepted through the browser itself by calling the TrilioVault VM API directly.

When a certificate is provided, the private key used with that certificate is also required. This private key will be used to encrypt the communication between the TrilioVault VM and the client browser. The provided certificate still needs to be trusted by the client system.

Wildcard certificates can be provided, but they are not recommended, in order to ensure that the communication between the TrilioVault VM and the client browser remains secure.

Trilio for RHV Architecture

Trilio is built on the same architectural principles as modern analytical platforms such as Hadoop and other big data platforms. These platforms offer infinite scale without compromising on performance. Trilio's other attributes include agentless, natively integrated with RHV GUI, horizontally scalable, nondisruptive, and open universal backup schema.

Agentless

Trilio offers image-level backup, which backs up a given virtual machine's physical disks as one file. Irrespective of the complexity of a VM or the applications running inside the VM, the user does not require any custom code, called agents, inside the VM for TrilioVault to take VM backups. Agentless solutions are highly desirable because any solution that requires custom code to run in the VM creates an operational nightmare.

Self-Service/UI Integrated

Trilio comes with a GUI plugin for RHV Manager, which provides seamless integration of Trilio functionality adjacent to virtual machine management. The Trilio service authenticates users with OpenID tokens, so any user who is logged in to RHV can use Trilio functionality without any out-of-band user management.

Scalable, Linear, Infinite Scale

Most backup solutions are built on client/server architectures, and hence, they invariably create performance and scale bottlenecks when the RHV cluster grows. Traditional backup solutions require constant tweaking to keep up with RHV cluster growth. Trilio is built on the same architectural principles as the RHV platform; hence, it grows with the RHV cluster without introducing scale and performance bottlenecks.

Nondisruptive

Deploying Trilio is nondisruptive to the RHV cluster or the virtual machines. Similarly, uninstalling Trilio is nondisruptive as well.

Open Universal Backup Schema

In the current world of multi-cloud environments, the backup images must be platform and vendor-neutral, so they are easily portable between clouds. Trilio saves backup images as QCOW2 images. QCOW2 is a standard format in KVM/RHV environments for virtual disks, and Linux comes with numerous tools to create and manage them. A backup image stored in QCOW2 format gives the user enormous flexibility on how these are leveraged for various use cases, including restoring backup images without TVM.

QCOW2 images also come with two important attributes that also make them ideal for storing backup images.

  1. QCOW2s are sparse friendly. As a regular practice, users overprovision virtual disks to VMs. These virtual disks may be thick or thin-provisioned, but at any given time, applications only use a fraction of the virtual disk capacity. When taking image-level backups of virtual disks, backup solutions should only save the blocks that are allocated and not save blocks that are not allocated or used. For example, your virtual disks may be 1 TB in capacity, but the applications utilized only 10 GB of disk space. Since QCOW2 images are sparse friendly, Trilio only stores the data. In the above example, the size of the QCOW2 image is 10 GB.

  2. KVM/RHV supports virtual disk snapshots through a construct called overlay files. When KVM/RHV creates a new disk snapshot, it creates a new QCOW2 file, called an overlay file, which is overlaid on the original QCOW2 file. Any new writes are applied to the overlay file, and any reads of old data are served from the older QCOW2 file. Trilio leverages the same mechanism to store incremental backups (see the qemu-img sketch below). Trilio incremental backups are overlay files that include the data that was modified between the current backup process and the last good backup. Since the TrilioVault backup image structure on the backup media reflects what KVM/RHV natively represents, our process of creating backups and restores is highly efficient in terms of the amount of backup storage used and the network bandwidth utilization.
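The backing-file mechanism described above can be reproduced with the standard qemu-img tool. The following is a minimal illustration only; the file names base.qcow2 and incr.qcow2 are hypothetical and not part of the Trilio backup layout:

# Create a base image (stands in for a full backup)
qemu-img create -f qcow2 base.qcow2 10G

# Create an overlay file on top of it; only changed blocks land here
# (-F, the backing-format flag, is required on recent qemu versions)
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 incr.qcow2

# Show the resulting backing chain
qemu-img info --backing-chain incr.qcow2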

TrilioVault architecture reflects these principles.

As you can see from the architecture diagram above, Trilio does not require any media servers. Traditionally, media servers perform numerous bookkeeping operations on backup images, including pruning older backups, synthesizing full backups from existing backups, cataloging backup images, and other operations. These are usually data-intensive operations, and as your RHV cluster grows, media servers need to scale in capacity to keep up with RHV growth. Scaling media servers may involve trial-and-error approaches and is very difficult to calibrate correctly. Trilio employs data movers that are deployed on each RHV host and scale horizontally with RHV; hence, there is no tuning to do when the RHV cluster grows. Instead of centralizing media server functionality into one appliance, all bookkeeping operations are performed within the data mover in the context of the current backup job. Trilio enhances the operational efficiency of backups and recoveries, and by not tying users to hardware licenses it also significantly improves the ROI and TCO of your investments.

Trilio Controller Cluster

The Trilio Controller Cluster is the controller of Trilio, called TVM.

The TVM is running and managing all backup and recovery jobs.

During a backup job, the TVM is:

  • Gathering the Metadata information generated for the VMs being protected

  • Writing the Metadata information onto the Backup Target

  • Generating the RHV Snapshot

  • Sending the data copy commands to the ovirt-imageio services

The TVM runs on three RHEL 8 VMs provided on the RHV environment. The TVM is then installed on top of these VMs in the form of a Kubernetes cluster running the TVR controller containers.

It is supported and recommended to run the TVM as VMs inside the same RHV environment that the TVM protects.

RHV GUI integration

Trilio is natively integrated into the RHV GUI, providing a new tab "Backup".

All functionalities of Trilio are accessible through the RHV GUI.

The RHV-Manager GUI integration is installed using Ansible playbooks together with the ovirt-imageio-proxy extension.

Trilio datamover

Ovirt-imageio is an RHV-internal Python service that allows the upload and download of disks into and out of RHV.

The default ovirt-imageio services only allow disks to be moved through the RHV-M via HTTPS.

TrilioVault extends the ovirt-imageio functionality with the additional TrilioVault datamover service to provide movement of the disk data through NFS over the RHV Hosts themselves.

The Trilio datamovers are installed using Ansible playbooks.

Backup Target

Trilio writes all backups over the network to a provided Backup Target using the NFS protocol.

Any system providing the NFSv3 protocol can be used.
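For illustration, a hypothetical NFSv3 export on the storage side and a manual mount test from a client; the export path and network below are made-up values:

# /etc/exports entry on the NFS server
/rhv_backup 30.30.1.0/24(rw,sync,no_root_squash)

# Manual NFSv3 mount test from a client
mount -t nfs -o vers=3 30.30.1.4:/rhv_backup /mnt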

Maintaining the Trilio Controller Cluster

The TrilioVault Controller Cluster can be maintained using the TrilioVault deployment binary and helm commands.

Trilio binary overview

The following command provides all available commands of the Trilio binary:

triliovault --help

Configuring local DNS nameservers for wlm-workloads and wlm-api

In certain scenarios it might be necessary to set additional local DNS nameservers for the wlm-workloads and wlm-api pods.

This can be done through the Trilio binary:

triliovault --add-host-entry

Updating the Trilio Controller Cluster

To update the Trilio Controller Cluster to the latest available version, the helm commands are used on any of the Kubernetes master nodes. In the case of a 3-node cluster, all nodes are Kubernetes master nodes. In the case of a cluster with 5 or more nodes, the first 3 nodes in the deployment.json are the master nodes.

To upgrade the cluster, run the following commands. It is required to reconfigure the cluster after the upgrade.

helm repo update
helm upgrade tvr trilio/tvault --reset-values
helm upgrade mariadb trilio/mariadb -n trilio

Redeploy the Trilio Controller Cluster

Deletes the complete TrilioVault Controller Cluster and all its artifacts before redeploying it based on the last deployment. Requires a new configuration of the redeployed containers.

triliovault -t multinode -a redeploy -f deployment.json

Reset the Trilio Controller Cluster

Deletes the TrilioVault containers and redeploys them. Requires a new configuration of the redeployed containers.

triliovault -t multinode -a reset 

Cleanup the Trilio Controller Cluster

Deletes the complete Trilio Controller Cluster and all its artifacts.

triliovault -t multinode -a cleanup

Preparing the Installation

TrilioVault for RHV integrates tightly into the RHV environment itself. This integration requires preparation before the installation starts.

Installing Redis

TrilioVault is capable of parallel disk transfers from multiple RHV-Hosts at the same time. This capability requires a task queue system; TrilioVault uses Python Celery.

Python Celery requires a message broker system like RabbitMQ or Redis. TrilioVault uses the Redis message broker.

RHV does not include Redis, so installation is necessary.

Redis is not available from a Red Hat repository yet. The Fedora EPEL repository provides the needed packages.

The following steps install Redis:

  1. Add the Fedora EPEL repository: # yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

  2. Install Redis: # yum install redis

  3. Start Redis: # systemctl start redis.service

  4. Enable Redis to start on boot: # systemctl enable redis

  5. Check the Redis status: # systemctl status redis.service
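As a quick functional check beyond systemctl, the redis-cli client that ships with the redis package can be used; this is a suggested sanity check, not a required step:

# Should answer with: PONG
redis-cli ping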

Creation of the base VMs

The TrilioVault for RHV Controller Cluster is installed on top of 3 VMs, which need to be provided. These VMs need to fulfill the following requirements:

Resource            Value
vCPU                6
RAM                 16 GB
Disk                100 GB
Operating System    RHEL 8

These VMs can be created by any means.

Required OS configuration

The provided RHEL 8 VMs require the following configuration:

  • Python3 installed

  • firewalld disabled

  • selinux disabled

Keeping the firewalld and selinux services enabled can lead to unstable Kubernetes cluster communication, which will then lead to failed TrilioVault services.
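A minimal sketch of applying this configuration on each base VM as root; the exact hardening policy is site-specific, so treat these commands as an example:

# Install Python 3
dnf install -y python3

# Disable the firewall
systemctl disable --now firewalld

# Switch SELinux to permissive for the running system
setenforce 0
# Disable SELinux across reboots
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config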

Installing the Trilio Controller Cluster

Download the Trilio binary

Trilio provides a binary installation tool, which creates the Kubernetes cluster, downloads all images in their latest version, and ensures a clean state of the whole environment.

Move the binary into the /usr/bin directory:

mv triliovault /usr/bin/
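If the binary is not yet executable, it may additionally need the executable bit set; this is a generic Linux step rather than a documented Trilio requirement:

chmod +x /usr/bin/triliovault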

The binary provides the following options:

  • deploy ==> Fresh deployment, expects only the base VMs to be present

  • redeploy ==> Deletes the entire Trilio Controller Cluster including its Kubernetes base, before redeploying based on the last available configuration.

  • cleanup ==> Deletes the entire Trilio Controller Cluster including the Kubernetes base

  • reset ==> Deletes and redeploys the Trilio Controller Cluster without changing the Kubernetes base

reset should only be done from the same node that was used to run the initial deployment. Running the reset option on other nodes will not execute any steps.

Define the deployment

The Trilio binary takes a deployment file in JSON format.

The following information needs to be provided inside the deployment.json:

  • number_of_nodes ==> How many nodes the Trilio Controller Cluster will consist of. The default and minimum is 3 nodes, which can be increased in steps of 2 if required.

  • virtual_ip_for_keepalived ==> This IP is used by Kubernetes to ensure that all nodes are still working and to keep the Kubernetes master nodes in sync.

  • ingress_ip ==> IP under which the Trilio Controller Cluster is reachable

  • For each VM that is part of the cluster:

    • node_ip ==> IP under which the VM is reachable for the deployment binary

    • username ==> root

    • password ==> password that allows the provided user to ssh into the VM

The provided user requires root permissions on the VMs

An example deployment.json can be seen below:

{
  "number_of_nodes": 3,
  "virtual_ip_for_keepalived": "142.44.219.119",
  "ingress_ip": "142.44.219.116",
  "node_details": [
    {
      "node_ip": "142.44.219.120",
      "username": "root",
      "password": "abcd"
    },
    {
      "node_ip": "142.44.219.121",
      "username": "root",
      "password": "abcd"
    },
    {
      "node_ip": "142.44.219.122",
      "username": "root",
      "password": "abcd"
    }
  ]
}

Run the Trilio binary

Once the deployment.json has been created, the Trilio binary can be started using the following command:

triliovault -t multinode -a deploy -f deployment.json

Wait for the binary to finish.

Once the procedure has finished, check the status of the containers using:

kubectl get pods
kubectl get pods -n trilio

The output will look similar to the following.

The wlm containers will be in the status CrashLoopBackOff at this point. This is the expected behavior. The containers will go into a running state after the configuration has finished.

[root@master-node1 ~]# kubectl get pods
NAME                                           READY   STATUS    RESTARTS         AGE
tvr-ingress-nginx-controller-6d7b7c47f-zzhxq   1/1     Running   1 (3m10s ago)    3h16m
tvr-metallb-controller-79757c94b7-66r5j        1/1     Running   1 (3m10s ago)    3h16m
tvr-metallb-speaker-zxdgh                      1/1     Running   1 (3m10s ago)    3h16m

[root@master-node1 ~]# kubectl get pods -n trilio
NAME                               READY   STATUS             RESTARTS         AGE
config-api-55879c774c-dmcqt        1/1     Running            1                3h16m
mariadb-0                          1/1     Running            1                3h16m
rabbitmq-0                         1/1     Running            1 (3m10s ago)    3h16m
rabbitmq-1                         1/1     Running            1 (3m10s ago)    3h16m
rabbitmq-2                         1/1     Running            1 (3m10s ago)    3h16m
rabbitmq-ha-policy-h984m           0/1     Completed          1                3h16m
rhvconfigurator-5f65f79897-tprhx   1/1     Running            1 (3m12s ago)    3h16m
wlm-api-58fdcdfc7-jggb8            0/1     CrashLoopBackOff   42 (119s ago)    3h16m
wlm-scheduler-5f58586bd5-n2gxz     0/1     CrashLoopBackOff   42 (112s ago)    3h16m
wlm-workloads-4pfk7                0/1     CrashLoopBackOff   42 (119s ago)    3h17m
wlm-workloads-dgtpr                0/1     CrashLoopBackOff   42 (2m57s ago)   3h17m
wlm-workloads-mnxkf                0/1     CrashLoopBackOff   42 (2m26s ago)   3h17m

Installation of RHV extensions

TrilioVault extends the ovirt-imageio services running on the RHV-Manager and the RHV hosts to provide the parallel download of disks from multiple RHV hosts.

The imageio extensions are installed through the TrilioVault GUI using Ansible inventory files as input.

Every time the RHV environment is updated or a new RHV host is added to the RHV cluster, it is necessary to rerun the installation of the ovirt-imageio extensions.

Preparing the inventory files

The TrilioVault Appliance runs Ansible playbooks in the background to install the RHV extensions.

Like most Ansible playbooks, the TrilioVault Appliance requires inventory files to know which roles have to run on which hosts and how to gain the required access.

Two (2) inventory files are required for the TrilioVault Appliance.

  1. Inventory file containing the RHV-Manager

  2. Inventory file containing the RHV Hosts

These can be created in two ways.

Using password authentication

The first supported method to allow Ansible to access the RHV hosts and the RHV Manager is the classic user password authentication.

To use password authentication edit the files using the following format:

<Server_IP> ansible_user=root password=xxxxx

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

An example for an inventory file using password authentication is shown below:
[hosts]
172.16.1.13 ansible_user=root   ansible_ssh_pass="Password"
172.16.1.14 ansible_user=root   ansible_ssh_pass="Password"
172.16.1.15 ansible_user=root   ansible_ssh_pass="Password"

Using passwordless authentication (SSH keys)

The second supported method to allow Ansible to access the RHV hosts and the RHV Manager is utilizing SSH keys to provide passwordless authentication.

For this method, it is necessary to prepare the TrilioVault Appliance and the RHV Cluster Nodes as well as the RHV Manager.

The provided service account rhv_nw is not able to perform the necessary steps on the TrilioVault Appliance. These tasks require root privileges.

The recommended method from Trilio is (sketched after this list):

  1. Use ssh-keygen to generate a key pair

  2. Add the private key to /root/.ssh/ on the TrilioVault Appliance

  3. Add the public key to /root/.ssh/authorized_keys file on each RHV host and the RHV Manager
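A minimal sketch of these three steps; the key file name and the target address are placeholders:

# 1. Generate a key pair on the TrilioVault Appliance (as root)
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""

# 2./3. Copy the public key into /root/.ssh/authorized_keys on each
#       RHV host and on the RHV Manager
ssh-copy-id -i /root/.ssh/id_rsa.pub root@<rhv-host-or-manager-ip>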

Once the TrilioVault Appliance can access the nodes without a password, edit the inventory files using the following format:

<Server_IP> ansible_user=root

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

An example for an inventory file using passwordless authentication is shown below:
[hosts]
172.16.1.13 ansible_user=root
172.16.1.14 ansible_user=root
172.16.1.15 ansible_user=root

Starting the installation

To install the RHV extension log in to the TrilioVault appliance GUI using the admin account.

Choose Configure RHV Host to install the RHV extensions for the RHV Hosts. Choose Configure RHV Manager to install the RHV extensions for the RHV Manager.

Upload the prepared inventory files to the provided upload areas.

Do not mix RHV Hosts and the RHV Manager in the same inventory file. The configurator will not differentiate between them and will install the extensions according to the chosen installation type on every host provided in the inventory files.

Click Configure to start the installation.

During the installation, the live output from the Ansible playbooks is shown. Some of the tasks can take up to a few minutes to complete without updating the playbook output. Wait until the playbook has succeeded or failed.

Manual installation procedure

In rare cases, the automated installation through the Web-GUI fails or is not possible. In those cases, please follow this procedure to install manually.

This procedure requires root privileges.

Preparing the inventory files

Ansible playbooks work with inventory files. These inventory files contain the list of RHV-Hosts and RHV-Managers and how to access them.

To edit the inventory files, open the following files for the server type to add.

For the RHV hosts open: /opt/stack/imageio-ansible/inventories/production/daemon
For the RHV Manager open: /opt/stack/imageio-ansible/inventories/production/proxy

Using password authentication

The first supported method to allow Ansible to access the RHV hosts and the RHV Manager is the classic user password authentication.

To use password authentication edit the files using the following format:

<Server_IP> ansible_user=root password=xxxxx

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

Using passwordless authentication (SSH keys)

The second supported method to allow Ansible to access the RHV hosts and the RHV Manager is utilizing SSH keys to provide passwordless authentication.

For this method, it is necessary to prepare the TrilioVault Appliance and the RHV Cluster Nodes as well as the RHV Manager.

The recommended method from Trilio is:

  1. Use ssh-keygen to generate a key pair

  2. Add the private key to /root/.ssh/ on the TrilioVault Appliance

  3. Add the public key to /root/.ssh/authorized_keys file on each RHV host and the RHV Manager

Once the TrilioVault Appliance can access the nodes without a password, edit the inventory files using the following format:

<Server_IP> ansible_user=root

One entry per RHV Host in the daemon file and one entry per RHV Manager in the proxy file are required.

Starting the installation

To install the ovirt-imageio extensions go to:

cd /opt/stack/imageio-ansible

Depending on the method of authentication prepared in the inventory files, different commands need to be used to start the Ansible playbooks.

Using password authentication

To call the Ansible playbooks when the inventory files use password authentication, run:

For RHV 4.3 Hosts:
ansible-playbook rhv_4_3.yml -i inventories/production/daemon --tags daemon

For RHV 4.3 Manager:
ansible-playbook rhv_4_3.yml -i inventories/production/proxy --tags proxy

For RHV 4.4 Hosts:
ansible-playbook rhv_4_4.yml -i inventories/production/daemon --tags daemon

For RHV 4.4 Manager:
ansible-playbook rhv_4_4.yml -i inventories/production/proxy --tags proxy

Using passwordless authentication (SSH keys)

To call the Ansible playbooks when the inventory files use passwordless authentication, run:

For RHV 4.3 Hosts:
ansible-playbook rhv_4_3.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon

For RHV 4.3 Manager:
ansible-playbook rhv_4_3.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

For RHV 4.4 Hosts:
ansible-playbook rhv_4_4.yml -i inventories/production/daemon --private-key ~/.ssh/id_rsa --tags daemon

For RHV 4.4 Manager:
ansible-playbook rhv_4_4.yml -i inventories/production/proxy --private-key ~/.ssh/id_rsa --tags proxy

Ansible shows the output of the running playbook. Do not intervene until the playbook has finished.

Post Installation Health-Check

After the installation and configuration of TrilioVault for RHV has succeeded, the following steps can be taken to verify that the TrilioVault installation is healthy.

Verify the TrilioVault Controller Cluster

The TrilioVault Controller Cluster can be verified from the base VMs themselves using kubectl commands:
[root@master-node1 ~]# kubectl get pods
NAME                                           READY   STATUS    RESTARTS     AGE
tvr-ingress-nginx-controller-6d7b7c47f-zzhxq   1/1     Running   1 (8d ago)   22d
tvr-metallb-controller-79757c94b7-66r5j        1/1     Running   1 (8d ago)   22d
tvr-metallb-speaker-zxdgh                      1/1     Running   1 (8d ago)   22d

[root@master-node1 ~]# kubectl get pods -n trilio
NAME                               READY   STATUS      RESTARTS     AGE
config-api-55879c774c-dmcqt        1/1     Running     1            22d
mariadb-0                          1/1     Running     1            22d
rabbitmq-0                         1/1     Running     1 (8d ago)   22d
rabbitmq-1                         1/1     Running     1 (8d ago)   22d
rabbitmq-2                         1/1     Running     1 (8d ago)   22d
rabbitmq-ha-policy-h984m           0/1     Completed   1            22d
rhvconfigurator-5f65f79897-tprhx   1/1     Running     1 (8d ago)   22d
wlm-api-7df595ccd7-l2vpk           1/1     Running     1 (8d ago)   15d
wlm-scheduler-6cff68b7b8-lm666     1/1     Running     1 (8d ago)   15d
wlm-workloads-nb8f9                1/1     Running     1            15d

Verify API connectivity from the RHV-Manager

The RHV-Manager does all API calls towards the TrilioVault Appliance. Therefore, it is helpful to do a quick API connectivity check using curl.

The following curl command lists the available workload-types and verifies that the connection is available and working:
curl -k -XGET https://30.30.1.11:8780/v1/admin/workload_types/detail -H "Content-Type: application/json" -H "X-OvirtAuth-User: admin@internal" -H "X-OvirtAuth-Password: password"
######
{"workload_types": [{"status": "available", "user_id": "admin@internal", "name": "Parallel", "links": [{"href": "https://myapp/v1/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:51.000000", "updated_at": "2020-04-02T15:38:51.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "2ddd528d-c9b4-4d7e-8722-cc395140255a", "description": "Parallel workload that snapshots VM in the specified order"}, {"status": "available", "user_id": "admin@internal", "name": "Serial", "links": [{"href": "https://myapp/v1/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:47.000000", "updated_at": "2020-04-02T15:38:47.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "f82ce76f-17fe-438b-aa37-7a023058e50d", "description": "Serial workload that snapshots VM in the specified order"}]}

Verify the ovirt-imageio services are up and running

TrilioVault extends the already existing ovirt-imageio services. The installation of these extensions checks whether the ovirt-imageio services come up. Still, it is good practice to verify again afterwards:

RHV 4.3.X

On the RHV-Manager check the ovirt-imageio-proxy service:
systemctl status ovirt-imageio-proxy
######
● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 05:05:25 UTC; 2 weeks 1 days ago
 Main PID: 1834 (python)
   CGroup: /system.slice/ovirt-imageio-proxy.service
           └─1834 bin/python proxy/ovirt-imageio-proxy

On the RHV-Host check the ovirt-imageio-daemon service:
systemctl status ovirt-imageio-daemon
######
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 04:40:50 UTC; 2 weeks 1 days ago
 Main PID: 1442 (python)
    Tasks: 4
   CGroup: /system.slice/ovirt-imageio-daemon.service
           └─1442 /opt/ovirt-imageio/bin/python daemon/ovirt-imageio-daemon

RHV 4.4.X

On the RHV-Manager check the unified ovirt-imageio service:
systemctl status ovirt-imageio
######
● ovirt-imageio.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; vend>
   Active: active (running) since Tue 2021-03-02 09:18:30 UTC; 5 months 11 days>
 Main PID: 1041 (ovirt-imageio)
    Tasks: 3 (limit: 100909)
   Memory: 22.0M
   CGroup: /system.slice/ovirt-imageio.service
           └─1041 /usr/libexec/platform-python -s /usr/bin/ovirt-imageio

On the RHV-Host check the unified ovirt-imageio service:
systemctl status ovirt-imageio
######
● ovirt-imageio.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; vend>
   Active: active (running) since Tue 2021-03-02 09:01:57 UTC; 5 months 11 days>
 Main PID: 51766 (ovirt-imageio)
    Tasks: 4 (limit: 821679)
   Memory: 19.8M
   CGroup: /system.slice/ovirt-imageio.service
           └─51766 /usr/libexec/platform-python -s /usr/bin/ovirt-imageio

Verify the NFS Volume is correctly mounted

TrilioVault mounts the NFS Backup Target to the TrilioVault Appliance and RHV-Hosts.

To verify those are correctly mounted it is recommended to do the following checks.

First, run df -h and look for /var/triliovault-mounts/<hash-value>:
df -h
######
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                         63G     0   63G   0% /dev
tmpfs                                            63G   16K   63G   1% /dev/shm
tmpfs                                            63G   35M   63G   1% /run
tmpfs                                            63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhvh-rhvh--4.3.8.1--0.20200126.0+1  7.1T  3.7G  6.8T   1% /
/dev/sda2                                       976M  198M  712M  22% /boot
/dev/mapper/rhvh-var                             15G  1.9G   12G  14% /var
/dev/mapper/rhvh-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh-tmp                            976M  2.6M  907M   1% /tmp
/dev/mapper/rhvh-var_log                        7.8G  230M  7.2G   4% /var/log
/dev/mapper/rhvh-var_log_audit                  2.0G   17M  1.8G   1% /var/log/audit
/dev/mapper/rhvh-var_crash                      9.8G   37M  9.2G   1% /var/crash
30.30.1.4:/rhv_backup                           2.0T  5.3G  1.9T   1% /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw
30.30.1.4:/rhv_data                             2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__data
tmpfs                                            13G     0   13G   0% /run/user/0
30.30.1.4:/rhv_iso                              2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__iso

Second, do a read/write/delete test as the user vdsm:kvm (uid = 36 / gid = 36) from the TrilioVault Appliance and the RHV-Host:
su vdsm
######
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ touch foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
-rw-r--r--  1 vdsm kvm    0 Apr 23 12:25 foo
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ rm foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$

Add/Delete worker-nodes

The worker nodes of the TrilioVault Controller Cluster can be grown or shrunk at any time if required.

When the master nodes need to be changed a redeployment is necessary.

Add a worker-node

Run the following command on a master-node as root to get a Kubernetes token:
kubeadm token create --print-join-command

Run the following script using the output from the token create command.
yum install -y yum-utils
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Install Kubernetes (kubeadm and kubelet)
dnf install -y kubeadm kubelet
systemctl start docker
systemctl enable docker

< Add output from above command >

It is not required to do anything else as the TrilioVault Controller pods will automatically be deployed on any new node joining the Kubernetes cluster.
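To optionally watch the Trilio pods being scheduled onto the new node, a generic kubectl query can be used; this is a convenience check, not a required step:

# The NODE column shows where each pod landed
kubectl get pods -n trilio -o wide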

Delete a worker-node

Run the following command to get a list of all nodes in the Kubernetes cluster.
kubectl get nodes

Delete the node using the node name:
kubectl delete node <nodename>

Uninstall Trilio

Uninstalling Trilio is done in 2 easy steps, which leave only the already created backups behind.

Step 1: Uninstall RHV ovirt-imageio extension

To uninstall the ovirt-imageio extension do the following:

  1. Log in to the Trilio Appliance GUI

  2. Go to "Configure RHV Host"

  3. Upload the inventory file with the RHV hosts

  4. Click Cleanup and wait for the Ansible playbook to finish

  5. Go to "Configure RHV Manager"

  6. Upload the inventory file with the RHV Manager

  7. Click Cleanup and wait for the Ansible playbook to finish

Step 2: Destroy the Trilio Appliance

This guide assumes you are running the Trilio Appliance in a RHV environment

To destroy the Trilio Appliance do the following:

  1. Log in to the RHV-Manager

  2. Navigate to Compute → Virtual Machines and mark the Trilio Appliance in the list of VMs

  3. Click "Shutdown" or "Power Off"

  4. Wait until the shutdown procedure finishes

  5. Click "Remove" to destroy the Trilio Appliance


Required RHV User roles

TrilioVault for RHV integrates into the RHV Manager GUI and uses the RHV users with their assigned RHV roles.

TrilioVault adds its own RHV roles, which need to be assigned to users. Only users with the superuser role do not require a special TrilioVault role.

All TrilioVault roles only have the RHV Administration Portal login permission. No further permissions inside RHV are required to fulfill the TrilioVault tasks. This is possible because the TrilioVault Appliance uses the admin account provided during configuration to send all API commands to the RHV API itself.

Backup User role: TrilioBackup

The TrilioBackup User role enables a user to:

  • Create Workloads

  • Modify Workloads

  • Take Backups

  • Cancel ongoing Backups

  • Delete any Backups

The TrilioBackup User role is not able to:

  • Reset Workloads

  • Start a Restore

  • Delete a Restore

  • Cancel an ongoing Restore

  • Download a Report

  • Do a File Search

  • Do a File Restore through Snapshot Mount

Restore User role: TrilioRestore

The TrilioRestore User role enables a user to:

  • Restore Backups

  • Delete Restores

  • Cancel an ongoing Restore

  • Do a File Search

  • Do a File Restore through Snapshot Mount

The TrilioRestore User role is not able to:

  • Create Workloads

  • Modify Workloads

  • Reset Workloads

  • Take Backups

  • Delete Backups

  • Cancel ongoing Backups

  • Download a Report

Reporting User role: TrilioMonitor

The TrilioMonitor User role enables a user to:

  • Monitor the status of Backups and Restores

  • Download a Report

The TrilioMonitor User role is not able to:

  • Create Workloads

  • Modify Workloads

  • Reset Workloads

  • Take Backups

  • Delete Backups

  • Cancel ongoing Backups

  • Restore Backups

  • Delete Restores

  • Cancel ongoing Restores

  • Do a File Search

  • Do a File Restore through Snapshot Mount

Backup Administrator User Role: TrilioBackupAdministrator

The TrilioBackupAdministrator User role enables a user to:

  • Create Workloads

  • Modify Workloads

  • Reset Workloads

  • Take Backups

  • Delete Backups

  • Cancel ongoing Backups

  • Restore Backups

  • Delete Restores

  • Cancel ongoing Restores

  • Do a File Search

  • Do a File Restore through Snapshot Mount

  • Download a Report

File Search

Definition

The file search functionality allows the user to search for files and folders located on a chosen VM in a workload in one or more Backups.

Navigating to the file search tab

The file search tab is part of every workload overview. To reach it follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload a file search shall be done in

  4. Click the workload name to enter the Workload overview

  5. Click File Search to enter the file search tab

Configuring and starting a file search

A file search runs against a single virtual machine for a chosen subset of backups using a provided search string.

To run a file search, the following elements need to be decided and configured:

Choose the VM the file search shall run against

Under VM Name/ID choose the VM that the search is done upon. The drop-down menu provides a list of all VMs that are part of any Snapshot in the Workload.

VMs that are no longer actively protected by the Workload but are still part of an existing Snapshot are listed in red.

Set the File Path

The File Path defines the search string that is run against the chosen VM and Snapshots. This search string does support basic RegEx.

The File Path has to start with a '/'

Windows partitions are fully supported. Each partition is its own Volume with its own root. Use '/Windows' instead of 'C:\Windows'

The file search does not go into deeper directories and always searches only in the directory provided in the File Path

Example File Path for all files inside /etc : /etc/*

Define the Snapshots to search in

Filter Snapshots by is the third and last component that needs to be set. This defines which Snapshots are going to be searched.

There are 3 possibilities for a pre-filtering:

  1. All Snapshots - Lists all Snapshots that contain the chosen VM from all available Snapshots

  2. Last Snapshots - Choose between the last 10, 25, 50, or a custom number of Snapshots and click Apply to get the list of the available Snapshots for the chosen VM that match the criteria.

  3. Date Range - Set a start and end date and click Apply to get the list of all available Snapshots for the chosen VM within the set dates.

After the pre-filtering is done choose the Snapshots that shall be searched by clicking their checkbox or by clicking the global checkbox.

When no Snapshot is chosen the file search will not start.

Start the File Search and retrieve the results

To start a File Search the following elements need to be set:

  • A VM to search in has to be chosen

  • A valid File Path provided

  • At least one Snapshot to search in selected

Once those have been set click "Search" to start the file search.

Do not navigate to any other RHV tab or website after starting the File Search. The results will be lost, and the search has to be repeated to regain them.

After a short time the results will be presented. The results are presented in a tabular format grouped by Snapshots and Volumes inside the Snapshot.

For each found file or folder the following information is provided:

  • POSIX permissions

  • Number of links pointing to the file or folder

  • User ID who owns the file or folder

  • Group ID assigned to the file or folder

  • Actual size in Bytes of the file or folder

  • Time of creation

  • Time of last modification

  • Time of last access

  • Full path to the found file or folder

Once the Snapshot of interest has been identified it is possible to go directly to the Snapshot using the "View Snapshot" option.

Snapshot mount

TrilioVault allows a Snapshot to be mounted directly through the RHV-Manager.

This feature provides the capability to download any file from any Snapshot through the RHV-Manager, independent of size.

It is not possible to download complete directories.

Mounting a Snapshot

To mount a Snapshot follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the Snapshot Name

  8. Click File Manager

  9. Click the VM to be mounted (this might take a minute)

Only one VM can be mounted at a time for the complete RHV environment.

Navigating the mounted Snapshot

A mounted Snapshot can be navigated like any file browser by clicking on files and folders.

Clicking on a directory will open that directory.

Clicking on a file will provide an overview of the metadata of this file, including:

  • Name

  • Size

  • Last Modified

  • Last accessed

  • Owner

  • Owner Group

  • Permissions

It is further possible to get a preview of the file directly from this overview or to download the file through the RHV-Manager.

Shutdown/Restart the Trilio Appliance

To gracefully shut down the Trilio Appliance the following steps are recommended.

Not following this guide and just sending the shutdown command to the Trilio appliance through the RHV-M GUI/API or through the CLI works without errors, but the automated stopping of all Trilio services will extend the shutdown process by up to 25 minutes.

Verify no snapshots or restores are running

It is recommended to check in the RHV-M Backup tab that no snapshots or restores are running.

Stopping or restarting the Trilio appliance will cancel all actively running backup or restore jobs. These jobs will be marked as errored after the system has come up again.

Stop all Trilio processes

Main processes workloadmanager

The following commands will stop the main processes of the Trilio appliance.

systemctl stop wlm-api
systemctl stop wlm-scheduler
systemctl stop wlm-workloads
systemctl stop tvault-config-api
systemctl stop tvault-object-store
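Optionally, verify that the services have stopped; this is a generic systemd check rather than a documented Trilio step, and each unit should report inactive:

systemctl is-active wlm-api wlm-scheduler wlm-workloads tvault-config-api tvault-object-store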

Secondary processes MySQL and RabbitMQ

The Trilio solution uses MySQL and RabbitMQ. It is not required, but recommended, to gracefully stop these services as well before restarting the appliance.

systemctl stop mysql
rabbitmqctl stop

Shutdown/Restart the Appliance

Restarting through the CLI of the appliance requires root privileges. The rhv_nw user will be enabled for this feature in a future update.

After the services have been stopped the Appliance can be restarted or shut down using standard Linux commands.

reboot
shutdown

Reset the Trilio GUI password

If the password of the Trilio Dashboard is lost, it can be reset as long as SSH access to the appliance is available.

To reset the password to its default do the following:

[root@tvm ~]# source /home/rhv/myansible/bin/activate
(myansible) [root@tvm ~]# cd /opt/stack/workloadmgr/workloadmgr/tvault-config
(myansible) [root@tvm tvault-config]# python recreate_conf.py
(myansible) [root@tvm tvault-config]# systemctl restart tvault-config

The dashboard login will be reset to:

Username: admin
Password: password

Trilio 4.2 Support Matrix

RHV Version    ovirt-imageio version                    Storage Domain
4.3.X          1.4.5 / 1.4.8 / 1.5.1 / 1.5.2 / 1.5.3    NFSv3, iSCSI
4.4.X          1.6.X / 2.0.X / 2.1.X                    NFSv3, iSCSI

RHHI-V Version    ovirt-imageio version     Storage Domain
1.6               1.4.5 / 1.4.8             NFSv3, iSCSI
1.7               1.5.1 / 1.5.2 / 1.5.3     NFSv3, iSCSI
1.8               1.6.X / 2.0.X / 2.1.X     NFSv3, iSCSI

RHV 4.3.9 contains a bug that highly impacts Trilio, up to the point of being non-functional. A Red Hat hotfix is available from Trilio Customer Success. The patch will be included officially in RHV 4.3.10.

oVirt Version    ovirt-imageio version     Storage Domain
4.4.X            1.6.X / 2.0.X / 2.1.X     NFSv3, iSCSI

Email alerts

TrilioVault for RHV provides the possibility to send emails to a defined list of email addresses for any succeeded or failed backup or recovery process.

Enabling the email alert function

To enable the email alert function the following steps need to be done:

  1. Log in to the RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click Add/update email list

  6. Add at least 1 email to the list

  7. Click Save and enable alerts

Updating the email alert receiver list

To update the email alert receiver list, the following steps need to be done:

  1. Log in to the RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click Add/update email list

  6. Add at least 1 email to the list

  7. Click Save and enable alerts

Disabling the email alert function

To disable the email alert function the following steps need to be done:

  1. Log in to the RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to Settings

  5. Click the switch to disable email alerts

Admin Panel

The Admin Panel gives Administrators a centralized overview of:

  • Snapshots

  • Storage usage

  • protected vs unprotected VMs

In addition, it contains the functionality to:

  • Enable/Disable Global Job Scheduler

  • Enable/Disable email alerts

  • Update email alert email list

Accessing the Admin Panel

To access the Admin Panel the following steps need to be followed:

  1. Log in to the RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

Admin Panel Overview tab

The Overview tab of the Admin Panel shows the following information:

  • Graphical overview of how many Snapshots are available, errored or canceled

  • Percentage of Snapshots that are Full Snapshots

  • Graphical overview of used versus free Storage capacity

  • Percentage of how much Storage is available

  • Graphical overview of how many VMs are protected or unprotected

  • Percentage of how many VMs are protected

Admin Panel Enable/Disable Global Job Scheduler

To enable or disable the Global Job Scheduler follow these steps:

  1. Log in to the RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Admin Panel

  4. Move to the Settings Tab

  5. Use the switch to enable or disable the Global Job Scheduler

Important TVM Logs

TrilioVault Appliance logs used during configuration

The following logs contain all information gathered during the configuration of the TrilioVault Appliance.

/var/log/workloadmgr/tvault-config.log

This log contains all information about the pre-checks done when filling out the configurator form.

/var/log/workloadmgr/ansible-playbook.log

This log contains the complete Ansible output from the playbooks that run when the configurator is started.

With each configuration attempt a new ansible-playbook.log gets created. Old ansible-playbook.logs are renamed according to their creation time.

TrilioVault Appliance logs during any task after configuration

/var/log/workloadmgr/workloadmgr-api.log

This log tracks all API requests that have been received on the wlm-api service.

This log is helpful to verify that the TrilioVault VM is reachable from the RHV-M and authentication is working as expected.

/var/log/workloadmgr/workloadmgr-scheduler.log

This log tracks all jobs the wlm-scheduler is receiving from the wlm-api and sends them to the chosen wlm-workloads service.

This log is helpful when the wlm-api doesn't throw any error, but no task like a backup or restore gets started.

/var/log/workloadmgr/workloadmgr-workloads.log

This log contains the complete output from the wlm-workloads service, which is controlling the actual backup and restore tasks.

This log is helpful to identify any errors that happen on the TVM itself, including RESTful API responses from the RHV-M.

Important RHV-Manager Logs

TrilioVault data transfer related logs

RHV 4.3: /var/log/ovirt-imageio-proxy/image-proxy.log

RHV 4.4: /var/log/ovirt-imageio/daemon.log

This log provides the used ticket number and RHV-Host for a backup transfer. It is helpful to identify the ticket numbers that are used in RHV to track a specific data transfer. It also shows any connection errors between the RHV-M and the RHV-Host.

Generally helpful RHV-Manager logs

TrilioVault calls a lot of RHV APIs to read metadata, create RHV Snapshots, and restore VMs. These tasks are executed by RHV independently of TrilioVault and are logged in the RHV logs.

/var/log/ovirt-engine/engine.log

This log is hard to read but contains all tasks that the RHV-M is doing, including all Snapshot-related tasks.

Additional logs that can be helpful during troubleshooting are:

/var/log/ovirt-engine/boot.log
/var/log/ovirt-engine/console.log
/var/log/ovirt-engine/ui.log

Important RHV-Host Logs

TrilioVault data transfer related logs

/var/log/ovirt_celery/worker<x>.log

The worker logs contain the status of the disk transfers from the RHV Host to the backup target. They are useful if the data transfer process gets stuck or errors out in between.

RHV 4.3: /var/log/ovirt-imageio-daemon/daemon.log

RHV 4.4: /var/log/ovirt-imageio/daemon.log

The daemon.log contains all information about the actual connection between the RHV Host and the backup target. Useful to identify potential connection issues between the RHV Host and Backup target.

Restores

Definition

A Restore is the workflow to bring back the backed-up VMs from a TrilioVault Snapshot.

TrilioVault offers 3 types of restores:

  • One Click restore

  • Selective restore

  • InPlace restore

One Click Restore

The One Click Restore will bring back all VMs from the Snapshot in the same state as they were backed up. They will:

  • be located in the same cluster in the same datacenter

  • use the same storage domain

  • connect to the same network

  • have the same flavor

The user can't change any Metadata.

The One Click Restore requires that the original VMs that have been backed up are deleted or otherwise lost. If even one VM still exists, the One Click Restore will fail.

The One Click Restore will automatically update the Workload to protect the restored VMs.

There are 2 possibilities to start a One Click Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click "One Click Restore" in the same line as the identified Snapshot

  8. (Optional) Provide a name / description

  9. Click "Create"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "One Click Restore"

  10. (Optional) Provide a name / description

  11. Click "Create"

Selective Restore

The Selective Restore is the most complex restore TrilioVault has to offer. It allows the user to adapt the restored VMs to their exact needs.

With the selective restore the following things can be changed:

  • Which VMs are getting restored

  • Name of the restored VMs

  • Which networks to connect with

  • Which Storage domain to use

  • Which DataCenter / Cluster to restore into

  • Which flavor the restored VMs will use

The Selective Restore is always available and doesn't have any prerequisites.

There are 2 possibilities to start a Selective Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  8. Click on "Selective Restore"

  9. Configure the Selective Restore as desired

  10. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "Selective Restore"

  10. Configure the Selective Restore as desired

  11. Click "Restore"

Inplace Restore

The Inplace Restore covers those use cases where the VM and its Volumes are still available, but the data got corrupted or needs to be rolled back for other reasons.

It allows the user to restore only the data of a selected Volume that is part of a backup.

The Inplace Restore only works when the original VM and the original Volume are still available and connected. TrilioVault verifies this via the saved Object-ID.

The Inplace Restore will not create any new RHV resources. Please use one of the other restore options if new Volumes or VMs are required.

There are 2 possibilities to start an Inplace Restore.

Possibility 1: From the Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click on the small arrow next to "One Click Restore" in the same line as the identified Snapshot

  8. Click on "Inplace Restore"

  9. Configure the Inplace Restore as desired

  10. Click "Restore"

Possibility 2: From the Snapshot overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be restored

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the Snapshot to be restored

  7. Click the Snapshot Name

  8. Navigate to the "Restores" tab

  9. Click "Inplace Restore"

  10. Configure the Inplace Restore as desired

  11. Click "Restore"

Reporting

TrilioVault for RHV provides the capability to generate reports, giving insight into the usage of TrilioVault over time and the stability of the service.

Reports are generated automatically every 24h at midnight.

The timezone used is the timezone the TrilioVault Appliance is configured in. Differences between the TrilioVault Appliance timezone and the browser timezone might lead to data not being shown on the correct day.

Accessing the Report page

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Reporting

Report Page Configuration

The Configuration Report page provides an overview of the TrilioVault installation as a whole.

The following elements are listed for the TrilioVault Appliance:

  • The version number of the installed TrilioVault Appliance

  • API status of the TrilioVault Appliance UP/DOWN

  • Scheduler status of the TrilioVault Appliance UP/DOWN

  • Workloadmgr status of the TrilioVault Appliance UP/DOWN

  • Configured Timezone of the TrilioVault Appliance

The following information is shown for the RHV environment:

  • The Version number of the RHV environment

  • List of all RHV Hosts

  • Status of the RHV Hosts being configured for TrilioVault

  • Address of the RHV Hosts

Report Page Data Protection

The Data Protection page provides an overview of the general protection status of the RHV environment.

The following elements are shown:

  • Graphical overview of protected versus unprotected VMs

  • List of all Unprotected VMs by name

Report Page Workloads

The Workloads page provides an overview of all existing TrilioVault workloads.

The following information is shown per workload:

  • Workload name and creation time

  • When the next backup will be run

  • RPO in hours

  • Retention policy

  • Available (successful) Snapshots

  • Date and time of the oldest Snapshot

  • Which VMs are protected by the Workload

Report Page Backups

The Backups page provides an overview of the backups taken over time.

The following information is shown:

  • Graphical overview of successful vs errored snapshots per time bucket over the given period of time

  • Graphical overview of successful vs errored snapshots total over the given period of time

  • Graphical overview of full vs incremental vs mixed snapshots total over the given period of time

Report Page Storage

The Storage page provides an overview of storage usage, data transfer, and averages over time.

The following information is shown:

  • Graphical overview for Storage usage at the end of the given period of time

  • Graphical overview of Data transferred per day for the given period of time

  • The averages across all workloads:

    • Average Data moved in a backup

    • Average Time needed to take the backup

    • Average transfer speed to the backup target

Report Page Restores

The Restores page provides an overview of all restores over the given period of time.

The following information is shown:

  • Amount of succeeded and failed restores per day

  • Types of restores used

Download Report

The TrilioVault report can be downloaded or printed directly from the Reporting page.

To download or print a report the following steps need to be done:

  1. Login into RHV-Manager as a user with admin privileges

  2. In the main menu click on Backup

  3. In the Backup sub-menu click on Reporting

  4. Configure the time frame the report is to be created for

  5. Click Download Report to see a preview of the report

  6. Click Download Report to open your browser's printing page

    1. Use Save as PDF to save the report as PDF

    2. Use any available printer to print the report directly

Preparing for Application Consistent backups

TrilioVault for RHV provides the capability to take application-consistent backups by utilizing the Qemu-Guest-Agent.

The Qemu-Guest-Agent

The Qemu-Guest-Agent is a component of the qemu hypervisor, which is used by RHV. RHV automatically builds all VMs to be prepared to use the Qemu-Guest-Agent.

The Qemu-Guest-Agent provides many capabilities, including the possibility to freeze and thaw a Virtual Machine's filesystems.

The Qemu-Guest-Agent is not developed or maintained by Trilio. Trilio leverages standard capabilities of the Qemu-Guest-Agent to send freeze and thaw commands to the protected VMs during a backup process.

Installing the Qemu-Guest-Agent

The Qemu-Guest-Agent needs to be installed inside the VM.

The Qemu-Guest-Agent requires a special SCSI interface in the VM definition. This interface is automatically created by RHV upon spinning up the Virtual Machine.

The installation process depends on the Guest Operating System.

RPM-based Guests

yum install qemu-guest-agent
systemctl start qemu-guest-agent

Deb-based Guests

apt-get install qemu-guest-agent
systemctl start qemu-guest-agent
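
Regardless of the distribution, it is worth confirming that the agent is actually running and enabling it so it survives reboots. A minimal check, assuming standard systemd tooling inside the guest:

systemctl enable qemu-guest-agent   # start the agent automatically on future boots
systemctl status qemu-guest-agent   # should report the service as active (running)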

Windows Guests

Windows Guests require the installation of the VirtIO drivers and tools. These are provided by Red Hat in a prepared ISO file. Please follow the Red Hat documentation for the RHV 4.3 Windows Guest Agents or the RHV 4.4 Windows Guest Agents, depending on the RHV version in use.

Using the fsfreeze-hook.sh script

The Qemu-Guest-Agent calls the fsfreeze-hook.sh script with either the freeze or the thaw argument, depending on the current operation.

The fsfreeze-hook.sh script is a normal shell script. It is typically used to do all necessary steps to get an application into a consistent state for the freeze or to undo all freeze operations upon the thaw.

Location of the fsfreeze-hook.sh script

The default path of the fsfreeze-hook.sh script is:

/etc/qemu/fsfreeze-hook
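
The agent only invokes the hook if the script file is executable. If freeze and thaw commands never seem to reach the script, ensuring the permission is a quick first check:

chmod +x /etc/qemu/fsfreeze-hook   # make the hook executable for the agent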

Content of the fsfreeze-hook.sh script

The fsfreeze-hook.sh script does not require any specific content.

It is recommended to provide a case statement for the freeze and thaw arguments. This can be achieved for example by the following bash code:

#!/bin/bash

case "$1" in
        freeze)
            #Commands for freeze
            ;;
         
        thaw)
            #Commands for thaw
            ;;
         
        *)
            echo $"Neither freeze nor thaw provided"
            exit 1
            ;;
 
esac
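
Before relying on the hook during a backup, it can be exercised manually inside the guest, assuming the default hook path shown above:

/etc/qemu/fsfreeze-hook freeze   # should bring the application into a consistent state
/etc/qemu/fsfreeze-hook thaw     # should undo all freeze operations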

Example fsfreeze-hook.sh for MYSQL

This example flushes the MySQL tables to disk and keeps a read lock to prevent further write access until the thaw has been done.

#!/bin/sh

MYSQL="/usr/bin/mysql"
MYSQL_OPTS="-uroot" #"-prootpassword"
FIFO=/var/run/mysql-flush.fifo
# Check mysql is installed and the server running
[ -x "$MYSQL" ] && "$MYSQL" $MYSQL_OPTS < /dev/null || exit 0
flush_and_wait() {
    printf "FLUSH TABLES WITH READ LOCK \\G\n"
    trap 'printf "$(date): $0 is killed\n">&2' HUP INT QUIT ALRM TERM
    read < $FIFO
    printf "UNLOCK TABLES \\G\n"
    rm -f $FIFO
}
case "$1" in
    freeze)
        mkfifo $FIFO || exit 1
        flush_and_wait | "$MYSQL" $MYSQL_OPTS &
        # wait until every block is flushed
        while [ "$(echo 'SHOW STATUS LIKE "Key_blocks_not_flushed"' |\
                 "$MYSQL" $MYSQL_OPTS | tail -1 | cut -f 2)" -gt 0 ]; do
            sleep 1
        done
        # for InnoDB, wait until every log is flushed
        INNODB_STATUS=$(mktemp /tmp/mysql-flush.XXXXXX)
        [ $? -ne 0 ] && exit 2
        trap "rm -f $INNODB_STATUS; exit 1" HUP INT QUIT ALRM TERM
        while :; do
            printf "SHOW ENGINE INNODB STATUS \\G" |\
                "$MYSQL" $MYSQL_OPTS > $INNODB_STATUS
            LOG_CURRENT=$(grep 'Log sequence number' $INNODB_STATUS |\
                          tr -s ' ' | cut -d' ' -f4)
            LOG_FLUSHED=$(grep 'Log flushed up to' $INNODB_STATUS |\
                          tr -s ' ' | cut -d' ' -f5)
            [ "$LOG_CURRENT" = "$LOG_FLUSHED" ] && break
            sleep 1
        done
        rm -f $INNODB_STATUS
        ;;
    thaw)
        [ ! -p $FIFO ] && exit 1
        echo > $FIFO
        ;;
    *)
        echo $"Neither freeze nor thaw provided"
        exit 1
        ;;
esac
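
The design hinges on the FIFO: the freeze branch leaves a background MySQL session holding the global read lock while it blocks on a read from the FIFO, and the thaw branch simply writes into the FIFO, which releases the lock and ends that session.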

Workloads

Important note for VMs using iSCSI disks: RHV only creates the connection between the VM and the iSCSI disk when the VM is running, as this connection is achieved through a symlink on the RHV Host. This behavior means RHV can only take RHV Snapshots of a VM while the VM is running. In consequence, TVR is only able to take backups while the VM is in a running state.

Definition

A workload is a backup job that protects one or more Virtual Machines according to a configured policy. There can be as many workloads as needed, but each VM can only be part of one Workload.

Create a Workload

To create a workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Click "Create Workload"

  4. Provide Workload Name and Workload Description on the first tab "Details"

  5. Choose between Serial or Parallel workload on the first tab "Details"

  6. Choose the VMs to protect on the second Tab "Workload Members"

  7. Decide for the schedule of the workload on the Tab "Schedule"

  8. Provide the Retention policy on the Tab "Policy"

  9. Choose the Full Backup Interval on the Tab "Policy"

  10. Click create

The created Workload will be available after a few seconds and starts to take backups according to the provided schedule and policy.

Workload Overview

A workload contains a lot of information, which can be seen in the workload overview.

To enter the workload overview do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be viewed

  4. Click the workload name to enter the Workload overview

Details Tab

The Workload Details tab provides the most important general information about the workload:

  • Name

  • Description

  • List of protected VMs

It is possible to navigate to the protected VM directly from the list of protected VMs.

Snapshots Tab

The Workload Snapshots Tab shows the list of all available Snapshots in the chosen Workload.

From here it is possible to work with the Snapshots, create Snapshots on demand and start Restores.

Policy Tab

The Workload Policy Tab gives an overview of the currently configured scheduler and retention policy. The following elements are shown:

  • Scheduler Enabled / Disabled

  • Start Date / Time

  • End Date / Time

  • RPO

  • Time till next Snapshot run

  • Retention Policy and Value

  • Full Backup Interval policy and value

Filesearch Tab

The Workload Filesearch Tab provides access to the powerful search engine, which allows finding files and folders in Snapshots without the need for a restore.

Please refer to the File Search User Guide to learn more about this feature.

Misc. Tab

The Workload Miscellaneous Tab shows the remaining metadata of the Workload. The following information is provided:

  • Creation time

  • last update time

  • Workload ID

  • Workload Type

Edit a Workload

Workloads can be modified in all components to match changing needs.

To edit a workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be modified

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Edit Workload"

  6. Modify the workload as desired - All parameters can be changed

  7. Click "Update"

Delete a Workload

Once a workload is no longer needed it can be safely deleted. All Snapshots need to be deleted before the workload can be deleted; please refer to the Snapshots chapter of this guide to learn how to delete Snapshots.

To delete a workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be deleted

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Delete Workload"

  6. Confirm by clicking "Delete Workload" yet again

Reset a Workload

In rare cases it might be necessary to start a backup chain all over again to ensure the quality of the created backups. To avoid recreating the Workload in such cases, it is possible to reset it.

The Workload reset will:

  • Cancel all ongoing tasks

  • Delete all existing RHV Snapshots from the protected VMs

  • recalculate the next Snapshot time

  • take a full backup at the next Snapshot

To reset a Workload do the following steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload to be reset

  4. Click the small arrow next to "Create Snapshot" to open the sub-menu

  5. Click "Reset Workload"

  6. Confirm by clicking "Reset Workload" yet again

Upgrade Trilio

To upgrade Trilio it is necessary to uninstall the old version of Trilio and install the new version.

This is done in a few steps:

  1. Uninstall the old ovirt-imageio extensions

  2. Delete or shutdown the old Trilio Appliance

  3. Upload the qcow2 image of the new Trilio Appliance

  4. Spin up the new Trilio Appliance

  5. Configure the Trilio Appliance

  6. Install the new ovirt-imageio extensions (it might be necessary to verify the connection of the ovirt-imageio-proxy again)

  7. Import the workloads from the Web UI of the Appliance once the Trilio Appliance is configured

During configuration check the box for workload import. This will automatically load all workloads hosted on the backup target into the freshly configured Trilio Appliance.

Snapshots

Definition

A Snapshot is a single TrilioVault backup of a workload including all data and metadata. It contains the information of all VMs that are protected by the workload.

Creating a Snapshot

Snapshots are automatically created by the TrilioVault scheduler. If necessary, or in case of a deactivated scheduler, it is possible to create a Snapshot on demand.

There are 2 possibilities to create a snapshot on demand.

Possibility 1: From the Backup overview

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that shall create a Snapshot

  4. Click "Create Snapshot"

  5. Provide a name and description for the Snapshot

  6. Decide between Full and Incremental Snapshot

  7. Click "Create"

Possibility 2: From the Workload Snapshot list

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that shall create a Snapshot

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Click "Create Snapshot"

  7. Provide a name and description for the Snapshot

  8. Decide between Full and Incremental Snapshot

  9. Click "Create"

Snapshot overview

Each Snapshot contains a lot of information about the backup. This information can be seen in the Snapshot overview.

To reach the Snapshot Overview follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to show

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the Snapshot Name

Details Tab

The Snapshot Details Tab shows the most important information about the Snapshot.

  • Snapshot Name / Description

  • Snapshot Type

  • Time Taken

  • Size

  • Which VMs are part of the Snapshot

  • for each VM in the Snapshot

    • Instance Info - Name & Status

    • Instance Type - vCPUs, Disk & RAM

    • Attached Networks

    • Attached Volumes

    • Misc - Original ID of the VM

Restores Tab

The Snapshot Restores Tab shows the list of Restores that have been started from the chosen Snapshot. It is possible to start Restores from here.

Misc. Tab

The Snapshot Miscellaneous Tab provides the remaining metadata information about the Snapshot.

  • Creation Time

  • Last Update time

  • Snapshot ID

  • Workload ID of the Workload containing the Snapshot

Delete Snapshots

Once a Snapshot is no longer needed, it can be safely deleted from a Workload.

The retention policy will automatically delete the oldest Snapshots according to the configured policy.

You have to delete all Snapshots to be able to delete a Workload.

Deleting a TrilioVault Snapshot will not delete any RHV Snapshots. Those need to be deleted separately if desired.

There are 2 possibilities to delete a Snapshot.

Possibility 1: Single Snapshot deletion through the submenu

To delete a single Snapshot through the submenu follow these steps:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshot to be deleted

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshot in the Snapshot list

  7. Click the small arrow in the line of the Snapshot next to "One Click Restore" to open the submenu

  8. Click "Delete Snapshot"

  9. Confirm by clicking "Delete"

Possibility 2: Multiple Snapshot deletion through checkbox in Snapshot overview

To delete one or more Snapshots through the Snapshot overview do the following:

  1. Login to the RHV-Manager

  2. Navigate to the Backup Tab

  3. Identify the workload that contains the Snapshots to be deleted

  4. Click the workload name to enter the Workload overview

  5. Navigate to the Snapshots tab

  6. Identify the searched Snapshots in the Snapshot list

  7. Check the checkbox for each Snapshot that shall be deleted

  8. Click "Delete Snapshots"

  9. Confirm by clicking "Delete"

The appliance service account

The TrilioVault Appliance provides an account that allows the following actions through the console:

  • Change network interface configuration

  • Show logs on the TrilioVault Appliance

  • Restart TrilioVault services on the TrilioVault Appliance

Log in with the ssh service account

The following default credentials allow the usage of the service account:

  • username: rhv_nw

  • password: OphaeHaet0
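
A login could then look like the following, assuming the appliance is reachable under the example FQDN used elsewhere in this guide:

ssh rhv_nw@rhv-tvm.trilio.demo   # authenticate with the default password shown above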

Change the network configuration

The rhv_nw user is capable of editing the network configuration files and restarting the interface service.

Edit the network configuration files

The network interface files are located under:

/etc/sysconfig/network-scripts/

Use the vi editor to configure the network interface. TrilioVault's default interface is eth0, which uses the following file for its configuration:

ifcfg-eth0

Edit this file according to the attached network. A typical interface configuration looks as follows:

BOOTPROTO=static
DEVICE=eth0
HWADDR=52:54:00:90:cc:a1
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.1.1
DNS2=8.8.8.8
ONBOOT=yes
TYPE=Ethernet
USERCTL=no

When saving, vi will warn that a read-only file is being edited. This warning needs to be overridden by saving with wq!

Restart the network interface

Restarting the network interface through ssh will lead to a network disconnect and might end in the TrilioVault appliance becoming inaccessible. Only restart network interfaces when there are means available to reconnect to the Appliance, even when the interface stays down.

To restart the network interfaces, the rhv_nw user can use the ifdown and ifup commands. The examples below are shown for the default interface eth0.

ifdown eth0
ifup eth0

Using these two commands will reread the network configuration and apply any changes to the interface.
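
To confirm that the interface came back up with the intended configuration, a standard check is:

ip addr show eth0   # the configured IPADDR should be listed on the interface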

Reading the TrilioVault Appliance log files

The rhv_nw user has full read permissions on all TrilioVault Appliance log files located in

/var/log

Control the TrilioVault Service

The rhv_nw user has permission to start, stop, and restart the workloadmgr services running on top of the TrilioVault appliance.

The following services can be controlled through the service account:

  • wlm-api

  • wlm-scheduler

  • wlm-workloads

Use the systemctl commands as shown in the following examples for the wlm-api.

systemctl stop wlm-api
systemctl start wlm-api
systemctl restart wlm-api
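
After a restart it is worth verifying that the service actually came back up. A minimal check using standard systemd tooling, nothing Trilio-specific assumed:

systemctl status wlm-api                # should report the service as active (running)
journalctl -u wlm-api --since "-10min"  # recent service log entries for deeper inspection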

Trilio for RHV 4.2 Release Notes

Trilio 4.2 is the sixth release of Trilio for Red Hat Virtualization.

Trilio for RHV 4.2 Features at a Glance

Backup                           Recovery            Additional functions
Image based VMs (iSCSI and NFS)  OneClick Restore    File Search
Template based VMs (iSCSI)       Selective Restore   File Recovery
Scheduled based Backup           InPlace Restore     Workload import
OnDemand Backup                                      Workload reset
                                                     RESTful API
                                                     Reporting
                                                     Alerting
                                                     RBAC

New Architecture

Trilio for RHV 4.2 is no longer using the VM Appliance which was provided in earlier versions of TVR.

TVR 4.2 has been fully containerized and is running on top of a Kubernetes Cluster.

The TVR 4.2 installation procedure consumes prepared RHEL8 machines and deploys the Kubernetes Cluster together with the TVR 4.2 containers.

High-availability for Trilio Controller services

The change in architecture turns the Trilio Controller services into a highly available solution, running as many TrilioVault containers as there are Kubernetes nodes available.

If a container goes down, the Kubernetes cluster automatically detects the loss and spins up a new pod.

In case of a complete node loss, a new node can be added and the Kubernetes cluster will automatically deploy the Trilio controller pods on the new node.

Going into maintenance mode

Red Hat has announced that the current Red Hat Virtualization will be the last RHV version developed by Red Hat.

RHV 4.4 will undergo the following stages until its end of life (source: https://access.redhat.com/support/policy/updates/rhev):

Date               Stage of life
August 4, 2020     General availability
August 31, 2022    End of full support
August 31, 2024    End of Maintenance support
August 31, 2026    End of extended life phase and general EOL

Trilio for RHV will follow Red Hat's lead and will not release any further feature releases.

All current and future TVR customers will continue to be supported by Trilio, including bug fixes and security fixes as required.

Known issues

NFSv4 not supported

TVR 4.2 does not support NFSv4. Backups taken using NFSv4 will fail with I/O errors.

The workaround is to enforce the usage of NFSv3 by adding vers=3 as an NFS option.
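
For illustration, combining this workaround with the default NFS options shown in the configuration section of this guide would result in an option string like:

nolock,soft,timeo=180,intr,vers=3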

Clicking download file multiple times after a file search creates error messages in wlm-api logs

It is possible to download smaller files directly after a file search. When the user clicks the download button multiple times without waiting for the download window to appear, the wlm-api logs are flooded with error messages.

TVR transfer daemon can be installed on RHV Manager

It is possible to accidentally run the installation of the TVR daemon, which is supposed to run on the RHV Hosts only, against the RHV Manager.

The installation will succeed and backups might fail with the following error:

error while converting qcow2: Could not create file: Numerical result out of range\n'

Snapshot mount UI timing out on directories with over 100,000 files inside

It has been observed that, while navigating through mounted Snapshots inside the RHV Manager, the UI freezes when directories containing more than 100,000 files are accessed.

This might be observed with fewer or many more files depending on the transfer speed between the TrilioVault appliance and the backup target.

Configuring the Trilio Controller Cluster

The Trilio Appliance requires configuration to work with the chosen RHV environment. A Web-UI provides access to the Trilio Appliance dashboard and configurator.

Recommended and tested browsers are Google Chrome, Microsoft Edge, and Mozilla Firefox.

Setup the local host file

To access the Trilio Dashboard after a fresh deployment it is required to set up a local DNS entry. This is required as Kubernetes works with FQDNs instead of IPs.

The following entry needs to be made in the host file of the local system:

<Ingress-IP> tvault.com
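
A minimal sketch of such an entry, assuming a hypothetical Ingress IP of 192.168.100.50:

# /etc/hosts on the local system used to access the dashboard
192.168.100.50    tvault.com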

Accessing the Trilio Dashboard

Enter the Trilio IP or FQDN into the browser to reach the Trilio Appliance landing page.

User: admin Password: password

You will be prompted to change the Web-UI password on first login.

After the password has been changed you will be logged out and have to login using the new password.

Details needed for the Trilio Appliance configurator

Upon logging into the Trilio Appliance, the page shown is the configurator. The configurator requires some information about the Trilio Appliance, RHV, and the Backup Storage.

Trilio Nodes information

The Trilio Appliance needs to be integrated into an existing environment to be able to operate correctly. This block asks for information about the Trilio Appliance operating details.

  • Name Servers

    • The DNS server the Trilio appliance will use.

    • Format: Comma separated list of IPs

    • Example: 8.8.8.8,10.10.10.10

  • Domain Search Order

    • The domain the Trilio Appliance will use.

    • Format: Comma separated list of domain names

    • Example: trilio.demo,trilio.io

  • NTP Servers

    • NTP Servers the Trilio Appliance will use.

    • Format: Comma separated list of NTP Servers (FQDN and IP supported)

    • Example: 0.pool.ntp.org,10.10.10.10

  • Timezone

    • Timezone the Trilio Appliance will use.

    • Format: predefined list

    • Example: UTC

RHV Credentials information

The Trilio appliance integrates with one RHV environment. This block asks for the information required to access and connect with the RHV Cluster.

  • RHV Engine URL

    • URL of the RHV-Manager used to authenticate

    • Format: URL (FQDN and IP supported)

    • Example: https://rhv-manager.trilio.demo

A preconfigured DNS server is required when using an FQDN. The TrilioVault Appliance's local host file gets overwritten during configuration. The configuration will fail when the FQDN is not resolvable by a DNS server.

  • RHV Username

    • admin-user to authenticate against the RHV-Manager

    • Format: user@domain

    • Example: admin@internal

  • Password

    • The password to validate the RHV Username against the RHV-Manager

    • Format: String

    • Example: password

The Invalid Credentials error message will be displayed when the TrilioVault Appliance cannot reach the RHV Manager or the given credentials are incorrect.

Backup Storage Configuration information

This block asks for the necessary details to configure the Backup Storage.

  • Backup Storage

    • NFS or S3

In the case of NFS

  • NFS Export

    • Full path to the NFS Volume used as Backup Storage

    • Format: Comma separated list of NFS paths

    • Example: 10.10.100.20:/rhv_backup

  • NFS Options

    • Options used by the Trilio NFS client to connect to the NFS Volume

    • Format: NFS Options

    • Example: nolock,soft,timeo=180,intr

    Note: Make sure the NFS server supports NFSv3, as Trilio mounts the NFS share explicitly with NFSv3.

In the case of S3

  • Amazon or Ceph or Local Ceph

    • Amazon expects to connect to AWS services

    • Ceph allows connecting to any S3 bucket that is either not using SSL or a trusted SSL certificate

    • Local Ceph allows connecting to any S3 that is either not using SSL or a self-signed certificate

  • [Ceph and Local Ceph only] Use SSL

    • Activate when the S3 endpoint is secured

  • Access Key

    • Access Key necessary to login into the S3 storage

    • format: access key

    • example: SFHSAFHPFFSVVBSVBSZRF

  • Secret Key

    • Secret Key necessary to login into the S3 storage

    • format: secret key

    • example: bfAEURFGHsnvd3435BdfeF

  • Region

    • Configured Region for the S3 Bucket

      • use us-east-1 for Ceph and Local Ceph

    • format: String

    • example: us-east-1

  • [Ceph and Local Ceph only] Endpoint URL

    • URL to be used to reach and access the provided S3 compatible storage

    • format: URL

    • example: https://objects.trilio.io

  • Bucket Name

    • Name of the bucket to be used as Backup target

    • format: string

    • example: Trilio-backup

  • [Local Ceph with active SSL only] Cert

    • Upload area for the certificate to be used when connecting with the S3 storage

    • format: certificate

Trilio Certificate information

Trilio integrates into the RHV Cluster as an additional service, following the RHV communication paradigms. These require that the Trilio Appliance uses SSL and that the RHV-Manager trusts the Trilio Appliance.

Trilio offers two possibilities for providing the required certificates: either Trilio generates a completely fresh self-signed certificate, or a certificate is provided by the user.

In both cases the FQDN to which the certificate points is required.

Please see the example below for the case of a provided certificate.

  • FQDN

    • FQDN to reach the TrilioVault Appliance

    • Format: FQDN

    • Example: rhv-tvm.trilio.demo

  • Certificate

    • Certificate provided by the TrilioVault appliance upon request

    • Format: Certificate file

    • Example: rhv-tvm.crt

  • Private Key

    • Private Key used to verify the provided certificate

    • Format: private key file

    • Example: rhv-tvm.key

Trilio License

It is possible to provide the Trilio Appliance directly with the license file that it is going to use.

Trilio will not create any workloads or backups without a valid license file.

It is not necessary to provide the License file directly through the configurator. It is also possible to provide the license afterwards through the Trilio License tab in the Trilio dashboard.

The Trilio License tab can also be used to verify and update the currently installed license.

Submit and Configuration

After filling out every block of the configurator, hit the submit button to start the configuration.

The configurator asks one more time for confirmation before starting.

While the configurator is running, the live output from the ansible playbooks is shown. Some of the tasks can take multiple minutes to finish without an update to the output.

Wait until the configurator has either finished or failed.

Update or delete the local DNS entry

Once the Trilio Controller Cluster is successfully configured, the FQDN will have been changed to the one used for production.

It is recommended to delete the local host file entry created during setup and use a full-fledged DNS entry in a DNS server from now on.


Configure the Trilio appliance login banner

Trilio for RHV allows extending the Trilio GUI login page with a customized banner text.

This banner can be customized in the following ways:

  • Header

    • Text

    • Font Size

    • Font Color

  • Content

    • Text

    • Font Size

    • Font Color

The banner will always be centered on the page with normal line breaks (no block style).

Set the banner

To set the banner do the following:

  1. Log into the Trilio Appliance GUI using the admin account

  2. Click on admin in the upper right corner to open the submenu

  3. Click on "Update Compliance Styles"

  4. Edit the banner as required

    1. Texts accept standard UTF-8 characters

    2. Font Size needs to be an integer value

    3. Color can be chosen from the color board or provided by name or hex value

  5. Click Submit to activate the banner
