Trilio for Red Hat Virtualization 4.2

Post Installation Health-Check

After the installation and configuration of TrilioVault for RHV has succeeded, the following steps can be performed to verify that the TrilioVault installation is healthy.

Verify the TrilioVault Controller Cluster

The TrilioVault Controller Cluster can be verified from the base VMs themselves using kubectl commands.

[root@master-node1 ~]# kubectl get pods
NAME                                           READY   STATUS    RESTARTS     AGE
tvr-ingress-nginx-controller-6d7b7c47f-zzhxq   1/1     Running   1 (8d ago)   22d
tvr-metallb-controller-79757c94b7-66r5j        1/1     Running   1 (8d ago)   22d
tvr-metallb-speaker-zxdgh                      1/1     Running   1 (8d ago)   22d
[root@master-node1 ~]# kubectl get pods -n trilio
NAME                               READY   STATUS      RESTARTS     AGE
config-api-55879c774c-dmcqt        1/1     Running     1            22d
mariadb-0                          1/1     Running     1            22d
rabbitmq-0                         1/1     Running     1 (8d ago)   22d
rabbitmq-1                         1/1     Running     1 (8d ago)   22d
rabbitmq-2                         1/1     Running     1 (8d ago)   22d
rabbitmq-ha-policy-h984m           0/1     Completed   1            22d
rhvconfigurator-5f65f79897-tprhx   1/1     Running     1 (8d ago)   22d
wlm-api-7df595ccd7-l2vpk           1/1     Running     1 (8d ago)   15d
wlm-scheduler-6cff68b7b8-lm666     1/1     Running     1 (8d ago)   15d
wlm-workloads-nb8f9                1/1     Running     1            15d
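
Beyond the pod status it is worth confirming that all cluster nodes are Ready and that no pod in any namespace is stuck outside the Running or Completed states. The following is a minimal sketch; node and pod names will differ per environment:

# all nodes should report the Ready status
kubectl get nodes -o wide
# list any pod that is not Running or Completed; only the header line should remain
kubectl get pods -A | grep -Ev 'Running|Completed'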

Verify API connectivity from the RHV-Manager

The RHV-Manager makes all API calls towards the TrilioVault Appliance. It is therefore helpful to perform a quick API connectivity check using curl.

The following curl command lists the available workload-types and verifies that the connection is available and working:

curl -k -XGET https://30.30.1.11:8780/v1/admin/workload_types/detail -H "Content-Type: application/json" -H "X-OvirtAuth-User: admin@internal" -H "X-OvirtAuth-Password: password"
######
{"workload_types": [{"status": "available", "user_id": "admin@internal", "name": "Parallel", "links": [{"href": "https://myapp/v1/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/2ddd528d-c9b4-4d7e-8722-cc395140255a", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:51.000000", "updated_at": "2020-04-02T15:38:51.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "2ddd528d-c9b4-4d7e-8722-cc395140255a", "description": "Parallel workload that snapshots VM in the specified order"}, {"status": "available", "user_id": "admin@internal", "name": "Serial", "links": [{"href": "https://myapp/v1/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "self"}, {"href": "https://myapp/admin/workloadtypes/f82ce76f-17fe-438b-aa37-7a023058e50d", "rel": "bookmark"}], "created_at": "2020-04-02T15:38:47.000000", "updated_at": "2020-04-02T15:38:47.000000", "metadata": [], "is_public": true, "project_id": "admin", "id": "f82ce76f-17fe-438b-aa37-7a023058e50d", "description": "Serial workload that snapshots VM in the specified order"}]}

Verify the ovirt-imageio services are up and running

TrilioVault extends the already existing ovirt-imageio services. The installation of these extensions already checks that the ovirt-imageio services come up, but it is still good practice to verify them again afterwards:

RHV 4.3.X

On the RHV-Manager check the ovirt-imageio-proxy service:

systemctl status ovirt-imageio-proxy
######
● ovirt-imageio-proxy.service - oVirt ImageIO Proxy
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 05:05:25 UTC; 2 weeks 1 days ago
 Main PID: 1834 (python)
   CGroup: /system.slice/ovirt-imageio-proxy.service
           └─1834 bin/python proxy/ovirt-imageio-proxy

On the RHV-Host check the ovirt-imageio-daemon service:

systemctl status ovirt-imageio-daemon
######
● ovirt-imageio-daemon.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-04-08 04:40:50 UTC; 2 weeks 1 days ago
 Main PID: 1442 (python)
    Tasks: 4
   CGroup: /system.slice/ovirt-imageio-daemon.service
           └─1442 /opt/ovirt-imageio/bin/python daemon/ovirt-imageio-daemon

RHV 4.4.X

On the RHV-Manager check the ovirt-imageio service:

systemctl status ovirt-imageio
######
● ovirt-imageio.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; vend>
   Active: active (running) since Tue 2021-03-02 09:18:30 UTC; 5 months 11 days>
 Main PID: 1041 (ovirt-imageio)
    Tasks: 3 (limit: 100909)
   Memory: 22.0M
   CGroup: /system.slice/ovirt-imageio.service
           └─1041 /usr/libexec/platform-python -s /usr/bin/ovirt-imageio

On the RHV-Host check the ovirt-imageio service:

systemctl status ovirt-imageio
######
● ovirt-imageio.service - oVirt ImageIO Daemon
   Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio.service; enabled; vend>
   Active: active (running) since Tue 2021-03-02 09:01:57 UTC; 5 months 11 days>
 Main PID: 51766 (ovirt-imageio)
    Tasks: 4 (limit: 821679)
   Memory: 19.8M
   CGroup: /system.slice/ovirt-imageio.service
           └─51766 /usr/libexec/platform-python -s /usr/bin/ovirt-imageio
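
For a quick scripted check, systemctl is-active can be used instead of the full status output. This sketch uses the RHV 4.4.X unit name; on RHV 4.3.X use ovirt-imageio-proxy on the RHV-Manager and ovirt-imageio-daemon on the RHV-Hosts:

# prints "active" and exits with code 0 while the service is running
systemctl is-active ovirt-imageio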

Verify the NFS Volume is correctly mounted

TrilioVault mounts the NFS Backup Target to the TrilioVault Appliance and RHV-Hosts.

To verify that these are correctly mounted, it is recommended to perform the following checks.

First, run df -h and look for a mount at /var/triliovault-mounts/<hash-value>:

df -h
######
Filesystem                                      Size  Used Avail Use% Mounted on
devtmpfs                                         63G     0   63G   0% /dev
tmpfs                                            63G   16K   63G   1% /dev/shm
tmpfs                                            63G   35M   63G   1% /run
tmpfs                                            63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/rhvh-rhvh--4.3.8.1--0.20200126.0+1  7.1T  3.7G  6.8T   1% /
/dev/sda2                                       976M  198M  712M  22% /boot
/dev/mapper/rhvh-var                             15G  1.9G   12G  14% /var
/dev/mapper/rhvh-home                           976M  2.6M  907M   1% /home
/dev/mapper/rhvh-tmp                            976M  2.6M  907M   1% /tmp
/dev/mapper/rhvh-var_log                        7.8G  230M  7.2G   4% /var/log
/dev/mapper/rhvh-var_log_audit                  2.0G   17M  1.8G   1% /var/log/audit
/dev/mapper/rhvh-var_crash                      9.8G   37M  9.2G   1% /var/crash
30.30.1.4:/rhv_backup                           2.0T  5.3G  1.9T   1% /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw
30.30.1.4:/rhv_data                             2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__data
tmpfs                                            13G     0   13G   0% /run/user/0
30.30.1.4:/rhv_iso                              2.0T   37G  2.0T   2% /rhev/data-center/mnt/30.30.1.4:_rhv__iso
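
The <hash-value> directory name below /var/triliovault-mounts is the base64 encoding of the NFS share path, so the expected mount point can be derived from the configured backup target. For the example share used above:

echo -n "30.30.1.4:/rhv_backup" | base64
######
MzAuMzAuMS40Oi9yaHZfYmFja3Vw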

Secondly, change into the mounted directory and do a read/write/delete test as the user vdsm:kvm (uid = 36 / gid = 36) from both the TrilioVault Appliance and the RHV-Host.

su vdsm
######
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ touch foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
-rw-r--r--  1 vdsm kvm    0 Apr 23 12:25 foo
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ rm foo
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$ ll
total 24
drwxr-xr-x  3 vdsm kvm 4096 Apr  2 17:27 contego_tasks
drwxr-xr-x  2 vdsm kvm 4096 Apr  2 15:38 test-cloud-id
drwxr-xr-x 10 vdsm kvm 4096 Apr 22 11:00 workload_1540698c-8e22-4dd1-a898-8f49cd1a898c
drwxr-xr-x  9 vdsm kvm 4096 Apr  8 15:21 workload_51517816-6d5a-4fce-9ac7-46ee1e09052c
drwxr-xr-x  6 vdsm kvm 4096 Apr 22 11:30 workload_77fb42d2-8d34-4b8d-bfd5-4263397b636c
drwxr-xr-x  5 vdsm kvm 4096 Apr 23 06:15 workload_85bf16ed-d4fd-49a6-a753-98c5ca6e906b
[vdsm@rhv-tvm MzAuMzAuMS40Oi9yaHZfYmFja3Vw]$
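
The same read/write/delete test can also be wrapped into a single non-interactive command, which is convenient when checking several RHV-Hosts. This is a sketch only; the mount-point directory is taken from the example above and the test file name is purely hypothetical:

# hypothetical test file name; adjust the mount-point directory to your backup target
su -s /bin/bash vdsm -c 'cd /var/triliovault-mounts/MzAuMzAuMS40Oi9yaHZfYmFja3Vw && touch tvr-health-check && rm tvr-health-check && echo OK'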