# T4O 4.2 HF3 Release Notes

## Prerequisites

*To use this hotfix (4.2.HF3), the following prerequisites apply.*

1. Customers (*except Canonical Openstack*) running Openstack Ussuri or Openstack Victoria need an already deployed and working T4O-4.2 GA.
2. Customers (*except Canonical Openstack*) running Openstack Wallaby need to follow the T4O-4.2 GA deployment process and upgrade directly to the 4.2.HF3 containers/packages. The high-level flow is below:
   1. *Deploy the T4O-4.2 GA appliance.*
   2. *Upgrade to 4.2.HF3 packages on the appliance.*
   3. **Kolla**
      1. *Deploy Trilio components via 4.2.HF3 containers/packages on Openstack Wallaby.*
   4. **Openstack Ansible**
      1. *Deploy Trilio components on Openstack Wallaby \[This will deploy 4.2 GA packages]*
      2. Upgrade *TrilioVault packages* to *4.2.HF3 on Openstack Wallaby.*
   5. Configure the Trilio appliance.
3. Canonical users running Openstack Ussuri or Openstack Victoria can either upgrade (*on top of 4.2 GA*) using the Trilio upgrade documents or do a new deployment using the 4.2 deployment documents.
4. Canonical users running Openstack Wallaby need to do a new deployment using the 4.2 deployment documents.

## Release Scope

The current hotfix release targets the following:

1. High-level qualification (*via execution of the Sanity & Functional suites*) of T4O with Ussuri, Victoria & Wallaby Openstack.
2. Verification of Jira issues targeted for the 4.2 release.
3. As part of the new process, delivery is via packages; end users need to perform a rolling upgrade on top of 4.2 GA.

## Release Artifacts <a href="#id-3.-branch-name-and-containers-tags-for-trilio-components-deployment" id="id-3.-branch-name-and-containers-tags-for-trilio-components-deployment"></a>

| <p><br></p> | **Artifacts** | **Reference**                                                                                                                                      |
| ----------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1           | Release Date  | Aug 25, 2022                                                                                                                                       |
| 2           | Debian URL    | deb \[trusted=yes] [https://apt.fury.io/triliodata-4-2/](https://apt.fury.io/triliodata-4-2/) /                                                    |
| 3           | RPM URL       | [baseurl=http://trilio:XpmkpMFviqSe@repos.trilio.io:8283/triliodata-4-2/yum/](http://trilio:XpmkpMFviqSe@repos.trilio.io:8283/triliodata-4-2/yum/) |
| 4           | PIP URL       | <https://pypi.fury.io/triliodata-4-2/>                                                                                                             |
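
The Debian and RPM URLs above would typically be consumed via repository configuration along these lines (a sketch only: the file paths and the repo section name are assumptions, not taken from this document):

```
# /etc/apt/sources.list.d/fury.list  (Debian/Ubuntu; file path is an assumption)
deb [trusted=yes] https://apt.fury.io/triliodata-4-2/ /

# /etc/yum.repos.d/trilio.repo  (RHEL/CentOS; file path and section name are assumptions)
[triliodata-4-2]
name=Trilio 4.2 Hotfix Repository
baseurl=http://trilio:XpmkpMFviqSe@repos.trilio.io:8283/triliodata-4-2/yum/
enabled=1
gpgcheck=0
```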

## Branch Tag and Containers Tags <a href="#id-3.-branch-name-and-containers-tags-for-trilio-components-deployment" id="id-3.-branch-name-and-containers-tags-for-trilio-components-deployment"></a>

{% hint style="info" %}
***Note**: Container images with tag 4.2.64-hotfix-3-rhosp16.1 are not available for download from the RedHat registry due to technical issues. Hence, it is recommended to use the latest tag, i.e. 4.2.64-hotfix-4-rhosp16.1.*

*Ref link for 4.2.64-hotfix-4-rhosp16.1 :* [tvo-4.2-hf4-release-notes](https://docs.trilio.io/openstack/tvo-4.2/triliovault-4.2-release-notes/tvo-4.2-hf4-release-notes "mention")
{% endhint %}

| <p><br></p> | **Tag Reference in Install/Upgrade Docs**      | **Value**                 | Comments                                                                                  |
| -------- | ---------------------------------------------- | ------------------------- | ----------------------------------------------------------------------------------------- |
| 1        | 4.2 Hotfix triliovault-cfg-scripts branch name | hotfix-3-TVO/4.2          | Branch of the Trilio repositories from which the required code is pulled for upgrades.     |
| 2        | 4.2 Hotfix RHOSP13 Container tag               | 4.2.64-hotfix-3-rhosp13   | RHOSP13 Container tag for 4.2.HF3                                                         |
| 3        | 4.2 Hotfix RHOSP16.1 Container tag             | 4.2.64-hotfix-3-rhosp16.1 | RHOSP16.1 Container tag for 4.2.HF3                                                       |
| 4        | 4.2 Hotfix RHOSP16.2 Container tag             | 4.2.64-hotfix-3-rhosp16.2 | RHOSP16.2 Container tag for 4.2.HF3                                                       |
| 5        | 4.2 Hotfix Kolla Victoria Container tag        | 4.2.64-hotfix-3-victoria  | Kolla Container tag against 4.2.HF3                                                       |
| 6        | 4.2 Hotfix Kolla Wallaby Container tag         | 4.2.64-hotfix-3-wallaby   | Kolla Container tag against 4.2.HF3                                                       |
| 7        | 4.2 Hotfix TripleO Container tag               | 4.2.63-hotfix-3-tripleo   | TripleO Train CentOS 7 Container tag for 4.2.HF3                                          |

## Resolved Issues

| **Summary**                                                                                                  |
| ------------------------------------------------------------------------------------------------------------ |
| Restore failing while creating security group                                                                |
| Tvault configuration failing with build 4.1.19                                                               |
| Configuration fails with pcs auth                                                                            |
| privsep Unhandled error: ConnectionRefusedError                                                              |
| Datamover container restarting                                                                               |
| Deploying trilio-wlm 4.2 directly on the machine gets stuck at workloadmgr package installation.             |
| trilio data mover pods stuck in reboot loop post stack update on RHOSP 13                                    |
| Reassigning a workload from a deleted project fails                                                          |
| Reassign of Workload from Deleted Project Fails SFDC #2821                                                   |
| default\_tvault\_dashboard\_tvo-tvm not available after yum update                                           |
| workload policy shows incorrect start time                                                                   |
| tvault-config service is in the crash loop on 2 out of 3 nodes T4O cluster                                   |
| Trilio core functionality operations do not perform as expected when the master T4O node is powered off.     |
| backup stuck in uploading phase                                                                              |
| Backup failed at snapshot\_network\_topology                                                                 |

## Deliverables

| <p><br></p> | **Package/Container Names**  | **Package Kind** | **Package Version/Container Tags** |
| ----------- | ---------------------------- | ---------------- | ---------------------------------- |
| 1           | contego                      | deb              | 4.2.64                             |
| 2           | contegoclient                | rpm              | 4.2.64-4.2                         |
| 3           | contegoclient                | deb              | 4.2.64                             |
| 4           | contegoclient                | python           | 4.2.64                             |
| 5           | dmapi                        | rpm              | 4.2.64-4.2                         |
| 6           | dmapi                        | deb              | 4.2.64                             |
| 7           | puppet-triliovault           | rpm              | 4.2.64-4.2                         |
| 8           | python3-contegoclient        | deb              | 4.2.64                             |
| 9           | python3-contegoclient-el8    | rpm              | 4.2.64-4.2                         |
| 10          | python3-dmapi                | deb              | 4.2.64                             |
| 11          | python3-dmapi                | rpm              | 4.2.64-4.2                         |
| 12          | python3-s3-fuse-plugin       | deb              | 4.2.64                             |
| 13          | python3-s3fuse-plugin        | rpm              | 4.2.64-4.2                         |
| 14          | python3-trilio-fusepy        | rpm              | 3.0.1-1                            |
| 15          | python-s3fuse-plugin-cent7   | rpm              | 4.2.64-4.2                         |
| 16          | s3fuse                       | python           | 4.2.64                             |
| 17          | s3-fuse-plugin               | deb              | 4.2.64                             |
| 18          | trilio-fusepy                | rpm              | 3.0.1-1                            |
| 19          | 4.2-RHOSP13-CONTAINER        | Containers       | 4.2.64-hotfix-3-rhosp13            |
| 20          | 4.2-RHOSP16.1-CONTAINER      | Containers       | 4.2.64-hotfix-3-rhosp16.1          |
| 21          | 4.2-RHOSP16.2-CONTAINER      | Containers       | 4.2.64-hotfix-3-rhosp16.2          |
| 22          | 4.2-KOLLA-CONTAINER Victoria | Containers       | 4.2.64-hotfix-3-victoria           |
| 23          | 4.2-KOLLA-CONTAINER Wallaby  | Containers       | 4.2.64-hotfix-3-wallaby            |
| 24          | 4.2-TRIPLEO-CONTAINER        | Containers       | 4.2.64-hotfix-3-tripleo            |

## Package Added/Changed

|    | **Package/Container Names**       | **Package Kind** | **Package/Container Version/Tags** |
| -- | --------------------------------- | ---------------- | ---------------------------------- |
| 1  | python3-tvault-contego            | deb              | 4.2.64.7                           |
| 2  | tvault-contego                    | deb              | 4.2.64.7                           |
| 3  | python3-tvault-contego            | rpm              | 4.2.64.7-4.2                       |
| 4  | tvault-contego                    | rpm              | 4.2.64.7-4.2                       |
| 5  | workloadmgr                       | deb              | 4.2.64.6                           |
| 6  | workloadmgr                       | python           | 4.2.64.6                           |
| 7  | tvault\_configurator              | python           | 4.2.64.6                           |
| 8  | tvault-horizon-plugin             | deb              | 4.2.64.1                           |
| 9  | tvault-horizon-plugin             | rpm              | 4.2.64.1-4.2                       |
| 10 | python3-tvault-horizon-plugin     | deb              | 4.2.64.1                           |
| 11 | python3-tvault-horizon-plugin-el8 | rpm              | 4.2.64.1-4.2                       |
| 12 | python3-workloadmgrclient         | deb              | 4.2.64.1                           |
| 13 | python3-workloadmgrclient-el8     | rpm              | 4.2.64.1-4.2                       |
| 14 | python-workloadmgrclient          | deb              | 4.2.64.1                           |
| 15 | workloadmgrclient                 | python           | 4.2.64.1                           |
| 16 | workloadmgrclient                 | rpm              | 4.2.64.1-4.2                       |

## Known Issues

| <p><br></p> | **Summary**                                                                                          | **Workaround/Comments (if any)**                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| ----------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1           | encrypted volume backup fails with SSO user                                                          | <p>Follow the steps below if T4O is reconfigured with the ‘creator’ role:</p><ol><li>Log in to any T4O node.</li><li>Source the particular user rc file.</li><li>Get the trust ID: workloadmgr trust-list</li><li>To create an encrypted workload, delete the existing trust that was created with a role other than ‘creator’: workloadmgr trust-delete \<TrustID></li><li>Create a new trust with the ‘creator’ role: workloadmgr trust-create creator</li><li>Create the encrypted workload.</li></ol>                                                                                                                                                                                                                   |
| 2           | additional security rule is getting added in shared security group after restore                     | Known issue in 4.2.HF3; a fix is targeted for 4.2.HF4.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| 4           | \[encrypted] Post restore of incremental snapshots centos instance is not getting booted             | <p>There is no workaround as such; the customer can only restore an already taken full snapshot.</p>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| 5           | \[Intermittent] In-place restore doesn't work for ext3 & ext4 file system in canonical bionic-queens | <p>In-place restore doesn't work reliably for ext3 & ext4 file systems on Canonical bionic-queens.</p><p>After an in-place restore of a previous full/incremental snapshot, the instance still has data from the latest snapshot for ext3 & ext4 file systems.</p>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| 6           | Performance difference between encrypted & unencrypted WL/snapshot                                   | <p>With encryption in place, user would see some performance degradation against all operations done by Trilio.</p><p><strong>Stats below as per trials in Trilio Lab</strong></p><p><em>Snapshot time for LVM Volume Booted CentOS VM. Disk size 200 GB; total data including OS : \~108GB</em></p><ol><li><em>For unencrypted WL : 62 min</em></li><li><em>For encrypted WL : 82 min</em></li></ol><p><em>Snapshot time for Windows Image booted VM. No additional data except OS. : \~12 GB</em></p><ol><li><em>For unencrypted WL : 10 min</em></li><li><em>For encrypted WL : 18 min</em></li></ol>                                                                                                                                                    |
| 7           | get-importworkload-list and get-orphaned-workloads-list are showing the wrong list of WLs            | <p>The customer needs to use the --project option with the importworkloads-list CLI to get the list of WLs that can be imported into a particular Openstack.</p><p><em>workloadmgr workload-get-importworkloads-list --project\_id \<project\_id></em></p>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| 8           | File-search not displaying files present in logical vol on volume group (*LVM*)                      | If the LVM partition is created using the fdisk utility, then file search will work.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| 9           | Retention not working post snapshot mount/unmount operation                                          | <ol><li>List the workload ID for which retention is failing due to the ownership change issue.</li><li>Run the command chown -R nova:nova \<WL\_DIR></li><li>After running the above command, the snapshot ownership should be nova:nova.</li></ol>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| 10          | \[Barbican]File search on encrypted workload returns empty data                                      | If the root directory does not have read permission for the group, file search will fail because it runs as the nova user.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| 11          | Single corrupted snapshot impacts import of all other valid snapshots causing file search failure    | <p>As per the current import design flow, if any single WL is corrupted (<em>in the current case a few DB files were missing</em>), other good workloads get impacted during import, but the import operation does not stop or fail. The respective wlm-api logs should show the error.</p><p>To mitigate the impact, the identified corrupted WL should be manually removed from the target backend, followed by reinitialize and import.</p>                                                                                                                                                                                                                |
| 12          | Test email error message should be in readable and understandable format                             | NA                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| 13          | File search will not work on Canonical if wlm is running on container (lxc container in this case)   | <p>NA</p><p>LP Bug : <a href="https://bugs.launchpad.net/trilio/+bug/1961149">https://bugs.launchpad.net/trilio/+bug/1961149</a></p>                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| 14          | Unable to create encrypted workload if T4O reconfigured with creator trustee role.                   | <p>If T4O is initially configured with <em>member</em> as the trustee role and the user then reconfigures it with <em><strong>creator</strong></em> as the trustee role, this failure occurs.<br><strong>Workaround</strong>: Follow the steps below if T4O is reconfigured with the ‘creator’ role:</p><ol><li>Log in to any T4O node.</li><li>Source the particular user rc file.</li><li>Get the trust ID (workloadmgr trust-list).</li><li>Delete the existing trust that was created with a role other than ‘creator’ (workloadmgr trust-delete \<TrustID>).</li><li>Create a new trust with the ‘creator’ role (workloadmgr trust-create creator).</li><li>Create the encrypted workload.</li></ol> |
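
The trust recreation workaround referenced in issues 1 and 14 above can be sketched as the following CLI sequence (the rc file name and \<TrustID> are placeholders to be substituted with actual values):

```
# On any T4O node, as the affected user:
source openstackrc                  # source the particular user's rc file (file name is a placeholder)
workloadmgr trust-list              # note the TrustID created with a role other than 'creator'
workloadmgr trust-delete <TrustID>  # remove the old trust
workloadmgr trust-create creator    # recreate the trust with the 'creator' role
# The encrypted workload can now be created.
```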
