T4O 4.2 HF3 Release Notes

Prerequisites

To use this hotfix (4.2.HF3), the following prerequisites apply.

  1. Customers (except Canonical OpenStack) running OpenStack Ussuri or OpenStack Victoria need an already deployed and working T4O-4.2 GA.

  2. Customers (except Canonical OpenStack) running OpenStack Wallaby need to follow the T4O-4.2 GA deployment process and then upgrade directly to the 4.2.HF3 containers/packages. The high-level flow is below (a package-level command sketch follows this list):

    1. Deploy the T4O-4.2 GA appliance.

    2. Upgrade to 4.2.HF3 packages on the appliance.

    3. Kolla

      1. Deploy Trilio components using the 4.2.HF3 containers/packages on OpenStack Wallaby.

    4. OpenStack Ansible

      1. Deploy Trilio components on OpenStack Wallaby [this will deploy the 4.2 GA packages].

      2. Upgrade the TrilioVault packages to 4.2.HF3 on OpenStack Wallaby.

    5. Configure the Trilio appliance.

  3. Canonical users running OpenStack Ussuri or OpenStack Victoria can either upgrade (on top of 4.2 GA) using the Trilio upgrade documents or do a new deployment using the 4.2 deployment documents.

  4. Canonical users running OpenStack Wallaby need to do a new deployment using the 4.2 deployment documents.
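A minimal, illustrative sketch of the package-level upgrade referred to in the flow above, assuming the 4.2.HF3 package repositories are already configured on each node. The exact package set depends on the node role and distribution; the package names used here are taken from the Deliverables tables later in these notes.

  # On RPM-based nodes (pick the packages relevant to the node role):
  yum clean metadata
  yum upgrade -y python3-tvault-contego python3-dmapi python3-workloadmgrclient-el8

  # On DEB-based nodes:
  apt-get update
  apt-get install --only-upgrade -y tvault-contego dmapi python3-workloadmgrclient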

Release Scope

The current hotfix release targets the following:

  1. High-level qualification (via execution of the sanity and functional suites) of T4O with OpenStack Ussuri, Victoria, and Wallaby.

  2. Verification of Jira issues targeted for the 4.2 release.

  3. As part of the new process, delivery is via packages; end users need to perform a rolling upgrade on top of 4.2 GA.

Release Artifacts

Branch Tag and Container Tags

Note: Container images with the tag 4.2.64-hotfix-3-rhosp16.1 are not available for download from the Red Hat registry due to technical issues. Hence, it is recommended to use the latest tag, i.e. 4.2.64-hotfix-4-rhosp16.1.

Reference link for 4.2.64-hotfix-4-rhosp16.1: T4O 4.2 HF4 Release Notes


# | Tag Reference in Install/Upgrade Docs | Value | Comments
1 | 4.2 Hotfix triliovault-cfg-scripts branch name | hotfix-3-TVO/4.2 | Label on the Trilio repositories from which the required code is pulled for upgrades.
2 | 4.2 Hotfix RHOSP13 Container tag | 4.2.64-hotfix-3-rhosp13 | RHOSP13 container tag for 4.2.HF3
3 | 4.2 Hotfix RHOSP16.1 Container tag | 4.2.64-hotfix-3-rhosp16.1 | RHOSP16.1 container tag for 4.2.HF3
4 | 4.2 Hotfix RHOSP16.2 Container tag | 4.2.64-hotfix-3-rhosp16.2 | RHOSP16.2 container tag for 4.2.HF3
5 | 4.2 Hotfix Kolla Victoria Container tag | 4.2.64-hotfix-3-victoria | Kolla container tag for 4.2.HF3
6 | 4.2 Hotfix Kolla Wallaby Container tag | 4.2.64-hotfix-3-wallaby | Kolla container tag for 4.2.HF3
7 | 4.2 Hotfix TripleO Container tag | 4.2.64-hotfix-3-tripleo | TripleO Train CentOS 7 container tag for 4.2.HF3
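For reference, a minimal sketch of pulling the upgrade code at the branch label above; the GitHub URL is an assumption based on the public Trilio repositories and should be replaced with the repository referenced in your install/upgrade document.

  # Clone the configuration scripts at the 4.2.HF3 label
  git clone -b hotfix-3-TVO/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git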

Resolved Issues

  1. Restore failing while creating a security group

  2. Tvault configuration failing with build 4.1.19

  3. Configuration fails with pcs auth

  4. privsep Unhandled error: ConnectionRefusedError

  5. Tvault configuration failing with build 4.1.19

  6. Datamover container restarting

  7. Deploying trilio-wlm 4.2 directly on the machine gets stuck at workloadmgr package installation

  8. Trilio Data Mover pods stuck in a reboot loop after a stack update on RHOSP 13

  9. Reassigning a workload from a deleted project fails

  10. Reassign of workload from deleted project fails (SFDC #2821)

  11. default_tvault_dashboard_tvo-tvm not available after yum update

  12. Workload policy shows an incorrect start time

  13. tvault-config service is in a crash loop on 2 out of 3 nodes of the T4O cluster

  14. Trilio core functionality operations do not perform as expected when the master T4O node is powered off

  15. Backup stuck in the uploading phase

  16. Backup failed at snapshot_network_topology

Deliverables

# | Package/Container Names | Package Kind | Package Version/Container Tags
1 | contego | deb | 4.2.64
2 | contegoclient | rpm | 4.2.64-4.2
3 | contegoclient | deb | 4.2.64
4 | contegoclient | python | 4.2.64
5 | dmapi | rpm | 4.2.64-4.2
6 | dmapi | deb | 4.2.64
7 | puppet-triliovault | rpm | 4.2.64-4.2
8 | python3-contegoclient | deb | 4.2.64
9 | python3-contegoclient-el8 | rpm | 4.2.64-4.2
10 | python3-dmapi | deb | 4.2.64
11 | python3-dmapi | rpm | 4.2.64-4.2
12 | python3-s3-fuse-plugin | deb | 4.2.64
13 | python3-s3fuse-plugin | rpm | 4.2.64-4.2
14 | python3-trilio-fusepy | rpm | 3.0.1-1
15 | python-s3fuse-plugin-cent7 | rpm | 4.2.64-4.2
16 | s3fuse | python | 4.2.64
17 | s3-fuse-plugin | deb | 4.2.64
18 | trilio-fusepy | rpm | 3.0.1-1
19 | 4.2-RHOSP13-CONTAINER | Containers | 4.2.64-hotfix-3-rhosp13
20 | 4.2-RHOSP16.1-CONTAINER | Containers | 4.2.64-hotfix-3-rhosp16.1
21 | 4.2-RHOSP16.2-CONTAINER | Containers | 4.2.64-hotfix-3-rhosp16.2
22 | 4.2-KOLLA-CONTAINER Victoria | Containers | 4.2.64-hotfix-3-victoria
23 | 4.2-KOLLA-CONTAINER Wallaby | Containers | 4.2.64-hotfix-3-wallaby
24 | 4.2-TRIPLEO-CONTAINER | Containers | 4.2.64-hotfix-3-tripleo
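For reference, a minimal sketch of pulling and verifying one of the container deliverables above; the registry and image name are illustrative placeholders, so use the exact image paths given in the deployment guide for your distribution.

  # Kolla (Docker) example; <registry>/<image> is a placeholder for the documented Trilio image path
  docker pull <registry>/<image>:4.2.64-hotfix-3-victoria
  docker images | grep hotfix-3

  # RHOSP 16.x (Podman) example
  podman images | grep 4.2.64-hotfix-3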

Package Added/Changed

# | Package/Container Names | Package Kind | Package/Container Version/Tags
1 | python3-tvault-contego | deb | 4.2.64.7
2 | tvault-contego | deb | 4.2.64.7
3 | python3-tvault-contego | rpm | 4.2.64.7-4.2
4 | tvault-contego | rpm | 4.2.64.7-4.2
5 | workloadmgr | deb | 4.2.64.6
6 | workloadmgr | python | 4.2.64.6
7 | tvault_configurator | python | 4.2.64.6
8 | tvault-horizon-plugin | deb | 4.2.64.1
9 | tvault-horizon-plugin | rpm | 4.2.64.1-4.2
10 | python3-tvault-horizon-plugin | deb | 4.2.64.1
11 | python3-tvault-horizon-plugin-el8 | rpm | 4.2.64.1-4.2
12 | python3-workloadmgrclient | deb | 4.2.64.1
13 | python3-workloadmgrclient-el8 | rpm | 4.2.64.1-4.2
14 | python-workloadmgrclient | deb | 4.2.64.1
15 | workloadmgrclient | python | 4.2.64.1
16 | workloadmgrclient | rpm | 4.2.64.1-4.2
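For reference, a minimal sketch of verifying the installed versions of the changed packages above after the rolling upgrade; which packages are present on a given node depends on its role.

  # RPM-based nodes
  rpm -qa | grep -E 'tvault|workloadmgr|dmapi'

  # DEB-based nodes
  dpkg -l | grep -E 'tvault|workloadmgr|dmapi'

  # Python-delivered components (e.g. workloadmgr, workloadmgrclient)
  pip3 show workloadmgr workloadmgrclient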

Known Issues

  1. Encrypted volume backup fails with an SSO user.

     Workaround: follow the steps below if T4O is reconfigured with the 'creator' role (a combined command sketch follows this list).

       1. Log in to any T4O node.
       2. Source the rc file of the affected user.
       3. Run the command below to get the trust ID:
          workloadmgr trust-list
       4. To create an encrypted workload, delete the existing trust, which was created with a role other than 'creator':
          workloadmgr trust-delete <TrustID>
       5. Create a new trust with the 'creator' role:
          workloadmgr trust-create creator
       6. Now create the encrypted workload.

  2. An additional security rule gets added to a shared security group after restore.

     This is a known issue in 4.2.HF3 and is targeted for 4.2.HF4.

  3.

  4. [Encrypted] After restoring incremental snapshots, the CentOS instance does not boot.

     There is no workaround as such; the customer can only restore an already taken full snapshot.

  5. [Intermittent] In-place restore doesn't work for ext3 and ext4 file systems on Canonical bionic-queens.

     After an in-place restore, the instance has data from the latest snapshot for ext3 and ext4 file systems, even though the in-place restore was performed from a previous full/incremental snapshot.

  6. Performance difference between encrypted and unencrypted workloads/snapshots.

     With encryption in place, users will see some performance degradation across all operations done by Trilio. The stats below are from trials in the Trilio lab.

     Snapshot time for an LVM volume-booted CentOS VM (disk size 200 GB; total data including OS: ~108 GB):
       1. Unencrypted workload: 62 min
       2. Encrypted workload: 82 min

     Snapshot time for a Windows image-booted VM (no additional data except the OS: ~12 GB):
       1. Unencrypted workload: 10 min
       2. Encrypted workload: 18 min

  7. get-importworkload-list and get-orphaned-workloads-list show the wrong list of workloads.

     Customers need to use the --project_id option with the import-workloads-list CLI to get the list of workloads that can be imported for a particular OpenStack project:
       workloadmgr workload-get-importworkloads-list --project_id <project_id>

  8. File search does not display files present in a logical volume on a volume group (LVM).

     If the LVM partition is created using the fdisk utility, then file search will work.

  9. Retention not working after a snapshot mount/unmount operation.

     Workaround:
       1. List the workload ID for which retention is failing due to the ownership change issue.
       2. Run chown -R nova:nova <WL_DIR>.
       3. After running the above command, the snapshot ownership should show as nova:nova.

  10. [Barbican] File search on an encrypted workload returns empty data.

      By default, if the root directory does not have read permission for the group, file search will fail, as it runs as the nova user.

  11. A single corrupted snapshot impacts the import of all other valid snapshots, causing file search failure.

      As per the current import design flow, if any single workload is corrupted (in this case a few DB files were missing), the other good workloads get impacted during import, but the import operation does not stop or fail. The respective wlm-api logs should show the error.

      To mitigate the impact, the identified corrupted workload should be manually removed from the target backend, followed by a reinitialize and import.

  12. Test email error message should be in a readable and understandable format.

      NA

  13. File search will not work on Canonical if wlm is running in a container (an LXC container in this case).

  14. Unable to create an encrypted workload if T4O is reconfigured with the creator trustee role.

      If T4O is initially configured with member as the trustee role and is then reconfigured with creator as the trustee role, this failure occurs.

      Workaround: follow the steps below if T4O is reconfigured with the 'creator' role (see the combined command sketch after this list).
        1. Log in to any T4O node.
        2. Source the rc file of the affected user.
        3. Run the command to get the trust ID: workloadmgr trust-list
        4. Delete the existing trust, which was created with a role other than 'creator': workloadmgr trust-delete <TrustID>
        5. Create a new trust with the 'creator' role: workloadmgr trust-create creator
        6. Now create the encrypted workload.
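For convenience, a combined command-level sketch of the trust recreation workaround referenced in known issues 1 and 14; the rc file path is illustrative and should be replaced with the rc file of the affected user.

  # On any T4O node
  source /home/stack/<user>-openrc.sh   # illustrative path to the affected user's rc file
  workloadmgr trust-list                # note the TrustID created with a role other than 'creator'
  workloadmgr trust-delete <TrustID>
  workloadmgr trust-create creator
  # then retry creating the encrypted workload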
