T4O 4.2.6 Release Notes
Release Versions
Packages
Deliverables against TVO-4.2.6
| Package/Container Names | Package Kind | Package Version/Container Tags |
| --- | --- | --- |
| contego | deb | 4.2.64 |
| contegoclient | rpm | 4.2.64-4.2 |
| contegoclient | deb | 4.2.64 |
| contegoclient | python | 4.2.64 |
| puppet-triliovault | rpm | 4.2.64-4.2 |
| python3-contegoclient | deb | 4.2.64 |
| python3-contegoclient-el8 | rpm | 4.2.64-4.2 |
| python3-trilio-fusepy | rpm | 3.0.1-1 |
| trilio-fusepy | rpm | 3.0.1-1 |
| python3-workloadmgrclient | deb | 4.2.64.1 |
| python3-workloadmgrclient-el8 | rpm | 4.2.64.1-4.2 |
| python-workloadmgrclient | deb | 4.2.64.1 |
| workloadmgrclient | python | 4.2.64.1 |
| workloadmgrclient | rpm | 4.2.64.1-4.2 |
| dmapi | python | 4.2.64.1 |
| dmapi | rpm | 4.2.64.1-4.2 |
| dmapi | deb | 4.2.64.1 |
| python3-dmapi | deb | 4.2.64.1 |
| python3-dmapi | rpm | 4.2.64.1-4.2 |
| python3-s3-fuse-plugin | deb | 4.2.64.1 |
| python3-tvault-horizon-plugin | deb | 4.2.64.3 |
| s3-fuse-plugin | deb | 4.2.64.1 |
| tvault-horizon-plugin | deb | 4.2.64.3 |
| s3fuse | python | 4.2.64.2 |
| tvault-horizon-plugin | python | 4.2.64.2 |
| python-s3fuse-plugin-cent7 | rpm | 4.2.64.1-4.2 |
| python3-s3fuse-plugin | rpm | 4.2.64.1-4.2 |
| python3-tvault-horizon-plugin-el8 | rpm | 4.2.64.3-4.2 |
| tvault-horizon-plugin | rpm | 4.2.64.3-4.2 |
The following packages were changed or added in the current release:
| Package/Container Names | Package Kind | Package/Container Version/Tags |
| --- | --- | --- |
| tvault-contego | deb | 4.2.64.13 |
| python3-tvault-contego | rpm | 4.2.64.13-4.2 |
| tvault-contego | rpm | 4.2.64.13-4.2 |
| tvault-contego | python | 4.2.64.6 |
| tvault_configurator | python | 4.2.64.15 |
| workloadmgr | python | 4.2.64.11 |
| workloadmgr | deb | 4.2.64.11 |
| python3-tvault-contego | deb | 4.2.64.13 |
Containers and Gitbranch
| Name | Tag |
| --- | --- |
| Gitbranch | TVO/4.2.6 |
| RHOSP13 containers | 4.2.6-rhosp13 |
| RHOSP16.1 containers | 4.2.6-rhosp16.1 |
| RHOSP16.2 containers | 4.2.6-rhosp16.2 |
| Kolla Ansible Victoria containers | 4.2.6-victoria |
| Kolla Ansible Wallaby containers | 4.2.6-wallaby |
| Kolla Ansible Yoga containers | 4.2.6-yoga |
| TripleO containers | 4.2.6-tripleo |
Changelog
Verification of Jira issues targeted for the 4.2.6 release
Cohesity NFS/S3 storage backend support
Fixed Bugs and issues
Mounting a snapshot fails if the first disk of the File Recovery Manager VM is a CDROM
TVM HA configuration fails when one of the chosen Controller's hostname is part of any other TVM's hostname
Validation of keystone and s3 endpoint is stuck on the TVM UI
Temp volume creation taking too long
The httpd service is in a failed state on a freshly deployed T4O
The sshd option UseDNS should be set to "no" to avoid issues
Known issues
[RHOSP 16.1.8 and RHOSP 16.2.4] Trilio Horizon container in reboot loop
Observation: After an upgrade from a previous release/hotfix, or on a fresh deployment, the Trilio Horizon container is in a reboot loop.
Workaround:
Perform either of the workarounds below on the controller where the issue occurs for the Horizon pod.
Option 1: Restart the memcached service on the controller using systemctl (command: systemctl restart tripleo_memcached.service).
Option 2: Restart the memcached pod (command: podman restart memcached).
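As a sketch, the two options correspond to the following commands run on the affected controller (the service and container names match a default TripleO/RHOSP deployment; verify them on your system first). These act on live infrastructure, so run them only on the impacted node:

```bash
# Option 1: restart the memcached service via its TripleO-managed systemd unit
sudo systemctl restart tripleo_memcached.service

# Option 2: restart the memcached container directly with podman
sudo podman restart memcached

# Afterwards, confirm the Horizon container has stopped cycling
sudo podman ps --filter name=horizon
```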
[Snapshot mount] Unable to see correct content after mounting snapshot
Observation: A snapshot mount using the RHEL8 File Recovery Manager image does not show any mounted device.
Workaround:
Use a RHEL7 image for the File Recovery Manager VM instead of RHEL8.
[Backup failure for HPE Nimble storage volumes with "timeout receiving packet" error]
Observation: Snapshots fail for VMs that have volumes backed by a Nimble iSCSI SAN, with a "timeout receiving packet" error.
Workaround: add the uxsock_timeout parameter.
Log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e., 60 seconds) to /etc/multipath.conf, then restart the datamover container.
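A minimal sketch of the resulting /etc/multipath.conf inside the datamover container (only the uxsock_timeout line is required by this workaround; keep your existing settings and merge the parameter into the defaults section if one already exists):

```text
# /etc/multipath.conf (inside the datamover container)
defaults {
    # timeout in milliseconds for replies from the multipathd socket (60000 ms = 60 s)
    uxsock_timeout 60000
}
```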
[Input/Output error while writing to Cohesity NFS share]
Observation: Input/Output error during the qemu-img convert operation while writing to a Cohesity NFS share.
Workaround: For Cohesity NFS, use the NFS options below during Trilio configuration and datamover deployment. If the issue still persists, increase the timeo and retrans parameter values in the NFS options.
nfs_options = nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
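As a quick sanity check before re-running the Trilio configuration, the same options can be verified with a manual mount of the share. The server name and export path below are placeholders, not values from this release; substitute your Cohesity view:

```bash
# Placeholder server/export - replace with your Cohesity NFS view
sudo mount -t nfs \
    -o nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10 \
    cohesity.example.com:/trilio_backups /mnt/trilio-test

# If I/O errors persist, retry with larger values, e.g. timeo=1200,retrans=20
```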