T4O 4.3.2
Release Versions
Packages
Deliverables against T4O-4.3.2
| Package/Container Name | Package Kind | Package Version |
| --- | --- | --- |
| contego | deb | 4.2.64 |
| s3-fuse-plugin | deb | 4.3.1.2 |
| python3-s3-fuse-plugin | deb | 4.3.1.2 |
| tvault-contego | deb | 4.3.1.3 |
| python3-tvault-contego | deb | 4.3.1.3 |
| dmapi | deb | 4.3.1 |
| contegoclient | deb | 4.3.1 |
| python3-dmapi | deb | 4.3.1 |
| python3-contegoclient | deb | 4.3.1 |
| tvault-horizon-plugin | deb | 4.3.2.2 |
| python3-tvault-horizon-plugin | deb | 4.3.2.2 |
| python-workloadmgrclient | deb | 4.3.6 |
| python3-workloadmgrclient | deb | 4.3.6 |
| workloadmgr | deb | 4.3.8.4 |
| python3-namedatomiclock | deb | 1.1.3 |
| tvault-contego | python | 4.3.1.3 |
| s3fuse | python | 4.3.2.2 |
| dmapi | python | 4.3.2 |
| contegoclient | python | 4.3.2 |
| tvault-horizon-plugin | python | 4.3.3.2 |
| tvault_configurator | python | 4.3.7 |
| workloadmgrclient | python | 4.3.7 |
| workloadmgr | python | 4.3.9.4 |
| trilio-fusepy | rpm | 3.0.1-1 |
| python3-trilio-fusepy | rpm | 3.0.1-1 |
| python3-trilio-fusepy-el9 | rpm | 3.0.1-1 |
| puppet-triliovault | rpm | 4.2.64-4.2 |
| python3-s3fuse-plugin | rpm | 4.3.1.2-4.3 |
| python3-s3fuse-plugin-el9 | rpm | 4.3.1.2-4.3 |
| python-s3fuse-plugin-cent7 | rpm | 4.3.1.2-4.3 |
| tvault-contego | rpm | 4.3.1.3-4.3 |
| python3-tvault-contego | rpm | 4.3.1.3-4.3 |
| python3-tvault-contego-el9 | rpm | 4.3.1.3-4.3 |
| dmapi | rpm | 4.3.1-4.3 |
| contegoclient | rpm | 4.3.1-4.3 |
| python3-dmapi | rpm | 4.3.1-4.3 |
| python3-dmapi-el9 | rpm | 4.3.1-4.3 |
| python3-contegoclient-el8 | rpm | 4.3.1-4.3 |
| python3-contegoclient-el9 | rpm | 4.3.1-4.3 |
| tvault-horizon-plugin | rpm | 4.3.2.2-4.3 |
| python3-tvault-horizon-plugin-el8 | rpm | 4.3.2.2-4.3 |
| python3-tvault-horizon-plugin-el9 | rpm | 4.3.2.2-4.3 |
| workloadmgrclient | rpm | 4.3.6-4.3 |
| python3-workloadmgrclient-el8 | rpm | 4.3.6-4.3 |
| python3-workloadmgrclient-el9 | rpm | 4.3.6-4.3 |
Containers and Gitbranch
| Name | Tag |
| --- | --- |
| Gitbranch | 4.3.2 |
| RHOSP13 containers | 4.3.2-rhosp13 |
| RHOSP16.1 containers | 4.3.2-rhosp16.1 |
| RHOSP16.2 containers | 4.3.2-rhosp16.2 |
| RHOSP17.0 containers | 4.3.2-rhosp17.0 |
| Kolla Ansible Victoria containers | 4.3.2-victoria |
| Kolla Ansible Wallaby containers | 4.3.2-wallaby |
| Kolla Yoga containers | 4.3.2-yoga |
| Kolla Zed containers | 4.3.2-zed |
| TripleO containers | 4.3.2-tripleo |
Changelog
Fixed bugs and issues
The following issues, reported by customers, have been fixed in this release:
- Snapshot listing (on opening a workload) in the UI does not happen if the snapshot count is > 1000.
- FileManager caches old snapshot details.
- Custom Horizon integration (Canonical).
- Restore failing with the error "Failed restoring snapshot: RetryError[<Future at 0x7fc607693c18 state=finished raised StorageFailure>]".
- Duplicate security groups get created after restore of a VM.
- Kolla Ansible Yoga 4.3.0 deployment fails with the error "Unsupported parameters for (kolla_container_facts) module: container_engine. Supported parameters include: name, api_version".
Known issues
1. Import of multiple workloads does not get distributed evenly across all nodes
Observation: When the import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in a round-robin fashion. However, currently the import runs on just one of the available nodes.
2. After clicking on a snapshot, or on any restore against an imported workload, the UI becomes unresponsive (first time only)
Observation: For workloads with a large number of VMs, and a large number of networks attached to those VMs, the import of the snapshot details may take more than 1 minute (connection timeout), and hence this issue might be observed.
Comments:
If a user hits this issue, the import of the snapshot has already been triggered; the user only needs to wait until the snapshot import is done.
Users are advised to wait a couple of minutes before they recheck the snapshot details.
Once the details of the snapshot are visible, the restore operation can be carried out. The progress can also be checked from the CLI, as sketched below.
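A minimal sketch of checking snapshot availability from the CLI while the UI is unresponsive. The `snapshot-list` and `snapshot-show` subcommands are taken from the workloadmgr client delivered with this release; the IDs are placeholders, and the exact option names should be verified with `workloadmgr help`.

```bash
# List the snapshots of the imported workload; <workload-id> is a placeholder.
workloadmgr snapshot-list --workload_id <workload-id>

# Show a single snapshot; once its details are returned,
# the restore operation can be carried out.
workloadmgr snapshot-show <snapshot-id>
```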
3. Import of ALL workloads without specifying any workload ID is NOT recommended
Observation: If a user runs the workload import command without any workload ID, expecting all eligible workloads to be imported, the command execution takes longer than expected.
Comments:
As long as the import command has not returned, it should be considered still running; on success it returns the job ID, otherwise it throws an error message.
The execution time may vary with the number of workloads present on the backend target.
It is recommended to run this command with specific workload IDs.
To import ALL workloads at once, provide all workload IDs as parameters to the import CLI command, as sketched below.
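A minimal sketch of that procedure, assuming the `workload-importworkloads` subcommand with a repeatable `--workloadids` option and the `workload-get-importworkloads-list` subcommand from the workloadmgr client; verify both against `workloadmgr help` on your deployment.

```bash
# List the workloads that are eligible for import from the backup target.
workloadmgr workload-get-importworkloads-list

# Import ALL workloads by passing every workload ID explicitly
# instead of running the import without any IDs.
workloadmgr workload-importworkloads \
    --workloadids <workload-id-1> \
    --workloadids <workload-id-2> \
    --workloadids <workload-id-3>
```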
4. Snapshots being deleted show status as available in the Horizon UI
Observation: A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.
Workaround:
Wait for some time for all the delete operations to complete; eventually all the snapshots will be deleted successfully. Deletion progress can also be verified from the CLI, as sketched below.
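A minimal sketch of watching the deletions from the CLI rather than the Horizon UI; the loop, the 30-second interval, and the `--workload_id` option (from the workloadmgr client) are illustrative placeholders.

```bash
# Re-list the snapshots of the workload every 30 seconds;
# snapshots disappear from the list once their deletion completes.
while true; do
    workloadmgr snapshot-list --workload_id <workload-id>
    sleep 30
done
```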