T4O 4.3.1

Release Versions

Packages

Deliverables against T4O-4.3.1

Containers and Gitbranch

Changelog

  • Issues reported by customers.

Fixed Bugs and Issues

  1. Backup and selective restore failures with Quobyte Cinder volumes.

  2. Trilio uses the user's time instead of the dashboard time.

  3. Documentation change: steps to be followed on the Trilio side during a minor upgrade from RHOSP 16.x to 16.y.

  4. Canonical/Juju: python-os-brick_5.2.2-0ubuntu1.3 completely disrupts Trilio snapshots when multipath is used.

  5. Restores failing with the error "security group rule does not exist".

  6. Snapshots fail with a misleading error message when the VMs of a workload are deleted.

  7. "Timeout uploading data" error when backing up a VM with a 23 TB volume.

Known Issues

1. Imports of multiple workloads are not distributed evenly across all nodes

Observation: When an import of multiple workloads is triggered, the system is expected to distribute the load evenly across all available nodes in round-robin fashion. Currently, however, the import runs on only one of the available nodes.

2. After clicking on a snapshot or any restore against an imported workload, the UI becomes unresponsive, the first time only

Observation: For workloads with a large number of VMs and a large number of networks attached to those VMs, importing the snapshot details may take more than 1 minute (exceeding the connection timeout), and hence this issue might be observed.

Comments:

  1. If a user hits this issue, the import of the snapshot has already been triggered, and the user needs to wait until the snapshot import is done.

  2. Users are advised to wait a couple of minutes before rechecking the snapshot details (the import can also be monitored from the CLI, as shown below).

  3. Once the details of the snapshot are visible, the restore operation can be carried out.
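
A minimal CLI sketch for monitoring the import while the UI is unresponsive, assuming the standard workloadmgr client is available; <workload_id> and <snapshot_id> are placeholders:

	# List the snapshots of the imported workload; the restore can be
	# carried out once the snapshot details are populated.
	workloadmgr snapshot-list --workload_id <workload_id>
	# Show the details of a specific snapshot from that list.
	workloadmgr snapshot-show <snapshot_id>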

3. Importing ALL workloads without specifying any workload ID is NOT recommended

Observation: If a user runs the workload import command without any workload ID, expecting that all eligible workloads will be imported, the command execution takes longer than expected.

Comments:

  1. As long as the import command has not returned, it should be considered running; on success it returns the job ID, otherwise it throws an error message.

  2. The execution time may vary based on the number of workloads present in the backend target.

  3. It is recommended to run this command with specific workload IDs.

  4. To import ALL workloads at once, all workload IDs must be provided as parameters to the import CLI command. The procedure is given below, followed by a combined shell sketch.

	i. Before proceeding with an upgrade or reinitialization, fetch the list of ALL workload IDs which are NOT in error or deleted state from the database.
		Query: select id from workloads where status not in ('deleted','error');
	ii. Use the IDs from this list to build the import CLI command parameters.
		Sample: --workloadids <wl_id1> --workloadids <wl_id2> ... etc. The shell command below does the same;
		wlIdList.txt must contain all workload IDs, one ID per line.
	iii. awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n'
	iv. Append the output of the above command to the import command.
		workloadmgr workload-importworkloads <Command output>
		Eg: workloadmgr workload-importworkloads --workloadids ff24945f-7bef-498d-98eb-d727ec85bc7b --workloadids a15948b4-942c-47e2-85c5-06cad697010f
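
For convenience, steps i to iv can be combined into a single shell snippet. This is a minimal sketch rather than an official procedure: the database name workloadmgr and direct mysql client access from the node are assumptions and may differ per deployment.

	# Assumption: the database is named "workloadmgr" and a mysql client
	# with read access to it is available on this node.
	# Fetch all workload IDs that are not in deleted or error state ...
	ids=$(mysql -N -B workloadmgr \
	      -e "select id from workloads where status not in ('deleted','error');" \
	      | awk '{print " --workloadids "$1}' | tr -d '\n')
	# ... and pass them all to the import command in one invocation.
	# ($ids is left unquoted on purpose, so the shell splits it into
	# separate --workloadids <id> argument pairs.)
	workloadmgr workload-importworkloads $ids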

4. Snapshots being deleted show status as available in the Horizon UI

Observation: A snapshot for which a delete operation is in progress from the UI shows its status as available instead of deleting.

Workaround:

Wait for some time for all the delete operations to complete. Eventually, all the snapshots will be deleted successfully.
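
To monitor progress instead of repeatedly rechecking Horizon, the snapshot list can be polled from the CLI. A minimal sketch; the 60-second interval and the <workload_id> placeholder are illustrative:

	# Re-list the workload's snapshots every 60 seconds; deleted snapshots
	# drop out of the output once their delete operation completes.
	watch -n 60 "workloadmgr snapshot-list --workload_id <workload_id>"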
