T4O 4.3.1

Release Versions

Packages

Deliverables against T4O-4.3.1

Containers and Gitbranch

Changelog

  • Issues reported by customers.

Fixed Bugs and issues

  1. Backup and selective restore failures with Quobyte Cinder volumes.

  2. Trilio uses the user's time rather than the dashboard time.

  3. Documentation change: steps to be followed on the Trilio side during a minor upgrade from RHOSP 16.x to 16.y.

Known issues

1. Import of multiple workloads is not segregated evenly across all nodes

Observation: When an import of multiple workloads is triggered, the system is expected to divide the load evenly across all available nodes in round-robin fashion. Currently, however, the import runs on only one of the available nodes.
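The expected round-robin split can be illustrated with a small shell pipeline. The node names and workload IDs below are made up purely for the example:

```shell
# Illustration only: assign each workload ID to the next node in turn.
printf '%s\n' wl-01 wl-02 wl-03 wl-04 wl-05 |
  awk -v nodes="node1,node2,node3" '
    BEGIN { n = split(nodes, node_arr, ",") }
    { print $0, "->", node_arr[(NR - 1) % n + 1] }
  '
# Prints:
#   wl-01 -> node1
#   wl-02 -> node2
#   wl-03 -> node3
#   wl-04 -> node1
#   wl-05 -> node2
```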

2. After clicking on a snapshot, or triggering any restore against an imported workload, the UI becomes unresponsive (first time only)

Observation: For workloads with a large number of VMs and a large number of networks attached to those VMs, importing the snapshot details may take more than 1 minute (connection timeout), and this issue may then be observed.

Comments:

  1. If a user hits this issue, the import of the snapshot has already been triggered; the user now needs to wait until the snapshot import is done.

  2. Users are advised to wait a couple of minutes before rechecking the snapshot details.

  3. Once the details of the snapshot are visible, the restore operation can be carried out.
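The advice above boils down to "retry the check after a delay". A minimal POSIX-shell retry helper is sketched below; the commented-out `workloadmgr snapshot-show` usage is an assumption about the CLI, shown only as a placeholder:

```shell
#!/bin/sh
# wait_until <max_tries> <delay_seconds> <command...>
# Re-runs <command> until it succeeds (exit 0) or the tries run out.
wait_until() {
  tries=$1; shift
  delay=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Hypothetical usage: recheck the snapshot details every 30s, up to 10 minutes.
# wait_until 20 30 sh -c 'workloadmgr snapshot-show <snapshot_id> | grep -q available'
```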

3. Import of ALL workloads without specifying any workload ID is NOT recommended

Observation: If a user runs the workload import command without any workload IDs, expecting all eligible workloads to be imported, the command takes longer than expected to execute.

Comments:

  1. As long as the import command has not returned, it should be assumed to be running; on success it returns the job ID, otherwise it throws an error message.

  2. The execution time may vary based on the number of workloads present in the backend target.

  3. It is recommended to run this command with specific workload IDs.

4. Deleting snapshots show their status as "available" in the Horizon UI

Observation: A snapshot for which a delete operation is in progress from the UI shows its status as "available" instead of "deleting".

Workaround:

Wait for some time for all the delete operations to complete. Eventually all the snapshots will be deleted successfully.

Packages

| Package/Container Names | Package Kind | Package Versions |
| --- | --- | --- |
| s3-fuse-plugin | deb | 4.3.1.2 |
| tvault-contego | deb | 4.3.1.2 |
| python3-s3-fuse-plugin | deb | 4.3.1.2 |
| python3-tvault-contego | deb | 4.3.1.2 |
| dmapi | deb | 4.3.1 |
| contegoclient | deb | 4.3.1 |
| python3-dmapi | deb | 4.3.1 |
| python3-contegoclient | deb | 4.3.1 |
| tvault-horizon-plugin | deb | 4.3.2.1 |
| python3-tvault-horizon-plugin | deb | 4.3.2.1 |
| python-workloadmgrclient | deb | 4.3.6 |
| python3-workloadmgrclient | deb | 4.3.6 |
| workloadmgr | deb | 4.3.8.2 |
| tvault-contego | python | 4.3.1.2 |
| s3fuse | python | 4.3.2.2 |
| dmapi | python | 4.3.2 |
| contegoclient | python | 4.3.2 |
| tvault-horizon-plugin | python | 4.3.3.1 |
| workloadmgrclient | python | 4.3.7 |
| tvault_configurator | python | 4.3.7 |
| workloadmgr | python | 4.3.9.2 |
| trilio-fusepy | rpm | 3.0.1-1 |
| python3-trilio-fusepy | rpm | 3.0.1-1 |
| python3-trilio-fusepy-el9 | rpm | 3.0.1-1 |
| puppet-triliovault | rpm | 4.2.64-4.2 |
| tvault-contego | rpm | 4.3.1.2-4.3 |
| python3-s3fuse-plugin | rpm | 4.3.1.2-4.3 |
| python3-tvault-contego | rpm | 4.3.1.2-4.3 |
| python3-s3fuse-plugin-el9 | rpm | 4.3.1.2-4.3 |
| python3-tvault-contego-el9 | rpm | 4.3.1.2-4.3 |
| python-s3fuse-plugin-cent7 | rpm | 4.3.1.2-4.3 |
| dmapi | rpm | 4.3.1-4.3 |
| contegoclient | rpm | 4.3.1-4.3 |
| python3-dmapi | rpm | 4.3.1-4.3 |
| python3-dmapi-el9 | rpm | 4.3.1-4.3 |
| python3-contegoclient-el8 | rpm | 4.3.1-4.3 |
| python3-contegoclient-el9 | rpm | 4.3.1-4.3 |
| tvault-horizon-plugin | rpm | 4.3.2.1-4.3 |
| python3-tvault-horizon-plugin-el8 | rpm | 4.3.2.1-4.3 |
| python3-tvault-horizon-plugin-el9 | rpm | 4.3.2.1-4.3 |
| workloadmgrclient | rpm | 4.3.6-4.3 |
| python3-workloadmgrclient-el8 | rpm | 4.3.6-4.3 |
| python3-workloadmgrclient-el9 | rpm | 4.3.6-4.3 |

Containers and Gitbranch

| Name | Tag |
| --- | --- |
| RHOSP13 containers | 4.3.1-rhosp13 |
| RHOSP16.1 containers | 4.3.1-rhosp16.1 |
| RHOSP16.2 containers | 4.3.1-rhosp16.2 |
| RHOSP17.0 containers | 4.3.1-rhosp17.0 |
| Kolla Ansible Victoria containers | 4.3.1-victoria |
| Kolla Ansible Wallaby containers | 4.3.1-wallaby |
| Kolla Yoga Containers | 4.3.1-yoga |
| Kolla Zed Containers | 4.3.1-zed |
| TripleO Containers | 4.3.1-tripleo |

Gitbranch: 4.3.1

Issues reported by customers:

  • Canonical/Juju: python-os-brick_5.2.2-0ubuntu1.3 will disrupt Trilio snapshots completely if multipath is used.

  • Restores fail with the error "security group rule does not exist".

  • When VMs of a workload are deleted, its snapshots fail (with a very misleading message).

  • "Timeout uploading data" when backing up a VM with a 23 TB volume.

  • To import ALL workloads at once, all workload IDs must be provided as parameters to the import CLI command. The procedure is as follows:

    i. Before proceeding with an upgrade or reinitialization, fetch from the database the list of ALL workload IDs that are NOT in the 'error' or 'deleted' state.
       Query: select id from workloads where status not in ('deleted','error')
    ii. Use the IDs from this list to build the import CLI command parameters.
        Sample: --workloadids <wl_id1> --workloadids <wl_id2> ... etc. The shell command in step iii does the same; wlIdList.txt should contain all workload IDs, one ID per line.
    iii. awk '{print " --workloadids "$1}' wlIdList.txt | tr -d '\n'
    iv. Append the output of the above command to the import command.
        workloadmgr workload-importworkloads <command output>
        E.g.: workloadmgr workload-importworkloads --workloadids ff24945f-7bef-498d-98eb-d727ec85bc7b --workloadids a15948b4-942c-47e2-85c5-06cad697010f
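The steps above can be collected into one small script. The two workload IDs are just the sample values from step iv, and the actual workloadmgr call is left commented out so the assembled command can be inspected first:

```shell
#!/bin/sh
# Sketch of steps ii-iv: build "--workloadids <id>" parameters from a
# list file and print the resulting import command. The IDs below are
# placeholders; in practice wlIdList.txt comes from the DB query in step i.
printf '%s\n' \
  'ff24945f-7bef-498d-98eb-d727ec85bc7b' \
  'a15948b4-942c-47e2-85c5-06cad697010f' > wlIdList.txt

# One " --workloadids <id>" pair per line, joined onto a single line.
ARGS=$(awk '{print " --workloadids " $1}' wlIdList.txt | tr -d '\n')

echo "workloadmgr workload-importworkloads$ARGS"
# Uncomment to actually trigger the import:
# workloadmgr workload-importworkloads $ARGS
```

Note that `$ARGS` is intentionally left unquoted in the final call so that each `--workloadids` pair is passed as a separate argument.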