Trilio 4.2 Release Notes
Trilio 4.2 introduces new features and capabilities:
Backup and Recovery of encrypted Cinder volumes (Barbican support)
Encryption of Workloads (Barbican support)
Support for multi-IP NFS backup targets
Database cleanup utility
Backup rebase utility
Trilio 4.2.64
Packages:

Name | Type | Version |
---|---|---|
s3fuse | python package | 4.2.64 |
tvault-configurator | python package | 4.2.64 |
workloadmgr | python package | 4.2.64 |
workloadmgrclient | python package | 4.2.64 |
dmapi | deb package | 4.2.64 |
python3-dmapi | deb package | 4.2.64 |
tvault-contego | deb package | 4.2.64 |
python3-tvault-contego | deb package | 4.2.64 |
tvault-horizon-plugin | deb package | 4.2.64 |
python3-tvault-horizon-plugin | deb package | 4.2.64 |
s3-fuse-plugin | deb package | 4.2.64 |
python3-s3-fuse-plugin | deb package | 4.2.64 |
workloadmgr | deb package | 4.2.64 |
workloadmgrclient | deb package | 4.2.64 |
python3-namedatomiclock | deb package | 1.1.3 |
dmapi | rpm package | 4.2.64-4.2 |
python3-dmapi | rpm package | 4.2.64-4.2 |
tvault-contego | rpm package | 4.2.64-4.2 |
python3-tvault-contego | rpm package | 4.2.64-4.2 |
tvault-horizon-plugin | rpm package | 4.2.64-4.2 |
python3-tvault-horizon plugin-el8 | rpm package | 4.2.64-4.2 |
python-s3fuse-plugin-cent7 | rpm package | 3.0.1-1 |
python3-s3fuse-plugin | rpm package | 3.0.1-1 |
workloadmgrclient | rpm package | 4.2.64-4.2 |

Containers and Git branch:

Name | Tag |
---|---|
Gitbranch | stable/4.2 |
RHOSP13 containers | 4.2.64-rhosp13 |
RHOSP16.1 containers | 4.2.64-rhosp16.1 |
RHOSP16.2 containers | 4.2.64-rhosp16.2 |
Kolla Ansible Victoria containers | 4.2.64-victoria |
TripleO Train containers | 4.2.64-tripleo |
Backup and Recovery of encrypted Cinder volumes

This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
This functionality is not available for RHOSP13 or TripleO Train on CentOS7, because a required dependency package is not available for RHEL7 or CentOS7.
The OpenStack Barbican service enables the OpenStack Cinder service to provide encrypted volumes. These volumes are software-encrypted by the Cinder service, with the secret used for the encryption managed by the Barbican service.
Trilio for OpenStack 4.2 integrates with OpenStack Barbican, enabling T4O to provide native backup and recovery of encrypted Cinder volumes.
Any Workload containing an encrypted Cinder volume has to create encrypted backups as well. It is not possible to create unencrypted Workloads for encrypted Cinder volumes.
T4O 4.2 provides encryption at the Workload level. All VMs that are part of an encrypted Workload will have their Cinder volume data encrypted.
This functionality is not available for encrypted Nova boot volumes: encrypted Nova boot volumes cannot be backed up. Unencrypted Nova boot volumes can be backed up and put into an encrypted Workload.
Activating this feature and using encrypted Workloads will lead to longer backup times. The following timings have been observed in Trilio labs:
Snapshot time for an LVM volume-booted CentOS VM (disk size 200 GB; total data including OS: ~108 GB):
Unencrypted Workload: 62 min
Encrypted Workload: 82 min
Snapshot time for a Windows image-booted VM (no additional data except the OS: ~12 GB):
Unencrypted Workload: 10 min
Encrypted Workload: 18 min
Encryption of Workloads

This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
This functionality is not available for RHOSP13 or TripleO Train on CentOS7, because a required dependency package is not available for RHEL7 or CentOS7.
The integration of Trilio for OpenStack 4.2 with Barbican enables T4O to encrypt the qcow2 data of Trilio backups. The JSON files containing the backed-up OpenStack metadata stay unencrypted.
This functionality requires the OpenStack Barbican service to be present. Without the Barbican service, the option to encrypt Workloads is not shown in Horizon.
T4O 4.2 only consumes secrets from Barbican. It does not create, edit, or delete any secrets inside Barbican.
Barbican secrets are required to run backups or restores of encryption-enabled Workloads. It is the OpenStack project user's responsibility to provide the secrets and to ensure that the correct secrets are available.
To utilize encrypted Workloads, the Trilio trustee role needs to be able to interact with the Barbican service to read and fetch secrets from Barbican. The only default roles with these permissions are the admin and creator roles.
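The sketch below shows one possible way to make sure a project user holds the creator role, using python-keystoneclient. The endpoint, user, and project names are placeholders, and granting the role this way is an assumption about the operator's workflow, not a Trilio-mandated procedure.

```python
# Minimal sketch, not Trilio code: granting the Barbican "creator" role to a
# project user with python-keystoneclient. All names and endpoints are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client as ks_client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder endpoint
    username="admin", password="admin-password", project_name="admin",
    user_domain_name="Default", project_domain_name="Default",
)
keystone = ks_client.Client(session=session.Session(auth=auth))

role = keystone.roles.find(name="creator")        # default Barbican role allowed to read secrets
user = keystone.users.find(name="backup-user")    # placeholder project user
project = keystone.projects.find(name="demo")     # placeholder project
keystone.roles.grant(role, user=user, project=project)
```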
The possibility to encrypt a Workload is only provided during Workload creation. Once a Workload has been created, it is no longer possible to change whether the Workload is encrypted or not.
Every encrypted Workload consumes a unique Barbican secret. It is not possible to assign the same secret to two Workloads.
The following secret configurations are supported:

Algorithm | mode | content types | payload-content-encoding | secret type | payload | secretfile |
---|---|---|---|---|---|---|
AES-256 | ctr | text/plain | None | passphrase | plaintext | plaintext |
AES-256 | xts | application/octet-stream | base64 | symmetric keys | encoded with base64 | |
AES-256 | cbc | | | opaque | | |
A default Barbican installation will generate secrets of the following type:
Algorithm: AES-256
Mode: cbc
content type: application/octet-stream
payload-content-encoding: base64
secret type: opaque
payload: plaintext
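As an illustration, the sketch below stores a passphrase-type secret matching the first supported configuration above using python-barbicanclient. The endpoint, credentials, and secret name are placeholders, and this is one possible way for a project user to provide a secret, not a Trilio-specific procedure.

```python
# Minimal sketch, not Trilio code: storing a passphrase-type secret in Barbican
# so it can later be selected for an encrypted Workload. All credentials,
# endpoints, and names are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from barbicanclient import client

auth = v3.Password(
    auth_url="https://keystone.example.com:5000/v3",  # placeholder endpoint
    username="backup-user", password="user-password", project_name="demo",
    user_domain_name="Default", project_domain_name="Default",
)
barbican = client.Client(session=session.Session(auth=auth))

# Matches the first supported configuration: AES-256 / ctr / text/plain /
# passphrase secret type / plaintext payload.
secret = barbican.secrets.create(
    name="workload-encryption-passphrase",
    payload="my-strong-passphrase",
    secret_type="passphrase",
    payload_content_type="text/plain",
    algorithm="aes", bit_length=256, mode="ctr",
)
secret_ref = secret.store()  # reference (URL) to select when creating the encrypted Workload
print(secret_ref)
```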
Barbican can be configured to use other secret vaults as a backend. Trilio has not tested any secret vaults other than the default provided by the Barbican service.
Other secret vaults are supported on a best-effort basis.
Encrypted Workloads can be migrated to different projects or clouds just like normal Workloads. The Barbican secrets used for the encryption need to be made available inside the target environment. The same already applies when the Workload is assigned to a different owner.
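A minimal sketch of what making the secret available can look like, assuming both clouds run Barbican and that copying the payload is acceptable in the given security context; the function and client objects are illustrative and not part of T4O.

```python
# Illustrative only, not part of T4O: copying a Workload's Barbican secret from a
# source cloud to a target cloud before migrating the encrypted Workload.
from barbicanclient import client as barbican_client

def copy_secret(source: barbican_client.Client,
                target: barbican_client.Client,
                secret_ref: str) -> str:
    """Recreate a secret from the source Barbican in the target Barbican."""
    src = source.secrets.get(secret_ref)  # fetches metadata; .payload pulls the payload
    copy = target.secrets.create(
        name=src.name,
        payload=src.payload,
        secret_type=src.secret_type,
        payload_content_type=src.payload_content_type,
        algorithm=src.algorithm,
        bit_length=src.bit_length,
        mode=src.mode,
    )
    return copy.store()  # new secret reference to use in the target environment
```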
Support for multi-IP NFS backup targets

This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
Many Trilio customers are using software-defined storage solutions for a scalable backup target. These software-defined storage solutions often provide the capability to spread read and write operations over multiple nodes of the storage cluster. Each of these nodes has its own access IP to interact with. All nodes are still writing to the same logical volume, which is available through the NFS protocol.
Trilio for OpenStack 4.2 supports such solutions by allowing each Datamover to use a different IP for the same backup target.
Every Datamover and the Trilio appliance still consume one NFS path per volume.
The requirement for this functionality is that all NFS paths spread across the Trilio solution access the same NFS volume. Using this functionality to point different Datamovers at different backup targets will lead to backup and restore failures.
This is achieved by changing the method used to calculate the T4O mount point: it now considers only the volume path instead of the complete NFS path. An example is shown below.
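The following is a minimal sketch of the idea, not the exact T4O implementation; the mount directory prefix and the plain base64 encoding are assumptions used for illustration.

```python
# Minimal sketch, not the exact T4O code: how considering only the exported volume
# path lets several NFS IPs of the same storage cluster share one mount point.
# The mount root and the plain base64 encoding are assumptions for illustration.
import base64

MOUNT_ROOT = "/var/triliovault-mounts"  # assumed mount directory prefix

def mount_point_old(nfs_share: str) -> str:
    # previous behaviour: the complete "IP:/export" string is encoded, so every
    # IP of a multi-IP NFS cluster produced a different mount point
    return f"{MOUNT_ROOT}/{base64.b64encode(nfs_share.encode()).decode()}"

def mount_point_new(nfs_share: str) -> str:
    # 4.2 behaviour: only the exported volume path is considered, so all IPs of
    # the same NFS volume resolve to the same mount point
    volume_path = nfs_share.split(":", 1)[1]
    return f"{MOUNT_ROOT}/{base64.b64encode(volume_path.encode()).decode()}"

for share in ("10.10.1.11:/backups", "10.10.1.12:/backups"):  # same volume, two IPs
    print(share, "->", mount_point_old(share), "vs", mount_point_new(share))
```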
Database cleanup utility

Trilio for OpenStack follows an older OpenStack database schema and model. In this model, no data inside the database ever gets truly deleted; only a flag is set to indicate that the dataset is no longer considered active.
This has the advantage that it is always possible to trace back any object and its existence timeline inside the database for analytical purposes.
The big disadvantage of this model is that the database keeps growing with every activity that T4O performs.
Over time the database can reach sizes at which normal activities like listing Workloads take so long that other tasks like taking backups are impacted.
To counter this issue, Trilio provides a new utility which deletes all database entries that are no longer required, reducing the work needed when the T4O solution accesses the database.
Running this utility completely deletes all elements that are not required for active Workloads or Snapshots. It is recommended to create a database dump first if the possibility to analyze past data is required.
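For illustration only, the sketch below mimics the soft-delete model and what a clean-up pass conceptually does; the table name, columns, and SQL are invented for the example and are not the Trilio schema or the utility itself.

```python
# Illustrative sketch, not the Trilio schema or clean-up utility: the soft-delete
# model described above, and the purge a clean-up pass conceptually performs.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (id INTEGER PRIMARY KEY, workload_id TEXT, deleted INTEGER DEFAULT 0)")
db.executemany("INSERT INTO snapshots (workload_id) VALUES (?)",
               [("wl-1",), ("wl-1",), ("wl-2",)])

# Normal T4O behaviour: "deleting" a snapshot only flips a flag, the row stays
# in the database and the tables keep growing.
db.execute("UPDATE snapshots SET deleted = 1 WHERE id IN (2, 3)")

# What the clean-up utility conceptually does: purge rows that are no longer
# required for any active Workload or Snapshot.
db.execute("DELETE FROM snapshots WHERE deleted = 1")
print(db.execute("SELECT COUNT(*) FROM snapshots").fetchone()[0])  # only active rows remain
```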
Trilio will revamp its Database schema and usage in a future T4O release.
Backup rebase utility

Trilio for OpenStack provides full synthetic backups utilizing the backing-file functionality of qcow2 images.
This functionality enables Trilio backups to run incremental forever, without having to restore every incremental backup. Instead, only the latest backup needs to be restored, as all missing blocks are fetched through the backing file chain from older backups.
The backing file of a particular backup is hardcoded inside the qcow2 image itself. These backing files have to contain the full path and cannot use relative paths. An example is provided below.
T4O uses a base64 hash value inside this path for NFS backup targets. This backup target path needs to be resolvable for the qcow2 image to find its backing file.
The hash value of the mount path can change when the backup is moved to a different backup target or when the T4O calculation method changes, as it does with T4O 4.2.
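As a hypothetical example (all paths below are made up), the backing-file entry of an incremental backup image can be inspected with qemu-img; note how the absolute path contains the base64 component of the mount point, which is exactly what breaks when that value changes.

```python
# Hypothetical illustration; the image path and directory layout are made up.
# Reads the hard-coded backing file reference of an incremental backup image.
import json
import subprocess

image = "/var/triliovault-mounts/TXlORlNWb2x1bWU=/workload_1234/snapshot_5678/vm_id_42/disk.qcow2"
info = json.loads(subprocess.check_output(["qemu-img", "info", "--output=json", image]))
print(info.get("backing-filename"))
# e.g. /var/triliovault-mounts/TXlORlNWb2x1bWU=/workload_1234/snapshot_1111/vm_id_42/disk.qcow2
# If the base64 component of the mount path changes, this absolute reference
# no longer resolves and the backing file chain breaks.
```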
Trilio now provides a utility that changes the backing file path for a given Workload.
The runtime of this utility increases exponentially with the length of the backing file chain.
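Conceptually this corresponds to an "unsafe" qemu-img rebase, which rewrites only the backing-file reference without touching any data. The sketch below is an illustration under that assumption, with made-up paths; it is not the Trilio utility itself.

```python
# Conceptual illustration only, not the Trilio rebase utility. Paths are made up.
import subprocess

OLD_PREFIX = "/var/triliovault-mounts/T0xEX1BBVEg="  # old mount point (made up)
NEW_PREFIX = "/var/triliovault-mounts/TkVXX1BBVEg="  # new mount point (made up)

def repoint_backing_file(image: str, current_backing: str) -> None:
    """Point one incremental image at the same backing file under the new mount path."""
    new_backing = current_backing.replace(OLD_PREFIX, NEW_PREFIX, 1)
    # "-u" performs an unsafe rebase: only the backing-file reference is rewritten,
    # no data is copied. Every image in the chain has to be walked this way, which
    # is why long incremental chains take considerably longer to process.
    subprocess.check_call(["qemu-img", "rebase", "-u", "-F", "qcow2", "-b", new_backing, image])
```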
Known issues

[Canonical OpenStack] Selective restore fails for migrated workloads
Observation:
After migrating Workloads to a different Canonical OpenStack environment, the restore fails with a permission denied error in the tmp directory.
Workaround:
Run the sudo sysctl fs.protected_regular=0 command on the WLM units.
[Intermittent] [Canonical OpenStack Queens] In-Place restore doesn't work with ext3/ext4 file systems
Observation:
In-Place restore succeeds logically
Data is not getting replaced
Workaround:
Running the In-Place restore a second time
Running a selective restore and reattaching the restored Volume
[intermittent] [Canonical OpenStack] Retention not honored after mount/unmount operation
Observation:
After a mount or unmount operation the ownership of the backup files stays qemu:qemu, which leads to T4O no longer being able to run the required merge commands.
Workaround:
Identify the Workload with the failed retention policy
Run chown -R nova:nova <WL_DIR>
Verify that the next backup applies the retention policy
CLI commands get-importworkload-list and get-orphaned-workloads-list show wrong list of workloads
Observation:
Independent of the command used, all Workloads located on the backup target are listed.
Workaround:
Use the project ID in the command to show only the Workloads that can be imported into that project:
workloadmgr workload-get-importworkloads-list --project_id <project_id>
File-search not displaying files in lvm created logical volumes
Observation:
File search returns an empty list for LVM-controlled logical volumes
fdisk-created volumes work as desired
File-search not displaying files when root directory doesn't contain read permissions for group
Observation:
File search returns an empty list when the root directory doesn't have read permissions for the group
File search is run as user nova
Unable to create encrypted Workload if T4O gets reconfigured with creator trustee role
Observation:
T4O was initially configured with the trustee role member (not able to create encrypted Workloads)
After reconfiguring the trustee role to creator, encrypted Workloads still cannot be created
Workaround:
After the reconfiguration, create one unencrypted Workload
Then create encrypted Workloads as required
Post restore of encrypted incremental snapshots CentOS instance is not getting booted
Observation:
Restoring a CentOS-based instance from an encrypted incremental snapshot fails to start the instance
Workaround:
Use only full backups for CentOS-based instances in combination with encrypted Workloads