TrilioVault 4.2 Release Notes
TrilioVault 4.2 introduces new features and capabilities:
  • Backup and Recovery of encrypted Cinder volumes (Barbican support)
  • Encryption of Workloads (Barbican support)
  • Support for multi-IP NFS backup targets
  • Database clean up utility
  • Backup rebase utility
TrilioVault 4.2.64

Release Versions

Packages

| Name | Type | Version |
| --- | --- | --- |
| s3fuse | python package | 4.2.64 |
| tvault-configurator | python package | 4.2.64 |
| workloadmgr | python package | 4.2.64 |
| workloadmgrclient | python package | 4.2.64 |
| dmapi | deb package | 4.2.64 |
| python3-dmapi | deb package | 4.2.64 |
| tvault-contego | deb package | 4.2.64 |
| python3-tvault-contego | deb package | 4.2.64 |
| tvault-horizon-plugin | deb package | 4.2.64 |
| python3-tvault-horizon-plugin | deb package | 4.2.64 |
| s3-fuse-plugin | deb package | 4.2.64 |
| python3-s3-fuse-plugin | deb package | 4.2.64 |
| workloadmgr | deb package | 4.2.64 |
| workloadmgrclient | deb package | 4.2.64 |
| python3-namedatomiclock | deb package | 1.1.3 |
| dmapi | rpm package | 4.2.64-4.2 |
| python3-dmapi | rpm package | 4.2.64-4.2 |
| tvault-contego | rpm package | 4.2.64-4.2 |
| python3-tvault-contego | rpm package | 4.2.64-4.2 |
| tvault-horizon-plugin | rpm package | 4.2.64-4.2 |
| python3-tvault-horizon-plugin-el8 | rpm package | 4.2.64-4.2 |
| python-s3fuse-plugin-cent7 | rpm package | 3.0.1-1 |
| python3-s3fuse-plugin | rpm package | 3.0.1-1 |
| workloadmgrclient | rpm package | 4.2.64-4.2 |

Containers and Gitbranch

| Name | Tag |
| --- | --- |
| Gitbranch | stable/4.2 |
| RHOSP13 containers | 4.2.64-rhosp13 |
| RHOSP16.1 containers | 4.2.64-rhosp16.1 |
| RHOSP16.2 containers | 4.2.64-rhosp16.2 |
| Kolla Ansible Victoria containers | 4.2.64-victoria |
| TripleO Train containers | 4.2.64-tripleo |

Backup and Recovery of encrypted Cinder Volumes (Barbican support)

This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
This functionality is not available for RHOSP13 or TripleO Train on CentOS7, because a required dependency package is not available in RHEL7 or CentOS7.
The OpenStack Barbican service enables the OpenStack Cinder service to provide encrypted volumes. These volumes are software-encrypted by the Cinder service, with the secret used for the encryption managed by the Barbican service.
TrilioVault for OpenStack 4.2 integrates with OpenStack Barbican, enabling TVO to provide native backup and recovery of encrypted Cinder volumes.
Any workload containing an encrypted Cinder volume has to create encrypted backups as well. It is not possible to create unencrypted Workloads for encrypted Cinder volumes.
TVO 4.2 provides encryption at the Workload level. All VMs that are part of an encrypted Workload will have their Cinder volume data encrypted.
This functionality is not available for encrypted Nova boot volumes, which cannot be backed up. Unencrypted Nova boot volumes can be backed up and put into an encrypted Workload.

Encryption of Workloads (Barbican support)

Activating this feature and using encrypted Workloads will lead to longer backup times. The following timings have been seen in TrilioVault labs:
Snapshot time for an LVM volume-booted CentOS VM (disk size 200 GB; total data including OS: ~108 GB):
  1. Unencrypted Workload: 62 min
  2. Encrypted Workload: 82 min
Snapshot time for a Windows image-booted VM (no additional data except the OS: ~12 GB):
  1. Unencrypted Workload: 10 min
  2. Encrypted Workload: 18 min
This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
This functionality is not available for RHOSP13 or TripleO Train on CentOS7, because a required dependency package is not available in RHEL7 or CentOS7.
The integration of TrilioVault for OpenStack 4.2 with Barbican enables TVO to encrypt the qcow2 data component of TrilioVault backups. The JSON files containing the backed-up OpenStack metadata stay unencrypted.
This functionality requires the OpenStack Barbican service to be present. Without the Barbican service, the option to encrypt Workloads is not shown in Horizon.
TVO 4.2 only consumes secrets from Barbican. It does not create, edit, or delete any secrets inside Barbican.
Barbican secrets are required to run backups or restores in encryption-enabled Workloads. It is the OpenStack project user's responsibility to provide secrets and to ensure that the correct secrets are available.
To utilize encrypted Workloads, the Trilio trustee role needs to be able to read and fetch secrets from the Barbican service. The only default roles with these permissions are admin and creator.
The option to encrypt a Workload is only offered during Workload creation. Once a Workload has been created, it is no longer possible to change whether it is encrypted or not.
Every encrypted Workload consumes a unique Barbican secret. It is not possible to assign the same secret to two Workloads.
The following secret configurations are supported:

| Algorithm | Mode | Content type | Payload-content-encoding | Secret type | Payload | Secretfile |
| --- | --- | --- | --- | --- | --- | --- |
| AES-256 | ctr | text/plain | None | passphrase | plaintext | plaintext |
| AES-256 | xts | application/octet-stream | base64 | symmetric keys | encoded with base64 | |
| AES-256 | cbc | | | opaque | | |
A default Barbican installation will generate secrets of the following type:
  • Algorithm: AES-256
  • Mode: cbc
  • content type: application/octet-stream
  • payload-content-encoding: base64
  • secret type: opaque
  • payload: plaintext
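As an illustration of the default configuration above, a matching payload can be prepared before it is stored in Barbican. This is only a sketch of the payload format; it does not call the Barbican API:

```python
import base64
import os

# A 256-bit (32-byte) random key matches the AES-256 algorithm
# listed in the supported secret configurations.
key = os.urandom(32)

# The default Barbican secret carries its payload base64-encoded
# (payload-content-encoding: base64, content type: application/octet-stream).
payload = base64.b64encode(key).decode()

print(len(key) * 8)  # bit length of the key: 256
```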
Barbican can be configured to use other secret vaults as a backend. Trilio has tested only the default secret vault provided by the Barbican service; other secret vaults are supported on a best-effort basis.
Encrypted Workloads can be migrated to different projects or clouds just like normal Workloads. The Barbican secrets used for the encryption need to be made available inside the target environment. This already applies when the Workload is assigned to a different owner.

Support for Multi-IP NFS backends

This functionality is not yet available for Canonical OpenStack. An update will be provided once it becomes available for Canonical OpenStack as well.
Many TrilioVault customers use software-defined storage solutions as a scalable backup target. These solutions often provide the capability to spread read and write operations over multiple nodes of the storage cluster, each with its own access IP. All nodes still write to the same logical volume, which is made available through the NFS protocol.
TrilioVault for OpenStack 4.2 supports such solutions by enabling each Datamover to use a different IP for the same backup target.
Every Datamover and the TrilioVault appliance still consume one NFS path per volume.
The requirement for this functionality is that all NFS paths spread across the TrilioVault solution access the same NFS volume. Using this functionality to point different Datamovers at different backup targets will lead to backup and restore failures.
This is achieved by changing the method used to calculate the TVO mount point: it now considers only the volume path instead of the complete NFS path. Example below.
# echo -n /Trilio_Backup | base64
L1RyaWxpb19CYWNrdXA=
/var/triliovault-mounts/L1RyaWxpb19CYWNrdXA=/
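The same calculation can be sketched in Python. The function name is illustrative and not part of the TVO code base:

```python
import base64

def tvo_mount_point(volume_path: str) -> str:
    """Derive the TVO 4.2 mount point from the NFS volume path only,
    ignoring the server IP, so every Datamover can map its own NFS IP
    to the same mount directory."""
    encoded = base64.b64encode(volume_path.encode()).decode()
    return f"/var/triliovault-mounts/{encoded}/"

# Matches the shell example above:
print(tvo_mount_point("/Trilio_Backup"))
# /var/triliovault-mounts/L1RyaWxpb19CYWNrdXA=/
```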

Database cleanup utility

TrilioVault for OpenStack follows an older OpenStack database schema and model, in which no data inside the database ever gets truly deleted. Instead, a flag is set to indicate that the record is no longer considered active.
This has the advantage that it is always possible to trace back any object and its existence timeline inside the database for analytical purposes.
The big disadvantage of this model is that the database keeps growing with every activity that TVO performs.
Over time, the database can reach a size at which normal activities, such as listing Workloads, take so long that other tasks, like taking backups, are impacted.
To counter this issue, Trilio provides a new utility that deletes all database entries that are no longer required, reducing the work the TVO solution has to do when accessing the database.
Running this utility will completely delete all elements that are not required for active Workloads or Snapshots. If the possibility to analyze past data is required, it is recommended to create a database dump first.
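The soft-delete pattern described above can be illustrated with a minimal sketch; the table and column names are invented for illustration and do not reflect the actual TVO schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE workloads (id TEXT, deleted INTEGER DEFAULT 0)")
conn.execute("INSERT INTO workloads VALUES ('wl-1', 0), ('wl-2', 0)")

# "Deleting" a workload only flips the flag; the row stays in the database.
conn.execute("UPDATE workloads SET deleted = 1 WHERE id = 'wl-2'")

# Normal listings must always filter on the flag, which gets slower
# as the table accumulates soft-deleted rows.
active = conn.execute("SELECT id FROM workloads WHERE deleted = 0").fetchall()
print(active)  # [('wl-1',)]

# A cleanup pass, like the new utility, removes the dead rows for good.
conn.execute("DELETE FROM workloads WHERE deleted = 1")
```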
Trilio will revamp its Database schema and usage in a future TVO release.

Backup rebase utility

TrilioVault for OpenStack provides full synthetic backups utilizing the backing-file functionality of qcow2 images.
This functionality enables TrilioVault backups to run incremental forever, without having to restore every incremental backup. Instead, only the latest backup needs to be restored, as all missing blocks are fetched through the backing file chain from older backups.
The backing file of a particular backup is hardcoded inside the qcow2 image itself. These backing file references must contain the full path and cannot use relative paths. An example is provided below.
qemu-img info 85b645c5-c1ea-4628-b5d8-1faea0e9d549
image: 85b645c5-c1ea-4628-b5d8-1faea0e9d549
file format: qcow2
virtual size: 1.0G (1073741824 bytes)
disk size: 21M
cluster_size: 65536
backing file: /var/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_3c2fbee5-ad90-4448-b009-5047bcffc2ea/snapshot_f4874ed7-fe85-4d7d-b22b-082a2e068010/vm_id_9894f013-77dd-4514-8e65-818f4ae91d1f/vm_res_id_9ae3a6e7-dffe-4424-badc-bc4de1a18b40_vda/a6289269-3e72-4085-adca-e228ba656984
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
TVO uses a base64 hash value inside this path for NFS backup targets. This backup target path needs to be resolvable for the qcow2 image to find its backing file.
The hash value of the mount path can change when the backup is moved to a different backup target, or when the TVO calculation method changes, as it does in TVO 4.2.
Trilio now provides a utility that changes the backing file path for a given Workload.
The runtime of this utility increases exponentially with the length of the backing file chain.
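The path rewrite at the heart of such a rebase can be sketched as follows. The function and sample paths are illustrative, not the actual utility; after rewriting the path, the change would be applied to the image itself with qemu-img rebase -u -b <new_backing_file> <image>:

```python
import base64

MOUNT_ROOT = "/var/triliovault-mounts"

def rebased_backing_file(old_backing: str, new_volume_path: str) -> str:
    """Swap the base64-encoded mount directory in a backing file path
    for the encoding of the new backup target's volume path."""
    # Everything after the encoded mount directory stays unchanged.
    suffix = old_backing.split("/", 4)[4]
    new_hash = base64.b64encode(new_volume_path.encode()).decode()
    return f"{MOUNT_ROOT}/{new_hash}/{suffix}"

# The old path encoded "10.10.2.20:/upstream"; the TVO 4.2 method
# encodes only the volume path "/upstream" (illustrative values):
old = MOUNT_ROOT + "/MTAuMTAuMi4yMDovdXBzdHJlYW0=/workload_x/snap_y/image"
print(rebased_backing_file(old, "/upstream"))
# /var/triliovault-mounts/L3Vwc3RyZWFt/workload_x/snap_y/image
```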

Known issues

[Canonical OpenStack] Selective restore fails for migrated workloads
Observation:
  • After migrating workloads to a different Canonical OpenStack environment, restores fail with a permission denied error in the tmp directory
Workaround:
Run the sudo sysctl fs.protected_regular=0 command on the wlm units.
[Intermittent] [Canonical OpenStack Queens] In-Place restore doesn't work with ext3/ext4 file systems
Observation:
  • The In-Place restore succeeds logically
  • The data is not getting replaced
Workaround:
  • Run the In-Place restore a second time
  • Run a selective restore and reattach the restored Volume
[Intermittent] [Canonical OpenStack] Retention not honored after mount/unmount operation
Observation:
  • After a mount or unmount operation, the ownership of the backup stays qemu:qemu, which prevents TVO from running the required merge commands
Workaround:
  • Identify the workload with the failed retention policy
  • Run: chown -R nova:nova <WL_DIR>
  • Verify that the next backup applies the retention policy
CLI commands get-importworkload-list and get-orphaned-workloads-list show a wrong list of workloads
Observation:
  • Independent of the command, all workloads located on the backup target are listed
Workaround:
  • Use the project id in the command to show only workloads that can be imported for that project: workloadmgr workload-get-importworkloads-list --project_id <project_id>
File search not displaying files in LVM-created logical volumes
Observation:
  • File search returns an empty list for LVM-controlled logical volumes
  • fdisk-created volumes work as desired
File search not displaying files when the root directory doesn't grant read permission to groups
Observation:
  • File search returns an empty list when the root directory lacks group read permissions
  • File search runs as the user nova
Unable to create encrypted Workloads after TVO is reconfigured with the creator trustee role
Observation:
  • TVO was initially configured with the trustee role member (not able to create encrypted workloads)
  • After reconfiguration to the trustee role creator, encrypted workloads still cannot be created
Workaround:
  • After the reconfiguration, create one unencrypted workload
  • Then create encrypted workloads as required
CentOS instance does not boot after restore of encrypted incremental snapshots
Observation:
  • Restoring a CentOS-based instance from an encrypted incremental snapshot fails to start the instance
Workaround:
  • Use only full backups for CentOS-based instances in combination with encrypted Workloads