Due to the new installation method of Trilio for Kolla OpenStack, it is required to reinstall the Trilio components running on the Kolla OpenStack nodes when upgrading from Trilio 4.0.
The Trilio appliance can be upgraded as documented here.
Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
Access to the gemfury repository to fetch new packages
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
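A minimal sketch of these commands, reusing the pcs and process checks referenced later in this document (run on the Trilio Appliance; resource and service names may differ in your environment):
```
# Disable the pacemaker-managed wlm-cron resource
pcs resource disable wlm-cron

# Verify that the service is stopped
systemctl status wlm-cron
pcs resource show wlm-cron

# Ensure no lingering workloadmgr-cron processes remain; kill any that are found
ps -ef | grep -i workloadmgr-cron
```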
Add the Gemfury repository on the dmapi and horizon containers and on each compute node.
Create the file /etc/apt/sources.list.d/fury.list and add the line below to it.
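As an illustration only, the entry typically looks like the sketch below; the actual repository URL must be taken from the Trilio-provided Gemfury details (the URL here is a placeholder, not the real value):
```
# /etc/apt/sources.list.d/fury.list  (placeholder URL - replace with the Trilio Gemfury repository)
deb [trusted=yes] https://apt.fury.io/<gemfury-account>/ /
```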
The following commands can be used to verify the connection to the gemfury repository and to check for available packages.
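For example, something along these lines (the grep pattern is illustrative):
```
apt update
apt list --upgradeable 2>/dev/null | grep -i -E 'dmapi|tvault|contego|workloadmgr'
```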
Add the Trilio repository on the dmapi and horizon containers and on each compute node.
Modify the file /etc/yum.repos.d/trilio.repo and add the line below to it.
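As an illustration only, the repository entry typically looks like the sketch below; the baseurl is a placeholder and must be replaced with the Trilio-provided repository URL:
```
[trilio]
name=Trilio Repository
baseurl=<trilio-rpm-repository-url>
enabled=1
gpgcheck=0
```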
The following commands can be used to verify the connection to the Trilio rpm server and to check for available packages.
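For example, assuming the repository id trilio from the sketch above:
```
yum clean metadata
yum --disablerepo="*" --enablerepo="trilio" list available
```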
The following steps represent the best practice procedure to upgrade the dmapi service.
Login to dmapi container
Take a backup of the dmapi configuration in /etc/dmapi/
Use apt list --upgradeable to identify the package used for the dmapi service
Update the dmapi package
Restore the backed-up config files into /etc/dmapi/
Restart the dmapi container
Check the status of the dmapi service
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
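A hedged sketch of these steps, assuming an LXC-based dmapi container and the python3-dmapi package name; adjust the container name, package name, and service name to what your environment and apt list --upgradeable actually report:
```
# On the controller host: attach to the dmapi container (container name is environment-specific)
lxc-attach -n <dmapi_container_name>

# Back up the dmapi configuration
cp -a /etc/dmapi /root/dmapi.backup

# Identify and upgrade the dmapi package (python3-dmapi is an assumed name)
apt list --upgradeable
apt-get install --only-upgrade python3-dmapi

# Restore the backed-up configuration files and leave the container
cp -a /root/dmapi.backup/. /etc/dmapi/
exit

# Back on the controller host: restart the container and check the service
lxc-stop -n <dmapi_container_name> && lxc-start -n <dmapi_container_name>
lxc-attach -n <dmapi_container_name> -- systemctl status dmapi   # service name may differ
```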
The following steps represent the best practice procedure to update the Horizon plugin.
Login to Horizon Container
Use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient, and Horizon plugin
Install the tvault-horizon-plugin package in the required Python version
Install the workloadmgrclient package
Install the contegoclient package
Restart the Horizon webserver
Check the installed version of the workloadmgrclient
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
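A hedged sketch of these steps inside the Horizon container, assuming python3 package names; verify the exact names with apt list --upgradeable first:
```
# List the upgradeable Trilio packages
apt list --upgradeable 2>/dev/null | grep -i -E 'tvault-horizon-plugin|workloadmgrclient|contegoclient'

# Upgrade the plugin and clients (python3 package names are assumed)
apt-get install --only-upgrade python3-tvault-horizon-plugin python3-workloadmgrclient python3-contegoclient

# Restart the Horizon webserver and check the installed client version
systemctl restart apache2
workloadmgr --version
```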
The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.
Login into the compute node
Take a backup of the config files in
(NFS and S3) /etc/tvault-contego/
(S3 only) /etc/tvault-object-store
Use apt list --upgradeable to identify the tvault-contego package used
Unmount the backup storage
Upgrade the tvault-contego package in the required Python version
(S3 only) Upgrade the s3-fuse-plugin package
Restore the config files into /etc/tvault-contego/
(S3 only) Restart the tvault-object-store service
Restart the tvault-contego service
Check the status of the tvault-contego service
These steps are done with the following commands. This example assumes that the more common python3 packages are used.
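A hedged sketch of these steps on a compute node, assuming python3 package names; adjust names and paths to your environment:
```
# Back up the Trilio config files
cp -a /etc/tvault-contego /root/tvault-contego.backup
cp -a /etc/tvault-object-store /root/tvault-object-store.backup   # S3 only

# Identify the installed package and unmount the backup target
apt list --upgradeable 2>/dev/null | grep -i tvault
umount <backup-target-mount-point>   # find the mount point with 'df -h'

# Upgrade the packages (python3 package names are assumed)
apt-get install --only-upgrade python3-tvault-contego
apt-get install --only-upgrade python3-s3-fuse-plugin   # S3 only

# Restore the configs and restart the services
cp -a /root/tvault-contego.backup/. /etc/tvault-contego/
systemctl restart tvault-object-store   # S3 only
systemctl restart tvault-contego
systemctl status tvault-contego
```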
The following haproxy configuration parameters are recommended for optimal performance of the dmapi service. The file is located on the controller at /etc/haproxy/haproxy.cfg.
If values were already updated during any of the previous releases, further steps can be skipped.
Remove the content below, if present, from the file /etc/openstack_deploy/user_variables.yml on the Ansible host.
Add the lines below at the end of the file /etc/openstack_deploy/user_variables.yml on the Ansible host.
Update the haproxy configuration using the command below on the Ansible host.
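A sketch, assuming the standard OpenStack-Ansible haproxy playbook is in use:
```
cd /opt/openstack-ansible/playbooks
openstack-ansible haproxy-install.yml
```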
For the major upgrade from 4.0 to 4.1 use the JuJu charms upgrade path.
The charms will always install the latest version available of T4O 4.1. This will only work when upgrading from 4.0 to 4.1.
The following charms exist:
Installs and manages Trilio Controller services.
Installs and manages the Trilio Datamover API service.
Installs and manages the Trilio Datamover service.
Installs and manages the Trilio Horizon Plugin.
The documentation of the charms can be found here:
The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.
It is recommended to run each of these steps in dry-run first.
If any packages other than Trilio packages are being updated, stop the upgrade procedure and contact your Trilio customer success manager.
Trilio releases hotfixes that require updating the packages inside the containers. These hotfixes cannot be installed through the Juju charms, as they do not involve an update to the charms themselves.
Either 4.1 GA or any hotfix patch against 4.1 must already be deployed before performing the upgrades described in this document.
No snapshot OR restore to be running.
Global job scheduler should be disabled.
wlm-cron should be disabled (the following commands are to be run on the MAAS node)
If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode (this command will fail for a single-node deployment)
juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=true"
Stop wlm-cron service
juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
Ensure that no stale wlm-cron processes are there
juju exec [-m <model>] --application trilio-wlm "sudo ps -ef | grep [w]orkloadmgr-cron"
If any stale process is found, that needs to be killed manually.
The mentioned gemfury repository should be accessible from trilio units.
The deployed Trilio version is controlled by the triliovault-pkg-source charm configuration option.
For each Trilio charm, it should point to the gemfury repository below.
This can be checked in the output of the juju [-m <model>] config <charm-name> triliovault-pkg-source command for each charm.
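For example (trilio-wlm is taken from this document; repeat the check for the other Trilio applications, whose names depend on how the charms were deployed):
```
juju config trilio-wlm triliovault-pkg-source
```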
The preferred, recommended, and tested method to update the packages is through the Juju command line.
Run the commands below from the MAAS node.
Check the status of the Trilio units in the juju status [-m <model>] | grep trilio output. All Trilio units should now be running the new package.
Run the command below to update the schema.
Check the schema head with the command below. It should point to the latest schema head.
Run the command below to restart the apache2 service on the horizon container.
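A sketch mirroring the juju exec pattern used above; the application name openstack-dashboard is an assumption and may differ in your model:
```
juju exec [-m <model>] --application openstack-dashboard "sudo systemctl restart apache2"
```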
If the trilio-wlm nodes are HA enabled:
Make sure the wlm-cron services are down after the pkg upgrade.
Run the following command for the same: juju exec [-m <model>] --application trilio-wlm "sudo systemctl stop wlm-cron"
Unset the cluster maintenance mode: juju exec [-m <model>] --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=false"
Make sure the wlm-cron service is up and running on any one node: juju exec [-m <model>] --application trilio-wlm "sudo systemctl status wlm-cron"
Set the Global Job Scheduler to the original state.
If any Trilio unit gets into an error state with the message hook failed: "update-status", follow the steps below.
This describes the upgrade process from Trilio 4.0 or Trilio 4.0SP1 to Trilio 4.1 GA or its hotfix releases.
Kolla Ansible Openstack only: The mount point for the Trilio Backup Target has changed in Trilio 4.1. A reconfiguration after the upgrade is required.
The prerequisites should already be fulfilled from upgrading the Trilio components on the Controller and Compute nodes.
Please ensure to complete the upgrade of all the Trilio components on the Openstack controller & compute nodes before starting the rolling upgrade of TVM.
The mentioned Gemfury repository should be accessible from TVault VM.
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job-scheduler should be disabled.
wlm-cron should be disabled and any lingering process should be killed.
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Verify if the service is shut down with the below set of commands and expected output:
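A minimal sketch of such a verification (resource and service names as used elsewhere in this document):
```
pcs resource show wlm-cron          # resource should be disabled/stopped
systemctl status wlm-cron           # service should be inactive
ps -ef | grep -i workloadmgr-cron   # no lingering processes apart from the grep itself
```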
Take a backup of the conf files on all TVM nodes.
Check if Python 3.8 virtual environment exists on the T4O nodes
If the virtual environment does not exist, perform the below steps on the T4O nodes
Activate the Python3.6 virtual environment on all T4O nodes for wlm services upgrade
Ansible does not support an in-place upgrade from previous versions to the latest one (2.10.4) and therefore needs to be uninstalled first.
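A sketch of the reinstallation inside the virtual environment, using the Ansible version mentioned above:
```
pip uninstall -y ansible
pip install ansible==2.10.4
```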
Run the following command on all TVM nodes to upgrade the pip package
Run the following commands on all TVM nodes to upgrade s3fuse and its dependent packages.
Run the following commands on all TVM nodes to upgrade s3fuse packages only.
Post upgrade, the password for T4O configurator will be reset to the default one i.e. 'password' for user 'admin'. Reset T4O configurator password after the upgrade.
Make sure the correct virtual environment (myansible_3.8) has been activated
Run the following command on all TVM nodes to upgrade tvault-configurator and its dependent packages.
Run the following command on all TVM nodes to upgrade tvault-configurator packages only.
During the update of the tvault-configurator the following error might be shown:
This error can be ignored.
Run the upgrade command on all TVM nodes to upgrade workloadmgr and its dependent packages.
Run the upgrade command on all TVM nodes to upgrade workloadmgr packages only.
Run the upgrade command on all TVM nodes to upgrade workloadmgrclient and its dependent packages.
Run the upgrade command on all TVM nodes to upgrade workloadmgrclient packages only.
Run the upgrade command on all TVM nodes to upgrade contegoclient and its dependent packages.
Run the upgrade command on all TVM nodes to upgrade contegoclient packages only.
Using the latest available oslo.messaging version can lead to stuck RPC and API calls.
It is therefore required to fix the oslo.messaging version on the TVM.
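A sketch of the pinning step; the exact version is a placeholder and must be taken from the Trilio release notes for this hotfix:
```
pip install oslo.messaging==<pinned_version>
```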
Delete the wlm-scheduler pcs resource, because in 4.1 it is no longer managed by pcs.
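For example:
```
pcs resource delete wlm-scheduler
```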
Restart the following services on all nodes using the respective commands
tvault-object-store restart required only if Trilio is configured with S3 backend storage
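A hedged sketch of the restarts, assuming the usual Trilio appliance service names; adjust to the services actually present on your nodes:
```
systemctl restart wlm-api
systemctl restart wlm-scheduler
systemctl restart wlm-workloads
systemctl restart tvault-object-store   # only if Trilio is configured with an S3 backup target
```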
Enable the Global Job Scheduler. Restart pcs resources only on the primary node.
tvault-object-store will run only if TVault is configured with S3 backend storage
Additional check for wlm-cron on the primary node
The above command should show only 2 processes running; a sample is shown below:
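For example:
```
ps -ef | grep -i workloadmgr-cron
# Expect exactly two workloadmgr-cron processes (apart from the grep itself) on the primary node
```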
Check the mount point using “df -h” command
Trilio for Openstack 4.1 HF1 is introducing several new config parameters, which will be automatically set upon reconfiguration.
Trilio for Openstack 4.1 is changing the Trilio mount point as follows:
RHOSP 13 & 16.0 & 16.1: /var/lib/nova/triliovault-mounts
Kolla Ansible Ussuri: /var/trilio/triliovault-mounts
Reconfiguring the Trilio Appliance will automatically handle this change.
After reconfiguration of the Trilio Appliance, it is necessary to create a mount bind between the old and new mount points to provide full access to the old Trilio backups.
For RHOSP:
For Kolla:
To make this change persistent, it is recommended to update the fstab accordingly:
For RHOSP:
For Kolla:
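A hedged sketch of the mount bind and fstab entries; <old_mount_point> is a placeholder for the mount path used by the previous Trilio release in your environment:
```
# RHOSP: make the old path resolve to the new mount point
mount --bind /var/lib/nova/triliovault-mounts <old_mount_point>

# Kolla: same idea with the Kolla path
mount --bind /var/trilio/triliovault-mounts <old_mount_point>

# Example /etc/fstab entries to make the bind persistent
/var/lib/nova/triliovault-mounts  <old_mount_point>  none  bind  0 0   # RHOSP
/var/trilio/triliovault-mounts    <old_mount_point>  none  bind  0 0   # Kolla
```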
Red Hat OpenStack and Kolla Ansible Openstack use the nova UID/GID of 42436 inside their containers instead of 162:162, which is the standard in other Openstack environments.
Please verify that the nova UID/GID on the Trilio Appliance is still 42436.
In case the UID/GID has been changed back to 162:162, follow these steps to set it back to 42436:42436.
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that the nova user and group IDs have changed to '42436'
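A sketch of these steps; the script name below is a placeholder for the script referenced by this document:
```
chmod +x <nova_userid_script>.sh
./<nova_userid_script>.sh

# Verify the nova user and group IDs
id nova   # expect uid=42436(nova) gid=42436(nova)
```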
The offline upgrade of the Trilio Appliance is only recommended for hotfix upgrades. For major upgrades in offline environments, it is recommended to download the latest qcow2 image and redeploy the appliance.
Please ensure to complete the upgrade of all the TVault components on the Openstack controller & compute nodes before starting the rolling upgrade of TVO.
The mentioned gemfury repository should be accessible from a VM/Server.
Please ensure the following points before starting the upgrade process:
No snapshot OR restore to be running.
Global job-scheduler should be disabled.
wlm-cron should be disabled & any lingering process should be killed. (This should already have been done during Trilio components upgrade on Openstack)
pcs resource disable wlm-cron
Check: systemctl status wlm-cron OR pcs resource show wlm-cron
Additional step: To ensure that cron is actually stopped, search for any lingering processes against wlm-cron and kill them. [Cmd : ps -ef | grep -i workloadmgr-cron]
VM/Server must have internet connectivity and connectivity to Trilio gemfury repo
Download latest pip package
Export the index URL
Download s3fuse package
Download tvault-configurator dependent package
Download workloadmgr and dependent package
Download workloadmgrclient package
Download contegoclient package
Download oslo.messaging package
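A hedged sketch of the download steps, using the package names listed in the copy step below and a placeholder Gemfury index URL:
```
# Placeholder index URL - replace with the Trilio Gemfury repository URL
export PIP_INDEX_URL=https://pypi.fury.io/<gemfury-account>/

# Download the packages (and their dependencies) into the current directory
pip download pip
pip download s3fuse tvault-configurator workloadmgr workloadmgrclient contegoclient oslo.messaging
```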
All downloaded packages need to be copied from VM/server to all the TVM nodes.
Copy all the downloaded packages (listed below) from the VM/server to all the TVM nodes
pip
s3fuse
tvault-configurator
workloadmgr
workloadmgrclient
contegoclient
If any of the packages are already at the latest version, they will not be upgraded. Make sure to run the commands below from the directory where all the downloaded packages are present.
Please refer to the versions of the downloaded packages for the placeholder <HF_VERSION> in the below sections.
Take a backup of the configuration files
Activate the virtual environment
Run the following command on all TVM nodes to upgrade the pip package
Run the following command on all TVM nodes to upgrade s3fuse
Run the following command on all TVM nodes to upgrade tvault-configurator
Run the upgrade command on all TVM nodes to upgrade workloadmgr
Run the upgrade command on all TVM nodes to upgrade workloadmgrclient
Run the upgrade command on all TVM nodes to upgrade contegoclient
Using the latest available oslo.messaging version can lead to stuck RPC and API calls.
It is therefore required to fix the oslo.messaging version on the TVM.
Verify whether the upgrade completed successfully and match the installed versions with the respective latest downloaded versions.
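For example, a quick check of the installed versions inside the virtual environment:
```
pip freeze | grep -i -E 'workloadmgr|s3fuse|tvault|contego|oslo.messaging'
```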
Restore the backed-up configuration files
Restart the following services on all nodes using the respective commands
tvault-object-store restart required only if Trilio is configured with S3 backend storage
Enable wlm-cron service on primary node through pcs cmd, if T4O is configured with Openstack
Enable Global Job Scheduler
Verify the status of the services, if T4O is configured with Openstack.
tvault-object-store will run only if TVault is configured with S3 backend storage
Additional check for wlm-cron on the primary node, if T4O is configured with Openstack.
Check the mount point using the “df -h” command if T4O is configured with Openstack
Due to the new installation method of Trilio for Kolla OpenStack, it is required to reinstall the Trilio components running on the Kolla Openstack nodes when upgrading from Trilio 4.0.
The Trilio appliance can be upgraded as documented.
Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.
Refer to the below-mentioned acceptable values for the placeholders in this document, as per the OpenStack environment:
kolla_base_distro : ubuntu / centos
triliovault_tag : 4.1.94-hotfix-13-ussuri / 4.1.94-hotfix-12-victoria
Please ensure the following points are met before starting the upgrade process:
Either 4.1 GA OR any hotfix patch against 4.1 should be already deployed
No Snapshot OR Restore is running
Global job scheduler should be disabled
wlm-cron is disabled on the primary Trilio Appliance
Access to the gemfury repository to fetch new packages
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Before the latest configuration script is loaded it is recommended to take a backup of the existing config scripts' folder & Trilio ansible roles. The following command can be used for this purpose:
Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack. Available branches to upgrade T4O 4.1 are:
Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.
This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.
In case of no changes, this step can be skipped.
This is required in case some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; they then need to be updated in the /etc/kolla/globals.yml file.
This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.
In case of no changes, this step can be skipped.
This step is required when some password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml needs to be updated.
This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.
In case of no changes, this step can be skipped.
This is required because, in case some variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml, they need to be updated in the /usr/local/share/kolla-ansible/ansible/site.yml file.
This step is not always required. It is recommended to compare triliovault_inventory.yml with the Trilio entries in the /root/multinode file.
In case of no changes, this step can be skipped.
By default, the triliovault-datamover-api service gets installed on 'control' hosts and the trilio-datamover service gets installed on 'compute' hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.
T4O group names are ‘triliovault-datamover-api’ and ‘triliovault-datamover’
Edit the '/etc/kolla/globals.yml' file to fill in the triliovault backup target and build details. You will find the triliovault-related parameters at the end of the globals.yml file. The user needs to fill in details like the triliovault build version, backup target type, backup target details, etc.
Following is the list of parameters that the user needs to edit.
This step is already part of the 4.1 GA installation procedure and should only be verified.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look as follows for a default Kolla installation:
In case Ironic compute nodes are used, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the changes, the variable will look like the following:
In case the user does not want to use the Docker Hub registry for the triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.
Following are the triliovault container image URLs. Replace kolla_base_distro and triliovault_tag variables with their values
Run the below command from the directory with the multinode file to pull the required images.
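Assuming the standard kolla-ansible CLI and an inventory file named multinode, this is typically along the lines of:
```
kolla-ansible -i ./multinode pull
```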
Run the below command from the directory with the multinode file to start the upgrade process.
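Again assuming the standard kolla-ansible CLI and the multinode inventory:
```
kolla-ansible -i ./multinode deploy
```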
Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.
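For example:
```
docker ps | grep -i -E 'triliovault|horizon'
```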
The following are the default haproxy configuration parameters set for the triliovault datamover api service.
These values work best for the triliovault dmapi service, and it is not recommended to change them. However, in some exceptional cases, if any of the above parameter values need to be changed, this can be done on the kolla-ansible server in the following file.
After editing, run the kolla-ansible deploy command again to push these changes to the OpenStack cloud.
After the kolla-ansible deploy, verify the changes by checking the following file, available on all controller/haproxy nodes.
Starting with Trilio for Openstack 4.0, Trilio for Openstack allows in-place upgrades.
The following versions can be upgraded to each other:
| Old | New |
|---|---|
| 4.0 GA (4.0.92) | 4.0 SP1 (4.0.115) |
| 4.0 GA (4.0.92) | 4.1 GA (4.1.94) |
| 4.1 GA (4.1.94) | 4.1 HF1 (4.1.94-hotfix1) |
| 4.1 GA (4.1.94) | 4.1 HF2 (4.1.94-hotfix2) |
| 4.1 HF1 (4.1.94-hotfix1) | 4.1 HF2 (4.1.94-hotfix2) |
The upgrade process consists of upgrading the Trilio appliance and the Openstack components, and depends on the underlying operating system.
The Upgrade of Trilio for Canonical Openstack is managed through the charms.
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
All commands need to be run as user 'stack' on undercloud node
The Trilio appliance connected to this installation needs to be of version 4.1 HF10
Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.
Available RHOSP_RELEASE_DIRECTORY values are:
rhosp13 rhosp16.1 rhosp16.2
RHOSP 16.0 is not supported anymore, as Red Hat has officially stopped supporting it. However, Trilio maintained it for some time and stopped support from 4.1 HF11 onwards. The latest hotfix available for RHOSP 16.0 is 4.1 HF10. Reach out to the Support team for any help.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to 's3-cert.pem' and copy it into the puppet directory of the right release.
Trilio has two services as explained below.
You need to add these two services to your roles_data.yaml.
If you do not have a customized roles_data file, you can find the default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on the undercloud.
You need to find that roles_data file and edit it to add the following Trilio services.
i) Trilio Datamover Api Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi
This service needs to be co-located with the database and keystone services; that is, you need to add this service to the same role as the keystone and database services.
Typically this service should be deployed on the controller nodes where keystone and the database run.
If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.
ii) Trilio Datamover Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover
This service should be deployed on the role where the nova-compute service is running.
If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamover service to the Compute role.
If you have defined custom roles, identify the role in which the 'nova-compute' service is running and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.
All commands need to be run as user 'stack'
In the sections below, the placeholder <HOTFIX-TAG-VERSION> refers to 4.1.94-hotfix-16.
There are three registry methods available in RedHat OpenStack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Redhat registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud, and update the trilio_env.yaml.
Verify the changes
Verify the changes:
Verify the changes
The changes can be verified using the following commands.
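A hedged sketch of such a verification on the undercloud (RHOSP 16.x uses podman; the grep patterns are illustrative):
```
# Confirm the container URLs were updated in the environment file
grep -i 'image' trilio_env.yaml

# Confirm the Trilio images are present in the undercloud registry
sudo podman images | grep -i trilio
```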
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container urls.
It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.
Locations of the trilio_env.yaml:
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use correct Trilio endpoint map file as per available Keystone endpoint configuration
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
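A hedged sketch of such a command; paths are illustrative, and all '-e' environment files from your existing overcloud deploy command must be kept:
```
openstack overcloud deploy --templates \
  -e <existing environment files> \
  -e triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -r /home/stack/roles_data.yaml
```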
If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.
Make sure the Trilio dmapi and horizon containers are in a running state and that no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.
Make sure the Trilio datamover container is in a running state and that no other Trilio container is deployed on the compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack horizon plus Trilio's horizon plugin.
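A hedged way to perform these checks on the overcloud nodes (RHOSP 16.x uses podman; on RHOSP 13 replace podman with docker):
```
sudo podman ps | grep -i -E 'trilio|horizon'
```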
Download the latest available version of the below-mentioned packages. To know more about the latest releases, check out the latest release note under section.
Trilio containers are pushed to the 'RedHat Container Registry'. The registry URL is 'registry.connect.redhat.com'. The Trilio container URLs are as follows:
Please refer to the to see which containers are available.
Please refer to the to see which containers are available.
Please refer to the to see which containers are available.
Pull the Trilio containers on the Red Hat Satellite using the given
For more details about the trilio_env.yaml please check .
| Parameter | Defaults/choices | Comments |
|---|---|---|
| triliovault_tag | <triliovault_tag> | Trilio build version |
| horizon_image_full | commented out | Uncomment to install the Trilio Horizon container instead of the previously installed container |
| triliovault_docker_username | triliodocker | |
| triliovault_docker_password | triliopassword | |
| triliovault_docker_registry | Default: docker.io | |
| triliovault_backup_target | nfs / amazon_s3 / ceph_s3 | 'nfs': the backup target is NFS. 'amazon_s3': the backup target is Amazon S3. 'ceph_s3': the backup target type is S3 but not Amazon S3. |
| dmapi_workers | Default: 16 | If the dmapi_workers field is not present in the config file, the default value equals the number of cores present on the node |
| triliovault_nfs_shares | | Only with nfs for triliovault_backup_target. The user needs to provide the NFS share path, e.g. 192.168.122.101:/opt/tvault |
| triliovault_nfs_options | Default: nolock,soft,timeo=180,intr,lookupcache=none | Only with nfs for triliovault_backup_target. Keep the default values if unclear |
| triliovault_s3_access_key | | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 access key |
| triliovault_s3_secret_key | | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 secret key |
| triliovault_s3_region_name | Default: us-east-1 | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 region or keep the default if no region is required |
| triliovault_s3_bucket_name | | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 bucket |
| triliovault_s3_endpoint_url | | Only with ceph_s3 for triliovault_backup_target. Provide the S3 endpoint URL |
| triliovault_s3_ssl_enabled | True / False | Only with ceph_s3 for triliovault_backup_target. Set to true if the endpoint is on HTTPS |
| triliovault_s3_ssl_cert_file_name | s3-cert.pem | Only with ceph_s3 for triliovault_backup_target, and if SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority. The user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not exist already. |
| triliovault_copy_ceph_s3_ssl_cert | True / False | Set to true if ceph_s3 is used for triliovault_backup_target, SSL is enabled on the S3 endpoint URL, and the SSL certificates are self-signed or issued by a private authority |
If users want to use a different container registry for the triliovault containers, they can edit this value. In that case, the user first needs to manually pull the triliovault containers and push them to the other registry.