Please ensure the following points before starting the upgrade process:
Either 4.1 GA or any hotfix patch against 4.1 should already be deployed before performing the upgrades described in this document.
No Snapshot or Restore should be running.
The Global Job Scheduler should be disabled.
wlm-cron should be disabled (on the primary T4O node); the consolidated commands are shown after this list.
pcs resource disable wlm-cron
Check: systemctl status wlm-cron or pcs resource show wlm-cron
Additional step: to ensure that cron is actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron]
Run all the commands as the 'stack' user.
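A consolidated sketch of these pre-checks, using the commands given above (run on the primary T4O node):

```bash
# Disable the wlm-cron resource on the primary T4O node
pcs resource disable wlm-cron

# Verify that wlm-cron is stopped
systemctl status wlm-cron
pcs resource show wlm-cron

# Find lingering wlm-cron processes and kill them if any remain
ps -ef | grep -i workloadmgr-cron
```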
If the backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files'.
In this step, we are going to pull triliovault container images to the user’s registry.
Trilio containers are pushed to 'Dockerhub'. The registry URL is 'docker.io'. Following are the triliovault container pull URLs.
In the sections below, read <HOTFIX-TAG-VERSION> as 4.2.8.
Trilio container URLs for TripleO Train CentOS7:
There are two registry methods available in the TripleO Openstack Platform.
Remote Registry
Local Registry
Identify which method you are using. Below, both methods to pull and configure Trilio's container images for the overcloud deployment are explained.
If you are using the 'Remote Registry' method, follow this section. You do not need to pull anything; you only need to populate the following container URLs in the trilio env yaml.
Populate the 'environments/trilio_env.yaml' file with the triliovault container URLs. The changes look like the following.
If you are using 'local registry' on undercloud, follow this section.
Run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the URLs.
The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in 'environments/trilio_env.yaml'. Verify the changes using the following command.
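A sketch of this flow; the script name and path below are illustrative assumptions, so confirm them against your triliovault-cfg-scripts checkout:

```bash
# Hypothetical script name/path - verify in your release directory
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
./prepare_trilio_images.sh <undercloud-registry-host> <HOTFIX-TAG-VERSION>-tripleo

# Verify the image URLs written to the environment file
grep -i 'image' ../environments/trilio_env.yaml
```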
This section is only required when the multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On Undercloud node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on the undercloud after sourcing 'stackrc':
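For example, using the standard TripleO client on the undercloud:

```bash
source ~/stackrc
openstack server list
```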
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only
If pip isn't available, please install pip on the undercloud first.
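A minimal sketch, assuming a Python 3 undercloud:

```bash
sudo yum install -y python3-pip   # only if pip is not already available
pip3 install -U pyyaml
```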
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option, as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Refer to the old 'trilio_env.yaml' file (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) and update the new trilio_env.yaml file accordingly.
Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The triliovault environment file is self-explanatory: fill in the backup target details, verify the image URLs, and complete the other details.
For Cohesity NFS, use the following NFS options: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
A new, separate triliovault service for the Trilio Horizon plugin was introduced in the Trilio 4.2 release. Add the following service to the roles_data.yaml file; it must be co-located with the OpenStack Horizon service.
OS::TripleO::Services::TrilioHorizon
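A sketch of the roles_data.yaml change, assuming the default Controller role hosts the Horizon service:

```yaml
- name: Controller
  ServicesDefault:
    # ... existing services, including OS::TripleO::Services::Horizon ...
    - OS::TripleO::Services::TrilioHorizon
```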
Use the following heat environment file and roles data file in overcloud deploy command
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains the overcloud roles data with the Trilio roles added. This file need not be changed; you can use the old roles_data.yaml file.
Use the correct trilio endpoint map file as per your keystone endpoint configuration.
- Instead of the tls-endpoints-public-dns.yaml file, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
- Instead of the tls-endpoints-public-ip.yaml file, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
- Instead of the tls-everywhere-endpoints-dns.yaml file, use 'environments/trilio_env_tls_everywhere_dns.yaml'
The deploy command with the triliovault environment file looks like the following.
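A hedged example; the endpoint-map file and the list of existing environment files depend on your deployment:

```bash
openstack overcloud deploy --templates \
  <your existing environment files> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml
```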
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state, and that no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment did not complete correctly and you need to revisit the above steps.
Make sure the Trilio datamover container (shown below) is in a running state, and that no other Trilio container is deployed on the compute nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment did not complete correctly and you need to revisit the above steps.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container, which contains the latest OpenStack Horizon plus Trilio's Horizon plugin.
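For example (container names are the typical ones; substitute 'docker ps' if Docker is the runtime on your release):

```bash
# On controller nodes
podman ps | grep -E 'trilio_dmapi|horizon'

# On compute nodes
podman ps | grep trilio_datamover
```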
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
This is not a Trilio issue; it is a TripleO issue that is not fixed in Train CentOS7. It is fixed in higher versions of TripleO.
Workaround:
Apply the fix directly on the setup; it is not merged in Train CentOS7.
PR: https://github.com/saltedsignal/puppet-certmonger/pull/35/files
The fix is needed on controller and compute nodes, in the following file:
/usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb
Note: The below-mentioned steps are required only if the target backend is NFS.
Please refer to this page for detailed steps to set up the mount bind.
Trilio 4.1 can be upgraded without reinstallation to a higher version of T4O if available.
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
The Global-Job-Scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
Access to the Gemfury repository to fetch new packages
Note: For a single IP-based NFS share as the backup target, refer to this rolling upgrade on the Ansible Openstack document. Users with multiple IP-based NFS shares need to follow the Ansible Openstack installation document.
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Add the Gemfury repository on each of the DMAPI containers, Horizon containers & Compute nodes.
Create a file /etc/apt/sources.list.d/fury.list and add the below line to it.
The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
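A sketch of the repository line and the verification; the Gemfury URL below is illustrative, so use the one provided by Trilio:

```bash
# Contents of /etc/apt/sources.list.d/fury.list (illustrative URL - use the one provided by Trilio):
#   deb [trusted=yes] https://apt.fury.io/triliodata-4-2/ /

# Verify the repository and list available Trilio packages
apt-get update
apt list --upgradeable | grep -i -E 'dmapi|workloadmgr|contego|tvault'
```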
Add Trilio repo on each of the DMAPI containers, Horizon containers & Compute nodes.
Modify the file /etc/yum.repos.d/trilio.repo and add the below line in it.
The following commands can be used to verify the connection to the Gemfury repository and to check for available packages.
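A sketch of the repo entry and verification; the baseurl is an assumption derived from the Trilio package host, so use the URL provided by Trilio:

```bash
# Illustrative baseurl - confirm with Trilio
cat > /etc/yum.repos.d/trilio.repo <<'EOF'
[trilio]
name=Trilio Repository
baseurl=http://repos.trilio.io:8283/triliodata-hotfix-4-2/yum/
enabled=1
gpgcheck=0
EOF

# Verify the repository and check for available Trilio packages
yum clean all
yum check-update | grep -i -E 'dmapi|workloadmgr|contego|tvault'
```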
The following steps represent the best practice procedure to upgrade the DMAPI service.
Login to DMAPI container
Take a backup of the DMAPI configuration in /etc/dmapi/
Use apt list --upgradeable to identify the package used for the dmapi service
Update the DMAPI package
Restore the backed-up config files into /etc/dmapi/
Restart the DMAPI service
Check the status of the DMAPI service
These steps are done with the following commands. This example is assuming that the more common python3 packages are used.
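A minimal sketch of these steps; the package and service names are assumptions, so adjust them to what apt reports on your containers:

```bash
# Inside the DMAPI container
cp -a /etc/dmapi /etc/dmapi.bak

# Identify and upgrade the dmapi package (name assumed: python3-dmapi)
apt list --upgradeable | grep -i dmapi
apt-get install --only-upgrade -y python3-dmapi

# Restore the backed-up configuration
cp -a /etc/dmapi.bak/. /etc/dmapi/

# Restart and check the service (service name assumed)
systemctl restart tvault-datamover-api
systemctl status tvault-datamover-api
```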
The following steps represent the best practice procedure to update the Horizon plugin.
Login to Horizon Container
Use apt list --upgradeable to identify the Trilio packages for the workloadmgrclient, contegoclient, and tvault-horizon-plugin
Install the tvault-horizon-plugin package in the required Python version
Install the workloadmgrclient package
Install the contegoclient package
Restart the Horizon webserver
Check the installed version of the workloadmgrclient
These steps are done with the following commands. This example is assuming that the more common python3 packages are used.
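A sketch with assumed python3 package names; adjust them to what apt reports:

```bash
# Inside the Horizon container
apt list --upgradeable | grep -i -E 'workloadmgr|contego|horizon'

# Package names assumed
apt-get install --only-upgrade -y python3-tvault-horizon-plugin python3-workloadmgrclient python3-contegoclient

# Restart the Horizon webserver and check the installed client version
systemctl restart apache2
workloadmgr --version
```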
The following steps represent the best practice procedure to update the tvault-contego service on the compute nodes.
Login into the compute node
Take a backup of the config files at /etc/tvault-contego/ and /etc/tvault-object-store (if S3)
Unmount storage mount path
Upgrade the tvault-contego package in the required Python version
(S3 only) Upgrade the s3-fuse-plugin package
Restore the config files
(S3 only) Restart the tvault-object-store service
Restart the tvault-contego service
Check the status of the service(s)
These steps are done with the following commands. This example is assuming that the more common python3 packages are used.
Take a backup of the config files
Check the mount path of the NFS storage using the command df -h and unmount the path using the umount command.
e.g.
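The mount path below is illustrative; pre-4.2 NFS targets are typically mounted under a hashed triliovault-mounts directory:

```bash
df -h | grep -i triliovault
umount /var/triliovault-mounts/<base64-hash-of-share>
```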
Upgrade the Trilio packages:
Deb-based (Ubuntu):
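A hedged sketch (python3 package name assumed):

```bash
apt-get install --only-upgrade -y python3-tvault-contego
```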
RPM-based (CentOS):
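A hedged sketch (python3 package name assumed):

```bash
yum upgrade -y python3-tvault-contego
```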
Restore the config files, restart the service and verify the mount point
Take a backup of the config files
Check the mount path of the S3 storage using the command df -h and unmount the path using the umount command.
Upgrade the Trilio packages
Deb-based (Ubuntu):
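A hedged sketch (python3 package names assumed):

```bash
apt-get install --only-upgrade -y python3-tvault-contego python3-s3-fuse-plugin
```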
RPM-based (CentOS):
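A hedged sketch (python3 package names assumed):

```bash
yum upgrade -y python3-tvault-contego python3-s3-fuse-plugin
```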
Restore the config files, restart the service and verify the mount point
The following are the haproxy cfg parameters recommended for optimal performance of the dmapi service. The file location on the controller is /etc/haproxy/haproxy.cfg.
If these values were already updated during any of the previous releases, the further steps can be skipped.
Remove the below content, if present, in the file /etc/openstack_deploy/user_variables.yml on the ansible host.
Add the below lines at the end of the file /etc/openstack_deploy/user_variables.yml on the ansible host.
Update the haproxy configuration using the below command on the ansible host.
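A sketch using the standard openstack-ansible haproxy playbook (default playbook location assumed):

```bash
cd /opt/openstack-ansible/playbooks
openstack-ansible haproxy-install.yml
```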
T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2
Please ensure the following points are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
wlm-cron is disabled on the Trilio Appliance
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
All commands need to be run as user 'stack' on undercloud node
Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.
Available RHOSP_RELEASE_DIRECTORY values are:
rhosp13 rhosp16.1 rhosp16.2 rhosp17.0
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the puppet directory of the right release.
Trilio has two services as explained below.
You need to add these two services to your roles_data.yaml.
If you do not have a customized roles_data file, you can find the default roles_data.yaml file at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on the undercloud.
You need to find that roles_data file and edit it to add the following Trilio services.
i) Trilio Datamover Api Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi
This service needs to be co-located with the database and keystone services; that is, you need to add this service to the same role as the keystone and database services.
Typically this service should be deployed on controller nodes, where keystone and the database run.
If you are using RHOSP's pre-defined roles, you need to add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.
ii) Trilio Datamover Service:
Service Entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover
This service should be deployed on the role where the nova-compute service is running.
If you are using RHOSP's pre-defined roles, you need to add our OS::TripleO::Services::TrilioDatamover service to the Compute role.
If you have defined custom roles, identify the role name where the 'nova-compute' service is running and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.
iii) Trilio Horizon Service:
This service needs to share the same role as the OpenStack Horizon server.
In the case of the pre-defined roles, the Horizon service runs on the Controller role.
Add the following service to the identified role: OS::TripleO::Services::TrilioHorizon
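A combined sketch of the roles_data.yaml additions, assuming the default pre-defined roles:

```yaml
- name: Controller
  ServicesDefault:
    # ... existing services ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioHorizon

- name: Compute
  ServicesDefault:
    # ... existing services ...
    - OS::TripleO::Services::TrilioDatamover
```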
All commands need to be run as user 'stack'
In the sections below, read <HOTFIX-TAG-VERSION> as 4.2.8.
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Red Hat registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that pull the containers from 'registry.connect.redhat.com', push them to the undercloud, and update the trilio_env.yaml.
The changes can be verified using the following commands.
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with the container URLs.
It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This will ensure that parameters that have been added since the last update/installation of Trilio are available and will be filled out too.
Locations of the trilio_env.yaml:
This section is only required when the multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On Undercloud node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on the undercloud after sourcing 'stackrc':
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only
If pip isn't available, please install pip on the undercloud first.
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option, as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration.
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
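A hedged example; replace <RHOSP_RELEASE_DIRECTORY> with your release directory and keep your existing environment files:

```bash
openstack overcloud deploy --templates \
  <your existing environment files> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml
```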
If the containers are in a restarting state or are not listed by the following command, then your deployment did not complete correctly. Please recheck whether you followed the complete documentation.
Make sure the Trilio dmapi and horizon containers are in a running state, and that no other Trilio container is deployed on the controller nodes. If the role for these containers is not "Controller", check the respective nodes according to the configured roles_data.yaml.
Make sure the Trilio datamover container is in a running state, and that no other Trilio container is deployed on the compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container, which contains the latest OpenStack Horizon plus Trilio's Horizon plugin.
If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround.
T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2
The undercloud job's puppet task certmonger_certificate[haproxy-external-cert] fails with 'Unrecognized parameter or wrong value type'. See Bug #1915242 "[Train] [CentOS7] Undercloud jobs puppet task certm..." : Bugs : tripleo.
Please follow the linked documentation for detailed steps to set up the mount bind.
Trilio containers are pushed to the 'Red Hat Container Registry'. The registry URL is 'registry.connect.redhat.com'. The Trilio container URLs are as follows:
Please refer to the linked page to see which containers are available.
Pull the Trilio containers on the Red Hat Satellite using the given container URLs.
For more details about the trilio_env.yaml, please check the linked section.
Edit the input map file and fill in all the details. Refer to the linked page for details about the structure.
Please follow the linked documentation to set up the mount bind for RHOSP.
The Trilio appliance of T4O 4.2 is running a different Kernel than the Trilio appliance for T4O 4.1 or older.
When upgrading from T4O 4.1 or older, it is recommended to replace the Trilio appliance entirely rather than do an in-place upgrade. This document covers both online and offline upgrades of the Trilio Appliance from any of the older T4O 4.2-based releases to the latest T4O 4.2 release.
The prerequisites should already be fulfilled from upgrading the Trilio components on the Controller and Compute nodes.
Please complete the upgrade of all the Trilio components on the Openstack controller & compute nodes before starting the rolling upgrade of the Trilio Appliance.
The mentioned Gemfury repository should be accessible from Trilio VM.
Either 4.2 GA OR any hotfix patch against 4.2 should be already deployed for performing upgrades mentioned in the current document.
Please ensure the following points before starting the upgrade process:
No Snapshot or Restore should be running.
The Global Job Scheduler should be disabled.
wlm-cron should be disabled and any lingering processes should be killed.
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Download the hf_upgrade.sh script on all T4O nodes where the upgrade is to be done. You can check the usage of this script here.
Either perform Step 1.1 (Upgrade to the latest Hotfix/Maintenance release) or Step 1.2 (Upgrade to an intermediate Hotfix/Maintenance release), depending on your requirement.
a. Download the hf_upgrade.sh of the desired Hotfix/Maintenance release.
Replace Gitbranch in the below command with the actual branch name of the Hotfix/Maintenance release you wish to upgrade to. You will find that git branch name on the Release Notes page of that particular Hotfix/Maintenance release.
b. Edit the downloaded hf_upgrade.sh and set BRANCH_NAME to triliodata-hotfix-4-2 for any Hotfix/Maintenance release; OFFLINE_PKG_NAME must be set to the particular hotfix package name.
You will find the offline package name for a particular release at http://repos.trilio.io:8283/triliodata-hotfix-4-2/offlinePkgs/
For example, if you wish to upgrade to the 4.2.7 release, then BRANCH_NAME would be triliodata-hotfix-4-2 and OFFLINE_PKG_NAME must be set to 4.2.7-offlinePkgs.tar.gz in the script.
Here the packages need to be downloaded on a separate host which has internet access.
The script ./hf_upgrade.sh can be used with the below-mentioned option to download the required package.
Download the hf_upgrade.sh script on a separate host which has internet access, and use it to download the upgraded packages. You can check the usage of this script here.
Either perform Step 1.1 (Upgrade to the latest Hotfix/Maintenance release) or Step 1.2 (Upgrade to an intermediate Hotfix/Maintenance release), depending on your requirement.
a. Download the hf_upgrade.sh of the desired Hotfix/Maintenance release.
Replace Gitbranch in the below command with the actual branch name of the Hotfix/Maintenance release you wish to upgrade to. You will find that git branch name on the Release Notes page of that particular Hotfix/Maintenance release.
b. Edit the downloaded hf_upgrade.sh, set BRANCH_NAME to triliodata-hotfix-4-2 for any Hotfix/Maintenance release and OFFLINE_PKG_NAME to the particular hotfix package name, and then run the script.
You will find the offline package name for a particular release at http://repos.trilio.io:8283/triliodata-hotfix-4-2/offlinePkgs/
For example, if you wish to upgrade to the 4.2.7 release, then BRANCH_NAME would be triliodata-hotfix-4-2 and OFFLINE_PKG_NAME must be set to 4.2.7-offlinePkgs.tar.gz in the script.
Copy the 4.2-offlinePkgs.tar.gz package and the hf_upgrade.sh script to all the Trilio nodes.
Verify all wlm services on all Trilio nodes.
Make sure all wlm services are up and running on Python 3.8.x.
Check the mount point using the 'df -h' command.
Grafana dashboard shows the correct wlm service status on all T4O nodes.
Enable Global Job Scheduler
Additional check for wlm-cron on the primary node
Red Hat OpenStack, TripleO, and Kolla Ansible Openstack use a nova UID/GID of 42436 inside their containers, instead of 162:162, which is the standard in other Openstack environments.
Please verify that the nova UID/GID on the Trilio Appliance is still 42436. To do so, follow the document provided here:
Trilio supports the upgrade of Trilio-Openstack components from any of the older releases (4.1 onwards) to the latest 4.2 hotfix releases without ripping up the older deployments.
Refer to the below-mentioned acceptable values for the placeholders kolla_base_distro and triliovault_tag in this document, as per the Openstack environment:
| Openstack Version | triliovault_tag | kolla_base_distro |
|---|---|---|
| Victoria | 4.2.8-victoria | ubuntu, centos |
| Wallaby | 4.2.8-wallaby | ubuntu, centos |
| Yoga | 4.2.8-yoga | ubuntu, centos |
| Zed | 4.2.8-zed | ubuntu, rocky |
Please ensure the following points are met before starting the upgrade process:
Either 4.1 or 4.2 GA, or any hotfix patch against 4.1/4.2, should already be deployed.
No Snapshot OR Restore is running
Global job scheduler should be disabled
wlm-cron is disabled on the primary Trilio Appliance
Access to the gemfury repository to fetch new packages
The following sets of commands will disable the wlm-cron service and verify that it has been completely shut down.
Before the latest configuration scripts are loaded, it is recommended to take a backup of the existing config scripts folder and the Trilio ansible roles. The following commands can be used for this purpose:
Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack.
Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.
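A hedged sketch of the backup, clone, and copy steps; the branch placeholder and role paths are assumptions to adapt to your environment:

```bash
# Back up the existing scripts and Trilio ansible roles (paths assumed)
mv triliovault-cfg-scripts triliovault-cfg-scripts-old
cp -a /usr/local/share/kolla-ansible/ansible/roles/triliovault* ~/trilio-roles-backup/

# Clone the latest configuration scripts (replace <branch> with the release branch)
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/

# Copy the Trilio ansible roles into the kolla-ansible roles directory
cp -a ansible/roles/* /usr/local/share/kolla-ansible/ansible/roles/
```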
This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.
In case of no changes, this step can be skipped.
This is required in case some variable names changed, new variables were added, or old variables were removed in the latest triliovault_globals.yml; they need to be updated in the /etc/kolla/globals.yml file.
This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.
In case of no changes, this step can be skipped.
This step is required, when some password variable names have been added, changed, or removed in the latest triliovault_passwords.yml. In this case, the /etc/kolla/passwords.yml needs to be updated.
This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.
In case of no changes, this step can be skipped.
This is required because, if some variable names changed, new variables were added, or old variables were removed in the latest triliovault_site.yml, they need to be updated in the /usr/local/share/kolla-ansible/ansible/site.yml file.
This step is not always required. It is recommended to compare triliovault_inventory.yml with the Trilio entries in the /root/multinode file.
In case of no changes, this step can be skipped.
By default, the triliovault-datamover-api service gets installed on 'control' hosts and the trilio-datamover service gets installed on 'compute' hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.
T4O group names are 'triliovault-datamover-api' and 'triliovault-datamover'.
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs
On kolla-ansible server node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
If IP addresses are used in the kolla-ansible inventory file, then you should use the same IP addresses in the 'triliovault_nfs_map_input.yml' file too. If you used hostnames there, then you need to use the same hostnames here in the NFS map input file.
The compute hostnames or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.
vi triliovault_nfs_map_input.yml
The triliovault_nfs_map_input.yml is explained here.
Update PyYAML on the kolla-ansible server node only.
Expand the map file to create a one-to-one mapping of compute and NFS share.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to triliovault_globals.yml.
File path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
Ensure to set multi_ip_nfs_enabled in the triliovault_globals.yml file to 'yes'.
A new parameter has been added to triliovault_globals.yml; set its value to 'yes' if the backup target NFS supports multiple endpoints/IPs.
File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
multi_ip_nfs_enabled: 'yes'
Later, append the triliovault_globals.yml file to /etc/kolla/globals.yml.
Edit the /etc/kolla/globals.yml file to fill in the triliovault backup target and build details. You will find the triliovault-related parameters at the end of globals.yml. The user needs to fill in details like the triliovault build version, backup target type, backup target details, etc.
Following is the list of parameters that the user needs to edit.

| Parameter | Defaults/choices | Comments |
|---|---|---|
| triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla openstack version. The exact tag is mentioned at the start of this document. |
| horizon_image_full | Uncomment | By default, the Trilio Horizon container is not deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container. |
| triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team. |
| triliovault_docker_password | <dockerhub-login-password> | Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team. |
| triliovault_docker_registry | Default: docker.io | If you want to use a different container registry for the triliovault containers, edit this value. In that case, first manually pull the triliovault containers from docker.io and push them to the other registry. |
| triliovault_backup_target | nfs, amazon_s3, ceph_s3 | 'nfs': the backup target is NFS; 'amazon_s3': the backup target is Amazon S3; 'ceph_s3': the backup target type is S3 but not Amazon S3. |
| multi_ip_nfs_enabled | yes, no (Default: no) | Valid only if you want to use multiple IP/endpoint-based NFS share(s) as the backup target for TrilioVault. |
| dmapi_workers | Default: 16 | If the dmapi_workers field is not present in the config file, the default value equals the number of cores on the node. |
| triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | Only with nfs for triliovault_backup_target. Provide the NFS share path, e.g. 192.168.122.101:/opt/tvault. |
| triliovault_nfs_options | Default: 'nolock,soft,timeo=180,intr,lookupcache=none'; for Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10' | Only with nfs for triliovault_backup_target. Keep the default values if unclear. |
| triliovault_s3_access_key | S3 Access Key | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 access key. |
| triliovault_s3_secret_key | S3 Secret Key | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 secret key. |
| triliovault_s3_region_name | S3 Region Name | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 region, or keep the default if no region is required. |
| triliovault_s3_bucket_name | S3 Bucket name | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 bucket. |
| triliovault_s3_endpoint_url | S3 Endpoint URL | Valid for other_s3_compatible only. |
| triliovault_s3_ssl_enabled | True, False | Only with ceph_s3 for triliovault_backup_target. Set to true if the endpoint is on HTTPS. |
| triliovault_s3_ssl_cert_file_name | s3-cert.pem | Only with ceph_s3 for triliovault_backup_target, and only if SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority; copy the Ceph S3 CA chain file to the '/etc/kolla/config/triliovault/' directory on the ansible server. Create this directory if it does not exist. |
| triliovault_copy_ceph_s3_ssl_cert | True, False | Set to true if ceph_s3 is the triliovault_backup_target and SSL is enabled on the S3 endpoint URL with self-signed or private-CA-issued certificates. |
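A minimal sketch of the resulting Trilio block in /etc/kolla/globals.yml for an NFS backup target (values are examples):

```yaml
triliovault_tag: 4.2.8-yoga                 # pick the tag matching your Openstack version
triliovault_docker_username: <dockerhub-login-username>
triliovault_docker_password: <dockerhub-login-password>
triliovault_docker_registry: docker.io
triliovault_backup_target: nfs
triliovault_nfs_shares: 192.168.122.101:/opt/tvault
triliovault_nfs_options: 'nolock,soft,timeo=180,intr,lookupcache=none'
```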
This step is already part of the 4.2 GA installation procedure and should only be verified.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
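A sketch with the Trilio entry appended (the existing default entries vary by Kolla release and are elided):

```yaml
nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  # ... remaining default entries for your Kolla release ...
  - "/var/trilio:/var/trilio:shared"
```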
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable for a default Kolla installation will look as follows:
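Similarly, a sketch for the compute volumes (defaults elided):

```yaml
nova_compute_default_volumes:
  - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
  # ... remaining default entries for your Kolla release ...
  - "/var/trilio:/var/trilio:shared"
```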
In the case of using Ironic compute nodes, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look like the following:
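A sketch for the Ironic compute volumes (defaults elided):

```yaml
nova_compute_ironic_default_volumes:
  - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
  # ... remaining default entries for your Kolla release ...
  - "/var/trilio:/var/trilio:shared"
```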
In case the user does not want to use the Docker Hub registry for the triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.
Following are the triliovault container image URLs for 4.2 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be, for example, 'centos', 'ubuntu', or 'rocky', depending on your base OpenStack distro. The {{ triliovault_tag }} is mentioned at the start of this document.
Trilio supports source-based containers from the OpenStack Yoga release onwards.
Below are the Source-based OpenStack deployment images
Below are the Binary-based OpenStack deployment images
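A hedged sketch of the image URL pattern only; the exact repository names are assumptions, so take the authoritative list from the release notes:

```bash
# Illustrative pattern - confirm the repository names in the release notes
docker.io/trilio/<kolla_base_distro>-source-trilio-datamover:<triliovault_tag>
docker.io/trilio/<kolla_base_distro>-source-trilio-datamover-api:<triliovault_tag>
docker.io/trilio/<kolla_base_distro>-source-trilio-horizon-plugin:<triliovault_tag>
# Binary-based images follow the analogous pattern with 'binary' in place of 'source'
```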
Log in to Dockerhub so the Trilio tagged containers can be pulled.
Please get the Dockerhub login credentials from Trilio Sales/Support team
Run the below command from the directory with the multinode file to pull the required images.
Run the below command from the directory with the multinode file to start the upgrade process.
Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.
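A sketch of the sequence using the standard kolla-ansible CLI (the grep is illustrative):

```bash
# Log in to docker.io with the credentials provided by Trilio
docker login -u <dockerhub-login-username> docker.io

# Pull the required images, then run the upgrade
kolla-ansible -i ./multinode pull
kolla-ansible -i ./multinode upgrade

# Verify the Trilio containers on the relevant nodes
docker ps | grep -i trilio
```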
The following are the default haproxy conf parameters set for the triliovault datamover api service.
These values work best for the triliovault dmapi service, and it is not recommended to change them. However, in some exceptional cases, if any of the above parameter values need to be changed, this can be done on the kolla-ansible server in the following file.
After editing, run kolla-ansible deploy command again to push these changes to openstack cloud.
After the kolla-ansible deploy, verify the changes by checking the following file, available on all controller/haproxy nodes.
T4O 4.2 is changing the calculation for the mount point hash value when using NFS backups.
Please follow this documentation to ensure that Backups taken from T4O 4.1 or older can be used with T4O 4.2.
Starting with Trilio for Openstack 4.0, in-place upgrades are supported.
The following versions can be upgraded to each other:
| Old | New |
|---|---|
| 4.0 GA (4.0.92) | 4.0 SP1 (4.0.115) |
| 4.0 GA (4.0.92) | 4.1 latest Hotfix |
| 4.0 GA (4.0.92) | 4.2 GA (4.2.64) |
| 4.1 GA (4.1.94) | 4.1 latest Hotfix |
| 4.1 Hotfix | 4.1 latest Hotfix |
| 4.1 GA (4.1.94) | 4.2 GA (4.2.64) |
| 4.1 Hotfix | 4.2 GA (4.2.64) |
The upgrade process covers upgrading the Trilio appliance and the Openstack components, and depends on the underlying operating system.
The Upgrade of Trilio for Canonical Openstack is managed through the charms.
T4O 4.2 has changed the calculation of the mount point. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2
Please follow this documentation to set up the mount bind for Canonical OpenStack.
Learn about upgrading Trilio on Canonical OpenStack
Trilio supports the upgrade of charms and Trilio components from older releases (4.0 onwards) to the latest release. More information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found here.
The following charms exist:
trilio-wlm Installs and manages Trilio Controller services.
trilio-dm-api Installs and manages the Trilio Datamover API service.
trilio-data-mover Installs and manages the Trilio Datamover service.
trilio-horizon-plugin Installs and manages the Trilio Horizon Plugin.
The documentation of the charms can be found here:
The following steps have been tested and verified within Trilio environments. There have been cases where these steps updated all packages inside the LXC containers, leading to failures in basic OpenStack services.
It is recommended to run each of these steps in dry-run first.
If any packages other than the Trilio packages are getting updated, stop the upgrade procedure and contact your Trilio customer success manager.
More detailed information about the latest 4.2 Trilio charms for the Trilio-4.2 release can be found here.
Follow the steps mentioned below to upgrade the charms to the latest 4.2 charms before upgrading the Trilio packages.
General Prerequisites
No snapshot OR restore are to be running during this process.
Global job scheduler should be disabled
juju [-m <model>] upgrade-charm trilio-wlm --switch trilio-charmers-trilio-wlm-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-horizon-plugin --switch trilio-charmers-trilio-horizon-plugin-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-dm-api --switch trilio-charmers-trilio-dm-api-focal --channel latest/edge
juju [-m <model>] upgrade-charm trilio-data-mover --switch trilio-charmers-trilio-data-mover-focal --channel latest/edge
juju [-m <model>] exec --application trilio-dm-api "sudo systemctl restart tvault-datamover-api"
juju [-m <model>] exec --application trilio-data-mover "sudo systemctl restart tvault-contego"
juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl restart apache2"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl restart wlm-api wlm-scheduler wlm-workloads wlm-cron"
juju [-m <model>] exec --application trilio-wlm "systemctl restart wlm-api wlm-scheduler wlm-workloads"
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource restart res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource stop res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl stop wlm-cron"
juju [-m <model>] exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"
If any lingering processes are found, stop them manually with 'sudo systemctl stop wlm-cron'.
juju [-m <model>] exec --unit trilio-wlm/leader "sudo crm resource start res_trilio_wlm_wlm_cron"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-cron"
wlm-cron service should only be running in a single Juju Unit and not running in the other two.
Check the trilio units' status in the juju status [-m <model>] | grep trilio output.
All the trilio units will be reporting the new juju charm version.
trilio-data-mover 4.2.64.26 active 3 trilio-charmers-trilio-data-mover-jammy latest/edge 4 no Unit is ready
trilio-dm-api 4.2.64.2 active 1 trilio-charmers-trilio-dm-api-jammy latest/edge 2 no Unit is ready
trilio-horizon-plugin 4.2.64.5 active 1 trilio-charmers-trilio-horizon-plugin-jammy latest/edge 4 no Unit is ready
trilio-wlm 4.2.64.20 active 1 trilio-charmers-trilio-wlm-jammy latest/edge 3 no Unit is ready
juju [-m <model>] exec --application trilio-dm-api "sudo systemctl status tvault-datamover-api"
juju [-m <model>] exec --application trilio-data-mover "sudo systemctl status tvault-contego"
juju [-m <model>] exec --application trilio-horizon-plugin "sudo systemctl status apache2"
juju [-m <model>] exec --application trilio-wlm "sudo systemctl status wlm-api wlm-scheduler wlm-workloads wlm-cron"
Trilio releases hotfixes which require updating the packages inside the containers. These hotfixes cannot be installed using the Juju charms, as they do not require an update to the charms themselves.
Trilio packages can be upgraded after deployment. Trilio supports upgrade to the latest 4.2 releases from as old as the Trilio 4.0 release.
No snapshot OR restore are to be running during this process.
Global job scheduler should be disabled.
wlm-cron should be disabled ( Following sets of commands are to be run on MAAS node).
If trilio-wlm is HA enabled, set the cluster configuration to maintenance mode (this command will fail for single node deployments).
Stop the wlm-cron service
Ensure that no stale wlm-cron processes exist
If any stale processes are found, they should be stopped manually.
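A hedged sketch of these pre-checks, reusing the juju commands from the charm-upgrade section (the crm maintenance-mode property is a standard pacemaker setting):

```bash
# HA only: put the cluster into maintenance mode (fails on single-node deployments)
juju exec --unit trilio-wlm/leader "sudo crm configure property maintenance-mode=true"

# Stop the wlm-cron service
juju exec --application trilio-wlm "sudo systemctl stop wlm-cron"

# Check for stale wlm-cron processes; stop any that remain manually
juju exec --application trilio-wlm "sudo ps -ef | grep workloadmgr-cron | grep -v grep"
```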
The deployed Trilio version is controlled by the triliovault-pkg-source charm configuration option.
The gemfury repository should be accessible from all Trilio units. For each trilio charm, it should be pointing to the following Gemfury repository:
This can be checked via the juju [-m <model>] config trilio-wlm triliovault-pkg-source command output.
The preferred, recommended, and tested method to update the packages is through the Juju command line.
Run the below commands from the MAAS node.
Check the trilio units in the juju status [-m <model>] | grep trilio output; all the trilio units should be running the new packages. Then run the below command to update the schema.
Check the schema head with the below command. It should point to the latest schema head.
T4O 4.2 is changing how the mount point is calculated. It is required to setup a mount bind to make T4O 4.1 or older backups available.
Please refer to this page for detailed steps to set up the mount bind.
If the trilio-wlm nodes are HA enabled:
Make sure the wlm-cron services are down after the pkg upgrade. Run the following command for the same:
Unset the cluster maintenance mode
Make sure the wlm-cron service is up and running on any one node.
Set the Global Job Scheduler to the original state.
If any trilio unit gets into an error state with the message:
hook failed: "update-status"
one of the reasons could be that the package installation did not finish correctly. One way to verify that is by logging into that unit and following the below steps: