Upgrading on RHOSP

1] Prerequisites

Please ensure the following requirements are met before starting the upgrade process:
  • No Snapshot or Restore is running
  • Global job scheduler is disabled
  • wlm-cron is disabled on the Trilio Appliance
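As a quick preflight, the wlm-cron state can be read from the pcs output. The helper below is illustrative only; the string it looks for matches the 'pcs resource show wlm-cron' output shown further down in this section.

```shell
# Illustrative helper: reads `pcs resource show wlm-cron` output on stdin and
# reports whether the resource has been disabled (target-role=Stopped).
check_wlm_cron_disabled() {
  grep -q 'target-role=Stopped' \
    && echo "wlm-cron disabled" \
    || echo "wlm-cron STILL ENABLED - disable it before upgrading"
}

# Example with a captured line of pcs output:
echo 'Meta Attrs: target-role=Stopped' | check_wlm_cron_disabled
```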

Deactivating the wlm-cron service

The following commands disable the wlm-cron service and verify that it has been completely shut down.
[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
Meta Attrs: target-role=Stopped
Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
            start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
            stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron

2] Clone the latest Trilio repository

2.1] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the undercloud node of an already installed RHOSP environment. The overcloud deploy command must already have completed successfully and the overcloud must be available.
All commands need to be run as the 'stack' user on the undercloud node.
The following commands clone the triliovault-cfg-scripts GitHub repository.
cd /home/stack
mv triliovault-cfg-scripts triliovault-cfg-scripts-old
git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/

2.2] If backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the CA chain certificate must be provided to validate the SSL requests. Rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files'.
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files
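Before copying, it can help to sanity-check that the file is a PEM bundle and see how many certificates the chain contains. This snippet is a sketch; the file name follows the rename step above.

```shell
# Sketch: verify s3-cert.pem is PEM-encoded and count the certificates in the CA chain.
CERT=${CERT:-s3-cert.pem}
if [ -f "$CERT" ]; then
  COUNT=$(grep -c 'BEGIN CERTIFICATE' "$CERT")
  echo "$CERT contains $COUNT certificate(s)"
else
  echo "$CERT not found - rename your CA chain file to s3-cert.pem first"
fi
```

A full chain typically contains more than one certificate (server/intermediate plus root CA).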

3] Update overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.
If roles_data.yaml has not been customized, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to roles_data.yaml.
All commands need to be run as the 'stack' user.

3.1] Add Trilio Datamover Api and Trilio Workload Manager services to role data file

These services need to share the same role as the keystone and database services. With the pre-defined roles, these services run on the Controller role. With custom-defined roles, use the role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following lines to the identified role:
'OS::TripleO::Services::TrilioDatamoverApi'
'OS::TripleO::Services::TrilioWlmApi'
'OS::TripleO::Services::TrilioWlmWorkloads'
'OS::TripleO::Services::TrilioWlmScheduler'
'OS::TripleO::Services::TrilioWlmCron'

3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the Compute role. With custom-defined roles, use the role the nova-compute service is assigned to.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamover'

4] Prepare Trilio container images

All commands need to be run as a 'stack' user
Trilio containers are pushed to the 'RedHat Container Registry' at 'registry.connect.redhat.com'. Container pull URLs are given below.
Read <CONTAINER-TAG-VERSION> as 5.1.0 in the sections below.
RHOSP16.1
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
RHOSP16.2
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
There are three registry methods available in the RedHat OpenStack Platform:
  1. Remote Registry
  2. Local Registry
  3. Satellite Server

4.1] Remote Registry

Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly onto the overcloud nodes during the overcloud deploy/update command execution. The remote registry can be set to the RedHat registry or any other private registry. The registry credentials need to be provided in the 'containers-prepare-parameter.yaml' file.
  1. Make sure the other OpenStack service images are also using the same method to pull container images. If that is not the case, this method can not be used.
  2. Populate 'containers-prepare-parameter.yaml' with content like the following. The important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true' and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials for 'registry.redhat.io' also work for the 'registry.connect.redhat.com' registry.
File Name: containers-prepare-parameter.yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
      ...
      ...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.connect.redhat.com:
      myuser: 'p@55w0rd!'
  ContainerImageRegistryLogin: true
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the RedHat documentation of that command.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise the image pull operation will fail.
4. Manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:
trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
# For RHOSP16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
# For RHOSP16.2
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
At this step, you have configured the Trilio image URLs in the necessary environment file.
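One way to fill the placeholder in all four image URLs at once is a single sed substitution. This is a sketch, assuming the 5.1.0 tag this guide uses; ENV_FILE defaults to a scratch copy seeded with one sample line, so the command can be tried safely before editing the real trilio_env.yaml.

```shell
# Sketch: replace every <CONTAINER-TAG-VERSION> placeholder with the release tag.
TAG=5.1.0
ENV_FILE=${ENV_FILE:-$(mktemp)}
# Seed a one-line sample when previewing outside the real environment file.
[ -s "$ENV_FILE" ] || echo '  ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1' > "$ENV_FILE"
sed -i "s/<CONTAINER-TAG-VERSION>/${TAG}/g" "$ENV_FILE"
grep 'Image:' "$ENV_FILE"
```

Point ENV_FILE at the real trilio_env.yaml to apply the substitution for real.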

4.2] Local Registry

Follow this section when a local registry is used on the undercloud.
In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides a shell script which pulls the containers from 'registry.connect.redhat.com', pushes them to the undercloud registry, and updates trilio_env.yaml.

RHOSP 16.1

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.1
## Run the following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'; 'undercloud2-161.ctlplane.trilio.local' is the undercloud registry hostname in the example below
$ openstack tripleo container image list | grep keystone
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.1 |
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.1 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloud2-161.ctlplane.trilio.local 5.1.0-rhosp16.1
## Verify changes
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.1
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1 |

RHOSP16.2

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.2
## Run the following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'; 'undercloudqa162.ctlplane.trilio.local' is the undercloud registry hostname in the example below
$ openstack tripleo container image list | grep keystone
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.2 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa162.ctlplane.trilio.local 5.1.0-rhosp16.2
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp16.2
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 |
At this step, you have downloaded the Trilio container images and configured the Trilio image URLs in the necessary environment file.

4.3] Red Hat Satellite Server

Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers onto the Red Hat Satellite using the Red Hat registry URLs given above.
Populate trilio_env.yaml with the container URLs:

RHOSP 16.1 and RHOSP16.2

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
At this step, you have downloaded the Trilio container images into the Red Hat Satellite server and configured the Trilio image URLs in the necessary environment file.

5] Provide environment details in trilio-env.yaml

Edit the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml and provide the backup target details and other necessary details. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated during the preparation of the container images; it is still recommended to verify the container URLs.
You don't need to provide anything for resource_registry; keep it as it is.
CloudAdminUserName
  Default value is admin. Provide the cloud admin user name of your overcloud.
CloudAdminProjectName
  Default value is admin. Provide the cloud admin project name of your overcloud.
CloudAdminDomainName
  Default value is default. Provide the cloud admin domain name of your overcloud.
CloudAdminPassword
  Provide the cloud admin user's password of your overcloud.
ContainerTriliovaultDatamoverImage
  Trilio Datamover container image. Already populated in the preparation of the container images; it is still recommended to verify the container URL.
ContainerTriliovaultDatamoverApiImage
  Trilio Datamover API container image. Already populated in the preparation of the container images; it is still recommended to verify the container URL.
ContainerTriliovaultWlmImage
  Trilio WLM container image. Already populated in the preparation of the container images; it is still recommended to verify the container URL.
ContainerHorizonImage
  Horizon container image. Already populated in the preparation of the container images; it is still recommended to verify the container URL.
BackupTargetType
  Default value is nfs. Set either 'nfs' or 's3' as the target backend for snapshots taken by TrilioVault.
MultiIPNfsEnabled
  Default value is False. Set to True only if you want to use one or more multi-IP/endpoint NFS shares as the backup target for TrilioVault.
NfsShares
  Provide the NFS share you want to use as the backup target for snapshots taken by TrilioVault.
NfsOptions
  Sets the NFS mount options. Keep the default values unless a special requirement exists.
S3Type
  If your backup target is S3, provide either amazon_s3 or ceph_s3 depending on the S3 type.
S3AccessKey
  Provide the S3 access key.
S3SecretKey
  Provide the S3 secret key.
S3RegionName
  Provide the S3 region. If your S3 type does not have a region parameter, keep the parameter as it is.
S3Bucket
  Provide the S3 bucket name.
S3EndpointUrl
  Provide the S3 endpoint URL. Not required for Amazon S3; if your S3 type does not require it, keep it as it is.
S3SignatureVersion
  Provide the S3 signature version.
S3AuthVersion
  Provide the S3 auth version.
S3SslEnabled
  Default value is False. If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to True; otherwise keep it as False.
TrilioDatamoverOptVolumes
  A list of extra volumes to mount on the 'triliovault_datamover' container. Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.
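As an illustration only, a filled-in parameter block for an NFS backup target could look like the following. Every value here is a placeholder for this sketch, not a verified default from the shipped file; take the actual values from your environment.

```yaml
parameter_defaults:
  CloudAdminUserName: 'admin'
  CloudAdminProjectName: 'admin'
  CloudAdminDomainName: 'default'
  CloudAdminPassword: '<cloud-admin-password>'
  BackupTargetType: 'nfs'
  MultiIPNfsEnabled: False
  NfsShares: '192.168.1.34:/mnt/tvault'
```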

6] Generate random passwords for TrilioVault OpenStack resources

Generate random passwords for the TrilioVault resources using the following script.
This script generates random passwords for all TrilioVault resources that are going to be created in the OpenStack cloud.

6.1] Change the directory and run the script

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./generate_passwords.sh

6.2] Output will be written to the below file

Include this file in your overcloud deploy command as an environment file with the '-e' option:
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml

7] Fetch ids of required OpenStack resources

7.1] Source the 'overcloudrc' file of the cloud admin user. This is needed to run the OpenStack CLI.

For this section only, source the rc file of the overcloud:
source <OVERCLOUD_RC_FILE>

7.2] The cloud admin user details of the overcloud must be filled into the 'trilio_env.yaml' file as described in the 'Provide environment details in trilio-env.yaml' section. If not done yet, please do so:

vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml

7.3] The cloud admin user should have the admin role on the cloud admin domain

openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin
# Example
openstack role add --user admin --domain default admin

7.4] After this, run the following script.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts
./create_wlm_ids_conf.sh
The output will be written to the following file:
cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files/triliovault_wlm_ids.conf
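A quick way to validate the generated file is to check that no key was left without a value. The snippet below is a sketch; CONF_FILE is overridable so the check can be tried on a sample file.

```shell
# Sketch: report keys with empty values in triliovault_wlm_ids.conf.
CONF_FILE=${CONF_FILE:-/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files/triliovault_wlm_ids.conf}
if [ -f "$CONF_FILE" ]; then
  awk -F= 'NF>=2 && $2=="" {print "missing value for " $1; bad=1} END {exit bad}' "$CONF_FILE" \
    && echo "ids file looks complete"
else
  echo "$CONF_FILE not found - run create_wlm_ids_conf.sh first"
fi
```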

8] Load necessary linux drivers on all Controller and Compute nodes

For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all Controller and Compute nodes (wherever the TrilioVault WLM and datamover services are going to be installed).

8.1] Load the nbd module

modprobe nbd nbds_max=128
lsmod | grep nbd

8.2] Load the fuse module

modprobe fuse
lsmod | grep fuse
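modprobe does not survive a reboot, so the modules should also be configured to load at boot. The sketch below assumes systemd's modules-load.d mechanism is available on the nodes; TARGET defaults to a scratch directory so the commands can be previewed, and setting TARGET to empty (with root privileges) writes the real /etc files.

```shell
# Sketch: persist nbd (with nbds_max=128) and fuse so they load on every boot.
TARGET=${TARGET-$(mktemp -d)}   # set TARGET= (empty) to write the real /etc files
mkdir -p "$TARGET/etc/modules-load.d" "$TARGET/etc/modprobe.d"
printf 'nbd\nfuse\n' > "$TARGET/etc/modules-load.d/trilio.conf"
echo 'options nbd nbds_max=128' > "$TARGET/etc/modprobe.d/trilio-nbd.conf"
```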

9] Upload trilio puppet module

All commands need to be run as a 'stack' user on undercloud node

9.1] Source the stackrc

source stackrc

9.2] The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./upload_puppet_module.sh
## The output of the above command looks like the following
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-B1bp1Bk/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object | container | etag |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 17ed9cb7a08f67e1853c610860b8ea99 |
+-----------------------+---------------------+----------------------------------+
Upload complete
## The above command creates the following file
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml

10] Deploy overcloud with trilio environment

10.1] Include the environment file defaults.yaml in the overcloud deploy command with the '-e' option as shown below.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml

10.2] Additionally, include the following heat environment files and the roles data file mentioned in the above sections in the overcloud deploy command:

  1. trilio_env.yaml
  2. roles_data.yaml
  3. passwords.yaml
  4. defaults.yaml
  5. Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration:
    1. Instead of the tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_endpoints_public_dns.yaml
    2. Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_endpoints_public_ip.yaml
    3. Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml
To include the new environment files use the '-e' option; for the roles data file use the '-r' option.
Below is an example of the overcloud deploy command with the Trilio environment:
openstack overcloud deploy --stack overcloudtrain5 --templates \
--libvirt-type qemu \
--ntp-server 192.168.1.34 \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/templates/ceph-config.yaml \
-e /home/stack/templates/cinder_size.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
-e /home/stack/templates/configure-barbican.yaml \
-e /home/stack/templates/multidomain_horizon.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e /home/stack/templates/tls-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Post deployment, for a multipath enabled environment, log into the respective datamover container, add 'uxsock_timeout' with value 60000 (i.e. 60 sec) in /etc/multipath.conf, and restart the datamover container.
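The edit described above can be scripted. The following sketch inserts the setting into the defaults section of a multipath.conf; CONF defaults to a scratch sample so the sed can be previewed. Inside the datamover container the file is /etc/multipath.conf, and the container still needs a restart afterwards.

```shell
# Sketch: add uxsock_timeout 60000 to the defaults section if it is not present yet.
CONF=${CONF:-$(mktemp)}
# Sample content for previewing outside the container.
[ -s "$CONF" ] || printf 'defaults {\n    user_friendly_names yes\n}\n' > "$CONF"
grep -q 'uxsock_timeout' "$CONF" || sed -i '/^defaults {/a\    uxsock_timeout 60000' "$CONF"
grep 'uxsock_timeout' "$CONF"
```

The grep guard makes the edit idempotent, so re-running it will not add the line twice.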

11] Verify deployment

Please follow this documentation to verify the deployment.

12] Troubleshooting for overcloud deployment failures

Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED
=> If any Trilio container does not start well or is in a restarting state on a Controller/Compute node, use the following logs to debug.
podman logs <trilio-container-name>
tail -f /var/log/containers/<trilio-container-name>/<trilio-container-name>.log

13] Advanced Settings/Configuration

13.1] Configure Multi-IP NFS

This section is only required when the Multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

i] On the undercloud node, change the directory

cd triliovault-cfg-scripts/common/

ii] Edit the file triliovault_nfs_map_input.yml in the current directory and provide the compute host to NFS share/IP map.

Get the overcloud Controller and Compute hostnames from the following command (check the 'Name' column) and use the exact hostnames in the triliovault_nfs_map_input.yml file.
Run this command on the undercloud after sourcing 'stackrc'.
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to this page for details about the structure.
Below is an example of how to set the multi-IP NFS details:
Different IPs can not be configured for the Controller/WLM nodes; the same share must be used on all controller nodes. Different IPs can be configured for the Compute/datamover nodes.
$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap maps datamover and WLM nodes (compute and controller nodes) to their NFS shares.
parameter_defaults:
  TriliovaultMultiIPNfsMap:
    overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
    overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
    overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs

iii] Update pyyaml on the undercloud node only

If pip is not available, install it on the undercloud first.
sudo pip3 install PyYAML==5.1
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
python3 ./generate_nfs_map.py

iv] Validate output map file

The result will be stored in the file triliovault_nfs_map_output.yml.
Open the file triliovault_nfs_map_output.yml in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

v] Append this output map file to triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml

grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml

vi] Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml

vii] Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that nfs is used as the backup target.

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml

viii] After this, run the overcloud deployment.

13.2] Haproxy customized configuration for Trilio dmapi service

The existing default haproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the Trilio Datamover API are observed or other reasons are known.
The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file:
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
The Trilio datamover haproxy default configuration from the above file looks as follows:
listen triliovault_datamover_api
  bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
  balance roundrobin
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  maxconn 50000
  option httpchk
  option httplog
  option forwardfor
  retries 5
  timeout check 10m
  timeout client 10m
  timeout connect 10m
  timeout http-request 10m
  timeout queue 10m
  timeout server 10m
  server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local
The user can change the following configuration parameter values.
retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000
To change these default values, do the following steps.
i) On the undercloud node, open the following file for editing:
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/services/triliovault-datamover-api.yaml
ii) Search the following entries and edit as required
tripleo::haproxy::trilio_datamover_api::options:
  'retries': '5'
  'maxconn': '50000'
  'balance': 'roundrobin'
  'timeout http-request': '10m'
  'timeout queue': '10m'
  'timeout connect': '10m'
  'timeout client': '10m'
  'timeout server': '10m'
  'timeout check': '10m'
iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.

13.3] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

i) To add one or more extra volume/directory mounts to the Trilio Datamover Service container, use the variable 'TrilioDatamoverOptVolumes' available in the below file:
triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml
The volumes/directories must already be mounted on the Compute host before they can be mounted into the Trilio Datamover Service container.
ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. Edit the file and add the volume mounts in the below format:
TrilioDatamoverOptVolumes:
- <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>
## For example, the `/mnt/mount-on-host` directory below is mounted on the Compute host and is to be mounted at `/mnt/mount-inside-container` inside the Datamover container
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-on-host
## Then provide that mount in below format
TrilioDatamoverOptVolumes:
- /mnt/mount-on-host:/mnt/mount-inside-container
iii) Lastly, run the overcloud deploy/update.
After a successful deployment, the volume/directory mount will be visible inside the Trilio Datamover Service container:
[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-inside-container

14] Enable mount-bind for NFS

This step is needed only when the backup target is NFS and the upgrade is from T4O 4.1 or an older release.
T4O has changed the calculation of the mount point from the 4.2 release onwards. It is necessary to set up the mount-bind to make T4O 4.1 or older backups available for T4O 4.2.
Please follow this documentation to set up the mount bind for RHOSP.

15] Enable Global Job Scheduler

After the upgrade, the global job scheduler will be disabled. It can be enabled either through the UI or the CLI.

15.1] Through UI

Log in to the Dashboard and go to the Admin -> Backups-Admin -> Settings tab and check the 'Job Scheduler Enabled' checkbox. Clicking the 'Change' button enables the Global Job Scheduler.

15.2] Through CLI

Log in to any WLM container, create and source the admin rc file, and execute the CLI command to enable the global job scheduler:
$ podman exec -itu root triliovault_wlm_api /bin/bash
$ source admin.rc
$ workloadmgr enable-global-job-scheduler
Global job scheduler is successfully enabled

16] Import old workloads existing on the Backup Target

Workload import allows importing workloads existing on the backup target into the Trilio database.
Please follow this documentation to import the Workloads.