Getting started with Trilio on Red Hat OpenStack Platform (RHOSP)

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio integrates natively with the RHOSP Director. Manual deployment methods are not supported for RHOSP.

1. Prepare for deployment

Use the following values for the placeholders triliovault_tag, trilio_branch, RHOSP_version and CONTAINER-TAG-VERSION throughout this document, according to your OpenStack environment:

Trilio Release: 6.0.0
triliovault_tag: 6.0.0-beta-2-rhosp17.1
trilio_branch: 6.0.0-beta-2
RHOSP_version: RHOSP17.1
CONTAINER-TAG-VERSION: 6.0.0-beta-2

1.1] Select 'backup target' type

Backup target storage is used to store the backup images taken by Trilio. The details listed below are needed to configure each backup target type.

The following backup target types are supported by Trilio:

a) NFS

Requires the NFS share path

b) Amazon S3

- S3 Access Key
- Secret Key
- Region
- Bucket name

c) Other S3 compatible storage (Like Ceph based S3)

- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the undercloud node of an already installed RHOSP environment. The overcloud deploy command must already have completed successfully, and the overcloud must be available.

All commands need to be run as the 'stack' user on the undercloud node.

In the sections below, replace the placeholder <RHOSP_RELEASE_DIRECTORY> with rhosp17.

The following commands clone the triliovault-cfg-scripts GitHub repository.

cd /home/stack
source stackrc
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>

1.3] Set executable permissions for all shell scripts

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
chmod +x *.sh

1.4] If the backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you need to provide a CA chain certificate to validate the SSL requests. For every S3 backup target with self-signed TLS certificates, copy the CA chain file into the Trilio puppet module at the following location, using the given file name format. Replace the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> placeholders in the following command.

 cp <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-<S3_BACKUP_TARGET_NAME>.pem

For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE = 's3-ca.pem', the command to copy this CA chain file into the Trilio puppet module would be:

cp s3-ca.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-BT2_S3.pem

1.5] If the VM migration feature needs to be enabled:

More about this feature can be found here.

Please refer to this page to collect the necessary artifacts before continuing further.

1.5.1] Copy the VMware vSphere Virtual Disk Development Kit 7.0

Rename the file to vddk.tar.gz and place it at:

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/vddk.tar.gz
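
For example, a single copy command both renames and places the archive (the downloaded archive name below is a placeholder):

cp <DOWNLOADED_VDDK_ARCHIVE> /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/vddk.tar.gz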

1.5.2] For a secure connection with vCenter

Copy the vCenter SSL cert file to

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
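
For example (the certificate file name below is a placeholder):

cp <VCENTER_CA_CERT_FILE> /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/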

2] Update the overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.

If roles_data.yaml has not been customized, the default file can be found on the undercloud at:

/usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Add the following services to the roles_data.yaml

All commands need to be run as a 'stack' user

2.1] Add Trilio Datamover Api and Trilio Workload Manager services to role data file

These services need to share the same role as the Keystone and database services. With the pre-defined roles, these services run on the Controller role. With custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

Add the following lines to the identified role (a reference excerpt follows the list):

- OS::TripleO::Services::TrilioDatamoverApi
- OS::TripleO::Services::TrilioWlmApi
- OS::TripleO::Services::TrilioWlmWorkloads
- OS::TripleO::Services::TrilioWlmScheduler
- OS::TripleO::Services::TrilioWlmCron
- OS::TripleO::Services::TrilioObjectStore
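
For reference, a trimmed sketch of how the default Controller role in roles_data.yaml might look after this edit (surrounding services are elided):

- name: Controller
  ...
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    ...
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioWlmApi
    - OS::TripleO::Services::TrilioWlmWorkloads
    - OS::TripleO::Services::TrilioWlmScheduler
    - OS::TripleO::Services::TrilioWlmCron
    - OS::TripleO::Services::TrilioObjectStore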

2.2] Add Trilio Datamover Service to role data file

These services need to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the Compute role. With custom-defined roles, it is necessary to use the role that the nova-compute service uses.

Add the following lines to the identified role (a reference excerpt follows the list):

- OS::TripleO::Services::TrilioDatamover
- OS::TripleO::Services::TrilioObjectStore
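
Likewise, a trimmed sketch of the default Compute role after this edit:

- name: Compute
  ...
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    ...
    - OS::TripleO::Services::TrilioDatamover
    - OS::TripleO::Services::TrilioObjectStore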

3] Prepare Trilio container images

All commands need to be run as a 'stack' user

Trilio containers are pushed to the Red Hat Container Registry. Registry URL: 'registry.connect.redhat.com'. The container pull URLs are given below.

Trilio Datamover Container:       registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Horizon Plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio WLM Container:             registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1

There are three registry methods available in Red Hat OpenStack Platform.

  1. Remote Registry

  2. Local Registry

  3. Satellite Server

3.1] Remote Registry

Follow this section when 'Remote Registry' is used.

In this method, container images are downloaded directly onto the overcloud nodes during the overcloud deploy/update command execution. The remote registry can be the Red Hat registry or any other private registry. Provide the registry credentials in the containers-prepare-parameter.yaml file.

  1. Make sure the other OpenStack service images also use this method to pull container images. If they do not, this method cannot be used.

  2. Populate containers-prepare-parameter.yaml with content like the following. The important parameters are push_destination: false, ContainerImageRegistryLogin: true, and the registry credentials. Trilio container images are published to registry.connect.redhat.com; the credentials for registry.redhat.io also work for registry.connect.redhat.com.

File Name: containers-prepare-parameter.yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
      ...
  ...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.connect.redhat.com:
      myuser: 'p@55w0rd!'
  ContainerImageRegistryLogin: true

Red Hat document for the remote registry method: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters

Note: The file 'containers-prepare-parameter.yaml' is created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat document above.

  3. Make sure all overcloud nodes have network connectivity to the above registries; otherwise the image pull operation will fail.

  4. The user needs to manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:

trilio_env.yaml file path:

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/

$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
   ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have configured Trilio image URLs in the necessary environment file.

3.2] Local Registry

Follow this section when 'local registry' is used on the undercloud.

In this case, it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that pull the containers from registry.connect.redhat.com, push them to the undercloud registry, and update trilio_env.yaml.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/

sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp17.1

## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa17.ctlplane.trilio.local 6.0.0-rhosp17.1


## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp17.1' ../environments/trilio_env.yaml
   ContainerTriliovaultDatamoverImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

$  openstack tripleo container image list | grep <CONTAINER-TAG-VERSION>-rhosp17.1
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1                |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1               |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1                    |
| docker://undercloudqa17.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1                          |

At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.

3.3] Red Hat Satellite Server

Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers onto the Red Hat Satellite using the Red Hat registry URLs given above.

Populate the trilio_env.yaml with container URLs.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments

$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
   ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have downloaded the Trilio container images into the Red Hat Satellite server and configured the Trilio image URLs in the necessary environment file.

4] Provide environment details in trilio_env.yaml

Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide the backup target details and other necessary details. This environment file will be used in the overcloud deployment to configure Trilio components. The container image names have already been populated during the preparation of the container images; still, it is recommended to verify the container URLs.

You do not need to change anything under resource_registry; keep it as it is.

CloudAdminUserName: Default value is admin. Provide the cloud admin user name of your overcloud.

CloudAdminProjectName: Default value is admin. Provide the cloud admin project name of your overcloud.

CloudAdminDomainName: Default value is default. Provide the cloud admin domain name of your overcloud.

CloudAdminPassword: Provide the cloud admin user's password of your overcloud.

ContainerTriliovaultDatamoverImage: Trilio Datamover container image. Already populated during the preparation of the container images; still, it is recommended to verify the container URL.

ContainerTriliovaultDatamoverApiImage: Trilio Datamover API container image. Already populated during the preparation of the container images; still, it is recommended to verify the container URL.

ContainerTriliovaultWlmImage: Trilio WLM container image. Already populated during the preparation of the container images; still, it is recommended to verify the container URL.

ContainerHorizonImage: Horizon container image. Already populated during the preparation of the container images; still, it is recommended to verify the container URL.

TrilioBackupTargets: List of backup targets for Trilio. These backup targets will be used to store backups taken by Trilio. Backup target examples and the format for NFS and S3 types are already provided in the trilio_env.yaml file. Details of the respective parameters under TrilioBackupTargets are given in the next section.

TrilioDatamoverOptVolumes: List of extra volumes that the user wants to mount on the 'triliovault_datamover' container. Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.
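
As an illustration only, the parameters above might be filled in like the following sketch (all values are placeholders; verify against the trilio_env.yaml file shipped with the scripts):

parameter_defaults:
  CloudAdminUserName: admin
  CloudAdminProjectName: admin
  CloudAdminDomainName: default
  CloudAdminPassword: <CLOUD_ADMIN_PASSWORD>
  ContainerTriliovaultDatamoverImage: <REGISTRY>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
  ContainerTriliovaultDatamoverApiImage: <REGISTRY>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
  ContainerTriliovaultWlmImage: <REGISTRY>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
  ContainerHorizonImage: <REGISTRY>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1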

4.1] TrilioBackupTargets Details

T4O supports setting up multiple target backends for storing snapshots. Users can define any number of storage backends as required. At a high level, NFS and S3 are supported.

The following table provides the details of the parameters to be set in the trilio_env.yaml file for each S3 target backend.

backup_target_name: User-defined name of the target backend. Can be any name that helps to quickly identify the respective target.

backup_target_type: s3

is_default: Can be true or false. Exactly one of the target backends specified in the trilio_env.yaml file must be marked as true.

s3_type: Either amazon s3 or ceph_s3, depending on which S3 is to be configured with T4O.

s3_access_key: S3 access key.

s3_secret_key: S3 secret key.

s3_region_name: S3 region name.

s3_bucket: S3 bucket name.

s3_endpoint_url: S3 endpoint URL.

s3_signature_version: S3 signature version.

s3_auth_version: S3 auth version.

s3_ssl_enabled: true

s3_ssl_verify: true

s3_self_signed_cert: true

s3_bucket_object_lock_enabled: Set to true if the S3 bucket has object lock enabled, otherwise false.
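
To illustrate how these parameters fit together, below is a minimal sketch of one S3 entry under TrilioBackupTargets. It assumes the parameter names above map one-to-one to YAML keys; the authoritative examples and exact format are in trilio_env.yaml, and all values shown are placeholders.

parameter_defaults:
  TrilioBackupTargets:
    - backup_target_name: BT1_S3
      backup_target_type: s3
      is_default: true
      s3_type: ceph_s3
      s3_access_key: <S3_ACCESS_KEY>
      s3_secret_key: <S3_SECRET_KEY>
      s3_region_name: <S3_REGION>
      s3_bucket: <S3_BUCKET>
      s3_endpoint_url: <S3_ENDPOINT_URL>
      s3_signature_version: <S3_SIGNATURE_VERSION>
      s3_auth_version: <S3_AUTH_VERSION>
      s3_ssl_enabled: true
      s3_ssl_verify: true
      s3_self_signed_cert: true
      s3_bucket_object_lock_enabled: false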

The following table provides the details of the parameters to be set in the trilio_env.yaml file for each NFS target backend.

backup_target_name: User-defined name of the target backend. Can be any name that helps to quickly identify the respective target.

backup_target_type: nfs

is_default: Can be true or false. Exactly one of the target backends specified in the trilio_env.yaml file must be marked as true.

nfs_options: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'. These parameters set the NFS mount options. Keep the default values unless a special requirement exists.

is_multi_ip_nfs: true or false, depending on whether the NFS storage backend has a single IP or multiple IPs.

nfs_shares: NFS IP and share path. To be used for single-IP NFS, e.g. 11.30.1.10:/mnt/share

multi_ip_nfs_map: NFS IPs and share paths. To be used when the NFS backend has multiple IPs. Sample:
  multi_ip_nfs_map:
    controller1: 192.168.2.3:/var/nfsshare
    controller2: 192.168.2.4:/var/nfsshare
    compute0: 192.168.3.2:/var/nfsshare
    compute1: 192.168.3.4:/var/nfsshare
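
Similarly, a sketch of a single-IP NFS entry (illustration only; the authoritative examples are in trilio_env.yaml):

parameter_defaults:
  TrilioBackupTargets:
    - backup_target_name: BT1_NFS
      backup_target_type: nfs
      is_default: true
      nfs_options: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'
      is_multi_ip_nfs: false
      nfs_shares: 11.30.1.10:/mnt/share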

4.2] Update triliovault object store yaml

After you fill in the backup target details in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script updates the 'services/triliovault-object-store.yaml' file; no manual changes or verification of that file are needed.

cd redhat-director-scripts/rhosp17/scripts/
dnf install python3-ruamel-yaml
python3 update_object_store_yaml.py

4.3] For enabling the VM migration feature, populate the below mentioned parameters

More about this feature can be found here.

VmwareToOpenstackMigrationEnabled: Set to True if this feature is to be enabled, otherwise keep it False. Populate all the parameters below if it is set to True.

VcenterUrl: vCenter access URL, for example: https://vcenter-1.infra.trilio.io/

VcenterUsername: Access username (check the privilege requirements here).

VcenterPassword: Access user's password.

VcenterNoSsl: Set to False if the connection is to be established securely; set to True if SSL verification is to be ignored.

VcenterCACertFileName: If VcenterNoSsl is set to False, provide the name of the SSL certificate file uploaded at step 1.5.2; otherwise, keep it blank.
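
As an illustration, these parameters could be set in trilio_env.yaml as follows (username, password, and certificate file name are placeholders):

parameter_defaults:
  VmwareToOpenstackMigrationEnabled: True
  VcenterUrl: https://vcenter-1.infra.trilio.io/
  VcenterUsername: <VCENTER_USERNAME>
  VcenterPassword: <VCENTER_PASSWORD>
  VcenterNoSsl: False
  VcenterCACertFileName: <VCENTER_CA_CERT_FILE_NAME>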

5] Generate random passwords for TrilioVault OpenStack resources

The user needs to generate random passwords for Trilio resources using the following script.

This script will generate random passwords for all Trilio resources that are going to get created in OpenStack cloud.

5.1] Change the directory and run the script

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./generate_passwords.sh

5.2] Output will be written to the below file.

Include this file in your overcloud deploy command as an environment file with the option "-e"

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml

6] Fetch ids of required OpenStack resources

6.1] Source the 'overcloudrc' file of the cloud admin user. This is needed to run the OpenStack CLI.

For this section only, source the rc file of the overcloud instead of stackrc.

source <OVERCLOUD_RC_FILE>

6.2] The cloud admin user details of the overcloud must already be filled in the 'trilio_env.yaml' file, as described in the 'Provide environment details in trilio_env.yaml' section. If not, please do so now.

vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

6.3] The cloud admin user should have the admin role on the cloud admin domain

openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin

# Example
openstack role add --user admin --domain default admin

6.4] After this, run the following script.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts
./create_wlm_ids_conf.sh

The output will be written to

cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/triliovault_wlm_ids.conf

7] Load necessary Linux drivers on all Controller and Compute nodes

For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all Controller and Compute nodes (where the Trilio WLM and Datamover services are going to be installed).

7.1] Load the nbd module

modprobe nbd nbds_max=128
lsmod | grep nbd

7.2] Load the fuse module

modprobe fuse
lsmod | grep fuse
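
Note that modprobe does not persist across reboots. If the modules must be loaded automatically after a reboot, one common approach on RHEL-based nodes (an assumption, not part of the Trilio scripts) is:

echo "nbd" | sudo tee /etc/modules-load.d/nbd.conf
echo "options nbd nbds_max=128" | sudo tee /etc/modprobe.d/nbd.conf
echo "fuse" | sudo tee /etc/modules-load.d/fuse.conf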

8] Upload Trilio puppet module

All commands need to be run as a 'stack' user on undercloud node

8.1] Source the stackrc

source stackrc

8.2] The following commands prepare the Trilio puppet module for upload to the overcloud. The actual upload happens during the next overcloud deployment.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
./upload_puppet_module.sh

## Output of above command looks like following
Creating tarball...
Tarball created.
renamed '/tmp/puppet-modules-MUIyvXI/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
[stack@uc17-1 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
parameter_defaults:
  DeployArtifactFILEs:
  - /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz

## Above command creates following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml

9] Deploy overcloud with Trilio environment

9.1] Include the environment file defaults.yaml in the overcloud deploy command with the `-e` option as shown below.

This YAML file holds default values, such as the default Trustee Role (creator) and the Keystone endpoint interface (Internal). It also contains some other parameters that users can update as per their requirements.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/defaults.yaml 

9.2] Additionally, include the following heat environment files and the roles data file mentioned in the above sections in the overcloud deploy command:

  1. trilio_env.yaml

  2. roles_data.yaml

  3. passwords.yaml

  4. defaults.yaml

  5. Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration. Remove your OpenStack endpoint map file from the overcloud deploy command and use the Trilio endpoint map file instead.

    1. Instead of tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml

    2. Instead of tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_ip.yaml

    3. Instead of tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_everywhere_dns.yaml

    4. Instead of no-tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_non_tls_endpoints_ip.yaml

To include the new environment files, use the -e option; for the roles data file, use the -r option.

Below is an example of an overcloud deploy command with Trilio environment:

openstack overcloud deploy --stack overcloudtrain5 --templates \
  --libvirt-type qemu \
  --ntp-server 192.168.1.34 \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/containers-prepare-parameter.yaml \
  -e /home/stack/templates/ceph-config.yaml \
  -e /home/stack/templates/cinder_size.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
  -e /home/stack/templates/configure-barbican.yaml \
  -e /home/stack/templates/multidomain_horizon.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
  -e /home/stack/templates/tls-parameters.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_env_tls_everywhere_dns.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/defaults.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/passwords.yaml \
  -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Post-deployment, for a multipath-enabled environment, log into the respective Datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, and restart the Datamover container, as sketched below.
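
A sketch of these post-deployment steps on a Compute node, assuming the container name triliovault_datamover as used elsewhere in this guide:

podman exec -itu root triliovault_datamover bash
## Inside the container, add the following line to the defaults section of /etc/multipath.conf:
##   uxsock_timeout 60000
exit
podman restart triliovault_datamover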

10] Verify deployment

Please follow this documentation to verify the deployment.

11] Troubleshooting for overcloud deployment failures

Trilio components will be deployed using puppet scripts.

If the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html

openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

=> If any Trilio containers do not start correctly or are in a restarting state on a Controller/Compute node, use the following logs to debug.

podman logs <trilio-container-name>

tail -f /var/log/containers/<trilio-container-name>/<trilio-container-name>.log

12] Advanced Settings/Configuration

12.1] Configure Multi-IP NFS

This section is only required when the Multi-IP feature for NFS is required.

This feature allows setting the IP used to access the NFS volume per Datamover instead of globally.

i] On Undercloud node, change the directory

cd triliovault-cfg-scripts/common/

ii] Edit the file triliovault_nfs_map_input.yml in the current directory and provide the Compute host and NFS share/IP map.

Get the overcloud Controller and Compute hostnames from the following command (check the Name column). Use the exact host names in the triliovault_nfs_map_input.yml file.

Run this command on the undercloud after sourcing stackrc.

(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+

Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to this page for details about the structure.

Below is an example of how you can set the multi-IP NFS details:

You cannot configure different IPs for the Controller/WLM nodes; you need to use the same share on all Controller nodes. You can configure different IPs for the Compute/Datamover nodes.

$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap represents datamover, WLM nodes (compute and controller nodes) and it's NFS share mapping.
parameter_defaults:
  TriliovaultMultiIPNfsMap:
    overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
    overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
    overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
    overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs

iii] Update pyyaml on the undercloud node only

If pip is not available, install it on the undercloud first.

sudo pip3 install PyYAML==5.1

Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

python3 ./generate_nfs_map.py

iv] Validate output map file

The result will be stored in the triliovault_nfs_map_output.yml file

Open file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

v] Append this output map file to triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

vi] Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

vii] Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as a backup target.

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

viii] After this, run the overcloud deployment.

12.2] Haproxy customized configuration for Trilio dmapi service

The existing default HAProxy configuration works fine in most environments. Change it as described here only when timeout issues with the Trilio Datamover API are observed, or for other known reasons.

The following is the HAProxy configuration file location on the HAProxy nodes of the overcloud. The Trilio Datamover API service HAProxy configuration is added to this file.

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

The Trilio Datamover API default HAProxy configuration in the above file looks as follows:

listen triliovault_datamover_api
  bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
  balance roundrobin
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  maxconn 50000
  option httpchk
  option httplog
  option forwardfor
  retries 5
  timeout check 10m
  timeout client 10m
  timeout connect 10m
  timeout http-request 10m
  timeout queue 10m
  timeout server 10m
  server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local

The user can change the following configuration parameter values.

retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000

To change these default values, follow these steps.

i) On the undercloud node, open the following file for editing:

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/services/triliovault-datamover-api.yaml

ii) Search for the following entries and edit them as required:

          tripleo::haproxy::trilio_datamover_api::options:
             'retries': '5'
             'maxconn': '50000'
             'balance': 'roundrobin'
             'timeout http-request': '10m'
             'timeout queue': '10m'
             'timeout connect': '10m'
             'timeout client': '10m'
             'timeout server': '10m'
             'timeout check': '10m'

iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.

12.3] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

i) To add one or more extra volume/directory mounts to the Trilio Datamover Service container, use the variable 'TrilioDatamoverOptVolumes', which is available in the below file.

triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

To add an extra volume/directory mount to the Trilio Datamover Service container, the volumes/directories must already be mounted on the Compute host.

ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. Edit the file and add the volume mounts in the below format:

TrilioDatamoverOptVolumes:
   - <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>

## For example, the directory `/mnt/mount-on-host` below is mounted on the Compute host, and it is to be mounted at `/mnt/mount-inside-container` inside the Datamover container

[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436   2.5T  2.3T  234G  91% /mnt/mount-on-host

## Then provide that mount in the below format

TrilioDatamoverOptVolumes:
   - /mnt/mount-on-host:/mnt/mount-inside-container

iii) Lastly, run the overcloud deploy/update.

After a successful deployment, the volume/directory will be mounted inside the Trilio Datamover Service container.

[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436   2.5T  2.3T  234G  91% /mnt/mount-inside-container

12.4] Advanced Ceph Configuration (Optional)

Trilio uses Cinder's Ceph user for interacting with the Ceph Cinder storage. This user name is defined using the parameter 'ceph_cinder_user'.

Details about multiple Ceph configurations can be found here.
