Installing on TripleO Train

1] Prepare for deployment

1.1] Select 'backup target' type

Backup target storage is used to store the backup images taken by Trilio. The details needed for configuration depend on the backup target type.

The following backup target types are supported by Trilio

a) NFS

- NFS share path

b) Amazon S3

- S3 Access Key
- Secret Key
- Region
- Bucket name

c) Other S3 compatible storage (e.g. Ceph-based S3)

- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
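
Before proceeding, it can help to verify that the chosen backup target is reachable from the undercloud. A minimal sketch; the NFS server hostname, bucket name, and endpoint URL below are placeholders:

## NFS: list the exports offered by the NFS server (placeholder hostname)
showmount -e nfs-server.example.com

## Amazon S3 / S3 compatible: list the target bucket with the AWS CLI, if available
## (placeholder bucket and endpoint; drop --endpoint-url for Amazon S3)
aws s3 ls s3://my-trilio-bucket --endpoint-url https://s3.example.com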

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the 'undercloud' node of an already installed TripleO environment. The overcloud-deploy command must already have run successfully and the overcloud must be available.

All commands need to be run as user 'stack' on undercloud node

TripleO CentOS8 is no longer supported, as CentOS Linux 8 reached End of Life on December 31st, 2021.

The following commands clone the triliovault-cfg-scripts GitHub repository.

cd /home/stack
git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/

Please note that the Trilio Appliance needs to be updated to the latest hotfix (HF) as well.

1.3] If the backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files

cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
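
Optionally, the chain can be checked against the endpoint before deployment. A sketch; the endpoint hostname and port are placeholders:

## Verify that the CA chain validates the S3 endpoint certificate
openssl s_client -connect s3.example.com:443 -CAfile s3-cert.pem < /dev/null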

2] Upload Trilio puppet module

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
chmod +x *.sh
./upload_puppet_module.sh

## Output of the above command looks like the following.
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## Above command creates the following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
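
To confirm the upload, the artifact can also be listed directly in swift; a sketch, assuming the 'overcloud-artifacts' container shown in the output above:

openstack object list overcloud-artifacts | grep puppet-modules.tar.gz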

3] Update overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.

If roles_data.yaml has not been customized, the default file can be found on the undercloud at:

/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
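
If no custom copy exists yet, the default file can be copied into the templates directory before editing; this matches the roles data path used by the deploy command in section 7:

cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml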

Add the following services to the roles_data.yaml

All commands need to be run as user 'stack'

3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database services. In the case of the pre-defined roles, these services run on the role Controller. In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

Add the following line to the identified role:

'OS::TripleO::Services::TrilioDatamoverApi'
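
For illustration, the entry is added to the ServicesDefault list of the identified role in roles_data.yaml (surrounding services abbreviated); the same pattern applies to the services in the next two subsections:

- name: Controller
  ...
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::MySQL
    ...
    - OS::TripleO::Services::TrilioDatamoverApi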

3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute. In the case of custom-defined roles, it is necessary to use the same role where the nova-compute service is deployed.

Add the following line to the identified role:

'OS::TripleO::Services::TrilioDatamover'

3.3] Add Trilio Horizon Service to role data file

This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:

'OS::TripleO::Services::TrilioHorizon'

4] Prepare Trilio container images

All commands need to be run as user 'stack'

In the sections below, read <HOTFIX-TAG-VERSION> as 4.2.8.

Trilio containers are pushed to Docker Hub (registry URL: docker.io). The container pull URLs are given below.

CentOS7

Trilio Datamove container:        docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo

There are two registry methods available in the TripleO OpenStack Platform.

  1. Remote Registry

  2. Local Registry

4.1] Remote Registry

Follow this section when 'Remote Registry' is used.

For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.

Populate the trilio_env.yaml with container URLs for:

  • Trilio Datamover container

  • Trilio Datamover api container

  • Trilio Horizon Plugin

trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments

# For TripleO Train Centos7
$ grep 'Image' trilio_env.yaml
   DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
   DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
   DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo

4.2] Local Registry

Follow this section when 'local registry' is used on the undercloud.

Run the following script. The script pulls the Trilio containers and updates the Trilio environment file with the local registry URLs.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/

sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <HOTFIX-TAG-VERSION>-tripleo <container_tool_available_on_undercloud>

Options OS_platform: [centos7]
Options container_tool_available_on_undercloud: [docker, podman]

## To get undercloud registry hostname/ip, we have two approaches. Use either one.
1. openstack tripleo container image list

2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
 - push_destination: "undercloud.ctlplane.ooo.prod1:8787"

Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as shown in the following example.

# Command Example:
sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman

## Verify changes
# For TripleO Train Centos7
$ grep '<HOTFIX-TAG-VERSION>-tripleo' ../environments/trilio_env.yaml
   DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
   DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
   DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo

The changes can be verified using the following commands.

## For Centos7 Train

(undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo |

5] Configure multi-IP NFS

This section is only required when the multi-IP feature for NFS is used.

This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

On the undercloud node, change the directory:

cd triliovault-cfg-scripts/common/

Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host to NFS share/IP map.

Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

Run this command on the undercloud after sourcing 'stackrc'.

(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+

Edit the input map file and fill in all the details. Refer to this page for details about the structure.

vi triliovault_nfs_map_input.yml

Update PyYAML on the undercloud node only.

If pip isn't available, please install it on the undercloud.

## On Python3 env
sudo pip3 install PyYAML==5.1

## On Python2 env
sudo pip install PyYAML==5.1
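
To confirm that the expected version is active after the install:

## On Python3 env (use 'python' on Python2); should print 5.1
python3 -c "import yaml; print(yaml.__version__)"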

Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

## On Python3 env
python3 ./generate_nfs_map.py

## On Python2 env
python ./generate_nfs_map.py

The result will be written to the file 'triliovault_nfs_map_output.yml'.

Validate the output map file

Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

vi triliovault_nfs_map_output.yml

Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml

Validate the changes in file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml

Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
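
A sketch of the relevant setting, assuming the standard heat environment file format used by trilio_env.yaml:

parameter_defaults:
  MultiIPNfsEnabled: true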

6] Fill in Trilio environment details

Fill in the Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml; the environment file is self-explanatory. Fill in the details of the backup target, verify the image URLs, and complete the other details.

NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

7] Install Trilio on Overcloud

Use the following heat environment files and roles data file in the overcloud deploy command:

  1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

  2. roles_data.yaml: This file contains overcloud roles data with Trilio roles added.

  3. Use the correct Trilio endpoint map file as per your keystone endpoint configuration.
     - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
     - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
     - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'

The deploy command with the Trilio environment files looks like the following.

openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
  --ntp-server 192.168.1.34 \
  --libvirt-type qemu \
  --log-file overcloud_deploy.log \
  -r /home/stack/templates/roles_data.yaml

Post deployment, for multipath-enabled environments, log into the respective datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, and restart the datamover container.
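
A sketch of those steps on a compute node, using the container name from section 8.2 (the placement of the option within multipath.conf is an assumption; adjust to your environment):

## Open a shell in the datamover container
podman exec -it trilio_datamover bash
## Inside the container: add the line 'uxsock_timeout 60000' to /etc/multipath.conf, then exit
exit
## Restart the container
podman restart trilio_datamover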

8] Verify the deployment

If the containers are in a restarting state or are not listed by the following commands, then your deployment did not complete correctly. Please recheck that you followed the complete documentation.

8.1] On the Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.

[root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon

Verify the haproxy configuration under:

/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
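
A quick check; the assumption here is that the Trilio Datamover API entries in the generated config are labelled with 'dmapi' (search for the Trilio API port otherwise):

grep -i dmapi /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg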

8.2] On the Compute node

Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

[root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover

8.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus Trilio's Horizon plugin.

[root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon

9] Troubleshooting for overcloud deployment failures

Trilio components will be deployed using puppet scripts.

If the overcloud deployment fails, run the following commands to get the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html

openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

##=> If the Trilio Datamover API container does not start properly or is in a restarting state, use the following logs to debug.
docker logs trilio_dmapi
tailf /var/log/containers/trilio-datamover-api/dmapi.log

##=> If the Trilio Datamover container does not start properly or is in a restarting state, use the following logs to debug.
docker logs trilio_datamover
tailf /var/log/containers/trilio-datamover/tvault-contego.log
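
On podman-based nodes (the verification steps above use 'podman ps'), the same container logs can be fetched with podman:

podman logs trilio_dmapi
podman logs trilio_datamover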
