TVO-4.2
Installing on TripleO Train

1] Prepare for deployment

1.1] Select 'backup target' type

The backup target storage is used to store the backup images taken by TrilioVault. The following backup target types are supported by TrilioVault; collect the listed details for your target before starting the configuration.
a) NFS
  • NFS share path
b) Amazon S3
  • S3 Access Key
  • Secret Key
  • Region
  • Bucket name
c) Other S3 compatible storage (for example, Ceph based S3)
  • S3 Access Key
  • Secret Key
  • Region
  • Endpoint URL
  • Bucket name
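As an optional pre-deployment sanity check, the reachability of the chosen backup target can be verified from the node that will run the deployment. The sketch below uses placeholder values (NFS server, share path, bucket name, endpoint URL) that must be replaced with your own; the awscli tool is an assumption and not part of this guide.
## NFS: confirm the share is exported and mountable (placeholder server/share)
showmount -e 192.168.1.50
sudo mount -t nfs 192.168.1.50:/mnt/tvault /mnt && sudo umount /mnt

## S3: confirm the endpoint and credentials work (placeholder bucket/endpoint, requires awscli)
aws s3 ls s3://my-trilio-bucket --endpoint-url https://s3.example.com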

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud deploy command must already have been run successfully and the overcloud must be available.
All commands need to be run as user 'stack' on the undercloud node.
The following command clones the triliovault-cfg-scripts github repository.
cd /home/stack
git clone -b hotfix-4-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
Please note that the TrilioVault Appliance needs to be updated to hf3 as well.

1.3] If backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the CA chain certificate must be provided to validate the SSL requests. To do so, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files'.
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
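To confirm that the provided CA chain actually validates the Ceph S3 endpoint, an SSL handshake can be tested with openssl before the deployment. The endpoint hostname and port below are placeholders.
## Placeholder endpoint; replace with your Ceph S3 endpoint
openssl s_client -connect s3.ceph.example.com:443 -CAfile s3-cert.pem < /dev/null
## The output should contain 'Verify return code: 0 (ok)'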

2] Upload TrilioVault puppet module

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
chmod +x *.sh
./upload_puppet_module.sh

## Output of above command looks like following.
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## Above command creates following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
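As an optional sanity check, assuming the undercloud uses Swift for deployment artifacts (the Train default), the generated environment file and the uploaded object can be inspected:
source /home/stack/stackrc
cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
openstack object list overcloud-artifacts | grep puppet-modules.tar.gz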

3] Update overcloud roles data file to include Trilio services

TrilioVault contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'

3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database services. With the pre-defined roles, these services run on the role Controller. In case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamoverApi'

3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the role Compute. In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamover'
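A quick way to confirm that both Trilio services ended up in the intended roles is to grep the edited roles data file; the path below assumes the default roles_data.yaml is being edited, adjust it for a custom roles file.
grep -nE 'TrilioDatamoverApi|TrilioDatamover' /usr/share/openstack-tripleo-heat-templates/roles_data.yaml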

4] Prepare Trilio container images

All commands need to be run as user 'stack'
Trilio containers are pushed to 'Dockerhub'. The registry URL is 'docker.io'. Container pull URLs are given below.

CentOS7

TrilioVault Datamover container: docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.1.94-hotfix-3-tripleo
TrilioVault Datamover Api container: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:4.1.94-hotfix-3-tripleo
TrilioVault Horizon plugin: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo

CentOS8

TrilioVault Datamover container: docker.io/trilio/tripleo-train-centos8-trilio-datamover:4.1.94-hotfix-3-tripleo
TrilioVault Datamover Api container: docker.io/trilio/tripleo-train-centos8-trilio-datamover-api:4.1.94-hotfix-3-tripleo
TrilioVault Horizon plugin: docker.io/trilio/tripleo-train-centos8-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo
There are two registry methods available in TripleO Openstack Platform.
  1. Remote Registry
  2. Local Registry

4.1] Remote Registry

Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the TrilioVault container URLs from the Dockerhub registry.
Populate the trilio_env.yaml with container URLs for:
  • TrilioVault Datamover container
  • TrilioVault Datamover api container
  • TrilioVault Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments
# For TripleO Train CentOS7
$ grep 'Image' trilio_env.yaml
   DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.1.94-hotfix-3-tripleo
   DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:4.1.94-hotfix-3-tripleo
   DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo

# For TripleO Train CentOS8
$ grep 'Image' trilio_env.yaml
   DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos8-trilio-datamover:4.1.94-hotfix-3-tripleo
   DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos8-trilio-datamover-api:4.1.94-hotfix-3-tripleo
   ContainerHorizonImage: docker.io/trilio/tripleo-train-centos8-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo
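Optionally, before the deployment, it can be verified that the container tags referenced in trilio_env.yaml are reachable from the undercloud. The example below assumes skopeo is installed on the undercloud; a plain podman/docker pull would work as well.
## Inspect one tag without pulling the full image
skopeo inspect docker://docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.1.94-hotfix-3-tripleo | head -20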

4.2] Local Registry

Follow this section when 'local registry' is used on the undercloud.
Run the following script. The script pulls the TrilioVault containers and updates the TrilioVault environment file with the local registry URLs.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/

sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.1-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>

Options OS_platform: [centos7, centos8]
Options container_tool_available_on_undercloud: [docker, podman]

## To get the undercloud registry hostname/ip, there are two approaches. Use either one.
1. openstack tripleo container image list

2. find your 'containers-prepare-parameter.yaml' (from the overcloud deploy command) and search for 'push_destination'
cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
- push_destination: "undercloud.ctlplane.ooo.prod1:8787"

Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command like the following example.

# Command Example:
sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 4.1.94-hotfix-1-tripleo podman

## Verify changes
# For TripleO Train CentOS7
$ grep 'Image' ../environments/trilio_env.yaml
   DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:4.1.94-hotfix-3-tripleo
   DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:4.1.94-hotfix-3-tripleo
   DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo

# For TripleO Train CentOS8
$ grep 'Image' trilio_env.yaml
   DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-datamover:4.1.94-hotfix-3-tripleo
   DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-datamover-api:4.1.94-hotfix-3-tripleo
   ContainerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos8-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo
The changes can be verified using the following commands.
## For CentOS7 Train
(undercloud) [[email protected] redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:4.1.94-hotfix-3-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:4.1.94-hotfix-3-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo |

## For CentOS8 Train
(undercloud) [[email protected] redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-datamover:4.1.94-hotfix-3-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-datamover-api:4.1.94-hotfix-3-tripleo |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos8-trilio-horizon-plugin:4.1.94-hotfix-3-tripleo |

5] Configure multi-IP NFS

This section is only required when the multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On the undercloud node, change to the following directory:
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on the undercloud after sourcing 'stackrc'.
(undercloud) [[email protected] ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
Update PyYAML on the undercloud node only.
If pip is not available, please install pip on the undercloud first.
## On Python3 env
sudo pip3 install PyYAML==5.1

## On Python2 env
sudo pip install PyYAML==5.1
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
## On Python3 env
python3 ./generate_nfs_map.py

## On Python2 env
python ./generate_nfs_map.py
The result will be written to the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' (available in the current directory) and verify that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
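As an optional check, assuming PyYAML is installed from the earlier step, the generated map can be parsed to make sure it is valid YAML before it is appended:
python3 -c "import yaml; print(yaml.safe_load(open('triliovault_nfs_map_output.yml')))"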
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

6] Fill in triliovault environment details

Fill in the TrilioVault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The TrilioVault environment file is self-explanatory: fill in the details of the backup target, verify the image URLs, and check the other details.
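A quick, hedged way to review which values still need to be filled in is to scan the environment file for image and backup-target related keys; the exact parameter names depend on the shipped template, so adjust the pattern if needed.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/
grep -inE 'image|nfs|s3' trilio_env.yaml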

7] Install TrilioVault on Overcloud

Use the following heat environment files and roles data file in the overcloud deploy command:
  1. trilio_env.yaml: This environment file contains the Trilio backup target details and the Trilio container image locations.
  2. roles_data.yaml: This file contains the overcloud roles data with the Trilio roles added.
  3. Use the correct Trilio endpoint map file as per your keystone endpoint configuration:
    - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
    - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
    - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'
A deploy command including the TrilioVault environment files looks like the following.
openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
  --ntp-server 192.168.1.34 \
  --libvirt-type qemu \
  --log-file overcloud_deploy.log \
  -r /home/stack/templates/roles_data.yaml

8] Verify deployment

If the containers are in a restarting state or are not listed by the following commands, then your deployment was not done correctly. Please recheck that you followed the complete documentation.

8.1] On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and that no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.
[[email protected] heat-admin]# podman ps | grep trilio
26fcb9194566 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.1.94-hotfix-3-tripleo kolla_start 5 days ago Up 5 days ago trilio_dmapi
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-hotfix-3-tripleo kolla_start 5 days ago Up 5 days ago horizon
Verify the haproxy configuration under:
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
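To locate the Trilio entries quickly, the haproxy configuration can be grepped on the controller node. The assumption here is that the relevant frontend/backend names contain 'trilio' or 'dmapi'; adjust the pattern if your configuration differs.
sudo grep -iE -A5 'trilio|dmapi' /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg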

8.2] On Compute node

Make sure the Trilio datamover container is in a running state and that no other Trilio container is deployed on the compute nodes.
[[email protected] heat-admin]# podman ps | grep trilio
b1840444cc59 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:4.1.94-hotfix-3-tripleo kolla_start 5 days ago Up 5 days ago trilio_datamover

8.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus TrilioVault's Horizon plugin.
[[email protected] heat-admin]# podman ps | grep horizon
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-hotfix-3-tripleo kolla_start 5 days ago Up 5 days ago horizon

9] Additional Steps on TrilioVault Appliance

9.1] Change the nova user id on the TrilioVault Nodes

In TripleO, the 'nova' user id in the nova-compute docker container is set to '42436'. The 'nova' user id on the TrilioVault nodes needs to be set to the same value. Do the following steps on all TrilioVault nodes:
  1. Download the shell script that will change the user id
  2. Assign executable permissions
  3. Execute the script
  4. Verify that the 'nova' user and group id has changed to '42436'
## Download the shell script
$ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh

## Assign executable permissions
$ chmod +x nova_userid.sh

## Execute the shell script to change 'nova' user and group id to '42436'
$ ./nova_userid.sh

## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
$ id nova
uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
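As a cross-check, the user id used by the overcloud itself can be confirmed on any compute node. The container name 'nova_compute' is the TripleO default and is assumed here.
## Run on an overcloud compute node
sudo podman exec nova_compute id nova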

10] Troubleshooting for overcloud deployment failures

Trilio components are deployed using puppet scripts.
If the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

## If the trilio datamover api container does not start or is in a restarting state, use the following logs to debug.
docker logs trilio_dmapi
tailf /var/log/containers/trilio-datamover-api/dmapi.log

## If the trilio datamover container does not start or is in a restarting state, use the following logs to debug.
docker logs trilio_datamover
tailf /var/log/containers/trilio-datamover/tvault-contego.log
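On podman-based overclouds (for example CentOS8 Train) the docker CLI may not be available; in that case the same container logs can be read with podman, and the container names stay the same.
sudo podman logs trilio_dmapi
sudo podman logs trilio_datamover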