TVO-4.2
Upgrading on RHOSP

0] Prerequisites

Please ensure the following points are met before starting the upgrade process:
  • No Snapshot or Restore is running
  • Global job scheduler is disabled
  • wlm-cron is disabled on the TrilioVault Appliance

Deactivating the wlm-cron service

The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
[[email protected] ~]# pcs resource disable wlm-cron
[[email protected] ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[[email protected] ~]# pcs resource show wlm-cron
 Resource: wlm-cron (class=systemd type=wlm-cron)
  Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
              stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[[email protected] ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron
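The three checks above can be collapsed into a small pre-flight script you run once before starting the upgrade. This is an illustrative sketch, not part of TrilioVault: the function name is made up, and it only verifies the process-level check shown above (the `[w]` in the grep pattern keeps the grep process from matching itself).

```shell
#!/bin/sh
# Illustrative pre-flight sketch: succeed only when no workloadmgr-cron
# process remains on this appliance node.
wlm_cron_process_gone() {
    # grep returns success when a match is found, so negate it
    ! ps -ef | grep -i '[w]orkloadmgr-cron' > /dev/null
}

if wlm_cron_process_gone; then
    echo "wlm-cron looks stopped - safe to continue"
else
    echo "wlm-cron still running - do not start the upgrade" >&2
fi
```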

1] [On Undercloud node] Clone latest TrilioVault repository and upload TrilioVault puppet module

All commands need to be run as user 'stack' on undercloud node

1.1] Clone TrilioVault cfg scripts repository

cd /home/stack
mv triliovault-cfg-scripts triliovault-cfg-scripts-old
git clone -b stable/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
Separate directories are created per Red Hat OpenStack release under the 'triliovault-cfg-scripts/redhat-director-scripts/' directory. Use all scripts/templates from the respective directory. For example, if your RHOSP release is 13, then use scripts/templates from the 'triliovault-cfg-scripts/redhat-director-scripts/rhosp13' directory only.
Available RHOSP_RELEASE_DIRECTORY values are:
rhosp13 rhosp16.1 rhosp16.2
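The release-to-directory mapping above can be expressed as a small lookup, which helps avoid typos when scripting the clone step. This is an illustrative helper, not part of the TrilioVault scripts; the function name and `RHOSP_RELEASE` input are assumptions.

```shell
#!/bin/sh
# Illustrative helper: map an RHOSP release to its scripts directory name.
rhosp_dir() {
    case "$1" in
        13)   echo "rhosp13" ;;
        16.1) echo "rhosp16.1" ;;
        16.2) echo "rhosp16.2" ;;
        *)    echo "unsupported RHOSP release: $1" >&2; return 1 ;;
    esac
}

# Usage sketch:
# cd "/home/stack/triliovault-cfg-scripts/redhat-director-scripts/$(rhosp_dir 16.2)/"
```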

1.2] If backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you need to provide the CA chain certificate to validate the SSL requests. To do so, rename the CA chain certificate file to 's3-cert.pem' and copy it into the puppet directory of the right release.
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
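Before copying, a quick sanity check can confirm the file is actually PEM-encoded, which catches a common mistake (copying a DER or PKCS#12 file under the required name). This sketch is an assumption on our part, not a TrilioVault requirement; it only checks the PEM header, not the validity of the chain.

```shell
#!/bin/sh
# Illustrative sanity check: a PEM CA chain should contain at least one
# "BEGIN CERTIFICATE" block. This does not validate the chain itself.
looks_like_pem_chain() {
    grep -q 'BEGIN CERTIFICATE' "$1"
}

# Usage sketch (path as in the step above):
# looks_like_pem_chain s3-cert.pem && cp s3-cert.pem \
#   /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
```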

2] Upload TrilioVault puppet module

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./upload_puppet_module.sh

## Output of above command looks like following.
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## Above command creates following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml

3] Update overcloud roles data file to include TrilioVault services

TrilioVault has three services, as explained below. You need to add these services to your roles_data.yaml. If you do not have a customized roles_data file, you can find the default roles_data.yaml at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml on the undercloud.
Find that roles_data file and edit it to add the following TrilioVault services.
i) Trilio Datamover Api Service:
Service entry in roles_data yaml: OS::TripleO::Services::TrilioDatamoverApi
This service needs to be co-located with the database and keystone services, so add it to the same role as the keystone and database services. Typically this service is deployed on controller nodes, where keystone and the database run. If you are using RHOSP's pre-defined roles, add the OS::TripleO::Services::TrilioDatamoverApi service to the Controller role.
ii) Trilio Datamover Service:
Service entry in roles_data yaml: OS::TripleO::Services::TrilioDatamover
This service should be deployed on the role where the nova-compute service is running. If you are using RHOSP's pre-defined roles, add the OS::TripleO::Services::TrilioDatamover service to the Compute role. If you have defined custom roles, identify the role in which 'nova-compute' runs and add the 'OS::TripleO::Services::TrilioDatamover' service to that role.
iii) Trilio Horizon Service:
This service needs to share the same role as the OpenStack Horizon server. With the pre-defined roles, the Horizon service runs on the Controller role. Add OS::TripleO::Services::TrilioHorizon to the identified role.
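After editing, a grep can confirm that all three service entries landed in the roles_data file. The check below is an illustrative sketch, not part of the TrilioVault tooling; note it only confirms presence of the entries, not that each sits in the correct role.

```shell
#!/bin/sh
# Illustrative check: confirm the three Trilio service entries exist in a
# roles_data file. The end-of-line anchor keeps "TrilioDatamover" from
# matching the longer "TrilioDatamoverApi" entry.
check_trilio_roles() {
    f="$1"; ok=0
    for svc in OS::TripleO::Services::TrilioDatamoverApi \
               OS::TripleO::Services::TrilioDatamover \
               OS::TripleO::Services::TrilioHorizon; do
        if ! grep -qE "${svc}\$" "$f"; then
            echo "missing: $svc" >&2
            ok=1
        fi
    done
    return $ok
}

# Usage sketch:
# check_trilio_roles /home/stack/templates/roles_data.yaml
```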

4] Prepare latest TrilioVault container images

All commands need to be run as user 'stack'
TrilioVault containers are pushed to the 'Red Hat Container Registry'. The registry URL is 'registry.connect.redhat.com'. The TrilioVault container URLs are as follows:

4.1] Available container images

RHOSP 13

TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp13
TrilioVault Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp13
TrilioVault horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp13

RHOSP 16.1

TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp16.1
TrilioVault Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp16.1
TrilioVault horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp16.1

RHOSP 16.2

TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp16.2
TrilioVault Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp16.2
TrilioVault horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp16.2
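All nine URLs above follow one pattern: registry, 'trilio' namespace, image name, then a '<version>-<release>' tag. A tiny helper makes that explicit and avoids copy-paste drift when scripting; the helper itself is illustrative, not part of TrilioVault.

```shell
#!/bin/sh
# Illustrative helper: build a TrilioVault container URL from its parts.
# Pattern taken from the lists above:
#   registry.connect.redhat.com/trilio/<image>:<version>-<release_tag>
trilio_image_url() {
    # $1 = image name, $2 = TrilioVault version, $3 = rhosp release tag
    echo "registry.connect.redhat.com/trilio/$1:$2-$3"
}

# Usage sketch:
# trilio_image_url trilio-datamover 4.2.64 rhosp16.2
```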
There are three registry methods available in the Red Hat OpenStack Platform:
  1. Remote Registry
  2. Local Registry
  3. Satellite Server

4.2] Remote Registry

Please refer to the following overview to see which containers are available.
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the TrilioVault container URLs from the Red Hat registry.
Populate the trilio_env.yaml with container URLs for:
  • TrilioVault Datamover container
  • TrilioVault Datamover api container
  • TrilioVault Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments

Example

# For RHOSP13
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp13
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp13
DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp13

# For RHOSP16.1
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp16.1
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp16.1
ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp16.1

# For RHOSP16.2
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.2.64-rhosp16.2
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.2.64-rhosp16.2
ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.2.64-rhosp16.2

4.3] Local Registry

Please refer to this overview to see which containers are available.
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the TrilioVault containers to the undercloud registry. TrilioVault provides shell scripts that pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml.

RHOSP13 example

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/

./prepare_trilio_images.sh <undercloud_ip> <container_tag>

# Example:
./prepare_trilio_images.sh 192.168.13.34 4.2.64-rhosp13

## Verify changes
# For RHOSP13
$ grep '4.2.64-rhosp13' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:4.2.64-rhosp13
DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:4.2.64-rhosp13
DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:4.2.64-rhosp13

RHOSP 16.1 and RHOSP16.2 example

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/

sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER_TAG>

## Run following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'.
## In the below example 'trilio-undercloud.ctlplane.localdomain' is <UNDERCLOUD_REGISTRY_HOSTNAME>
$ openstack tripleo container image list | grep keystone
| docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.0-82 |
| docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.0-84

## 'CONTAINER_TAG' format for RHOSP16.1: <TRILIOVAULT_VERSION>-rhosp16.1
## 'CONTAINER_TAG' format for RHOSP16.2: <TRILIOVAULT_VERSION>-rhosp16.2
## For example, if the TrilioVault version is '4.2.64', then 'CONTAINER_TAG'=4.2.64-rhosp16.1 for RHOSP16.1
## and 'CONTAINER_TAG'=4.2.64-rhosp16.2 for RHOSP16.2

## Example
sudo ./prepare_trilio_images.sh trilio-undercloud.ctlplane.localdomain 4.2.64-rhosp16.1
The changes can be verified using the following commands.
(undercloud) [[email protected] redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:4.2.64-rhosp16.1 |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.2.64-rhosp16.1 |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.2.64-rhosp16.1 |

(undercloud) [[email protected] redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:4.2.64-rhosp16.1
DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.2.64-rhosp16.1
ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.2.64-rhosp16.1

4.4] Red Hat Satellite Server

Please refer to the following overview to see which containers are available.
Follow this section when a Satellite Server is used for the container registry.
Pull the TrilioVault containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env.yaml with the container URLs.

RHOSP 13 example

$ grep '4.2.64-rhosp13' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.2.64-rhosp13
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.2.64-rhosp13
DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.2.64-rhosp13

RHOSP 16.1 and RHOSP 16.2

## For RHOSP16.1
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.2.64-rhosp16.1
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.2.64-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.2.64-rhosp16.1

## For RHOSP16.2
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.2.64-rhosp16.2
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.2.64-rhosp16.2
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.2.64-rhosp16.2
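With a Satellite registry the trilio_env.yaml is edited by hand; a sed one-liner per image parameter can do the substitution reproducibly. The function below is an illustrative sketch, not a TrilioVault script: the registry host and tag values are placeholders, and on RHOSP13 the Horizon parameter is named DockerHorizonImage instead of ContainerHorizonImage, so adjust the third expression accordingly. Back up the file before running it.

```shell
#!/bin/sh
# Illustrative sketch: rewrite the three Trilio image parameters in a
# trilio_env.yaml-style file to point at a given registry and tag.
populate_images() {
    # $1 = env file, $2 = registry URL, $3 = container tag
    sed -i \
      -e "s|^\( *DockerTrilioDatamoverImage:\).*|\1 $2/trilio/trilio-datamover:$3|" \
      -e "s|^\( *DockerTrilioDmApiImage:\).*|\1 $2/trilio/trilio-datamover-api:$3|" \
      -e "s|^\( *ContainerHorizonImage:\).*|\1 $2/trilio/trilio-horizon-plugin:$3|" \
      "$1"
}

# Usage sketch (hypothetical registry host):
# populate_images ../environments/trilio_env.yaml satellite.example.com:5000 4.2.64-rhosp16.2
```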

5] Verify TrilioVault environment details

It is recommended to re-populate the backup target details in the freshly downloaded trilio_env.yaml file. This ensures that parameters added since the last update or installation of TrilioVault are available and get filled in as well.
Locations of the trilio_env.yaml:
RHOSP13: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/environments/trilio_env.yaml
RHOSP16.1: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml
RHOSP16.2: /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/environments/trilio_env.yaml
For more details about the trilio_env.yaml please check here.

6] Configure multi-IP NFS

This section is only required when the multi-IP feature for NFS is required.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On Undercloud node, change directory
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on the undercloud after sourcing 'stackrc'.
(undercloud) [[email protected] ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only.
If pip isn't available, please install pip on the undercloud.
## On Python3 env
sudo pip3 install PyYAML==5.1

## On Python2 env
sudo pip install PyYAML==5.1
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
## On Python3 env
python3 ./generate_nfs_map.py

## On Python2 env
python ./generate_nfs_map.py
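Conceptually, the expansion step turns the grouped share-to-hosts input into one entry per (compute host, NFS share) pair. The sketch below illustrates that idea in plain shell using a simplified "share host host ..." line format; it is an assumption for illustration only and does not reproduce the actual YAML handling of generate_nfs_map.py.

```shell
#!/bin/sh
# Conceptual illustration of the one-to-one expansion: read lines of the
# form "share host1 host2 ..." and emit one "host: share" line per pair.
expand_map() {
    while read -r share hosts; do
        for h in $hosts; do
            printf '%s: %s\n' "$h" "$share"
        done
    done
}

# Usage sketch (hypothetical share and hostnames):
# printf '192.168.1.34:/mnt/tvault compute-0 compute-1\n' | expand_map
```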
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

7] Update Overcloud TrilioVault components

Use the following heat environment file and roles data file in the overcloud deploy command:
  1. trilio_env.yaml
  2. roles_data.yaml
  3. Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration:
    1. Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
    2. Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
    3. Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
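The endpoint-map substitution above is a fixed one-to-one mapping, so it can be expressed as a lookup when templating the deploy command. This helper is illustrative only; the function name is made up, and the input is the Keystone endpoint file name your overcloud already uses.

```shell
#!/bin/sh
# Illustrative lookup: pick the Trilio endpoint-map file matching the
# Keystone endpoint environment file used by the overcloud.
trilio_endpoint_env() {
    case "$1" in
        tls-endpoints-public-dns.yaml)     echo "environments/trilio_env_tls_endpoints_public_dns.yaml" ;;
        tls-endpoints-public-ip.yaml)      echo "environments/trilio_env_tls_endpoints_public_ip.yaml" ;;
        tls-everywhere-endpoints-dns.yaml) echo "environments/trilio_env_tls_everywhere_dns.yaml" ;;
        *) echo "no Trilio mapping for: $1" >&2; return 1 ;;
    esac
}
```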
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
  --ntp-server 192.168.1.34 \
  --libvirt-type qemu \
  --log-file overcloud_deploy.log \
  -r /home/stack/templates/roles_data.yaml

8] Verify deployment

If the containers are in a restarting state or are not listed by the following commands, then the deployment was not done correctly. Please recheck whether you followed the complete documentation.

8.1] On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. If the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.
[[email protected] heat-admin]# podman ps | grep trilio
26fcb9194566 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.2.64-rhosp16.2 kolla_start 5 days ago Up 5 days ago trilio_dmapi
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.2.64-rhosp16.2 kolla_start 5 days ago Up 5 days ago horizon
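The controller check can also be scripted. The sketch below is an illustration, not TrilioVault tooling: it reads `podman ps` output from stdin (so it can be exercised offline) and confirms both expected containers appear; run it on a controller as `podman ps | verify_controller_trilio`.

```shell
#!/bin/sh
# Illustrative check: read `podman ps` output on stdin and confirm the two
# Trilio containers expected on a controller node are present.
verify_controller_trilio() {
    out=$(cat)
    echo "$out" | grep -q 'trilio_dmapi' &&
    echo "$out" | grep -q 'trilio-horizon-plugin'
}
```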

8.2] On Compute node

Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.
[[email protected] heat-admin]# podman ps | grep trilio
b1840444cc59 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:4.2.64-rhosp16.2 kolla_start 5 days ago Up 5 days ago trilio_datamover

8.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus TrilioVault's Horizon plugin.
[[email protected] heat-admin]# podman ps | grep horizon
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.2.64-rhosp16.2 kolla_start 5 days ago Up 5 days ago horizon

9] Enable mount-bind for NFS

TVO 4.2 has changed the calculation of the mount point. It is necessary to set up a mount-bind to make backups from TVO 4.1 or older available to TVO 4.2.
Please follow this documentation to set up the mount bind for Canonical OpenStack.