Upgrading on TripleO Train [CentOS7]

1. Generic Pre-requisites

  1. Please ensure the following points before starting the upgrade process (a consolidated command sketch follows this list):
    1. Either the 4.1 GA release or any hotfix patch against 4.1 should already be deployed before performing the upgrade described in this document.
    2. No snapshot or restore should be running.
    3. The global job scheduler should be disabled.
    4. wlm-cron should be disabled (on the primary TVO node):
      1. pcs resource disable wlm-cron
      2. Check: systemctl status wlm-cron OR pcs resource show wlm-cron
      3. Additional step: to ensure that cron is actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron]
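The wlm-cron commands above can be run back to back on the primary TVO node; the sequence below is simply the commands from this list collected in one place.
# Disable the wlm-cron pacemaker resource
pcs resource disable wlm-cron

# Confirm it is stopped
pcs resource show wlm-cron
systemctl status wlm-cron

# Check for lingering workloadmgr-cron processes and kill them if any remain
ps -ef | grep -i workloadmgr-cron
# kill <pid>    # only if a stray workloadmgr-cron process is still listed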

2. [On Undercloud node] Clone triliovault repo and upload trilio puppet module

Run all the commands as the 'stack' user.

2.1 Clone the latest configuration scripts

cd /home/stack
mv triliovault-cfg-scripts triliovault-cfg-scripts-old

# Additionally, keep the NFS share path noted
# /var/lib/nova/triliovault-mounts/MTcyLjMwLjEuMzovcmhvc3BuZnM=

## Clone the latest repo against the respective label
git clone --branch <4.2> https://github.com/trilioData/triliovault-cfg-scripts.git
# e.g. git clone --branch stable/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
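The noted mount directory ends in a base64-encoded copy of the NFS share it points to; if you want to double-check which share the path corresponds to, it can be decoded as below (the value shown is simply the example path from the snippet above).
echo 'MTcyLjMwLjEuMzovcmhvc3BuZnM=' | base64 -d
# -> 172.30.1.3:/rhospnfs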

2.2 Backup target is “Ceph based S3” with SSL

If the backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. To do so, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files'.
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
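As an optional sanity check (not part of the original steps), you can verify that the file is a readable PEM certificate before copying it; the command assumes the standard openssl client is installed.
# Print the subject and issuer of the certificate; an error here indicates a malformed PEM file
openssl x509 -in s3-cert.pem -noout -subject -issuer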

2.3 Upload triliovault puppet module

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
chmod +x *.sh
./upload_puppet_module.sh

## Output of the above command looks like the following.
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## The above command creates the following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml

3. Prepare TrilioVault container images

In this step, we are going to pull the triliovault container images to the user's registry.
TrilioVault containers are pushed to 'Dockerhub'; the registry URL is 'docker.io'. The following are the triliovault container pull URLs.

3.1 TrilioVault container URLs

TrilioVault container URLs for TripleO Train CentOS7:
TrilioVault Datamover container:     docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo
TrilioVault Datamover API container: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:4.2.64-tripleo
TrilioVault Horizon plugin:          docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.2.64-tripleo
There are two registry methods available in the TripleO OpenStack Platform:
  1. Remote Registry
  2. Local Registry
Identify which method you are using. Both methods to pull and configure TrilioVault's container images for the overcloud deployment are explained below.

3.2 Remote Registry

If you are using the 'Remote Registry' method, follow this section. You don't need to pull anything; you just need to populate the following container URLs in the trilio env yaml.
  • Populate the 'environments/trilio_env.yaml' file with the triliovault container URLs. The changes look like the following.
# For TripleO Train CentOS7
$ grep 'Image' trilio_env.yaml
   DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo
   DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:4.2.64-tripleo
   DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.2.64-tripleo
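Optionally, you can confirm that the remote images are reachable from the undercloud before deploying; the sketch below assumes skopeo and podman are available there, which this document does not otherwise require.
# Inspect remote image metadata without pulling it (skopeo)
skopeo inspect docker://docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo | grep -i digest

# Or pull a single image with podman to verify registry access
podman pull docker.io/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo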

3.3 Registry on Undercloud

If you are using a 'local registry' on the undercloud, follow this section.
  • Run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the image URLs.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/

sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <4.2-TRIPLEO-CONTAINER> <container_tool_available_on_undercloud>

## To get the undercloud registry hostname/IP, there are two approaches. Use either one.
## 1. openstack tripleo container image list
## 2. Find your 'containers-prepare-parameter.yaml' (from the overcloud deploy command) and search for 'push_destination':
cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
 - push_destination: "undercloud.ctlplane.ooo.prod1:8787"

## Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as in the following example.

# Command Example:
sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 4.2.64-tripleo podman

## Verify changes
# For TripleO Train CentOS7
$ grep 'Image' ../environments/trilio_env.yaml
   DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo
   DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:4.2.64-tripleo
   DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.2.64-tripleo
The above script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in 'environments/trilio_env.yaml'. Verify the changes using the following command.
## For CentOS7 Train
(undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo      |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:4.2.64-tripleo  |
| docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.2.64-tripleo |

4. Configure multi-IP NFS

This section is only required when the multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On the undercloud node, change the directory:
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column and use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on the undercloud after sourcing 'stackrc'.
(undercloud) [stack@undercloud ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
Update PyYAML on the undercloud node only.
If pip isn't available, please install pip on the undercloud.
## On Python3 env
sudo pip3 install PyYAML==5.1

## On Python2 env
sudo pip install PyYAML==5.1
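To confirm which PyYAML version the interpreter actually picks up (an optional check, not part of the original steps):
python3 -c "import yaml; print(yaml.__version__)"   # should print 5.1
# On a Python2 environment use 'python' instead of 'python3'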
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
## On Python3 env
python3 ./generate_nfs_map.py

## On Python2 env
python ./generate_nfs_map.py
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file:
Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml':
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target (an optional check follows).
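As a quick optional check (not part of the original steps), you can confirm the flag directly from the undercloud:
grep MultiIPNfsEnabled /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml
The returned line should show the parameter set to true.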

5. Fill in triliovault environment details

  • Refer to the old 'trilio_env.yaml' file (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) and update the new trilio_env.yaml file accordingly (a diff sketch follows below).
  • Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The triliovault environment file is self-explanatory: fill in the details of the backup target, verify the image URLs, and review the other settings.
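As a quick way to see which values need to be carried over, the old and new environment files (paths as given above) can be compared on the undercloud; this is an optional aid, not a required step.
diff -u /home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
        /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml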

6. roles_data.yaml file changes

A new separate triliovault service is introduced in the TrilioVault 4.2 release for the TrilioVault Horizon plugin. The user needs to add the following service to the roles_data.yaml file, and this service needs to be co-located with the OpenStack Horizon service (see the illustrative excerpt after the service name below).
OS::TripleO::Services::TrilioHorizon
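The excerpt below is only an illustration of where the entry typically goes: inside the ServicesDefault list of the role that already carries OS::TripleO::Services::Horizon (usually the Controller role). The surrounding entries are generic placeholders, not taken from this document.
- name: Controller
  ...
  ServicesDefault:
    ...
    - OS::TripleO::Services::Horizon
    - OS::TripleO::Services::TrilioHorizon
    ...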

7. Install TrilioVault on Overcloud

Use the following heat environment files and roles data file in the overcloud deploy command:
  1. trilio_env.yaml: This environment file contains the Trilio backup target details and the Trilio container image locations.
  2. roles_data.yaml: This file contains the overcloud roles data with the Trilio roles added. This file does not need to be changed again; you can reuse the old roles_data.yaml file.
  3. Use the correct trilio endpoint map file as per your keystone endpoint configuration:
     - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
     - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
     - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'
The deploy command with the triliovault environment file looks like the following.
openstack overcloud deploy --templates \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/templates/overcloud_images.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
--ntp-server 192.168.1.34 \
--libvirt-type qemu \
--log-file overcloud_deploy.log \
-r /home/stack/templates/roles_data.yaml

8. Steps to verify correct deployment

8.1 On overcloud controller node(s)

Make sure the Trilio dmapi and horizon containers (shown below) are in a running state and that no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly and you need to revisit the above steps.
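A representative check on a controller node is sketched below; it assumes podman as the container runtime (as used elsewhere in this guide) and the container names trilio_dmapi and horizon that appear in the verification and troubleshooting sections.
[root@<controller-node> heat-admin]# podman ps | grep -E 'trilio|horizon'
Both the trilio_dmapi container and the horizon container (running the trilio-horizon-plugin image) should show an 'Up' status.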

8.2 On overcloud compute node(s)

Make sure the Trilio datamover container (shown below) is in a running state and that no other Trilio container is deployed on the compute nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly and you need to revisit the above steps.
[root@<compute-node> heat-admin]# podman ps | grep trilio
b1840444cc59  prod1-compute1.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:4.2.64-tripleo         kolla_start           5 days ago  Up 5 days ago         trilio_datamover

8.3 On OpenStack node where OpenStack Horizon Service is running

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the TrilioVault Horizon container. This container contains the latest OpenStack Horizon plus TrilioVault's Horizon plugin.
[root@<controller-node> heat-admin]# podman ps | grep horizon
094971d0f5a9  prod1-controller1.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:4.2.64-tripleo      kolla_start           5 days ago  Up 5 days ago         horizon

9. Troubleshooting in case of failures

Trilio components are deployed using puppet scripts.
If the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

=> If the trilio datamover api container does not start well or is in a restarting state, use the following logs to debug.

docker logs trilio_dmapi

tailf /var/log/containers/trilio-datamover-api/dmapi.log

=> If the trilio datamover container does not start well or is in a restarting state, use the following logs to debug.

docker logs trilio_datamover

tailf /var/log/containers/trilio-datamover/tvault-contego.log

10. Known Issues/Limitations

10.1 Overcloud deploy fails with the following error. Valid for Train CentOS7 only.

This is not a TrilioVault issue. It's a TripleO issue that is not fixed in Train CentOS7; it is fixed in higher versions of TripleO.
The undercloud jobs puppet task certmonger_certificate[haproxy-external-cert] fails with 'Unrecognized parameter or wrong value type'.
Bug #1915242 "[Train] [CentOS7] Undercloud jobs puppet task certm..." : Bugs : tripleo
Workaround:
Apply the fix directly on the setup; it is not merged in Train CentOS7. PR: https://github.com/saltedsignal/puppet-certmonger/pull/35/files The fix is needed on controller and compute nodes in /usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb

11. Enable mount-bind for NFS

Note: the steps below are required only if the target backend is NFS.
Please refer to this page for detailed steps to set up the mount bind.