Upgrading on TripleO Train [CentOS7]
1. Generic Pre-requisites
Please ensure the following points are met before starting the upgrade process:
Trilio 4.1 GA, or any hotfix patch on top of 4.1, must already be deployed before performing the upgrade described in this document.
No snapshot or restore operations should be running.
The global job scheduler should be disabled.
wlm-cron should be disabled (on the primary T4O node):
pcs resource disable wlm-cron
Check the status with 'systemctl status wlm-cron' or 'pcs resource show wlm-cron'.
Additional step: to ensure that the cron service is actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron]
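The pre-upgrade checks above can be collected into one sequence, run on the primary T4O node (a sketch built only from the commands named in this section):

```
# Disable the wlm-cron pacemaker resource
pcs resource disable wlm-cron

# Verify that it is stopped
systemctl status wlm-cron
pcs resource show wlm-cron

# Kill any lingering wlm-cron processes
ps -ef | grep -i workloadmgr-cron | grep -v grep | awk '{print $2}' | xargs -r kill
```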
2. [On Undercloud node] Clone triliovault repo and upload trilio puppet module
Run all commands as the 'stack' user.
2.1 Clone the latest configuration scripts
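A typical clone step might look like the following; the repository URL and branch shown here are assumptions, so confirm them with Trilio before use:

```
# Assumed upstream repo URL and branch -- verify before use
git clone -b stable/4.3 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train
```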
2.2 Backup target is “Ceph based S3” with SSL
If the backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename their CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/puppet/trilio/files'.
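As a sketch, the rename-and-copy step could look like this ('ca-chain.pem' is a placeholder name for your own CA chain certificate file):

```shell
# Stage the CA chain as 's3-cert.pem' in the puppet files directory.
# 'ca-chain.pem' is a placeholder; substitute your real CA chain file.
CA_CHAIN=ca-chain.pem
DEST=triliovault-cfg-scripts/redhat-director-scripts/puppet/trilio/files

mkdir -p "$DEST"
[ -f "$CA_CHAIN" ] || touch "$CA_CHAIN"   # demo placeholder only; use your real cert file
cp "$CA_CHAIN" "$DEST/s3-cert.pem"
```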
2.3 Upload triliovault puppet module
3. Prepare Trilio container images
In this step, we are going to pull triliovault container images to the user’s registry.
Trilio containers are pushed to ‘Dockerhub'. The registry URL is ‘docker.io’. Following are the triliovault container pull URLs.
In the sections below, read <HOTFIX-TAG-VERSION> as 4.3.2.
3.1 Trilio container URLs
Trilio container URLs for TripleO Train CentOS7:
There are two registry methods available in the TripleO Openstack Platform.
Remote Registry
Local Registry
Identify which method you are using. Both methods to pull and configure Trilio's container images for overcloud deployment are explained below.
3.2 Remote Registry
If you are using the 'Remote Registry' method, follow this section. You don't need to pull anything; you just need to populate the following container URLs in the trilio env yaml.
Populate the 'environments/trilio_env.yaml' file with the triliovault container URLs. The changes look like the following.
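As an illustration, the relevant entries take roughly the shape below; the parameter keys and image names shown are assumptions, so keep the keys already present in your trilio_env.yaml:

```yaml
# Illustrative only -- keep the parameter keys already present in trilio_env.yaml
parameter_defaults:
  DockerTrilioDatamoverImage: docker.io/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
  DockerTrilioDmApiImage: docker.io/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
  DockerHorizonImage: docker.io/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
```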
3.3 Registry on Undercloud
If you are using 'local registry' on undercloud, follow this section.
Run the following script. It pulls the triliovault containers and updates the triliovault environment file with the URLs.
The script pushes the trilio container images to the undercloud registry and sets the correct trilio image URLs in 'environments/trilio_env.yaml'. Verify the changes using the following command.
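One simple way to spot-check the updated image URLs (a sketch; the grep pattern assumes the images live under the trilio namespace):

```
grep -i 'trilio' environments/trilio_env.yaml
```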
4. Configure multi-IP NFS
This section is only required when the multi-IP feature for NFS is required.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On Undercloud node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
Run this command on undercloud by sourcing 'stackrc'.
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
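For orientation only, an input map tends to associate each compute host with the NFS share it should use; the key names, hostnames, and shares below are placeholders and the authoritative schema is on the referenced page:

```yaml
# Placeholders only -- use the exact hostnames from 'openstack server list'
# and the schema documented for triliovault_nfs_map_input.yml
compute_hosts:
  overcloud-novacompute-0.localdomain: 192.168.1.10:/var/nfsshare
  overcloud-novacompute-1.localdomain: 192.168.1.11:/var/nfsshare
```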
Update pyyaml on the undercloud node only.
If pip isn't available, install pip on the undercloud first.
Expand the map file to create a one-to-one mapping of compute nodes to NFS shares.
The result will be written to the file 'triliovault_nfs_map_output.yml'.
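Conceptually, the expansion turns a "host: share1,share2" entry into one line per (host, share) pair. The following is a self-contained toy illustration of that idea, not the actual tool or its real YAML schema:

```shell
# Toy illustration of the one-to-one expansion; input format here is
# 'host share1,share2' -- not the real triliovault_nfs_map_input.yml schema.
cat > /tmp/map_demo.txt <<'EOF'
compute-0 192.168.1.10:/share1,192.168.1.11:/share2
compute-1 192.168.1.10:/share1
EOF

# Emit one '(host, share)' line per pair
awk '{n=split($2,s,","); for(i=1;i<=n;i++) print $1, s[i]}' /tmp/map_demo.txt
```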
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
5. Fill in triliovault environment details
Refer to the old 'trilio_env.yaml' file (/home/stack/triliovault-cfg-scripts-old/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml) and update the new trilio_env.yaml file accordingly.
Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The triliovault environment file is self-explanatory: fill in the backup target details, verify the image URLs, and complete the other fields.
For Cohesity NFS, use the following NFS options: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
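In trilio_env.yaml this options string typically goes into the NFS options parameter; the key name below is an assumption, so match whichever key is already present in your file:

```yaml
# Key name illustrative -- match the existing key in trilio_env.yaml
parameter_defaults:
  NfsOptions: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10'
```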
6. roles_data.yaml file changes
A new, separate triliovault service for the Trilio Horizon plugin is introduced in the Trilio 4.3 release. Users need to add the following service to the roles_data.yaml file, and it must be co-located with the OpenStack horizon service.
OS::TripleO::Services::TrilioHorizon
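For example, on the role that runs Horizon (usually Controller), the service is appended to that role's ServicesDefault list; the surrounding entries below are illustrative:

```yaml
- name: Controller
  # ...existing role keys unchanged...
  ServicesDefault:
    # ...existing services...
    - OS::TripleO::Services::Horizon
    - OS::TripleO::Services::TrilioHorizon
```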
7. Install Trilio on Overcloud
Use the following heat environment file and roles data file in the overcloud deploy command:
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains overcloud roles data with the Trilio roles added. It does not need to be changed; you can reuse the old roles_data.yaml file.
Use the correct trilio endpoint map file as per your keystone endpoint configuration:
Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'
The deploy command with the triliovault environment file looks like the following.
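A sketch of the Trilio-specific part of the command; your existing deploy arguments and the endpoint map file chosen above stay as-is, and the roles file path is a placeholder:

```
openstack overcloud deploy --templates \
  <your existing deploy arguments> \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml \
  -r <path to your roles_data.yaml>
```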
8. Steps to verify correct deployment
8.1 On overcloud controller node(s)
Make sure the Trilio dmapi and horizon containers (shown below) are in a running state, and that no other Trilio container is deployed on the controller nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly and you need to revisit the steps above.
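A quick check might look like this; Train on CentOS7 typically uses the docker runtime, and the container name pattern is an assumption:

```
# Substitute 'podman ps' if your release uses podman
docker ps | grep -Ei 'trilio|horizon'
```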
8.2 On overcloud compute node(s)
Make sure the Trilio datamover container (shown below) is in a running state, and that no other Trilio container is deployed on the compute nodes. If the containers are in a restarting state or are not listed by the following command, then your deployment was not done correctly and you need to revisit the steps above.
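On a compute node the check is analogous; again the runtime and name pattern are assumptions:

```
# Substitute 'podman ps' if your release uses podman
docker ps | grep -i 'datamover'
```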
8.3 On OpenStack node where OpenStack Horizon Service is running
Make sure the horizon container is in a running state. Please note that the stock 'Horizon' container is replaced with the Trilio Horizon container, which contains the latest OpenStack horizon plus Trilio's horizon plugin.
9. Troubleshooting if any failures
Trilio components will be deployed using puppet scripts.
If the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
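The failure-listing step usually looks like this on TripleO (run as the stack user on the undercloud; a sketch, verify against the troubleshooting guide linked above):

```
source ~/stackrc
openstack stack failures list overcloud --long
```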
10. Known Issues/Limitations
10.1 Overcloud deploy fails with the following error. Valid for Train CentOS7 only.
This is not a Trilio issue; it is a TripleO issue that is not fixed in Train CentOS7. It is fixed in later versions of TripleO.
Workaround:
Apply the fix directly on the setup; it is not merged in Train CentOS7.
PR: https://github.com/saltedsignal/puppet-certmonger/pull/35/files
The fix is needed on controller and compute nodes in:
/usr/share/openstack-puppet/modules/certmonger/lib/puppet/provider/certmonger_certificate/certmonger_certificate.rb
11. Enable mount-bind for NFS
Note: The steps below are required only if the target backend is NFS.
Please refer to this page for detailed steps to setup mount bind.