If any configuration parameter in tvo-operator-inputs.yaml has changed, such as a DB user password or service endpoints, you can apply the changes using the following command:
cd ctlplane-scripts
./deploy_tvo_control_plane.sh
The above command will output 'configured' or 'unchanged' depending on whether anything changed in tvo-operator-inputs.yaml.
2] Upgrade to a new build
Please ensure the following requirements are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
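As a reference, both prerequisites can be checked from the CLI. This is a sketch assuming the standard workloadmgr client is configured; verify the exact subcommand names against your installed client version.

```shell
# Confirm no snapshot or restore operations are in progress
workloadmgr snapshot-list --all
workloadmgr restore-list

# Disable the global job scheduler before starting the upgrade
workloadmgr disable-global-job-scheduler
```

Re-enable the scheduler (workloadmgr enable-global-job-scheduler) only after the upgrade has completed successfully.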
Please follow the steps below to upgrade to a new build on an RHOSO18 setup.
Take a backup of the existing triliovault-cfg-scripts directory and clone the latest triliovault-cfg-scripts GitHub repository.
⚠️ Important: After running the above script, immediately upgrade the operator to version 6.1.7.
2.7] Verify successful deployment of T4O control plane services.
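A quick way to verify the control plane is to list the Trilio pods; the trilio-openstack namespace below matches the one used elsewhere in this guide, but adjust it if your deployment differs.

```shell
# All T4O control plane pods should be in Running or Completed state
oc -n trilio-openstack get pods

# Inspect any pod that is not healthy
oc -n trilio-openstack describe pod <pod-name>
```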
Follow the steps mentioned below only if you want to migrate from the old DB approach (openstack-galera) to the new DB approach (trilio-galera). Otherwise, skip steps 2.8 to 2.13 to continue using the old openstack-galera DB approach.
Old DB approach: Trilio uses the DB service used by OpenStack.
New DB approach: Trilio deploys and uses its own DB service.
2.8] Take workloadmgr Database Dump
Export the workloadmgr database from OpenStack's Galera cluster to a file using the following commands.
Create a directory to store the .sql file using the following command:
Get the openstack-galera root password using the following command:
First, go inside the OpenStack Galera pod using the following command:
Create a MySQL dump using the following command in the openstack-galera pod.
Copy the above .sql file to the following directory using the command below:
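The dump steps above can be sketched as follows. The directory path, secret name and key, and pod name are placeholders, assumptions to adjust for your environment.

```shell
# Create a directory on the bastion node to hold the dump (path is an example)
mkdir -p /home/stack/wlm-db-dump

# Read the openstack-galera root password (secret name and key are assumptions)
oc -n openstack get secret osp-secret -o jsonpath='{.data.DbRootPassword}' | base64 -d; echo

# Open a shell in the OpenStack Galera pod (pod name is an example)
oc -n openstack rsh openstack-galera-0

# Inside the pod: dump the workloadmgr database, entering the root password when prompted
mysqldump -u root -p workloadmgr > /tmp/workloadmgr.sql
exit

# Copy the .sql file from the pod to the directory created above
oc -n openstack cp openstack-galera-0:/tmp/workloadmgr.sql /home/stack/wlm-db-dump/workloadmgr.sql
```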
2.9] Upgrade T4O 6.1.7 deployment with new database approach
Update the tvo-operator-inputs.yaml file with the following changes to configure Trilio to use the new DB installation approach.
2.10] Run deploy command
2.11] Verify successful deployment of T4O control plane services.
2.12] Import Old workloadmgr Database Data into New Galera
After successful installation, import the database dump from the SQL file created using the mysqldump command.
Copy the workloadmgr database dump file to the Trilio Galera pod.
Get the Trilio Galera root password using the following command:
Import the workloadmgr database data from the .sql file.
First, go inside the Trilio Galera pod using the following command:
Check the size of the .sql file using the following command:
Import the .sql file into the workloadmgr database using the following command:
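The import flow above can be sketched as follows, assuming the pod name trilio-galera-0 and the secret name shown; both are placeholders to adjust for your environment.

```shell
# Copy the dump into the Trilio Galera pod (pod name is an example)
oc -n trilio-openstack cp /home/stack/wlm-db-dump/workloadmgr.sql trilio-galera-0:/tmp/workloadmgr.sql

# Read the Trilio Galera root password (secret name and key are assumptions)
oc -n trilio-openstack get secret trilio-galera -o jsonpath='{.data.DbRootPassword}' | base64 -d; echo

# Open a shell in the Trilio Galera pod
oc -n trilio-openstack rsh trilio-galera-0

# Sanity-check the dump size, then import it, entering the root password when prompted
ls -lh /tmp/workloadmgr.sql
mysql -u root -p workloadmgr < /tmp/workloadmgr.sql
exit
```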
2.13] Restart wlm-cron
After importing the database dump, the wlm-cron pod must be restarted.
Restarting is required for wlm-cron to pick up the newly imported, scheduler-enabled workloads.
In OpenShift, a pod restart is performed by deleting the pod. The controller will automatically recreate it.
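A minimal sketch of the restart; the label selector app=wlm-cron is an assumption, so alternatively delete the pod by name from the pod list.

```shell
# Delete the wlm-cron pod; its controller recreates it automatically
oc -n trilio-openstack delete pod -l app=wlm-cron

# Confirm the replacement pod reaches Running state
oc -n trilio-openstack get pods | grep wlm-cron
```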
Note: This step is needed only when there is a change in the python3-s3fuse-plugin-el9 package.
Please follow the steps below to upgrade the S3 backup target pods to a new build on an RHOSO18 setup.
Copy the input files from the old triliovault-cfg-scripts directory to the latest one.
5.1] Upgrade dynamic backup target on control plane
⚠️ Important Upgrade Note for T4O 6.1.3 or Older
If you are on T4O 6.1.3 or older and have dynamic BTTs (TVOBackupTarget CRs), run the script below on your bastion node before upgrading to patch Helm annotations and labels:
The script updates all dynamic TVOBackupTarget resources with required Helm annotations and labels so they are compatible with the current release.
If a user has created dynamic BTTs using T4O 6.1.3 or older, this step is mandatory before upgrading.
Users on T4O 6.1.4 or newer with BTTs created on 6.1.4+ do not need to run this script as the fix is included in the operator.
Update the parameter below with the new image tag in the respective files tvo-backup-target-cr-amazon-s3.yaml and tvo-backup-target-cr-other-s3.yaml.
Now apply the changes.
Check that the S3 backup target containers are up and running with the new image.
5.2] Upgrade dynamic backup target on data plane
Update the wlm image parameter with the new image tag in the respective files.
Now apply the changes.
Update the Ansible runner image with the new image tag.
Now apply the changes.
Update the deployment name to a unique name.
Now apply the changes.
Verify the S3 backup target containers on the compute nodes.
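On each compute node, the verification can be sketched as below; the grep pattern and use of podman are assumptions about how the backup target container is named and run on RHOSO18 compute nodes.

```shell
# SSH to the compute node, then list running containers and their images
ssh <compute-node>
sudo podman ps --format '{{.Names}} {{.Image}}' | grep -i backup
```

The image column should show the new build tag applied in the steps above.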
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts/
chmod +x update_helm_annotations.sh
./update_helm_annotations.sh
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts
vi tvo-backup-target-cr-amazon-s3.yaml
vi tvo-backup-target-cr-other-s3.yaml
oc -n trilio-openstack get pods | grep <backup-target-name>
oc -n trilio-openstack describe pod <pod-name> | grep Image:
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts
vi <BACKUP-TARGET-NAME>/cm-trilio-backup-target.yaml
trilio_env.yml: |
triliovault_wlm_image: "registry.connect.redhat.com/trilio/trilio-wlm:<NEW-BUILD-TAG>"
vi <BACKUP-TARGET-NAME>/trilio-add-backup-target-service.yaml
openStackAnsibleEERunnerImage: registry.connect.redhat.com/trilio/trilio-ansible-runner-rhoso:<NEW-BUILD-TAG>