Upgrading on RHOSO
1] Configuration change
If any configuration parameter in tvo-operator-inputs.yaml has changed (for example, the database user password or a service endpoint), apply the changes using the following commands.
cd ctlplane-scripts
./deploy_tvo_control_plane.sh
The above command outputs 'configured' or 'unchanged' depending on whether tvo-operator-inputs.yaml changed.
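To confirm the change was picked up, you can watch the control-plane pods restart with the new configuration. The namespace below (trilio-openstack) is an assumption and may differ in your environment:

```shell
# Assumption: the TVO control plane runs in the 'trilio-openstack' namespace.
oc get pods -n trilio-openstack
# Recent events show pods being recreated to pick up the new configuration.
oc get events -n trilio-openstack --sort-by=.lastTimestamp | tail
```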
2] Upgrade to new build
Please ensure the following requirements are met before starting the upgrade process:
No Snapshot or Restore is running
Global job scheduler is disabled
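The prerequisite checks above can be sketched with the workloadmgr CLI. This is an illustrative sketch only: it assumes the workloadmgr client is installed with OpenStack credentials sourced, and the exact subcommand names may vary by T4O release.

```shell
# Illustrative checks - subcommand names are assumptions; verify against
# your installed workloadmgr CLI before relying on them.
workloadmgr snapshot-list | grep -i -E 'running|executing' || echo "no snapshots running"
workloadmgr restore-list  | grep -i -E 'running|executing' || echo "no restores running"
# Disable the global job scheduler before starting the upgrade:
workloadmgr disable-global-job-scheduler
```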
Follow the steps below to upgrade to a new build on an RHOSO 18 setup.
Take a backup of the existing triliovault-cfg-scripts directory and clone the latest triliovault-cfg-scripts GitHub repository.
mv triliovault-cfg-scripts triliovault-cfg-scripts-old
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
Copy the input values from triliovault-cfg-scripts-old to the latest directory.
cp triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/ctlplane-scripts/tvo-operator-inputs.yaml triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts/tvo-operator-inputs.yaml
cp triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/cm-trilio-datamover.yaml triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/cm-trilio-datamover.yaml
cp triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-datamover-service.yaml triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-datamover-service.yaml
cp triliovault-cfg-scripts-old/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-data-plane-deployment.yaml triliovault-cfg-scripts/redhat-director-scripts/rhosp18/dataplane-scripts/trilio-data-plane-deployment.yaml
2.1] Upgrade Trilio for OpenStack Operator
Run the operator deployment with the new image tag as mentioned in step 2.1 of this documentation.
2.2] Upgrade Trilio OpenStack Control Plane Services
Update the image tags in the tvo-operator-inputs.yaml file.
Update the below parameters:
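The exact parameter names come from your tvo-operator-inputs.yaml; the keys below are illustrative placeholders, not authoritative values:

```yaml
# Hypothetical sketch - match the actual keys present in your
# tvo-operator-inputs.yaml. Replace <new-tag> with the image tag of the
# build you are upgrading to.
wlm_image_tag: <new-tag>
datamover_api_image_tag: <new-tag>
horizon_image_tag: <new-tag>
```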
Now apply the changes using the command below:
Verify the deployment status and confirm the deployment succeeded.
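One way to verify is to check the custom resource and pods; the CR kind and the trilio-openstack namespace below are assumptions based on a typical tvo-operator deployment and may differ in your environment:

```shell
# Assumption: the control plane is represented by a TVOControlPlane CR in
# the 'trilio-openstack' namespace; adjust names to your environment.
oc get tvocontrolplane -n trilio-openstack
oc get pods -n trilio-openstack
# All pods should reach Running/Completed and reference the new image tags.
```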
2.3] Upgrade Trilio Data Plane Services
Update the image tags in the cm-trilio-datamover.yaml file.
Update the below parameters:
Now apply the changes using the command below:
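A minimal sketch of applying the updated ConfigMap follows; the openstack namespace is an assumption based on a default RHOSO deployment:

```shell
# Assumption: RHOSO data-plane resources live in the 'openstack' namespace.
oc apply -f cm-trilio-datamover.yaml -n openstack
```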
Update the ansible runner tag in the trilio-datamover-service.yaml file.
Update the below parameters:
Now apply the changes using the command below:
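As with the ConfigMap, the updated service definition can be applied with a single command; the namespace is again an assumption:

```shell
# Assumption: the data-plane service definition belongs to the 'openstack'
# namespace used by RHOSO; adjust if yours differs.
oc apply -f trilio-datamover-service.yaml -n openstack
```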
Update the deployment name as mentioned in step 3.6 of this documentation and trigger the deployment.
Verify the deployment as mentioned in step 3.7 of this documentation.
2.4] Upgrade Trilio Horizon Plugin
Follow step 4 of this documentation and update the Trilio Horizon plugin with the new image tag.
3] Upgrade dynamically added Backup Target
Note: This step is needed only when there is a change in the python3-s3fuse-plugin-el9 package.
Follow the steps below to upgrade the S3 backup target pods to a new build on an RHOSO 18 setup.
Copy the input files from the old triliovault-cfg-scripts directory to the latest directory.
3.1] Upgrade dynamic backup target on control plane
⚠️ Important Upgrade Note for T4O 6.1.3 or Older
If you are on T4O 6.1.3 or older and have dynamic BTTs (TVOBackupTarget CRs), run the script below on your bastion node before upgrading to patch Helm annotations and labels.
The script updates all dynamic TVOBackupTarget resources with the required Helm annotations and labels so they are compatible with the current release.
If a user has created dynamic BTTs using T4O 6.1.3 or older, this step is mandatory before upgrading.
Users on T4O 6.1.4 or newer whose BTTs were created on 6.1.4+ do not need to run this script, as the fix is included in the operator.
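The patch script itself is not reproduced here; the loop below is a hedged sketch of what such a Helm-adoption patch typically does. The release name, namespace, and resource names are assumptions, and the standard meta.helm.sh annotations are what Helm checks when adopting pre-existing resources - use the script shipped with your release, not this sketch.

```shell
# Sketch only: add the Helm ownership annotations/labels that let Helm adopt
# pre-existing resources. Release name and namespace are assumptions.
for bt in $(oc get tvobackuptarget -n trilio-openstack -o name); do
  oc annotate "$bt" -n trilio-openstack --overwrite \
    meta.helm.sh/release-name=triliovault-operator \
    meta.helm.sh/release-namespace=trilio-openstack
  oc label "$bt" -n trilio-openstack --overwrite \
    app.kubernetes.io/managed-by=Helm
done
```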
Update the below parameter with the new image tag in the respective files, tvo-backup-target-cr-amazon-s3.yaml and tvo-backup-target-cr-other-s3.yaml.
Now apply the changes.
Check that the S3 backup target containers are up and running with the new image.
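A hedged way to check the running image (the pod naming pattern and namespace below are assumptions for your environment):

```shell
# Assumption: backup-target pods run in 'trilio-openstack' and include
# 'backup-target' in their names; adjust the filter as needed.
oc get pods -n trilio-openstack | grep -i backup-target
# Print pod name and container image to confirm the new tag:
oc get pods -n trilio-openstack \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' \
  | grep -i backup-target
```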
3.2] Upgrade dynamic backup target on data plane
Update the wlm image parameter with the new image tag in the respective files.
Now apply the changes.
Update the ansible runner image with new image tag.
Now apply the changes.
Update the deployment name with a unique name.
Now apply the changes.
Verify the S3 backup target containers on the compute nodes.
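On the data plane this can be checked directly on a compute node; the SSH user, hostname placeholder, and the assumption that the services run as podman containers on EDPM nodes are all environment-specific:

```shell
# Assumption: data-plane services run as podman containers on the EDPM
# compute nodes; <compute-node> and the SSH user are placeholders.
ssh cloud-admin@<compute-node> \
  "sudo podman ps --format '{{.Names}} {{.Image}}'" | grep -i trilio
```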