Offline upgrade of the Trilio Appliance
The offline upgrade of the Trilio Appliance is only recommended for hotfix upgrades. For major upgrades in offline environments, it is recommended to download the latest qcow2 image and redeploy the appliance.
Generic Pre-requisites
Please ensure that the upgrade of all TVault components on the Openstack controller & compute nodes is complete before starting the rolling upgrade of T4O.
The mentioned gemfury repository should be accessible from the VM/Server used to download the packages.
Please ensure the following points before starting the upgrade process:
No snapshot or restore should be running.
Global job-scheduler should be disabled.
wlm-cron should be disabled & any lingering processes should be killed. (This should already have been done during the Trilio components upgrade on Openstack.)
pcs resource disable wlm-cron
Check: systemctl status wlm-cron OR pcs resource show wlm-cron
Additional step: To ensure that the cron service is actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron] A consolidated sketch of these checks follows.
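The wlm-cron steps above can be combined into the following sketch; the kill step assumes that any leftover workloadmgr-cron processes are safe to terminate forcefully.
pcs resource disable wlm-cron
pcs resource show wlm-cron    ## or: systemctl status wlm-cron
## Kill any lingering workloadmgr-cron processes
ps -ef | grep -i workloadmgr-cron | grep -v grep | awk '{print $2}' | xargs -r kill -9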
Download and copy packages from VM/server to TVM node(s)
Download the required system packages
Download latest pip package
pip3 install --upgrade pip
pip3 download pip
Download Trilio packages
Download the latest available version of the below-mentioned packages. To know more about the latest releases, check out the latest release note under this section.
Export the index URL
export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/
Download s3fuse package
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --no-cache-dir --no-deps
Download tvault-configurator and dependent package
#First install dependent package configobj
pip3 install configobj
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --no-cache-dir --no-deps
Download workloadmgr and dependent package
#First install dependent package jinja2
pip3 install jinja2
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --no-cache-dir --no-deps
Download workloadmgrclient package
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --no-cache-dir --no-deps
Download contegoclient package
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --no-cache-dir --no-deps
Download oslo.messaging package
pip3 download oslo.messaging==12.1.6 --no-deps
Copy the downloaded packages from VM/Server to TVM node(s)
Copy all the downloaded packages (listed below) from the VM/server to all the TVM nodes; a copy command sketch follows the list.
pip
s3fuse
tvault-configurator
workloadmgr
workloadmgrclient
contegoclient
oslo.messaging
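One way to copy the packages is with scp from the download VM/server; the node hostnames, user, and target directory below are placeholders for illustration and should be replaced with your own values.
## Placeholder node list; replace with the actual TVM node hostnames or IPs
TVM_NODES="tvm1 tvm2 tvm3"
for node in $TVM_NODES; do
    scp pip-*.whl s3fuse-*.tar.gz tvault_configurator-*.tar.gz workloadmgr-*.tar.gz \
        workloadmgrclient-*.tar.gz contegoclient-*.tar.gz oslo.messaging-*.whl \
        root@$node:/root/
done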
Upgrade packages on all T4O Node(s)
Preparation
Take a backup of the configuration files
tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/conf/users.json
cp tvault_backup.tar.gz /root/
Activate the virtual environment
source /home/stack/myansible/bin/activate
Upgrade system packages
Run the following command on all TVM nodes to upgrade the pip package
pip3 install --upgrade pip-<downloaded-version>-py3-none-any.whl --no-deps
Upgrade s3fuse/tvault-object-store
Run the following command on all TVM nodes to upgrade s3fuse
systemctl stop tvault-object-store
pip3 install --upgrade s3fuse-<HF_VERSION>.tar.gz --no-deps
rm -rf /var/triliovault/*
Upgrade tvault-configurator
Run the following command on all TVM nodes to upgrade tvault-configurator
pip3 install --upgrade tvault_configurator-<HF_VERSION>.tar.gz --no-deps
Upgrade workloadmgr
Run the upgrade command on all TVM nodes to upgrade workloadmgr
pip3 install --upgrade workloadmgr-<HF_VERSION>.tar.gz --no-deps
Upgrade workloadmgrclient
Run the upgrade command on all TVM nodes to upgrade workloadmgrclient
pip3 install --upgrade workloadmgrclient-<HF_VERSION>.tar.gz --no-deps
Upgrade contegoclient
Run the upgrade command on all TVM nodes to upgrade contegoclient
pip3 install --upgrade contegoclient-<HF_VERSION>.tar.gz --no-deps
Set oslo.messaging version
Using the latest available oslo.messaging version can lead to stuck RPC and API calls.
It is therefore required to pin the oslo.messaging version on the TVM.
pip3 install ./oslo.messaging-12.1.6-py3-none-any.whl --no-deps
Post Upgrade Steps
Verify whether the upgrade completed successfully.
source /home/stack/myansible/bin/activate
pip3 list | grep <package_name>
Match the versions with the respective latest downloaded versions.
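As a convenience, all relevant packages can be checked in one pass with a single grep pattern; this is just a sketch, and the reported versions still need to be compared manually against the downloaded ones.
source /home/stack/myansible/bin/activate
## One pattern per package family; "workloadmgr" also matches workloadmgrclient
pip3 list | grep -iE 'pip|s3fuse|tvault|workloadmgr|contegoclient|oslo.messaging'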
Restore the backed-up configuration files
cd /root
tar -xzvf tvault_backup.tar.gz -C /
Restart the following services on all node(s) using the respective commands
systemctl restart tvault-object-store
systemctl restart wlm-api
systemctl restart wlm-scheduler
systemctl restart wlm-workloads
systemctl restart tvault-config
Enable the wlm-cron service on the primary node through the pcs command, if T4O is configured with Openstack
pcs resource enable wlm-cron
## run below command to check status of wlm-cron
pcs status
Enable Global Job Scheduler
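Re-enable the Global Job Scheduler that was disabled as a pre-requisite. This can be done from the Trilio dashboard; if you prefer the workloadmgr CLI, the subcommand below is an assumption and its exact name may differ between releases.
## Assumed CLI subcommand; verify against your workloadmgrclient version
workloadmgr enable-global-job-scheduler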
Verify the status of the services, if T4O is configured with Openstack.
## On All nodes
systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded'
## On primary node
pcs status
systemctl status wlm-cron
Additional check for wlm-cron on the primary node, if T4O is configured with Openstack.
ps -ef | grep workloadmgr-cron | grep -v grep
## Above command should show only 2 processes running; sample below
[root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep
nova 8841 1 2 Jul28 ? 00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
nova 8898 8841 0 Jul28 ? 00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
Check the mount point using the "df -h" command if T4O is configured with Openstack
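For example, assuming the backup target is mounted under a path containing "triliovault" (such as /var/triliovault-mounts for NFS), which may differ in your deployment:
df -h | grep -i triliovault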