Offline upgrade Trilio Appliance

The offline upgrade of the Trilio Appliance is only recommended for hotfix upgrades. For major upgrades in offline environments, it is recommended to download the latest qcow2 image and redeploy the appliance.

Generic Prerequisites

  • Ensure that the upgrade of all TVault components on the OpenStack controller and compute nodes is complete before starting the rolling upgrade of TVO.

  • The Gemfury repository mentioned below must be accessible from a VM/server.

  • Please ensure the following points before starting the upgrade process:

    • No snapshot or restore should be running.

    • Global job-scheduler should be disabled.

    • wlm-cron should be disabled, and any lingering processes should be killed. (This should already have been done during the Trilio components upgrade on OpenStack.)

      • pcs resource disable wlm-cron

      • Check: systemctl status wlm-cron OR pcs resource show wlm-cron

      • Additional step: to ensure that cron has actually stopped, search for any lingering wlm-cron processes and kill them. [Cmd: ps -ef | grep -i workloadmgr-cron]
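The disable-and-kill sequence above can be sketched as one script. This is a hedged example: the pcs resource name and the workloadmgr-cron process pattern come from the steps above, and the bracketed pgrep pattern keeps the script from matching its own command line.

```shell
#!/usr/bin/env bash
# Disable the wlm-cron cluster resource and kill any lingering
# workloadmgr-cron processes, as described in the prerequisites above.
# Run on the TVM node where pcs manages wlm-cron.

# Disable the resource only where pcs is available.
if command -v pcs >/dev/null 2>&1; then
    pcs resource disable wlm-cron
fi

# The [n] bracket keeps pgrep from matching this script's own command line.
pids=$(pgrep -f "workloadmgr-cro[n]" || true)
if [ -n "$pids" ]; then
    echo "Killing lingering wlm-cron processes: $pids"
    kill $pids
else
    echo "No lingering wlm-cron processes found."
fi
```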

Download and copy packages from VM/server to TVM node(s)

The VM/server must have internet connectivity and access to the Trilio Gemfury repository.

Download the required system packages

Download the latest pip package

pip3 install --upgrade pip
pip3 download pip

Download Trilio packages

Download the latest available version of the packages mentioned below. For details on the latest releases, check the latest release notes under this section.

Export the index URL

export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/

Download s3fuse package

pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --no-cache-dir --no-deps

Download tvault-configurator dependent package

#First install dependent package configobj
pip3 install configobj
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --no-cache-dir --no-deps

Download workloadmgr and dependent package

#First install dependent package jinja2
pip3 install jinja2 
pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --no-cache-dir --no-deps

Download workloadmgrclient package

pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --no-cache-dir --no-deps

Download contegoclient package

pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --no-cache-dir --no-deps

Download oslo.messaging package

pip3 download oslo.messaging==12.1.6 --no-deps
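For convenience, the individual download commands above can be generated in one loop. A minimal sketch, assuming the dependent packages (configobj, jinja2) have already been installed as shown above; the loop prints each command for review rather than executing it, so remove the echo (or pipe the output to bash) once the list looks right.

```shell
#!/usr/bin/env bash
# Generate the pip3 download commands for all Trilio packages listed above.
# Printed for review; remove 'echo' to execute directly.
export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-1/

cmds=""
for pkg in s3fuse tvault-configurator workloadmgr workloadmgrclient contegoclient; do
    c="pip3 download --extra-index-url $PIP_EXTRA_INDEX_URL $pkg --no-cache-dir --no-deps"
    echo "$c"
    cmds="$cmds$c"$'\n'
done

# oslo.messaging is pinned; the latest version can hang RPC calls (see the
# "Set oslo.messaging version" step later in this document).
c="pip3 download oslo.messaging==12.1.6 --no-deps"
echo "$c"
cmds="$cmds$c"$'\n'
```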

Copy the downloaded packages from VM/Server to TVM node(s)

All of the downloaded packages listed below need to be copied from the VM/server to all TVM nodes:

    1. pip

    2. s3fuse

    3. tvault-configurator

    4. workloadmgr

    5. workloadmgrclient

    6. contegoclient
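The copy itself can be scripted with scp. A sketch under stated assumptions: the TVM hostnames and the destination directory below are hypothetical placeholders for your environment, and the glob assumes the packages sit in the current directory. The commands are printed for review; remove the echo to perform the copy.

```shell
#!/usr/bin/env bash
# Print an scp command per TVM node for every downloaded package.
# TVM_NODES and DEST_DIR are placeholders; replace them for your setup.
TVM_NODES="tvm1 tvm2 tvm3"
DEST_DIR=/root/hf_packages

cmds=""
for node in $TVM_NODES; do
    c="scp ./*.whl ./*.tar.gz root@${node}:${DEST_DIR}/"
    echo "$c"
    cmds="$cmds$c"$'\n'
done
```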

Upgrade packages on all T4O Node(s)

If any of the packages are already at the latest version, the upgrade for that package is skipped. Make sure to run the commands below from the directory where all the downloaded packages are present.

Please refer to the versions of the downloaded packages for the placeholder <HF_VERSION> in the sections below.

Preparation

Take a backup of the configuration files

tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr /home/stack/myansible/lib/python3.6/site-packages/tvault_configurator/conf/users.json 
cp tvault_backup.tar.gz /root/

Activate the virtual environment

source /home/stack/myansible/bin/activate

Upgrade system packages

Run the following command on all TVM nodes to upgrade the pip package

pip3 install --upgrade pip-<downloaded-version>-py3-none-any.whl --no-deps

Upgrade s3fuse/tvault-object-store

Run the following command on all TVM nodes to upgrade s3fuse

systemctl stop tvault-object-store
pip3 install --upgrade s3fuse-<HF_VERSION>.tar.gz --no-deps
rm -rf /var/triliovault/*

Upgrade tvault-configurator

Run the following command on all TVM nodes to upgrade tvault-configurator

pip3 install --upgrade tvault_configurator-<HF_VERSION>.tar.gz --no-deps

Upgrade workloadmgr

Run the upgrade command on all TVM nodes to upgrade workloadmgr

pip3 install --upgrade workloadmgr-<HF_VERSION>.tar.gz --no-deps

Upgrade workloadmgrclient

Run the upgrade command on all TVM nodes to upgrade workloadmgrclient

pip3 install --upgrade workloadmgrclient-<HF_VERSION>.tar.gz --no-deps

Upgrade contegoclient

Run the upgrade command on all TVM nodes to upgrade contegoclient

pip3 install --upgrade contegoclient-<HF_VERSION>.tar.gz --no-deps

Set oslo.messaging version

Using the latest available oslo.messaging version can lead to stuck RPC and API calls.

It is therefore required to pin the oslo.messaging version on the TVM.

pip3 install ./oslo.messaging-12.1.6-py3-none-any.whl --no-deps

Post Upgrade Steps

Verify that the upgrade completed successfully.

source /home/stack/myansible/bin/activate 
pip3 list | grep <package_name> 

Match the versions against the respective latest downloaded versions.
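The comparison can be scripted. A minimal sketch: the package list mirrors the components upgraded above, the virtualenv path is the one used throughout this document, and packages that cannot be found are flagged.

```shell
#!/usr/bin/env bash
# Report the installed version of each upgraded package for comparison
# against the downloaded versions. Uses the myansible virtualenv when
# present, otherwise the current Python environment.
[ -f /home/stack/myansible/bin/activate ] && source /home/stack/myansible/bin/activate

report=""
for pkg in pip s3fuse tvault-configurator workloadmgr workloadmgrclient contegoclient oslo.messaging; do
    ver=$(python3 -m pip show "$pkg" 2>/dev/null | awk '/^Version:/ {print $2}')
    line="$pkg: ${ver:-NOT INSTALLED}"
    echo "$line"
    report="$report$line"$'\n'
done
```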

Restore the backed-up configuration files

cd /root 
tar -xzvf tvault_backup.tar.gz -C /

Restart the following services on all node(s) using the respective commands

Restarting tvault-object-store is required only if Trilio is configured with S3 backend storage

systemctl restart tvault-object-store 
systemctl restart wlm-api 
systemctl restart wlm-scheduler 
systemctl restart wlm-workloads 
systemctl restart tvault-config 

Enable the wlm-cron service on the primary node through pcs, if T4O is configured with OpenStack

pcs resource enable wlm-cron 

## run below command to check status of wlm-cron 
pcs status

Enable Global Job Scheduler
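The Global Job Scheduler can be re-enabled from the Trilio dashboard or from the CLI. The sketch below assumes the workloadmgrclient provides an enable-global-job-scheduler subcommand and that an admin openrc has been sourced; verify the subcommand name with "workloadmgr help" for your installed client version.

```shell
# Hedged sketch: the enable-global-job-scheduler subcommand is an assumption;
# confirm it with 'workloadmgr help' before relying on it.
if command -v workloadmgr >/dev/null 2>&1; then
    workloadmgr enable-global-job-scheduler && status="enabled"
else
    status="cli-missing"
    echo "workloadmgr CLI not found; enable the Global Job Scheduler from the Trilio dashboard."
fi
```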

Verify the status of the services, if T4O is configured with Openstack.

tvault-object-store will run only if TVault is configured with S3 backend storage

## On All nodes 
systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded' 
## On primary node 
pcs status 
systemctl status wlm-cron

Additional check for wlm-cron on the primary node, if T4O is configured with OpenStack.

ps -ef | grep workloadmgr-cron | grep -v grep 

## Above command should show only 2 processes running; sample below 

[root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep 
nova 8841 1 2 Jul28 ? 00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf 
nova 8898 8841 0 Jul28 ? 00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf

Check the mount point using the df -h command, if T4O is configured with OpenStack.
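A quick scripted version of that check, assuming the backup target is mounted under a path containing "triliovault" (the usual mount base); adjust the grep pattern if your target path differs.

```shell
# List any mounted filesystem whose path mentions triliovault.
mount_info=$(df -h | grep -i 'triliovault' || true)
if [ -n "$mount_info" ]; then
    echo "Trilio backup target is mounted:"
    echo "$mount_info"
else
    echo "No triliovault mount found; verify the backup target before resuming operations."
fi
```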
