Online upgrade TrilioVault Appliance
The TrilioVault appliance of TVO 4.2 runs a different kernel than the TrilioVault appliance for TVO 4.1 or older.
When upgrading from TVO 4.1 or older it is recommended to replace the TrilioVault appliance entirely rather than doing an in-place upgrade.

Generic Pre-requisites

The pre-requisites should already be fulfilled from upgrading the TrilioVault components on the Controller and Compute nodes.
  • Please ensure that the upgrade of all TrilioVault components on the OpenStack controller and compute nodes is complete before starting the rolling upgrade of the TVM.
  • The mentioned Gemfury repository needs to be accessible from the TVault VM.
  • Please ensure the following points before starting the upgrade process (example checks are shown after this list):
    • No snapshot or restore is running.
    • The Global Job Scheduler is disabled.
    • wlm-cron is disabled and any lingering process has been killed.
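
The points above can be verified from the TVault VM with the workloadmgr CLI. This is a minimal sketch; the sub-command names for the Global Job Scheduler are assumptions based on the standard workloadmgr client and may differ in your release.

## List recent snapshots and restores and confirm none is still in a running state
> workloadmgr snapshot-list
> workloadmgr restore-list

## Show the Global Job Scheduler state and disable it if it is still enabled
## (assumed sub-command names)
> workloadmgr get-global-job-scheduler
> workloadmgr disable-global-job-scheduler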

Deactivating the wlm-cron service

The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
 Resource: wlm-cron (class=systemd type=wlm-cron)
  Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
              stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root     15379 14383  0 08:27 pts/0    00:00:00 grep --color=auto -i workloadmgr-cron

Backup old configuration data

Take a backup of the conf files on all TVM nodes.
> tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr
> cp tvault_backup.tar.gz /root/
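
As an optional sanity check (not part of the original procedure), list the archive contents to confirm the backup is complete:

## List the archived configuration files
> tar -tzvf /root/tvault_backup.tar.gz | head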

Set the environment

Activate the virtual environment on all TVM nodes.
> source /home/stack/myansible/bin/activate

Export the new TVault version PyPI URL:

> export TVAULT_PYPI=https://pypi.fury.io/triliodata-4-2/
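
As a quick optional check, confirm that the virtual environment is active by verifying that python3 and pip resolve to the myansible environment:

## Both paths should point inside /home/stack/myansible
> which python3 pip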

Upgrade pip package

Run the following command on all TVM nodes to upgrade the pip package.

> pip3 install --upgrade pip
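
Optionally verify the result; the exact version reported depends on the release available at upgrade time:

## Should report the upgraded pip from the myansible environment
> pip3 --version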

Set pip package repository variable

> export PIP_EXTRA_INDEX_URL=https://pypi.fury.io/triliodata-4-2/

Hotfix upgrade s3fuse/tvault-object-store

Run the following command on all TVM nodes to upgrade s3fuse packages only.

> pip install --extra-index-url $PIP_EXTRA_INDEX_URL s3fuse --upgrade --no-cache-dir --no-deps

Hotfix upgrade tvault-configurator

Run the following command on all TVM nodes to upgrade tvault-configurator packages only.
> pip install --extra-index-url $PIP_EXTRA_INDEX_URL tvault-configurator --upgrade --no-cache-dir --no-deps
During the update of the tvault-configurator the following error might be shown:
ERROR: Command errored out with exit status 1:
 command: /home/stack/myansible/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/setup.py'"'"'; __file__='"'"'/tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pdd9x77v
     cwd: /tmp/pip-install-crie5qno/ansible_086eb28a1523443f802ab202398d361e/
This error can be ignored.

Hotfix upgrade workloadmgr

Run the upgrade command on all TVM nodes to upgrade workloadmgr packages only.
> pip install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgr --upgrade --no-cache-dir --no-deps

Hotfix upgrade workloadmgrclient

Run the upgrade command on all TVM nodes to upgrade workloadmgrclient packages only.

> pip install --extra-index-url $PIP_EXTRA_INDEX_URL workloadmgrclient --upgrade --no-cache-dir --no-deps

Hotfix upgrade contegoclient

Run the upgrade command on all TVM nodes to upgrade contegoclient packages only.
> pip install --extra-index-url $PIP_EXTRA_INDEX_URL contegoclient --upgrade --no-cache-dir --no-deps
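
After all packages have been upgraded, the installed versions can be reviewed in one go. This optional check simply greps pip's output for the package names used in the commands above:

## Show the versions pip installed for the upgraded Trilio packages
> pip freeze | grep -E 's3fuse|tvault-configurator|workloadmgr|contegoclient'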

Post Upgrade Steps

Restore the backed-up config files

> cd /root
> tar -xzvf tvault_backup.tar.gz -C /
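
Optionally confirm that the configuration directories were restored from the archive:

## The restored configuration directories should be listed
> ls -ld /etc/tvault /etc/tvault-config /etc/workloadmgr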

Restart services

Restart the following services on all TVM nodes using the respective commands.

systemctl restart wlm-api
systemctl restart wlm-scheduler
systemctl restart wlm-workloads
systemctl restart tvault-config
systemctl restart tvault-object-store    ## required ONLY if TVault is configured with S3 backend storage
Enable the Global Job Scheduler and restart the pcs resources on the primary node only:

pcs resource enable wlm-cron
pcs resource restart wlm-cron
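
The pcs commands above only bring wlm-cron back. The Global Job Scheduler itself is re-enabled through the workloadmgr CLI (or via Horizon); the sub-command names below are assumptions based on the standard workloadmgr client:

## Re-enable the Global Job Scheduler and confirm its state (assumed sub-command names)
> workloadmgr enable-global-job-scheduler
> workloadmgr get-global-job-scheduler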

Verify the status of the services

tvault-object-store will run only if TVault is configured with S3 backend storage.

systemctl status wlm-api wlm-scheduler wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded'

## On the primary node
pcs status
systemctl status wlm-cron
Additional check for wlm-cron on the primary node:

ps -ef | grep workloadmgr-cron | grep -v grep

The above command should show only 2 processes running; sample below.

[root@TVM ~]# ps -ef | grep workloadmgr-cron | grep -v grep
nova      8841     1  2 Jul28 ?        00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
nova      8898  8841  0 Jul28 ?        00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
Check the mount point using the “df -h” command.
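
For example (illustrative only; NFS and S3 backup targets are typically mounted under /var/triliovault-mounts):

## The backup target mount should be listed and healthy
> df -h | grep -i triliovault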

[RHOSP, TripleO and Kolla only] Verify nova UID/GID for nova user on the Appliance

Red Hat OpenStack, TripleO and Kolla Ansible OpenStack use a nova UID/GID of 42436 inside their containers instead of 162:162, which is the standard in other OpenStack environments.
Please verify that the nova UID/GID on the TrilioVault Appliance is still 42436:
[root@TVM ~]# id nova
uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
In case the UID/GID has been changed back to 162:162, follow these steps to set it back to 42436:42436.
  1. Download the shell script that will change the user id
  2. Assign executable permissions
  3. Execute the script
  4. Verify that the 'nova' user and group id has changed to '42436'
## Download the shell script
$ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh

## Assign executable permissions
$ chmod +x nova_userid.sh

## Execute the shell script to change 'nova' user and group id to '42436'
$ ./nova_userid.sh

## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
$ id nova
uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)