Upgrade Trilio Appliance

Generic Pre-requisites

  • Ensure that the upgrade of all TVault components on the Openstack controller and compute nodes is complete before starting the rolling upgrade of the TVM.

  • The Gemfury repository referenced below must be accessible from the TVault VM.

  • Ensure the following before starting the upgrade process (a verification sketch follows this list):

    • No snapshot or restore is running.

    • The Global Job Scheduler is disabled.
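
  • A minimal way to verify both conditions from the TVM, assuming the workloadmgr CLI is configured with valid credentials; the global-job-scheduler subcommand is taken from the Trilio 4.0 client and may differ in other releases:

    > workloadmgr snapshot-list
    > workloadmgr get-global-job-scheduler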

  • Take a backup of the configuration files on all TVM nodes.

> tar -czvf tvault_backup.tar.gz /etc/tvault /etc/tvault-config /etc/workloadmgr
> cp tvault_backup.tar.gz /root/ 
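  • Optionally, verify the archive before proceeding; tar -tzf only lists the contents without extracting:

    > tar -tzf tvault_backup.tar.gz | head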
  • Activate the virtual environment on all TVM nodes.

> source /home/stack/myansible/bin/activate
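  • To confirm the environment is active, check which pip will be used; it should resolve to the environment's copy:

    > which pip
    /home/stack/myansible/bin/pip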
  • Export the PyPI URL of the new TVault version:

    > export TVAULT_PYPI=https://pypi.fury.io/triliodata-4-0/
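  • The repository can be checked for reachability from the TVM with a plain curl request; any HTTP 2xx or 3xx status code means the URL resolves and responds:

    > curl -sS -o /dev/null -w "%{http_code}\n" $TVAULT_PYPI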

Upgrade s3fuse/tvault-object-store

  • Run the following command on all TVM nodes to upgrade s3fuse and its dependent packages.

    > pip install --extra-index-url $TVAULT_PYPI s3fuse --upgrade --no-cache-dir

Upgrade tvault-configurator

  • Run the following command on all TVM nodes to upgrade tvault-configurator and its dependent packages.

    > pip install --extra-index-url $TVAULT_PYPI tvault-configurator --upgrade --no-cache-dir

Upgrade workloadmgr

  • Run the following command on all TVM nodes to upgrade workloadmgr and its dependent packages (workloadmgrclient, contegoclient, etc.):

    > pip install --extra-index-url $TVAULT_PYPI workloadmgr --upgrade --no-cache-dir
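  • After all three upgrades, the installed versions can be confirmed with pip; the package names below match those used in the pip commands above:

    > pip list | grep -iE 's3fuse|tvault-configurator|workloadmgr'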

Updating Config Parameters of specific services

  • Update wlm-cron service entries

    • If Reconfigure is NOT planned, perform the following steps on all TVM nodes; otherwise skip them.

      • Update the following two parameters in the wlm-cron systemd service file (/etc/systemd/system/wlm-cron.service):

        KillMode=control-group
        Restart=on-failure
      • Once done, reload the systemd unit files:

        systemctl daemon-reload
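      • A non-interactive sketch of the same edit, assuming KillMode and Restart lines already exist in the unit file (adjust the expressions otherwise), followed by the daemon-reload shown above:

        sed -i -e 's/^KillMode=.*/KillMode=control-group/' \
               -e 's/^Restart=.*/Restart=on-failure/' /etc/systemd/system/wlm-cron.service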
  • MariaDB changes

    • If Reconfigure is planned

      • Remove the Galera clustered flag from all TVM nodes and proceed with the Reconfigure:

        rm -rf /etc/galera_cluster_configured
    • If Reconfigure is NOT planned

      • Increase the maximum SQL connection limit as follows:

        • Edit the /etc/my.cnf.d/server.cnf file on each TVM node.

        • Add the parameter max_connections=5000 under the [mysqld] section.

        • Stop and start the MariaDB service on each node, one node at a time:

          systemctl stop mariadb
          systemctl start mariadb
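        • Once MariaDB is back up, the new limit and the Galera cluster state can be verified from any TVM node; this assumes root access over the local socket:

          mysql -e "SHOW VARIABLES LIKE 'max_connections';"
          mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"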

Post Upgrade Steps

  • Restore the backed-up configuration files:

    > cd /root 
    > tar -xzvf tvault_backup.tar.gz -C /
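  • To confirm the restore, check that the configuration directories are present again:

    > ls -ld /etc/tvault /etc/tvault-config /etc/workloadmgr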
  • Restart the following services on all nodes:

    systemctl restart wlm-api
    systemctl restart wlm-workloads
    systemctl restart tvault-config
    systemctl restart tvault-object-store   # required ONLY if TVault is configured with S3 backend storage
  • Enable the Global Job Scheduler.
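
    This can be done from the Backups-Admin area or via the CLI; a sketch assuming the workloadmgr client is configured (subcommand name per the Trilio 4.0 client):

    > workloadmgr enable-global-job-scheduler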

  • Restart the pcs resources on the primary node only:

    pcs resource restart wlm-cron
    pcs resource restart wlm-scheduler
  • Verify the status of the services

    • Note: tvault-object-store runs only if TVault is configured with S3 backend storage.

      # On all nodes
      systemctl status wlm-api wlm-workloads tvault-config tvault-object-store | grep -E 'Active|loaded'
      # On the primary node
      pcs status
      systemctl status wlm-cron wlm-scheduler
  • Additional check for wlm-cron on primary node

    ps -ef | grep workloadmgr-cron | grep -v grep
    The above command should show exactly two processes running; sample output below:
    
    [root@tvm6 ~]# ps -ef | grep workloadmgr-cron | grep -v grep
    nova      8841     1  2 Jul28 ?        00:40:44 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
    nova      8898  8841  0 Jul28 ?        00:07:03 /home/stack/myansible/bin/python3 /home/stack/myansible/bin/workloadmgr-cron --config-file=/etc/workloadmgr/workloadmgr.conf
  • Check the mount point of the backup target using the df -h command.
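
    For example, with an NFS backup target (Trilio 4.x mounts NFS shares under /var/triliovault-mounts; the exact path is environment-specific):

    df -h | grep triliovault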
