Upgrading on Kolla OpenStack

Upgrading from TrilioVault 4.1 to a higher version

TrilioVault 4.1 can be upgraded to a higher TVO version, if available, without requiring a reinstallation.

Pre-requisites

Please ensure the following points are met before starting the upgrade process:
  • Either the 4.1 or 4.2 GA release, or any hotfix patch against 4.1/4.2, is already deployed
  • No Snapshot or Restore is running
  • The Global job scheduler is disabled
  • wlm-cron is disabled on the primary TrilioVault Appliance
  • Access to the Gemfury repository to fetch new packages

Deactivating the wlm-cron service

The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
[root@TVM2 ~]# pcs resource disable wlm-cron
[root@TVM2 ~]# systemctl status wlm-cron
● wlm-cron.service - workload's scheduler cron service
   Loaded: loaded (/etc/systemd/system/wlm-cron.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Jun 11 08:27:06 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:06 - INFO - 1...t
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 140686268624368 Child 11389 ki...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - 1...5
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Shutting down thread pool
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...l
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: Stopping the threads
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - S...s
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: All threads are stopped succes...y
Jun 11 08:27:07 TVM2 workloadmgr-cron[11115]: 11-06-2021 08:27:07 - INFO - A...y
Jun 11 08:27:09 TVM2 systemd[1]: Stopped workload's scheduler cron service.
Hint: Some lines were ellipsized, use -l to show in full.
[root@TVM2 ~]# pcs resource show wlm-cron
Resource: wlm-cron (class=systemd type=wlm-cron)
  Meta Attrs: target-role=Stopped
  Operations: monitor interval=30s on-fail=restart timeout=300s (wlm-cron-monitor-interval-30s)
              start interval=0s on-fail=restart timeout=300s (wlm-cron-start-interval-0s)
              stop interval=0s timeout=300s (wlm-cron-stop-interval-0s)
[root@TVM2 ~]# ps -ef | grep -i workloadmgr-cron
root 15379 14383 0 08:27 pts/0 00:00:00 grep --color=auto -i workloadmgr-cron
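If the upgrade is being scripted, a small wait loop like the following (a sketch only, not part of the official procedure) can be used to confirm that no workloadmgr-cron processes remain before continuing:

# Sketch: wait until all workloadmgr-cron processes have exited
while pgrep -f workloadmgr-cron > /dev/null; do
    echo "workloadmgr-cron still running, waiting..."
    sleep 10
done
echo "wlm-cron is fully stopped"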

Clone latest configuration scripts

Before the latest configuration scripts are cloned, it is recommended to back up the existing config scripts folder and the TrilioVault ansible role. The following commands can be used for this purpose:
mv triliovault-cfg-scripts triliovault-cfg-scripts_old
mv /usr/local/share/kolla-ansible/ansible/roles/triliovault /opt/triliovault_old
Clone the latest configuration scripts of the required branch and change into the deployment script directory for Kolla-Ansible OpenStack.
git clone -b stable/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/
Copy the downloaded TrilioVault ansible role into the Kolla-Ansible roles directory.
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
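As a quick sanity check (illustrative only), confirm that the expected branch is checked out and that the role is now present in the Kolla-Ansible roles directory:

# Sketch: verify the checked-out branch and the copied role
git rev-parse --abbrev-ref HEAD    # expected: stable/4.2 (run inside the cloned repository)
ls /usr/local/share/kolla-ansible/ansible/roles/triliovault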

Append TrilioVault variables

Clean old TrilioVault variables and append new TrilioVault variables

This step is not always required. It is recommended to compare triliovault_globals.yml with the TrilioVault entries in the /etc/kolla/globals.yml file.
In case of no changes, this step can be skipped.
This step is required if variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; in that case, the /etc/kolla/globals.yml file needs to be updated accordingly.
# Copy the backed-up original globals.yml, which does not contain TrilioVault variables, over the current globals.yml
cp /opt/globals.yml /etc/kolla/globals.yml

# Append TrilioVault global variables to globals.yml
cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
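To judge whether this step is actually needed, a quick comparison of the variable names in the new defaults against those already present in globals.yml can help. The following is an illustrative sketch that assumes each variable name starts at the beginning of a line:

# Sketch: list TrilioVault variable names present in the new defaults but missing from globals.yml
grep -oE '^[A-Za-z0-9_]+' ansible/triliovault_globals.yml | sort -u > /tmp/new_tvo_vars
grep -oE '^[A-Za-z0-9_]+' /etc/kolla/globals.yml | sort -u > /tmp/current_vars
comm -23 /tmp/new_tvo_vars /tmp/current_vars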

Clean old TrilioVault passwords and append new TrilioVault password variables

This step is not always required. It is recommended to compare triliovault_passwords.yml with the TrilioVault entries in the /etc/kolla/passwords.yml file.
In case of no changes, this step can be skipped.
This step is required when password variable names have been added, changed, or removed in the latest triliovault_passwords.yml; in that case, /etc/kolla/passwords.yml needs to be updated.
# Take a backup of the current passwords file
cp /etc/kolla/passwords.yml /opt/password-<CURRENT-RELEASE>.yml

# Reset the passwords file to the default one by restoring the backed-up original password.yml.
# This backup would have been taken during the previous install/upgrade.
cp /opt/password.yml /etc/kolla/passwords.yml

# Append TrilioVault password variables to passwords.yml
cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml

# Edit /etc/kolla/passwords.yml to set the passwords.
# It is recommended to reuse the same passwords as in the previous TVO deployment,
# as present in the password file backup (/opt/password-<CURRENT-RELEASE>.yml).
# Any additional passwords (introduced in triliovault_passwords.yml) must be set by the user in /etc/kolla/passwords.yml.
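When carrying the old password values over, it can help to view the TrilioVault-related entries of the backed-up file next to the freshly appended ones; the snippet below is only an illustration and uses the backup path from the step above:

# Sketch: compare TrilioVault password entries between the backup and the new passwords.yml
grep -i trilio /opt/password-<CURRENT-RELEASE>.yml
grep -i trilio /etc/kolla/passwords.yml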

Append triliovault_site.yml content to kolla ansible's site.yml

This step is not always required. It is recommended to compare triliovault_site.yml with the TrilioVault entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.
In case of no changes, this step can be skipped.
This step is required if entries have changed, been added, or been removed in the latest triliovault_site.yml; in that case, the /usr/local/share/kolla-ansible/ansible/site.yml file needs to be updated accordingly.
# Take a backup of the current site.yml file
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/site-<CURRENT-RELEASE>.yml

# Reset site.yml to the default one by restoring the backed-up original site.yml.
# This backup would have been taken during the previous install/upgrade.
cp /opt/site.yml /usr/local/share/kolla-ansible/ansible/site.yml

# Append TrilioVault code to site.yml
cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
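A simple way to confirm that the append worked (illustrative only) is to check that the TrilioVault entries now appear at the end of site.yml:

# Sketch: confirm the TrilioVault entries were appended to site.yml
grep -n -i triliovault /usr/local/share/kolla-ansible/ansible/site.yml | tail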

Append triliovault_inventory.txt to the kolla-ansible inventory file

This step is not always required. It is recommended to compare triliovault_inventory.txt with the TrilioVault entries in the /root/multinode file.
In case of no changes, this step can be skipped.
By default, the triliovault-datamover-api service gets installed on 'control' hosts and the trilio-datamover service gets installed on 'compute' hosts. You can edit the TVO groups in the inventory file as per your cloud architecture.
The TVO group names are 'triliovault-datamover-api' and 'triliovault-datamover'.
For example, if your inventory file path is '/root/multinode', use the following:

# Clean up the old TVO groups from /root/multinode, then append the latest triliovault inventory file
cat ansible/triliovault_inventory.txt >> /root/multinode
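For reference, the appended TVO groups typically map the services onto the existing control and compute groups, roughly as in the sketch below (the group names come from triliovault_inventory.txt; the children mapping is illustrative and should be adapted to your cloud architecture):

[triliovault-datamover-api:children]
control

[triliovault-datamover:children]
compute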

Configure multi-IP NFS

This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
On the kolla-ansible server node, change into the common directory:
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host to NFS share/IP mapping.
The compute host names or IP addresses used in the NFS map input file must match the entries in the kolla-ansible inventory file: if the inventory uses IP addresses, use the same IP addresses here; if it uses hostnames, use the same hostnames.
vi triliovault_nfs_map_input.yml
The triliovault_nfs_map_input.yml file is explained here.
Update pyyaml on the kolla-ansible server node only:
pip3 install -U pyyaml
Expand the map file to create a one-to-one mapping of compute nodes and NFS shares:
python ./generate_nfs_map.py
The result will be written to the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml', available in the current directory, and verify that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault_globals.yml' (file path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'):
cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
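Before appending the map, a check like the following (illustrative; it assumes the inventory is /root/multinode and that the same hostnames/IPs are used in both files) can confirm that every compute host is covered by the generated output:

# Sketch: verify every compute host from the inventory appears in the generated NFS map
for host in $(awk '/^\[compute\]/{f=1;next} /^\[/{f=0} f && NF {print $1}' /root/multinode); do
  grep -q "$host" triliovault_nfs_map_output.yml || echo "MISSING from NFS map: $host"
done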

Edit globals.yml to set TVO parameters

Edit the '/etc/kolla/globals.yml' file to fill in the TrilioVault backup target and build details. The TrilioVault-related parameters are located at the end of the globals.yml file. The user needs to fill in details such as the TrilioVault build version, backup target type, and backup target details.
Following is the list of parameters that the user needs to edit.
Parameter: triliovault_tag
Defaults/choices: Possible tags: 4.1.94-hotfix-5-ussuri, 4.1.94-hotfix-3-victoria
Comments: TrilioVault Build Version.

Parameter: horizon_image_full
Defaults/choices: commented out
Comments: Uncomment to install the TrilioVault Horizon container instead of the previously installed container.

Parameter: triliovault_docker_username
Defaults/choices: triliodocker

Parameter: triliovault_docker_password
Defaults/choices: triliopassword

Parameter: triliovault_docker_registry
Defaults/choices: Default: docker.io
Comments: If users want to use a different container registry for the TrilioVault containers, this value can be edited. In that case, the user first needs to manually pull the TrilioVault containers from docker.io and push them to the other registry.

Parameter: triliovault_backup_target
Defaults/choices: nfs, amazon_s3, ceph_s3
Comments: 'nfs' if the backup target is NFS; 'amazon_s3' if the backup target is Amazon S3; 'ceph_s3' if the backup target type is S3 but not Amazon S3.

Parameter: multi_ip_nfs_enabled
Defaults/choices: yes, no. Default: no
Comments: This parameter is valid only if you want to use multiple IP/endpoint-based NFS shares as the backup target for TrilioVault.

Parameter: dmapi_workers
Defaults/choices: Default: 16
Comments: If the dmapi_workers field is not present in the config file, the default value will equal the number of cores present on the node.

Parameter: triliovault_nfs_shares
Comments: Only with nfs for triliovault_backup_target. User needs to provide the NFS share path, e.g. 192.168.122.101:/opt/tvault.

Parameter: triliovault_nfs_options
Defaults/choices: Default: nolock,soft,timeo=180,intr,lookupcache=none
Comments: Only with nfs for triliovault_backup_target. Keep the default values if unclear.

Parameter: triliovault_s3_access_key
Comments: Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 access key.

Parameter: triliovault_s3_secret_key
Comments: Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 secret key.

Parameter: triliovault_s3_region_name
Defaults/choices: Default: us-east-1
Comments: Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 region or keep the default if no region is required.

Parameter: triliovault_s3_bucket_name
Comments: Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 bucket.

Parameter: triliovault_s3_endpoint_url
Comments: Only with ceph_s3 for triliovault_backup_target. Provide the S3 endpoint URL.

Parameter: triliovault_s3_ssl_enabled
Defaults/choices: True, False
Comments: Only with ceph_s3 for triliovault_backup_target. Set to True if the endpoint is on HTTPS.

Parameter: triliovault_s3_ssl_cert_file_name
Defaults/choices: s3-cert.pem
Comments: Only with ceph_s3 for triliovault_backup_target, and if SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority. In that case the user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not already exist.

Parameter: triliovault_copy_ceph_s3_ssl_cert
Defaults/choices: True, False
Comments: Set to True if ceph_s3 is used for triliovault_backup_target and SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority.
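As an illustration only, the TrilioVault section at the end of /etc/kolla/globals.yml for an NFS backup target could look roughly like the following; every value is a placeholder taken from the table above and must be adapted to your environment:

# Sketch: example TrilioVault settings for an NFS backup target (placeholder values)
triliovault_tag: "4.1.94-hotfix-5-ussuri"
triliovault_docker_username: "triliodocker"
triliovault_docker_password: "triliopassword"
triliovault_docker_registry: "docker.io"
triliovault_backup_target: "nfs"
multi_ip_nfs_enabled: "no"
triliovault_nfs_shares: "192.168.122.101:/opt/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
dmapi_workers: 16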

Enable TVO Snapshot mount feature

This step is already part of the 4.1 GA installation procedure and should only be verified.
To enable TrilioVault's Snapshot mount feature it is necessary to make the TrilioVault Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the TrilioVault mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "/lib/modules:/lib/modules:ro"
  - "/run/:/run/:shared"
  - "/dev:/dev"
  - "/sys/fs/cgroup:/sys/fs/cgroup"
  - "kolla_logs:/var/log/kolla/"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
  - "nova_libvirt_qemu:/etc/libvirt/qemu"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look as follows for a default Kolla installation:
nova_compute_default_volumes:
  - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "/lib/modules:/lib/modules:ro"
  - "/run:/run:shared"
  - "/dev:/dev"
  - "kolla_logs:/var/log/kolla/"
  - "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"
When using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look like the following:
nova_compute_ironic_default_volumes:
  - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "kolla_logs:/var/log/kolla/"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"

Pull containers in case of private repository

If the user does not want to use the Docker Hub registry for the TrilioVault containers during cloud deployment, the TrilioVault images can be pulled before starting the cloud deployment and pushed to another preferred registry.
Following are the TrilioVault container image URLs. Replace the kolla_base_distro and triliovault_tag variables with their values.
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
For example, if kolla_base_distro='ubuntu' and triliovault_tag='4.1.94-hotfix-5-ussuri', then the pull URLs will be:
docker.io/trilio/ubuntu-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri
docker.io/trilio/ubuntu-binary-trilio-datamover:4.1.94-hotfix-5-ussuri
docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:4.1.94-hotfix-5-ussuri
Possible tags are:
  • 4.1.94-hotfix-5-ussuri
  • 4.1.94-hotfix-3-victoria
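If a private registry is preferred, the images can be pulled, re-tagged, and pushed with standard docker commands; the following sketch uses 'registry.example.com:5000' purely as a placeholder for your own registry:

# Sketch: mirror one TrilioVault image to a private registry (repeat for the other two images)
docker pull docker.io/trilio/ubuntu-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri
docker tag docker.io/trilio/ubuntu-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri \
    registry.example.com:5000/trilio/ubuntu-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri
docker push registry.example.com:5000/trilio/ubuntu-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri
# Afterwards, point triliovault_docker_registry in /etc/kolla/globals.yml at the private registry.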

Pull TVO container images

Activate the Docker Hub login for the TrilioVault-tagged containers.
ansible -i multinode control -m shell -a "docker login -u triliodocker -p triliopassword docker.io"
Run the below command from the directory with the multinode file to pull the required images.
kolla-ansible -i multinode pull --tags triliovault

Run Kolla-Ansible upgrade command

Run the below command from the directory with the multinode file to start the upgrade process.
kolla-ansible -i multinode upgrade

Verify TrilioVault deployment

Verify on the nodes that are supposed to run the TrilioVault containers that these containers are available and healthy.
[root@<node> ~]# docker ps | grep triliovault_datamover_api
f00781997bc3 trilio/centos-binary-trilio-datamover-api:4.1.94-hotfix-5-ussuri "dumb-init --single-…" 2 minutes ago Up 2 minutes triliovault_datamover_api

[root@<node> ~]# docker ps | grep triliovault_datamover
84831db5d215 trilio/centos-binary-trilio-datamover:4.1.94-hotfix-5-ussuri "dumb-init --single-…" 5 minutes ago Up 4 minutes triliovault_datamover

[root@<node> ~]# docker ps | grep horizon
f3647e0fff27 trilio/centos-binary-trilio-horizon-plugin:4.1.94-hotfix-5-ussuri "dumb-init --single-…" 8 minutes ago Up 8 minutes horizon
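To run the same checks across all relevant nodes at once, the ansible ad-hoc pattern already used for the Docker Hub login can be reused; this is an illustrative sketch assuming the default control/compute group names:

# Sketch: check the TrilioVault containers on all control and compute hosts
ansible -i multinode control -m shell -a "docker ps | grep triliovault_datamover_api"
ansible -i multinode compute -m shell -a "docker ps | grep triliovault_datamover"
ansible -i multinode control -m shell -a "docker ps | grep horizon"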

Advance settings/configuration for TrilioVault services

Customize HAproxy configuration parameters for TrilioVault datamover api service

Following are the default HAProxy configuration parameters set for the triliovault datamover api service.
retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000
These values work best for the triliovault dmapi service and it is not recommended to change them. However, in exceptional cases where any of the above parameter values need to be changed, this can be done on the kolla-ansible server in the following file:
/usr/local/share/kolla-ansible/ansible/roles/triliovault/defaults/main.yml
After editing, run the kolla-ansible deploy command again to push these changes to the OpenStack cloud.
After the deploy, verify the changes by checking the following file, available on all controller/HAProxy nodes:
/etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
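For example, the effective settings can be inspected on a controller node with a simple grep (illustrative only):

# Sketch: inspect the rendered HAProxy settings for the TrilioVault datamover API
grep -E 'retries|timeout|balance|maxconn' /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg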

Enable mount-bind for NFS

TVO 4.2 changes the calculation of the mount point hash value when using NFS backup targets.
Please follow this documentation to ensure that backups taken with TVO 4.1 or older can be used with TVO 4.2.