Upgrading on Kolla OpenStack
Trilio supports upgrading the Trilio-Openstack components from any of the older releases (4.1 onwards) to the latest 4.2 hotfix release without tearing down the existing deployment.
Refer to the table below for the acceptable values of the placeholders **kolla_base_distro** and **triliovault_tag** used throughout this document, as per the Openstack environment:
Openstack Version | triliovault_tag | kolla_base_distro |
---|---|---|
Victoria | 4.2.8-victoria | ubuntu, centos |
Wallaby | 4.2.8-wallaby | ubuntu, centos |
Yoga | 4.2.8-yoga | ubuntu, centos |
Zed | 4.2.8-zed | ubuntu, rocky |
1] Pre-requisites
Please ensure the following points are met before starting the upgrade process:
Trilio 4.1 GA, 4.2 GA, or any hotfix patch against 4.1/4.2 should already be deployed
No Snapshot OR Restore is running
Global job scheduler should be disabled
wlm-cron is disabled on the primary Trilio Appliance
Access to the gemfury repository to fetch new packages
1.1] Deactivating the wlm-cron service
The following set of commands will disable the wlm-cron service and verify that it has been completely shut down.
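A minimal sketch of these commands, assuming the appliance manages wlm-cron through pacemaker (pcs), as on a default Trilio Appliance; adjust to your HA setup if it differs:

```bash
# On the primary Trilio Appliance: disable the wlm-cron resource in pacemaker
pcs resource disable wlm-cron

# Verify the service has stopped
systemctl status wlm-cron

# Verify that no stray workloadmgr cron processes remain
ps -ef | grep -i workloadmgr-cron | grep -v grep
```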
2] Clone latest configuration scripts
Before the latest configuration script is loaded it is recommended to take a backup of the existing config scripts' folder & Trilio ansible roles. The following command can be used for this purpose:
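A minimal sketch of such a backup, assuming the scripts were cloned into the current user's home directory and the Trilio roles live in the default kolla-ansible roles path:

```bash
# Back up the existing configuration scripts folder
cp -R triliovault-cfg-scripts triliovault-cfg-scripts-backup-$(date +%F)

# Back up the Trilio ansible roles currently present in the kolla-ansible roles directory
mkdir -p ~/trilio-roles-backup
cp -R /usr/local/share/kolla-ansible/ansible/roles/triliovault-datamover* ~/trilio-roles-backup/
```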
Clone the latest configuration scripts of the required branch and access the deployment script directory for Kolla Ansible Openstack.
Copy the downloaded Trilio ansible role into the Kolla-Ansible roles directory.
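A sketch of the clone and copy steps; the branch name below is only an example and should match your target release, and the in-repo paths are assumed from the triliovault-cfg-scripts layout:

```bash
# Clone the configuration scripts for the required branch (example branch shown)
git clone -b 4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/

# Copy the Trilio ansible roles into the kolla-ansible roles directory
cp -R ansible/roles/* /usr/local/share/kolla-ansible/ansible/roles/
```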
3] Append Trilio variables
3.1] Clean old Trilio variables and append new Trilio variables
This step is not always required. It is recommended to compare triliovault_globals.yml with the Trilio entries in the /etc/kolla/globals.yml file.
If there are no changes, this step can be skipped.
It is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_globals.yml; in that case, the Trilio entries in the /etc/kolla/globals.yml file need to be updated accordingly.
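A minimal sketch of the append, assuming the repository was cloned into the home directory (as in the /home/stack path used later in this document); clean out the old Trilio block from /etc/kolla/globals.yml first so entries are not duplicated:

```bash
# Append the latest Trilio globals to kolla's globals.yml
cat ~/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
```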
3.2] Clean old Trilio passwords and append new Trilio password variables
This step is not always required. It is recommended to compare triliovault_passwords.yml with the Trilio entries in the /etc/kolla/passwords.yml file.
If there are no changes, this step can be skipped.
It is required when password variable names have been added, changed, or removed in the latest triliovault_passwords.yml; in that case, /etc/kolla/passwords.yml needs to be updated.
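The equivalent sketch for the password variables, under the same path assumption:

```bash
# Append the latest Trilio password variables to kolla's passwords.yml
cat ~/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
```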
3.3] Append triliovault_site.yml content to kolla ansible's site.yml
This step is not always required. It is recommended to compare triliovault_site.yml with the Trilio entries in the /usr/local/share/kolla-ansible/ansible/site.yml file.
If there are no changes, this step can be skipped.
It is required when variable names have changed, new variables have been added, or old variables have been removed in the latest triliovault_site.yml; in that case, the /usr/local/share/kolla-ansible/ansible/site.yml file needs to be updated.
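The equivalent sketch for site.yml, under the same path assumption:

```bash
# Append the Trilio deployment plays to kolla-ansible's site.yml
cat ~/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
```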
3.4] Append triliovault_inventory.txt to the kolla-ansible inventory file
This step is not always required. It is recommended to compare triliovault_inventory.txt with the Trilio entries in the /root/multinode file.
If there are no changes, this step can be skipped.
By default, the triliovault-datamover-api service gets installed on 'control' hosts and the triliovault-datamover service gets installed on 'compute' hosts. You can edit the T4O groups in the inventory file as per your cloud architecture.
T4O group names are 'triliovault-datamover-api' and 'triliovault-datamover'.
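The equivalent sketch for the inventory, assuming /root/multinode is the kolla-ansible inventory file in use:

```bash
# Append the Trilio host groups to the kolla-ansible inventory file
cat ~/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_inventory.txt >> /root/multinode
```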
4] Configure multi-IP NFS as Trilio backup target
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
On the kolla-ansible server node, change to the directory containing the NFS map helper files.
Edit the file triliovault_nfs_map_input.yml in the current directory and provide the compute host to NFS share/IP map.
The compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries: if IP addresses are used in the inventory, use the same IP addresses here; if hostnames are used there, use the same hostnames in the map input file.
vi triliovault_nfs_map_input.yml
The triliovault_nfs_map_input.yml file is explained here.
Update PyYAML on the kolla-ansible server node only:
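For example, assuming pip3 is available on that node:

```bash
pip3 install -U pyyaml
```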
Expand the map file to create a one-to-one mapping of compute hosts and NFS shares.
The result will be written to the file triliovault_nfs_map_output.yml.
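A sketch of the expansion step; the helper script name is assumed from the triliovault-cfg-scripts repository and may differ in your release:

```bash
# Run from the directory containing triliovault_nfs_map_input.yml
python3 ./generate_nfs_map.py
```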
Validate the output map file.
Open the file triliovault_nfs_map_output.yml, available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to triliovault_globals.yml
File Path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
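A minimal sketch of the append, run from the directory holding the generated output file:

```bash
cat triliovault_nfs_map_output.yml >> /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
```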
Ensure that multi_ip_nfs_enabled is set to 'yes' in the triliovault_globals.yml file.
A new parameter multi_ip_nfs_enabled has been added to triliovault_globals.yml; set its value to 'yes' if the backup target NFS supports multiple endpoints/IPs.
File Path: /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
multi_ip_nfs_enabled: 'yes'
Later, append the triliovault_globals.yml file to /etc/kolla/globals.yml.
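A minimal sketch, using the file path given above; remove any previously appended Trilio block from /etc/kolla/globals.yml first to avoid duplicate entries:

```bash
cat /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
```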
5] Edit globals.yml to set T4O parameters
Edit the /etc/kolla/globals.yml file to fill in the triliovault backup target and build details. You will find the triliovault-related parameters at the end of globals.yml. The user needs to fill in details like the triliovault build version, backup target type, backup target details, etc.
Following is the list of parameters that the user needs to edit.
Parameter | Defaults/choices | Comments |
---|---|---|
triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla openstack version. The exact tag is mentioned at the start of this document |
horizon_image_full | Uncomment | By default, the Trilio Horizon container would not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container |
triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team |
triliovault_docker_password | <dockerhub-login-password> | Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team |
triliovault_docker_registry | Default: docker.io | If users want to use a different container registry for the triliovault containers, they can edit this value. In that case, the user first needs to manually pull the triliovault containers from docker.io and push them to the other registry |
triliovault_backup_target | nfs / amazon_s3 / ceph_s3 | 'nfs': if the backup target is NFS. 'amazon_s3': if the backup target is Amazon S3. 'ceph_s3': if the backup target type is S3 but not Amazon S3 |
multi_ip_nfs_enabled | yes / no (Default: no) | Valid only if you want to use NFS shares with multiple IPs/endpoints as the backup target for TrilioVault |
dmapi_workers | Default: 16 | If the dmapi_workers field is not present in the config file, the default value will equal the number of cores present on the node |
triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | Only with nfs for triliovault_backup_target. The user needs to provide the NFS share path, e.g.: 192.168.122.101:/opt/tvault |
triliovault_nfs_options | Default: | Only with nfs for triliovault_backup_target. Keep the default values if unclear |
triliovault_s3_access_key | S3 Access Key | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 access key |
triliovault_s3_secret_key | S3 Secret Key | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 secret key |
triliovault_s3_region_name | S3 Region Name | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 region or keep the default if no region is required |
triliovault_s3_bucket_name | S3 Bucket name | Only with amazon_s3 or ceph_s3 for triliovault_backup_target. Provide the S3 bucket |
triliovault_s3_endpoint_url | S3 Endpoint URL | Only with ceph_s3 for triliovault_backup_target. Provide the S3 endpoint URL |
triliovault_s3_ssl_enabled | True / False | Only with ceph_s3 for triliovault_backup_target. Set to True if the endpoint is on HTTPS |
triliovault_s3_ssl_cert_file_name | s3-cert.pem | Only with ceph_s3 for triliovault_backup_target, when SSL is enabled on the S3 endpoint URL and the SSL certificates are self-signed or issued by a private authority. The user needs to copy the 'ceph s3 ca chain file' to the "/etc/kolla/config/triliovault/" directory on the ansible server. Create this directory if it does not already exist |
triliovault_copy_ceph_s3_ssl_cert | True / False | Set to True if ceph_s3 is used for triliovault_backup_target, SSL is enabled on the S3 endpoint URL, and the SSL certificates are self-signed or issued by a private authority |
6] Enable T4O Snapshot mount feature
This step is already part of the 4.2 GA installation procedure and should only be verified.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml
and find nova_libvirt_default_volumes
variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared
to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
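An indicative sketch only; the pre-existing entries depend on your kolla-ansible release, and only the last line is the Trilio addition:

```yaml
nova_libvirt_default_volumes:
  # existing default entries of your kolla-ansible release remain unchanged, e.g.:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "/lib/modules:/lib/modules:ro"
  - "/run/:/run/:shared"
  - "/dev:/dev"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  # Trilio addition:
  - "/var/trilio:/var/trilio:shared"
```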
Next, find the variable nova_compute_default_volumes
in the same file and append the mount bind /var/trilio:/var/trilio:shared
to the list.
After the change, the variable will look as follows for a default Kolla installation:
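Again an indicative sketch; only the last line is the Trilio addition, the rest reflects the defaults of your release:

```yaml
nova_compute_default_volumes:
  # existing default entries of your kolla-ansible release remain unchanged, e.g.:
  - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "/lib/modules:/lib/modules:ro"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  # Trilio addition:
  - "/var/trilio:/var/trilio:shared"
```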
In the case of using Ironic compute nodes, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes
and append trilio mount /var/trilio:/var/trilio:shared
to the list.
After the changes the variable will look like the following:
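An indicative sketch for the Ironic case as well; the pre-existing entries depend on your release, only the last line is new:

```yaml
nova_compute_ironic_default_volumes:
  # existing default entries of your kolla-ansible release remain unchanged, e.g.:
  - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "kolla_logs:/var/log/kolla/"
  # Trilio addition:
  - "/var/trilio:/var/trilio:shared"
```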
7] Pull containers in case of private repository
In case the user doesn't want to use the Docker Hub registry for the triliovault containers during cloud deployment, the user can pull the triliovault images before starting the cloud deployment and push them to another preferred registry.
Following are the triliovault container image URLs for 4.2 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable depends on your base OpenStack distro (for example 'ubuntu', 'centos', or 'rocky'; see the table at the start of this document). The {{ triliovault_tag }} value is also mentioned at the start of this document.
Trilio supports source-based containers from the OpenStack Yoga release onwards.
Below are the Source-based OpenStack deployment images
Below are the Binary-based OpenStack deployment images
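The sketch below shows the mirroring workflow for a private registry. The image names used in the loop are hypothetical placeholders for illustration; substitute the exact image URLs listed above for your release, and note that the registry address is an assumption:

```bash
KOLLA_BASE_DISTRO=ubuntu                       # see the table at the start of this document
TRILIOVAULT_TAG=4.2.8-yoga                     # see the table at the start of this document
PRIVATE_REGISTRY=registry.example.local:5000   # hypothetical private registry

# Hypothetical image name pattern -- replace with the exact URLs listed above
for image in trilio-datamover trilio-datamover-api trilio-horizon-plugin; do
    src="docker.io/trilio/${KOLLA_BASE_DISTRO}-source-${image}:${TRILIOVAULT_TAG}"
    dst="${PRIVATE_REGISTRY}/trilio/${KOLLA_BASE_DISTRO}-source-${image}:${TRILIOVAULT_TAG}"
    docker pull "${src}"
    docker tag "${src}" "${dst}"
    docker push "${dst}"
done
```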
8] Pull T4O container images
Log in to Docker Hub so the Trilio-tagged containers can be pulled.
Please get the Dockerhub login credentials from Trilio Sales/Support team
Run the below command from the directory with the multinode file to pull the required images.
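A minimal sketch of both steps, assuming /root/multinode is the inventory in use:

```bash
# Log in to Docker Hub with the credentials provided by Trilio
docker login docker.io -u <dockerhub-login-username> -p <dockerhub-login-password>

# Pull the required images; run from the directory containing the multinode file
kolla-ansible -i multinode pull
```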
9] Run Kolla-Ansible upgrade command
Run the below command from the directory with the multinode file to start the upgrade process.
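For example:

```bash
# Run from the directory containing the multinode inventory file
kolla-ansible -i multinode upgrade
```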
10] Verify Trilio deployment
Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.
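A quick check, assuming the containers follow the triliovault_* naming pattern used by the Trilio kolla roles:

```bash
# On controller nodes: the datamover API container should be Up/healthy
docker ps | grep triliovault_datamover_api

# On compute nodes: the datamover container should be Up/healthy
docker ps | grep triliovault_datamover

# If the Trilio Horizon container was enabled, verify horizon as well
docker ps | grep horizon
```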
11] Advanced settings/configuration for Trilio services
11.1] Customize HAproxy configuration parameters for Trilio datamover api service
Following are the default haproxy configuration parameters set for the triliovault datamover api service.
These values work best for the triliovault dmapi service and it is not recommended to change them. However, in some exceptional cases, if any of the above parameter values must be changed, this can be done on the kolla-ansible server in the following file.
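The file path below is an assumption based on the default kolla-ansible roles location and the Trilio role name used in this document:

```bash
vi /usr/local/share/kolla-ansible/ansible/roles/triliovault-datamover-api/defaults/main.yml
```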
After editing, run the kolla-ansible deploy command again to push these changes to the OpenStack cloud.
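For example:

```bash
kolla-ansible -i multinode deploy
```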
After the kolla-ansible deploy, verify the changes by checking the following file, available on all controller/haproxy nodes.
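The path below assumes kolla's default per-service haproxy configuration layout; the exact file name may differ in your release:

```bash
less /etc/kolla/haproxy/services.d/triliovault-datamover-api.cfg
```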
12] Enable mount-bind for NFS
T4O 4.2 changes the calculation of the mount point hash value when using NFS backup targets.
Please follow this documentation to ensure that Backups taken from T4O 4.1 or older can be used with T4O 4.2.