Refer to the acceptable values below for the placeholders triliovault_tag, trilio_branch, and kolla_base_distro used throughout this document, according to your OpenStack environment:
| Trilio Release | triliovault_tag | trilio_branch | kolla_base_distro | OpenStack Version |
| --- | --- | --- | --- | --- |
Trilio requires the OpenStack CLI to be installed and available for use on the Kolla-Ansible control node.
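If the CLI is not already present, one way to install and verify it is shown below (a minimal sketch; packages from your distribution work equally well):
pip3 install python-openstackclient
openstack --version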
1.1] Select backup target type
The backup target storage is used to store the backup images taken by Trilio, along with the details needed for configuration.
The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step (example values are sketched after this list):
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3-compatible storage (e.g., Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (Valid for S3 other than Amazon S3)
- Bucket name
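For illustration only, hypothetical example values for these details (placeholders, not defaults):
# NFS share path, in <server>:<export> form
192.168.1.34:/mnt/tvault
# Endpoint URL for non-Amazon, S3-compatible storage (e.g. Ceph RGW)
http://ceph-rgw.example.com:8080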
2] Clone Trilio Deployment Scripts
Clone the triliovault-cfg-scripts GitHub repository on the Kolla-Ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-Ansible roles directory.
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/
mkdir -p /usr/local/share/kolla-ansible/ansible/roles/triliovault
# For Rocky and Ubuntu (Zed and Antelope)
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
# For Rocky and Ubuntu (Bobcat and Caracal)
cp -R ansible/roles/triliovault-bobcat/* /usr/local/share/kolla-ansible/ansible/roles/triliovault/
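To confirm that the role is in place, you can list its contents (optional sanity check):
ls /usr/local/share/kolla-ansible/ansible/roles/triliovault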
3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts
3.1] Add Trilio global variables to globals.yml
## For Rocky and Ubuntu
- Take backup of globals.yml
cp /etc/kolla/globals.yml /opt/
- Append Trilio global variables to globals.yml for Zed
cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Antelope
cat ansible/triliovault_globals_2023.1.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Bobcat
cat ansible/triliovault_globals_2023.2.yml >> /etc/kolla/globals.yml
- Append Trilio global variables to globals.yml for Caracal
cat ansible/triliovault_globals_2024.1.yml >> /etc/kolla/globals.yml
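To confirm the variables were appended, you can inspect the end of the file (optional sanity check):
tail -n 20 /etc/kolla/globals.yml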
3.2] Add Trilio passwords to kolla passwords.yml
Generate triliovault passwords and append triliovault_passwords.yml to /etc/kolla/passwords.yml.
cd ansible
./scripts/generate_password.sh
## For Rocky and Ubuntu
- Take backup of passwords.yml
cp /etc/kolla/passwords.yml /opt/
- Append Trilio passwords to passwords.yml
cat triliovault_passwords.yml >> /etc/kolla/passwords.yml
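To confirm the passwords were appended (optional sanity check; this assumes the generated keys carry a 'triliovault' prefix, as the file name suggests):
grep -c triliovault /etc/kolla/passwords.yml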
3.3] Append Trilio site.yml content to Kolla-Ansible's site.yml
# For Rocky and Ubuntu
- Take backup of site.yml
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
- Append Trilio site variables to site.yml for Zed
cat ansible/triliovault_site_zed.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Antelope
cat ansible/triliovault_site_2023.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Bobcat
cat ansible/triliovault_site_2023.2.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
- Append Trilio site variables to site.yml for Caracal
cat ansible/triliovault_site_2024.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.
For example:
If your inventory file path is '/root/multinode', then use the following command.
cat ansible/triliovault_inventory.txt >> /root/multinode
3.5] Configure multi-IP NFS
This step is only required when the multi-IP NFS feature is used, i.e., when different datamovers connect to the same NFS volume through multiple IP addresses.
On the kolla-ansible server node, change to the following directory:
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
If IP addresses are used in the kolla-ansible inventory file, use the same IP addresses in 'triliovault_nfs_map_input.yml'; if hostnames are used there, use the same hostnames in the NFS map input file. In other words, the compute hostnames or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.
vi triliovault_nfs_map_input.yml
The triliovault_nfs_map_input.yml file is explained here.
Update PyYAML on the kolla-ansible server node only
pip3 install -U pyyaml
Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.
python ./generate_nfs_map.py
The result will be written to the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' (available in the current directory) and verify that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
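Optionally, you can also confirm the generated file parses as valid YAML before appending it anywhere (uses the PyYAML module updated above):
python3 -c "import yaml; yaml.safe_load(open('triliovault_nfs_map_output.yml')); print('valid YAML')"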
Append this output map file to 'triliovault_globals.yml'.
File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
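Assuming the repository was cloned under /home/stack as in the path above, the append can be done along these lines (adjust the path to your actual clone location):
cat triliovault_nfs_map_output.yml >> /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml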
4] Edit globals.yml to set Trilio parameters
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details.
You will find the Trilio-related parameters at the end of the globals.yml file.
Details like the Trilio version, backup target type, backup target details, etc. need to be filled out.
Following is the list of parameters that the user needs to edit.
If a registry other than Docker Hub is used, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry.
Following are the Trilio container image URLs for the 5.x release.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be either 'rocky' or 'ubuntu', depending on your base OpenStack distribution.
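For example, the image names seen in the verification output later in this document follow the pattern trilio/kolla-{{ kolla_base_distro }}-trilio-<service>:{{ triliovault_tag }}; with kolla_base_distro set to 'rocky' and triliovault_tag set to '5.2.2-zed', the datamover image resolves to:
docker.io/trilio/kolla-rocky-trilio-datamover:5.2.2-zed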
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find nova_libvirt_default_volumes variables. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
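A minimal sketch of the intended result is shown below; the pre-existing entries in your defaults/main.yml are abbreviated here and should be left unchanged, with only the Trilio bind mount appended:
nova_libvirt_default_volumes:
  # ... existing entries from your defaults/main.yml stay as they are ...
  - "/var/trilio:/var/trilio:shared"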
In case Ironic compute nodes are used, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the changes, the variable will look like the following:
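Again as a sketch only, with the existing entries abbreviated:
nova_compute_ironic_default_volumes:
  # ... existing entries stay as they are ...
  - "/var/trilio:/var/trilio:shared"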
All that is left is to run the deploy command using the existing inventory file. In this example, the inventory file is named 'multinode'.
This is just an example command. Use your own cloud's deploy command.
kolla-ansible -i multinode deploy
Post deployment, for a multipath-enabled environment, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e., 60 seconds) in /etc/multipath.conf, then restart the datamover container.
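As an illustrative sketch (the container name is taken from the verification output below; adjust if yours differs):
# inside the datamover container, /etc/multipath.conf, defaults section
defaults {
    uxsock_timeout 60000
}
# then, back on the compute node, restart the container
docker restart triliovault_datamover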
9] Verify Trilio deployment
Verify on the nodes that are supposed to run the Trilio containers that those containers are available and healthy.
The following example is from a 5.2.2 release on a Kolla Rocky Zed setup.
[root@controller ~]# docker ps | grep datamover-api
9bf847ec4374 trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover_api
[root@controller ~]# ssh compute "docker ps | grep datamover"
2b590ab33dfa trilio/kolla-rocky-trilio-datamover:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover
[root@controller ~]# docker ps | grep horizon
1333f1ccdcf1 trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours (healthy) horizon
[root@controller ~]# docker ps -a | grep wlm
fedc17b12eaf trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Exited (0) 23 hours ago wlm_cloud_trust
60bc1f0d0758 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_cron
499b8ca89bd6 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_scheduler
7e3749026e8e trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_workloads
932a41bf7024 trilio/kolla-rocky-trilio-wlm:5.2.2-zed "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_wlm_api
10] Troubleshooting Tips
10.1 ] Check Trilio containers and their startup logs
To see all TrilioVault containers running on a specific node, use the docker ps command.
docker ps -a | grep trilio
To check the startup logs, use the docker logs <container name> command.
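For example, to inspect the datamover API container shown in the verification output above:
docker logs triliovault_datamover_api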
11.1] Trilio uses Cinder's Ceph user for interacting with Ceph-backed Cinder storage. This user name is defined using the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.
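A minimal sketch of the corresponding globals.yml entry, assuming the Kolla default user name 'cinder' (adjust to your environment):
ceph_cinder_user: "cinder"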
Details about multiple ceph configuration can be found here.