Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.
Refer to the acceptable values below for the placeholders `triliovault_tag` and `kolla_base_distro`, used throughout this document, as per the OpenStack environment:
| OpenStack Version | triliovault_tag | kolla_base_distro |
| --- | --- | --- |
1.1] Select backup target type
Backup target storage is used to store the backup images taken by Trilio and the details needed for configuration.
The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step.
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3-compatible storage (e.g., Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (Valid for S3 other than Amazon S3)
- Bucket name
2] Clone Trilio Deployment Scripts
Clone the triliovault-cfg-scripts GitHub repository on the Kolla-ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.
git clone -b TVO/4.2.8 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/
# For CentOS and Ubuntu
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts
3.1] Add Trilio global variables to globals.yml
## For CentOS and Ubuntu
- Take backup of globals.yml
cp /etc/kolla/globals.yml /opt/
- Append Trilio global variables to globals.yml
cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
3.2] Add Trilio passwords to kolla passwords.yaml
Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The appended passwords are empty; set them manually in /etc/kolla/passwords.yml.
## For CentOS and Ubuntu
- Take backup of passwords.yml
cp /etc/kolla/passwords.yml /opt/
- Append Trilio passwords to passwords.yml
cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
- Edit '/etc/kolla/passwords.yml', go to the end of the file, and set the Trilio passwords.
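After the append, the end of /etc/kolla/passwords.yml contains empty Trilio entries that must be given values. The entry name below is a hypothetical placeholder for illustration only; use the names actually appended by triliovault_passwords.yml:

```yaml
# Hypothetical entry name — the real names come from triliovault_passwords.yml.
# Each appended Trilio entry is empty after the append and must be set manually.
triliovault_database_password: "s3cureP@ssExample"
```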
3.3] Append Trilio site.yml content to kolla ansible’s site.yml
# For CentOS and Ubuntu
- Take backup of site.yml
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
# If the OpenStack release is 'yoga', append the below Trilio code to site.yml
cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
# If the OpenStack release is other than 'yoga', append the below Trilio code to site.yml
cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml
3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.
For example:
If your inventory file path is '/root/multinode', then use the following command.
cat ansible/triliovault_inventory.txt >> /root/multinode
3.5] Configure multi-IP NFS
This step is required only when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
On kolla-ansible server node, change directory
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file; if hostnames are used there, use the same hostnames in the NFS map input file. In other words, the compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.
vi triliovault_nfs_map_input.yml
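The expected structure is documented in the comments of the shipped file itself. As a purely hypothetical illustration, a map of two compute hosts to different share IPs on the same NFS volume could look like:

```yaml
# Hypothetical hosts and shares — the names/IPs must match the entries in
# your kolla-ansible inventory file; consult the comments in the shipped
# triliovault_nfs_map_input.yml for the authoritative format.
compute1: 192.168.10.21:/var/nfsshare
compute2: 192.168.10.22:/var/nfsshare
```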
The triliovault_nfs_map_input.yml file is explained here.
Update PyYAML on the kolla-ansible server node only
pip3 install -U pyyaml
Expand the map file to create a one-to-one mapping of compute hosts to NFS shares.
python ./generate_nfs_map.py
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file:
Open the file 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault_globals.yml'
File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
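The append step above is a plain shell redirection. The following is a self-contained sketch of it using temporary stand-in files; in the real deployment the source is triliovault_nfs_map_output.yml and the destination is the triliovault_globals.yml path shown above:

```shell
# Self-contained demonstration of the append (>>) step with stand-in files.
map_out=$(mktemp)
globals=$(mktemp)
printf 'compute1: 192.168.10.21:/var/nfsshare\n' > "$map_out"  # hypothetical mapping line
printf 'multi_ip_nfs_enabled: "yes"\n' > "$globals"            # existing globals content
cat "$map_out" >> "$globals"                                   # append, as in the guide
grep -c '' "$globals"                                          # file now holds both lines
```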
4] Edit globals.yml to set Trilio parameters
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details.
You will find the Trilio-related parameters at the end of the globals.yml file.
Details like the Trilio build version, backup target type, backup target details, etc. need to be filled in.
Following is the list of parameters that the user needs to edit.
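As an illustrative sketch only (parameter names can differ between releases; the authoritative list is the Trilio block appended to the end of your globals.yml in step 3.1), an NFS-backed configuration could look like:

```yaml
# Illustrative names and values — confirm against the Trilio block at the
# end of /etc/kolla/globals.yml appended from triliovault_globals.yml.
triliovault_tag: "<triliovault_tag>"
triliovault_backup_target: "nfs"
triliovault_nfs_shares: "192.168.10.40:/var/nfsshare"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
```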
In the case of a registry other than Docker Hub, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry.
Following are the triliovault container image URLs for the 4.2 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.
Trilio supports source-based containers from the OpenStack Yoga release onwards.
Below are the source-based OpenStack deployment images:
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
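The exact defaults list varies by kolla-ansible release; the sketch below abbreviates and approximates the surrounding entries (treat them as illustrative), with the important part being the Trilio bind appended at the end:

```yaml
# Entries other than the final /var/trilio bind are illustrative and may
# differ in your kolla-ansible release — keep your existing entries as-is.
nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "/lib/modules:/lib/modules:ro"
  - "/run/:/run/:shared"
  - "/dev:/dev"
  - "/sys/fs/cgroup:/sys/fs/cgroup"
  - "kolla_logs:/var/log/kolla/"
  - "libvirtd:/var/lib/libvirt"
  - "nova_compute:/var/lib/nova/"
  - "nova_libvirt_qemu:/etc/libvirt/qemu"
  - "/var/trilio:/var/trilio:shared"
```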
In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the changes, the variable will look like the following:
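As with the libvirt list, the sketch below abbreviates the existing entries (illustrative only); the Trilio bind is appended at the end:

```yaml
# Existing entries are illustrative and may differ in your kolla-ansible
# release — only the final /var/trilio bind is the addition from this guide.
nova_compute_ironic_default_volumes:
  - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "kolla_logs:/var/log/kolla/"
  - "/var/trilio:/var/trilio:shared"
```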
All that is left is to run the deploy command using the existing inventory file. In this example, the inventory file is named 'multinode'.
This is just an example command; use your own cloud's deploy command.
kolla-ansible -i multinode deploy
Post deployment, for multipath-enabled environments, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, then restart the datamover container.
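Inside the datamover container, the setting belongs in the defaults section of /etc/multipath.conf. A minimal sketch, keeping any other existing settings in the section unchanged:

```
# /etc/multipath.conf (inside the datamover container)
defaults {
    # Raise the multipathd socket timeout to 60 seconds (value in ms).
    uxsock_timeout 60000
}
```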
8] Verify Trilio deployment
Verify on the nodes that are supposed to run the Trilio containers that those containers are available and healthy.
The example is shown for the 4.2.7 maintenance release from a Kolla Victoria CentOS binary-based setup.
[root@controller ~]# docker ps | grep datamover-api
cd9d0ccc19b6 trilio/kolla-centos-trilio-datamover-api:4.2.7-victoria "dumb-init --single-…" 21 hours ago Up 21 hours triliovault_datamover_api
[root@compute ~]# docker ps | grep datamover
fae5e4f2e04a trilio/kolla-centos-trilio-datamover:4.2.7-victoria "dumb-init --single-…" 21 hours ago Up 21 hours triliovault_datamover
[root@controller ~]# docker ps | grep horizon
f019ef071d3c trilio/centos-binary-trilio-horizon-plugin:4.2.7-victoria "dumb-init --single-…" 21 hours ago Up 21 hours (unhealthy) horizon
The example is shown for the 4.2.7 maintenance release from a Kolla Yoga Ubuntu source-based setup.
root@controller:~# docker ps | grep triliovault_datamover_api
5e9f87240a25 trilio/kolla-ubuntu-trilio-datamover-api:4.2.7-yoga "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover_api
root@controller:~# docker ps | grep horizon
4cd644f0486c trilio/kolla-ubuntu-trilio-horizon-plugin:4.2.7-yoga "dumb-init --single-…" 23 hours ago Up 23 hours (healthy) horizon
root@compute1:~# docker ps | grep triliovault_datamover
7b6001ef43b9 trilio/kolla-ubuntu-trilio-datamover:4.2.7-yoga "dumb-init --single-…" 23 hours ago Up 23 hours triliovault_datamover
root@compute1:~#
The example is shown for the 4.2.7 maintenance release from a Kolla Yoga Ubuntu binary-based setup.
root@controller:~# docker ps | grep triliovault_datamover_api
686b1aff0165 trilio/kolla-ubuntu-trilio-datamover-api:4.2.7-yoga "dumb-init --single-…" 3 hours ago Up 3 hours triliovault_datamover_api
root@controller:~# docker ps | grep horizon
d49ac6f52af4 trilio/ubuntu-binary-trilio-horizon-plugin:4.2.7-yoga "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) horizon
root@compute:~# docker ps | grep triliovault_datamover
c5a01651ddc7 trilio/kolla-ubuntu-trilio-datamover:4.2.7-yoga "dumb-init --single-…" 3 hours ago Up 3 hours triliovault_datamover
root@compute:~#
9] Troubleshooting Tips
9.1] Check Trilio containers and their startup logs
To see all TrilioVault containers running on a specific node, use the docker ps command.
docker ps -a | grep trilio
To check the startup logs use the docker logs <container name> command.
Note: This step needs to be done on the Trilio Appliance node, not on the OpenStack nodes.
Pre-requisite:
You should have already launched the Trilio appliance VM.
In the Kolla OpenStack distribution, the 'nova' user id in the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
After this step, you can proceed to 'Configuring Trilio' section.
## Download the shell script
$ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
## Assign executable permissions
$ chmod +x nova_userid.sh
## Execute the shell script to change 'nova' user and group id to '42436'
$ ./nova_userid.sh
## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
$ id nova
uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
11] Advanced configurations - [Optional]
11.1] Trilio uses Cinder's Ceph user for interacting with the Ceph storage backing Cinder. This user name is defined by the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.
Users can edit this parameter value if desired. The impact is that Cinder's Ceph user and the Trilio datamover's Ceph user will both be updated upon the next kolla-ansible deploy command.
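For reference, kolla-ansible's default for this parameter is the 'cinder' client user; overriding it in globals.yml might look like the following (the value shown is hypothetical):

```yaml
# /etc/kolla/globals.yml — override only if your Ceph deployment uses a
# different Cinder client user (value below is a hypothetical example).
ceph_cinder_user: "cinder-prod"
```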