Installing on Kolla OpenStack
This page lists all steps required to deploy Trilio components on a Kolla-Ansible deployed OpenStack cloud.
Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.
Refer to the acceptable values below for the placeholders triliovault_tag and kolla_base_distro used in this document, as per the OpenStack environment:
OpenStack Version | triliovault_tag | kolla_base_distro |
---|---|---|
Victoria | 4.3.2-victoria | ubuntu, centos |
Wallaby | 4.3.2-wallaby | ubuntu, centos |
Yoga | 4.3.2-yoga | ubuntu, centos |
Zed | 4.3.2-zed | ubuntu, rocky |
Backup target storage is used to store the backup images taken by Trilio. The details needed for its configuration are listed below.
The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step.
a) NFS
- NFS share path

b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name

c) Other S3-compatible storage (e.g., Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
Clone the triliovault-cfg-scripts GitHub repository on the Kolla-Ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-Ansible roles directory, as sketched below.
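A minimal sketch of this step, assuming the repository is cloned under /root and a pip-based Kolla-Ansible install; the branch name and the role path inside the repository are assumptions, so adjust them to your release and environment:

```bash
# Clone the Trilio configuration scripts (branch name assumed; use the one matching your release)
cd /root
git clone -b 4.3 https://github.com/trilioData/triliovault-cfg-scripts.git

# Copy the Trilio Ansible role into the Kolla-Ansible roles directory (source and destination paths assumed)
cp -R triliovault-cfg-scripts/kolla-ansible/ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
```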
Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The appended passwords are empty; set them manually in /etc/kolla/passwords.yml.
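For example, assuming the clone location used above (the file path inside the repository is an assumption):

```bash
# Append the empty Trilio password entries, then set their values manually
cat /root/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
vi /etc/kolla/passwords.yml
```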
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
On the kolla-ansible server node, change directory.
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host to NFS share/IP map.
If IP addresses are used in the kolla-ansible inventory file, use the same IP addresses in the 'triliovault_nfs_map_input.yml' file; if hostnames are used there, use the same hostnames in the NFS map input file. In short, the compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.
vi triliovault_nfs_map_input.yml
The triliovault_nfs_map_input.yml file is explained here.
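An illustrative sketch of such a mapping is shown below; the key names and share paths are assumptions, so follow the format documented inside the file itself:

```yaml
# Hypothetical example: map each compute host (as named in the kolla-ansible inventory)
# to the NFS share/IP it should mount.
multi_ip_nfs_map:
  compute1: 192.168.10.21:/var/nfsshare
  compute2: 192.168.10.22:/var/nfsshare
```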
Update PyYAML on the kolla-ansible server node only.
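For example (assuming pip3 is the Python package manager in use on that node):

```bash
# Upgrade PyYAML on the kolla-ansible server node
pip3 install -U PyYAML
```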
Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.
The result will be written to the file 'triliovault_nfs_map_output.yml'.
Validate the output map file.
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
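A sketch of this append, assuming the file paths used above:

```bash
# Append the generated compute-to-NFS map to the Trilio globals file
cat triliovault_nfs_map_output.yml >> /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
```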
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details.
You will find the Trilio-related parameters at the end of the globals.yml file.
Details like the Trilio build version, backup target type, backup target details, etc. need to be filled out.
The following is the list of parameters that the user needs to edit; they are described in the table at the end of this page.
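An illustrative excerpt for an NFS backup target is shown below; the values are examples only, and the full parameter list is given in the table at the end of this page:

```yaml
# Trilio section at the end of /etc/kolla/globals.yml (example values)
triliovault_tag: "4.3.2-yoga"
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
triliovault_docker_registry: "docker.io"
triliovault_backup_target: "nfs"
multi_ip_nfs_enabled: "no"
triliovault_nfs_shares: "192.168.122.101:/nfs/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"
```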
If a registry other than Docker Hub is used, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry.
The following are the triliovault container image URLs for the 4.3 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values.
The {{ kolla_base_distro }} variable can be 'centos', 'ubuntu', or 'rocky' (for Zed), depending on your base OpenStack distro.
Trilio supports source-based containers from the OpenStack Yoga release onwards.
Below are the source-based OpenStack deployment images.
Below are the binary-based OpenStack deployment images.
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
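The sketch below is illustrative only, since the default entries vary between Kolla-Ansible releases; the relevant part is the Trilio bind mount appended at the end:

```yaml
nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "/lib/modules:/lib/modules:ro"
  - "/run/:/run/:shared"
  - "/dev:/dev"
  - "/sys/fs/cgroup:/sys/fs/cgroup"
  - "kolla_logs:/var/log/kolla/"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  - "/var/trilio:/var/trilio:shared"
```

The same pattern applies to the nova_compute_default_volumes and nova_compute_ironic_default_volumes variables described in the following steps.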
Next, find the variable nova_compute_default_volumes
in the same file and append the mount bind /var/trilio:/var/trilio:shared
to the list.
After the change, the variable will look as follows for a default Kolla installation:
In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file.
Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look like the following:
Activate the login to Docker Hub for the Trilio-tagged containers.
Please get the Docker Hub login credentials from the Trilio Sales/Support team.
Pull the Trilio container images from Docker Hub based on the existing inventory file. In this example, the inventory file is named 'multinode'.
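For example:

```bash
# Pull the container images referenced by the configuration, using your own inventory file
kolla-ansible -i multinode pull
```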
All that is left is to run the deploy command using the existing inventory file. In this example, the inventory file is named 'multinode'.
This is just an example command; you need to use your own cloud deploy command.
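A typical invocation looks like this (replace 'multinode' and add any options your deployment normally uses):

```bash
kolla-ansible -i multinode deploy
```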
Post deployment, for a multipath-enabled environment, log into the respective datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, and restart the datamover container.
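A sketch of this post-deployment step, assuming the datamover container is named 'triliovault_datamover' (check docker ps for the actual name):

```bash
# Add 'uxsock_timeout 60000' inside the defaults section of /etc/multipath.conf
docker exec -it triliovault_datamover vi /etc/multipath.conf
# Restart the container so the new multipath setting takes effect
docker restart triliovault_datamover
```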
Verify on the controller and compute nodes that the Trilio containers are in the UP state.
The following is sample output of the commands from controller and compute nodes. triliovault_tag will have a value corresponding to the OpenStack release where the deployment is being done.
To see all TrilioVault containers running on a specific node, use the docker ps command.
To check the startup logs, use the docker logs <container name> command.
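For example (the grep pattern and container name are assumptions about the image naming in your deployment):

```bash
# List the TrilioVault containers running on this node
docker ps | grep trilio

# Check the startup logs of a specific container
docker logs triliovault_datamover_api
```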
Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.
Verify that the Trilio horizon container is installed and in a running state.
Trilio datamover api service logs on datamover api node
Trilio datamover service logs on datamover node
Note: This step needs to be done on the Trilio Appliance node, not on an OpenStack node.
Pre-requisite: You should have already launched the Trilio Appliance VM.
In the Kolla OpenStack distribution, the 'nova' user id in the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
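For example, the check can be done with the id command; the output shown is illustrative:

```bash
id nova
# uid=42436(nova) gid=42436(nova) groups=42436(nova)
```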
After this step, you can proceed to the 'Configuring Trilio' section.
11.1] Trilio uses Cinder's Ceph user for interacting with the Ceph Cinder storage. This user name is defined using the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.
If the user wants to edit this parameter value, they can do so. The impact will be that Cinder's Ceph user and the triliovault datamover's Ceph user are updated upon the next kolla-ansible deploy command.
Parameter | Defaults/choices | Comments |
---|---|---|
triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla OpenStack version. The exact tag is mentioned in the first step. |
horizon_image_full | Uncomment | By default, the Trilio Horizon container does not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the OpenStack Horizon container. |
triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team. |
triliovault_docker_password | <dockerhub-login-password> | Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team. |
triliovault_docker_registry | Default value: docker.io | Edit this value if a different container registry is to be used for the Trilio containers. Containers need to be pulled from docker.io and pushed to the chosen registry first. |
triliovault_backup_target | nfs, amazon_s3, other_s3_compatible | nfs if the backup target is NFS; amazon_s3 if the backup target is Amazon S3; other_s3_compatible if the backup target type is S3 but not Amazon S3. |
multi_ip_nfs_enabled | yes, no (default: no) | This parameter is valid only if you want to use NFS share(s) with multiple IPs/endpoints as the backup target for TrilioVault. |
triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | NFS share path example: '192.168.122.101:/nfs/tvault' |
triliovault_nfs_options | 'nolock,soft,timeo=180,intr,lookupcache=none'; for Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10' | These parameters set the NFS mount options. Keep the default values unless a special requirement exists. |
triliovault_s3_access_key | S3 Access Key | Valid for amazon_s3 and other_s3_compatible. |
triliovault_s3_secret_key | S3 Secret Key | Valid for amazon_s3 and other_s3_compatible. |
triliovault_s3_region_name | Default value: us-east-1 | S3 region name. Valid for amazon_s3 and other_s3_compatible. If the S3 storage doesn't have a region parameter, keep the default. |
triliovault_s3_bucket_name | S3 Bucket name | Valid for amazon_s3 and other_s3_compatible. |
triliovault_s3_endpoint_url | S3 Endpoint URL | Valid for other_s3_compatible only. |
triliovault_s3_ssl_enabled | True, False | Valid for other_s3_compatible only. Set to True for an SSL-enabled S3 endpoint URL. |
triliovault_s3_ssl_cert_file_name | s3-cert.pem | Valid for other_s3_compatible only, with SSL enabled and certificates that are self-signed or issued by a private authority. In this case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the Ansible server. Create this directory if it does not already exist. |
triliovault_copy_ceph_s3_ssl_cert | True, False | Valid for other_s3_compatible only. Set to True when SSL is enabled with self-signed certificates or certificates issued by a private authority. |