Getting started with Trilio on Kolla-Ansible OpenStack

1] Plan for Deployment

Refer to the table below for the acceptable values of the placeholders triliovault_tag, trilio_branch, and kolla_base_distro used throughout this document, according to your OpenStack environment:

Trilio Release | triliovault_tag                          | trilio_branch | kolla_base_distro | OpenStack Version
---------------|------------------------------------------|---------------|-------------------|------------------------
5.2.2          | 5.2.2-2023.2 / 5.2.2-2023.1 / 5.2.2-zed  | 5.2.2         | ubuntu/rocky      | Bobcat / Antelope / Zed
5.2.1          | 5.2.1-2023.1 / 5.2.1-zed                 | 5.2.1         | ubuntu/rocky      | Antelope / Zed
5.2.0          | 5.2.0-2023.1 / 5.2.0-zed                 | 5.2.0         | ubuntu/rocky      | Antelope / Zed
5.1.0          | 5.1.0-zed                                | 5.1.0         | ubuntu/rocky      | Zed

Each triliovault_tag corresponds positionally to the OpenStack version in the same row (for example, 5.2.2-zed is the tag for Zed).

Trilio requires the OpenStack CLI to be installed and available for use on the Kolla-ansible control node.
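You can verify the CLI is present as shown below; if it is missing, one common way to install it is via pip (this assumes pip3 is available on the node).

openstack --version
# If not installed:
pip3 install python-openstackclient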

1.1] Select backup target type

Backup target storage is where Trilio stores the backup images it takes, along with the details needed for configuration.

The following backup target types are supported by Trilio. Select one of them and have it ready, with the listed details, before proceeding to the next step.

a) NFS

  • NFS share path

b) Amazon S3

  • S3 access key
  • Secret key
  • Region
  • Bucket name

c) Other S3-compatible storage (e.g. Ceph-based S3)

  • S3 access key
  • Secret key
  • Region
  • Endpoint URL (required for S3 other than Amazon S3)
  • Bucket name
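As a quick sanity check before proceeding, you can confirm the target is reachable; the commands below are illustrative and assume showmount (nfs-utils) or the AWS CLI is installed.

# NFS: confirm the share is exported by the NFS server (IP is an example)
showmount -e 192.168.122.101

# S3: confirm credentials and bucket access, e.g. with the AWS CLI
aws s3 ls s3://<bucket-name> --region <region>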

2] Clone Trilio Deployment Scripts

Clone the triliovault-cfg-scripts GitHub repository on the Kolla-ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/kolla-ansible/

# For Rocky and Ubuntu
cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
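For example, for the 5.2.2 release the branch is 5.2.2 (see the table in step 1):

## EXAMPLE for Trilio 5.2.2
git clone -b 5.2.2 https://github.com/trilioData/triliovault-cfg-scripts.git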

3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts

3.1] Add Trilio global variables to globals.yml

## For Rocky and Ubuntu
# Take a backup of globals.yml
cp /etc/kolla/globals.yml /opt/

# Append Trilio global variables to globals.yml for Zed
cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml

# Append Trilio global variables to globals.yml for Antelope
cat ansible/triliovault_globals_2023.1.yml >> /etc/kolla/globals.yml

3.2] Add Trilio passwords to Kolla passwords.yml

Generate the Trilio passwords and append triliovault_passwords.yml to /etc/kolla/passwords.yml.

cd ansible
./scripts/generate_password.sh

## For Rocky and Ubuntu
# Take a backup of passwords.yml
cp /etc/kolla/passwords.yml /opt/

# Append Trilio passwords to passwords.yml
cat triliovault_passwords.yml >> /etc/kolla/passwords.yml

3.3] Append Trilio site.yml content to kolla-ansible's site.yml

# For Rocky and Ubuntu
# Take a backup of site.yml
cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/

# Append Trilio site content to site.yml for Zed
cat ansible/triliovault_site_zed.yml >> /usr/local/share/kolla-ansible/ansible/site.yml

# Append Trilio site content to site.yml for Antelope
cat ansible/triliovault_site_2023.1.yml >> /usr/local/share/kolla-ansible/ansible/site.yml

3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.

For example, if your inventory file path is '/root/multinode', use the following command.

cat ansible/triliovault_inventory.txt >> /root/multinode

3.5] Configure multi-IP NFS

This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.

On the Kolla-ansible server node, change into the scripts directory:

cd triliovault-cfg-scripts/common/

Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host to NFS share/IP mapping.

Use the same identifiers as in your kolla-ansible inventory file: if the inventory uses IP addresses, use those IP addresses here; if it uses hostnames, use those hostnames. The compute host names or IP addresses in the NFS map input file must match the kolla-ansible inventory entries exactly.

vi triliovault_nfs_map_input.yml

The format of triliovault_nfs_map_input.yml is explained here.
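As a purely hypothetical illustration (follow the linked document for the actual schema), the file maps each compute host to the NFS endpoint(s) it should mount:

# HYPOTHETICAL sketch only; the real schema is described in the linked document.
# Host identifiers must match the kolla-ansible inventory entries.
multi_ip_nfs_map:
  compute1: 192.168.122.101:/nfs/tvault    # first endpoint of the NFS volume
  compute2: 192.168.122.102:/nfs/tvault    # second endpoint of the same volume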

Update PyYAML on the kolla-ansible server node only

pip3 install -U pyyaml

Expand the map file to create a one-to-one mapping of compute nodes to NFS shares.

python ./generate_nfs_map.py

The result is written to the file 'triliovault_nfs_map_output.yml'.

Validate the output map file: open 'triliovault_nfs_map_output.yml' in the current directory and verify that every compute node is covered with all the necessary NFS shares.

vi triliovault_nfs_map_output.yml

Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'

cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml

Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file, as shown below.
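A minimal sketch of the expected line in triliovault_globals.yml (exact value formatting follows your globals file conventions):

multi_ip_nfs_enabled: yes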

4] Edit globals.yml to set Trilio parameters

Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. The Trilio-related parameters are at the end of the globals.yml file. Details such as the Trilio version, backup target type, and backup target details need to be filled in.

The following is the list of parameters the user needs to edit.

Parameter | Defaults/choices | Comments
--------- | ---------------- | --------
cloud_admin_username | <cloud_admin_username> | Username of the cloud admin user. The user must have the 'creator' role assigned.
cloud_admin_password | <cloud_admin_password> | Password of the cloud admin user.
cloud_admin_projectname | <cloud_admin_projectname> | Project name of the cloud admin user.
cloud_admin_projectid | <cloud_admin_projectid> | Project ID of the cloud admin user.
cloud_admin_domainname | <cloud_admin_domainname> | Domain name of the cloud admin user.
cloud_admin_domainid | <cloud_admin_domainid> | Domain ID of the cloud admin user.
trustee_role | <trustee_role> | Comma-separated list of required trustee roles. For Zed, trustee_role should be creator. For Antelope, trustee_role should be creator,member.
os_endpoint_type | internal/public | Endpoint type that the Trilio APIs will use for communication.
triliovault_tag | <triliovault_tag> | The triliovault tag matching your Kolla OpenStack version; the exact tag is listed in step 1.
horizon_image_full | Uncomment | By default, the Trilio Horizon container is not deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the stock OpenStack Horizon container.
triliovault_docker_username | <dockerhub-login-username> | Default Docker user of Trilio (read permission only). Get the Docker Hub login credentials from the Trilio Sales/Support team.
triliovault_docker_password | <dockerhub-login-password> | Password for the default Docker user of Trilio. Get the Docker Hub login credentials from the Trilio Sales/Support team.
triliovault_docker_registry | Default: docker.io | Edit this value if a different container registry is to be used for the Trilio containers. Containers must first be pulled from docker.io and pushed to the chosen registry.
triliovault_backup_target | nfs / amazon_s3 / other_s3_compatible | nfs if the backup target is NFS; amazon_s3 if the backup target is Amazon S3; other_s3_compatible if the backup target is S3 but not Amazon S3.
multi_ip_nfs_enabled | yes / no (default: no) | Set to yes only when multiple IP/endpoint based NFS shares are used as the backup target for TrilioVault.
triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | NFS share path, for example '192.168.122.101:/nfs/tvault'.
triliovault_nfs_options | Default: 'nolock,soft,timeo=180,intr,lookupcache=none'; for Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10' | Sets the NFS mount options. Keep the default values unless a special requirement exists.
triliovault_s3_access_key | S3 access key | Valid for amazon_s3 and other_s3_compatible.
triliovault_s3_secret_key | S3 secret key | Valid for amazon_s3 and other_s3_compatible.
triliovault_s3_region_name | Default: us-east-1 | Valid for amazon_s3 and other_s3_compatible. If the S3 storage has no region parameter, keep the default.
triliovault_s3_bucket_name | S3 bucket name | Valid for amazon_s3 and other_s3_compatible.
triliovault_s3_endpoint_url | S3 endpoint URL | Valid for other_s3_compatible only.
triliovault_s3_ssl_enabled | True / False | Valid for other_s3_compatible only. Set to True for an SSL-enabled S3 endpoint URL.
triliovault_s3_ssl_cert_file_name | s3-cert.pem | Valid for other_s3_compatible only, with SSL enabled and certificates that are self-signed or issued by a private authority. In that case, copy the Ceph S3 CA chain file to the /etc/kolla/config/triliovault/ directory on the ansible server; create the directory if it does not already exist.
triliovault_copy_ceph_s3_ssl_cert | True / False | Valid for other_s3_compatible only. Set to True when SSL is enabled with certificates that are self-signed or issued by a private authority.
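As an illustration, a minimal Trilio block in /etc/kolla/globals.yml for an NFS backup target might look like the following (all values are examples only):

## Example Trilio settings for an NFS backup target; values are illustrative
triliovault_tag: "5.2.2-zed"
triliovault_docker_username: "<dockerhub-login-username>"
triliovault_docker_password: "<dockerhub-login-password>"
triliovault_backup_target: "nfs"
triliovault_nfs_shares: "192.168.122.101:/nfs/tvault"
triliovault_nfs_options: "nolock,soft,timeo=180,intr,lookupcache=none"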

If a registry other than Docker Hub is used, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry first.

The following are the Trilio container image URLs for the 5.x releases. Replace the kolla_base_distro and triliovault_tag variables with their values. The {{ kolla_base_distro }} variable is either 'rocky' or 'ubuntu', depending on your base OpenStack distro.

Below are the OpenStack deployment images:


1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
4. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-wlm:{{ triliovault_tag }}

## EXAMPLE from Kolla Ubuntu OpenStack
docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
docker.io/trilio/kolla-ubuntu-trilio-wlm:{{ triliovault_tag }}
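If a private registry is used, the mirroring flow for one image might look like the following sketch (registry.example.com is a placeholder for your registry):

## Illustrative mirroring of one Trilio image to a private registry
docker pull docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
docker tag docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }} registry.example.com/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
docker push registry.example.com/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}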

5] Enable Trilio Snapshot mount feature

To enable Trilio's snapshot mount feature, it is necessary to make the Trilio backup target available to the nova-compute and nova-libvirt containers.

Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of existing volumes.

For a default Kolla installation, the variable will look as follows afterwards:

nova_libvirt_default_volumes:
  - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "/lib/modules:/lib/modules:ro"
  - "/run/:/run/:shared"
  - "/dev:/dev"
  - "/sys/fs/cgroup:/sys/fs/cgroup"
  - "kolla_logs:/var/log/kolla/"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
  - "nova_libvirt_qemu:/etc/libvirt/qemu"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"

Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

After the change, the variable for a default Kolla installation will look as follows:

nova_compute_default_volumes:
  - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "/lib/modules:/lib/modules:ro"
  - "/run:/run:shared"
  - "/dev:/dev"
  - "kolla_logs:/var/log/kolla/"
  - "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
  - "libvirtd:/var/lib/libvirt"
  - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
  - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"

When using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.

After the change, the variable will look like the following:

nova_compute_ironic_default_volumes:
  - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
  - "kolla_logs:/var/log/kolla/"
  - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
  - "/var/trilio:/var/trilio:shared"

6] Prepare Horizon custom settings

To enable the workload manager quota feature on the Horizon dashboard, it is necessary to create custom settings for Horizon.

Create the following directory on the Kolla-ansible server node if it does not already exist.

mkdir -p /etc/kolla/config/horizon

Set the ownership to the user:group that you are using for deployment.

chown <DEPLOYMENT_USER>:<DEPLOYMENT_GROUP> /etc/kolla/config/horizon

For example, if you are using the 'root' system user for deployment, the chown command will look like below.

chown root:root /etc/kolla/config/horizon

Create the settings file in the above directory.

echo 'from openstack_dashboard.settings import HORIZON_CONFIG
HORIZON_CONFIG["customization_module"] = "trilio_dashboard.overrides"' >> /etc/kolla/config/horizon/custom_local_settings

7] Pull Trilio container images

Log in to Docker Hub so the Trilio tagged containers can be pulled.

Please get the Docker Hub login credentials from the Trilio Sales/Support team.

ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io" --become

Pull the Trilio container images from Docker Hub based on the existing inventory file. In this example, the inventory file is named 'multinode'.

kolla-ansible -i multinode pull --tags triliovault

8] Deploy Trilio

All that is left is to run the deploy command using the existing inventory file. In this example, the inventory file is named 'multinode'.

Note that this is just an example command; use your own cloud deploy command.

kolla-ansible -i multinode deploy

After deployment, in multipath-enabled environments, log in to each datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) to /etc/multipath.conf, and restart the datamover container, as sketched below.
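A minimal sketch of these steps, assuming the container name triliovault_datamover shown in step 9:

## On each compute node running a datamover container
docker exec -it triliovault_datamover bash
# Inside the container, add the following to the defaults section of /etc/multipath.conf:
#   defaults {
#       uxsock_timeout 60000
#   }
exit
docker restart triliovault_datamover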

9] Verify Trilio deployment

Verify on the nodes that are supposed to run the Trilio containers that those containers are available and healthy.

The example below is from a 5.2.2 release on a Kolla Rocky Zed setup.

[root@controller ~]# docker ps | grep datamover-api
9bf847ec4374   trilio/kolla-rocky-trilio-datamover-api:5.2.2-zed       "dumb-init --single-…"   23 hours ago   Up 23 hours                       triliovault_datamover_api

[root@controller ~]# ssh compute "docker ps | grep datamover"
2b590ab33dfa   trilio/kolla-rocky-trilio-datamover:5.2.2-zed          "dumb-init --single-…"   23 hours ago   Up 23 hours                     triliovault_datamover

[root@controller ~]# docker ps | grep horizon
1333f1ccdcf1   trilio/kolla-rocky-trilio-horizon-plugin:5.2.2-zed      "dumb-init --single-…"   23 hours ago   Up 23 hours (healthy)             horizon

[root@controller ~]# docker ps -a | grep wlm
fedc17b12eaf   trilio/kolla-rocky-trilio-wlm:5.2.2-zed                 "dumb-init --single-…"   23 hours ago   Exited (0) 23 hours ago               wlm_cloud_trust
60bc1f0d0758   trilio/kolla-rocky-trilio-wlm:5.2.2-zed                 "dumb-init --single-…"   23 hours ago   Up 23 hours                           triliovault_wlm_cron
499b8ca89bd6   trilio/kolla-rocky-trilio-wlm:5.2.2-zed                 "dumb-init --single-…"   23 hours ago   Up 23 hours                           triliovault_wlm_scheduler
7e3749026e8e   trilio/kolla-rocky-trilio-wlm:5.2.2-zed                 "dumb-init --single-…"   23 hours ago   Up 23 hours                           triliovault_wlm_workloads
932a41bf7024   trilio/kolla-rocky-trilio-wlm:5.2.2-zed                 "dumb-init --single-…"   23 hours ago   Up 23 hours                           triliovault_wlm_api

10] Troubleshooting Tips

10.1] Check Trilio containers and their startup logs

To see all TrilioVault containers running on a specific node, use the docker ps command.

docker ps -a | grep trilio

To check the startup logs use the docker logs <container name> command.

docker logs triliovault_datamover_api
docker logs triliovault_datamover
docker logs triliovault_wlm_api
docker logs triliovault_wlm_scheduler
docker logs triliovault_wlm_cron
docker logs triliovault_wlm_workloads
docker logs wlm_cloud_trust

10.2] Trilio Horizon tabs are not visible in Openstack

Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.

Verify that the Trilio horizon container is installed and in a running state.

docker ps | grep horizon

10.3] Trilio Service logs

  • Trilio workloadmgr api service logs on workloadmgr api node

/var/log/kolla/triliovault-wlm-api/triliovault-wlm-api.log
  • Trilio workloadmgr cron service logs on workloadmgr cron node

/var/log/kolla/triliovault-wlm-cron/triliovault-wlm-cron.log
  • Trilio workloadmgr scheduler service logs on workloadmgr scheduler node

/var/log/kolla/triliovault-wlm-scheduler/triliovault-wlm-scheduler.log
  • Trilio workloadmgr workloads service logs on workloadmgr workloads node

/var/log/kolla/triliovault-wlm-workloads/triliovault-wlm-workloads.log
  • Trilio datamover api service logs on datamover api node

/var/log/kolla/triliovault-datamover-api/triliovault-datamover-api.log
  • Trilio datamover service logs on datamover node

/var/log/kolla/triliovault-datamover/triliovault-datamover.log
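To scan all Trilio service logs on a node for errors in one pass:

grep -i error /var/log/kolla/triliovault-*/*.log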

11] Advanced configurations [Optional]

11.1] Trilio uses Cinder's Ceph user for interacting with the Ceph-backed Cinder storage. This user name is defined by the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.

Details about multiple Ceph configurations can be found here.
