Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:
Install the Trilio dm-api service on the control plane
Install the Trilio datamover service on the compute plane
Install the Trilio Horizon plugin into the Horizon service
How these steps look in detail depends on the OpenStack distribution Trilio is installed in. Each supported OpenStack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native installation experience from beginning to end.
Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.
The 'nova' user ID and group ID on the Trilio nodes need to be set the same as on the compute node(s). Trilio by default uses the nova user ID (UID) and group ID (GID) 162:162. Ansible OpenStack does not always use 'nova' user ID 162 on the compute nodes. Do the following steps on all Trilio nodes in case the nova UID and GID are not in sync with the compute node(s); a sketch of the procedure follows the list.
Download the shell script that will change the user-id
Assign executable permissions
Edit script to use the correct nova id
Execute the script
Verify that the 'nova' user and group IDs have changed to the desired values
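For orientation, a minimal sketch of what such a script typically does, assuming the target UID/GID is 162:162 (this is an illustrative outline, not the shipped script):

```bash
#!/bin/bash
# Illustrative sketch: align the 'nova' UID/GID on a Trilio node with the compute nodes.
NOVA_UID=162   # UID used on the compute nodes (check with: id -u nova)
NOVA_GID=162   # GID used on the compute nodes (check with: id -g nova)

OLD_UID=$(id -u nova)
OLD_GID=$(id -g nova)

# Change the group first, then the user
groupmod -g "$NOVA_GID" nova
usermod -u "$NOVA_UID" -g "$NOVA_GID" nova

# Re-own files that still carry the old IDs
find / -xdev -uid "$OLD_UID" -exec chown -h "$NOVA_UID" {} \;
find / -xdev -gid "$OLD_GID" -exec chgrp -h "$NOVA_GID" {} \;

# Verify
id nova
```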
Clone the triliovault-cfg-scripts repository from GitHub on the Ansible host.
Available values for <branch>: hotfix-4-TVO/4.2
Copy Ansible roles and vars to required places.
In case of installing on OSA Victoria or OSA Wallaby, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria or wallaby respectively.
Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.
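As a sketch, the appended include typically looks like the following; the playbook name os-tvault-install.yml is an assumption based on the triliovault-cfg-scripts repository layout, so verify it against your copy:

```bash
cat >> /opt/openstack-ansible/playbooks/setup-openstack.yml <<'EOF'
- import_playbook: os-tvault-install.yml
EOF
```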
Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml
Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
Add the following content to the created file.
Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for the Trilio components.
Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml
Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.
Note: From 4.2HF4 onwards, the default prefilled value, i.e. 4.2.64, will be used for TVAULT_PACKAGE_VERSION.
If there is more than one nova virtual environment and the user wants to install the tvault-contego service in a specific nova virtual environment on the compute node(s), uncomment the variable nova_virtual_env and set its value.
If more than one horizon plugin is configured on OpenStack, the user can specify under which horizon virtual environment to install the Trilio Horizon plugin by setting the horizon_virtual_env parameter. The default value of horizon_virtual_env is '/openstack/venvs/horizon*'.
NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs
A new parameter was added to the /etc/openstack_deploy/user_tvault_vars.yml file for multi-IP NFS.
Change the Directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Please take a look at this page to learn about the format of the file.
Update pyyaml on the Openstack Ansible server node only
Execute the generate_nfs_map.py file to create a one-to-one mapping of computes and NFS shares. The result will be in the file 'triliovault_nfs_map_output.yml' in the current directory.
Validate the output map file: open 'triliovault_nfs_map_output.yml' in the current directory and validate that all compute nodes are mapped with all necessary NFS shares.
Append the content of the triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml
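Put together, the whole mapping workflow looks roughly like this; the location of generate_nfs_map.py inside the cloned repository is an assumption:

```bash
cd triliovault-cfg-scripts/common/            # directory is an assumption
vi triliovault_nfs_map_input.yml              # provide the compute host / NFS share map
pip3 install --upgrade pyyaml                 # on the OpenStack Ansible server node only
python3 ./generate_nfs_map.py                 # writes triliovault_nfs_map_output.yml
vi triliovault_nfs_map_output.yml             # validate the generated mapping
cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml
```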
Run the following commands to deploy only the Trilio components in case of an already deployed Ansible OpenStack.
If Ansible OpenStack is not already deployed, run the native OpenStack deployment commands to deploy OpenStack and the Trilio components together. An example of the native deployment command is given below:
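A sketch of both variants, assuming the Trilio playbook was registered as os-tvault-install.yml as shown earlier:

```bash
cd /opt/openstack-ansible/playbooks

# Already deployed Ansible OpenStack: deploy only the Trilio components
openstack-ansible os-tvault-install.yml

# Fresh deployment: OpenStack and Trilio components together
openstack-ansible setup-openstack.yml
```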
Verify that the triliovault datamover api service deployed and started well. Run the below commands on the controller node(s).
Verify that the triliovault datamover service deployed and started well on the compute node(s). Run the following command on the compute node(s).
Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.
Run the following command on Horizon container.
Verify the haproxy settings on the controller node using the below commands.
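A hedged example of such checks; the container and service names are assumptions and may differ per release:

```bash
# Controller node(s): the dmapi runs in its own LXC container
lxc-ls -f | grep -i dmapi

# Compute node(s): the datamover service
systemctl status tvault-contego

# Controller node(s): haproxy entries for the dmapi
grep -A 10 -i dmapi /etc/haproxy/haproxy.cfg
```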
After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.
To update the environment follow this procedure.
This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.
Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.
Refer to the below-mentioned acceptable values for the placeholders triliovault_tag and kolla_base_distro in this document, as per the OpenStack environment:
| Openstack Version | triliovault_tag | kolla_base_distro |
| --- | --- | --- |
| Victoria | 4.2.8-victoria | ubuntu, centos |
| Wallaby | 4.2.8-wallaby | ubuntu, centos |
| Yoga | 4.2.8-yoga | ubuntu, centos |
| Zed | 4.2.8-zed | ubuntu, rocky |
Backup target storage is used to store backup images taken by Trilio and details needed for configuration:
Following backup target types are supported by Trilio. Select one of them and get it ready before proceeding to the next step.
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
Clone the triliovault-cfg-scripts GitHub repository on the Kolla-ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-ansible roles directory.
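A sketch of these two steps; the repository URL and the Kolla-ansible roles path are assumptions based on the repository name and a default Kolla-ansible installation:

```bash
cd /root
git clone -b hotfix-4-TVO/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cp -R triliovault-cfg-scripts/kolla-ansible/ansible/roles/triliovault \
      /usr/local/share/kolla-ansible/ansible/roles/
```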
Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The Trilio passwords are empty; set them manually in /etc/kolla/passwords.yml.
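For example, assuming the repository was cloned to /root:

```bash
cat /root/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
vi /etc/kolla/passwords.yml   # set the empty Trilio passwords manually
```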
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs
On kolla-ansible server node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
If IP addresses are used in the kolla-ansible inventory file, use the same IP addresses in the 'triliovault_nfs_map_input.yml' file; if hostnames are used there, use the same hostnames in the NFS map input file. The compute hostnames or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.
vi triliovault_nfs_map_input.yml
Update PyYAML on the kolla-ansible server node only
Expand the map file to create a one-to-one mapping of computes and NFS shares. The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) in the current directory and validate that all compute nodes are covered with all necessary NFS shares.
Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'
Ensure to set multi_ip_nfs_enabled in the triliovault_globals.yml file to yes
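The append itself is a single redirect, using the file path given above:

```bash
cat triliovault_nfs_map_output.yml >> /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml
```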
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. You will find the Trilio-related parameters at the end of the globals.yml file. Details like the Trilio build version, backup target type, backup target details, etc. need to be filled out.
The following is the list of parameters that the user needs to edit:

| Parameter | Defaults/choices | Comments |
| --- | --- | --- |
| triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla OpenStack version. The exact tag is mentioned in the table above. |
| horizon_image_full | Uncomment | By default, the Trilio Horizon container will not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the OpenStack Horizon container. |
| triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team. |
| triliovault_docker_password | <dockerhub-login-password> | Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team. |
| triliovault_docker_registry | Default value: docker.io | Edit this value if a different container registry for the Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to the chosen registry first. |
| triliovault_backup_target | nfs / amazon_s3 / ceph_s3 | Backup target type for the Trilio backups. |
| multi_ip_nfs_enabled | yes / no, default: no | This parameter is valid only if multiple IP/endpoint-based NFS shares are used as the backup target for TrilioVault. |
| triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | NFS share path, example: '192.168.122.101:/nfs/tvault' |
| triliovault_nfs_options | default values | These parameters set the NFS mount options. Keep the default values, unless a special requirement exists. |
| triliovault_s3_access_key | S3 Access Key | Valid for S3 backup targets. |
| triliovault_s3_secret_key | S3 Secret Key | Valid for S3 backup targets. |
| triliovault_s3_region_name | S3 Region name | Valid for S3 backup targets. If the S3 storage doesn't have a region parameter, keep the default. |
| triliovault_s3_bucket_name | S3 Bucket name | Valid for S3 backup targets. |
| triliovault_s3_endpoint_url | S3 Endpoint URL | Valid for S3 other than Amazon S3. |
| triliovault_s3_ssl_enabled | true / false | Valid for S3 backup targets. Set true for an SSL-enabled S3 endpoint URL. |
| triliovault_s3_ssl_cert_file_name | s3-cert.pem | Valid for an SSL-enabled S3 endpoint with certificates that are self-signed or issued by a private authority. In this case, copy the certificate into the appropriate directory on the Ansible server; create the directory if it does not exist already. |
| triliovault_copy_ceph_s3_ssl_cert | true / false | Valid for Ceph-based S3. Set to true when SSL is enabled with self-signed certificates or certificates issued by a private authority. |
In the case of a registry different from Docker Hub, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry.
The following are the triliovault container image URLs for the 4.2 releases.
Replace the kolla_base_distro and triliovault_tag variables with their values. The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.
Trilio supports the source-based containers from the OpenStack Yoga release onwards.
Below are the Source-based OpenStack deployment images
Below are the Binary-based OpenStack deployment images
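As a sketch, the image URLs follow this pattern; the exact repository names under docker.io/trilio are assumptions, so take the authoritative URLs from the release artifacts:

```
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-datamover-api:{{ triliovault_tag }}
docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
```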
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes. For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list. After the change, the variable will look as follows for a default Kolla installation:
In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list. After the change, the variable will look like the following:
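Schematically, each of the three variables ends up with the Trilio bind appended, e.g. (existing entries abbreviated):

```yaml
nova_compute_default_volumes:
  # ... existing default Kolla volume entries ...
  - "/var/trilio:/var/trilio:shared"
```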
Activate the login to Dockerhub for the Trilio tagged containers. Please get the Dockerhub login credentials from the Trilio Sales/Support team.
Pull the Trilio container images from Dockerhub based on the existing inventory file. In the example, the inventory file is named 'multinode'.
All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.
This is just an example command. You need to use your cloud deploy command.
Post deployment, for a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, then restart the datamover container.
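A sketch of the workaround; the container name triliovault_datamover is an assumption, check docker ps for the actual name:

```bash
docker exec -it triliovault_datamover bash
# inside the container, add to the defaults section of /etc/multipath.conf:
#     uxsock_timeout 60000
exit
docker restart triliovault_datamover
```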
Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.
The example is shown for the 4.2.7 maintenance release on a Kolla Victoria CentOS binary-based setup.
The example is shown for the 4.2.7 maintenance release on a Kolla Yoga Ubuntu source-based setup.
The example is shown for the 4.2.7 maintenance release on a Kolla Yoga Ubuntu binary-based setup.
To see all TrilioVault containers running on a specific node, use the docker ps command.
To check the startup logs, use the docker logs <container name> command.
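For example (container names are assumptions):

```bash
docker ps | grep -i trilio               # list Trilio containers on this node
docker logs triliovault_datamover_api    # inspect the startup logs
```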
Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio Appliance is available.
Verify that the Trilio horizon container is installed and in a running state.
Trilio datamover api service logs on datamover api node
Trilio datamover service logs on datamover node
Note: This step needs to be done on the Trilio Appliance node, not on an OpenStack node.
Prerequisite: The Trilio Appliance VM should already be launched.
In the Kolla OpenStack distribution, the 'nova' user ID on the nova-compute docker container is set to '42436'. The 'nova' user ID on the Trilio nodes needs to be set the same. Do the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
After this step, you can proceed to 'Configuring Trilio' section.
We are using Cinder's Ceph user for interacting with the Ceph Cinder storage. This user name is defined by the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'. If the user edits this parameter value, Cinder's Ceph user and the triliovault datamover's Ceph user will be updated upon the next kolla-ansible deploy command.
RHOSP Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Backup target storage is used to store backup images taken by Trilio and details needed for configuration:
Following backup target types are supported by Trilio
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud-deploy command has to have been run successfully already, and the overcloud should be available.
All commands need to be run as user 'stack' on undercloud node
The following command clones the triliovault-cfg-scripts github repository.
Next, access the Red Hat Director scripts according to the RHOSP version used.
The remaining documentation will use the following path for examples:
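For example, with hotfix-4-TVO/4.2 as the branch and rhosp16.1 as the release directory (both placeholders to adapt):

```bash
cd /home/stack
git clone -b hotfix-4-TVO/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
```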
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide the CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'.
The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.
The Trilio puppet module is uploaded to the overcloud as a swift deploy artifact with the heat resource name 'DeployArtifactURLs'. Trilio's puppet module artifact file looks like the following.
Note: If your overcloud deploy command is using any other deploy artifact through an environment file, then you need to merge the Trilio deploy artifact URL and your URL in a single file.
How to check whether your overcloud deploy environment files use deploy artifacts? Search for the string 'DeployArtifactURLs' in your environment files (only those mentioned in the overcloud deploy command with the '-e' option). If you find it in any such environment file, then your deploy command is using a deploy artifact.
In that case, you need to merge all deploy artifacts in a single file. Refer to the following steps.
Let's say your artifact file path is "/home/stack/templates/user-artifacts.yaml"; then refer to the following steps to merge both URLs in a single file and pass that new file to the overcloud deploy command with the '-e' option.
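A sketch of such a merged artifact file; both URLs are placeholders:

```yaml
# /home/stack/templates/user-artifacts.yaml (merged deploy artifacts)
parameter_defaults:
  DeployArtifactURLs:
    - "<your-existing-deploy-artifact-url>"
    - "<trilio-puppet-module-artifact-url>"
```

The merged file is then passed once to the overcloud deploy command with '-e'.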
Trilio contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services. In the case of the pre-defined roles, these services run on the role Controller.
In case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following line to the identified role:
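The service name used by the Trilio director scripts is typically the following; verify it against the roles additions shipped in your copy of the scripts:

```yaml
- OS::TripleO::Services::TrilioDatamoverApi
```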
This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
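Typically (verify against your copy of the scripts):

```yaml
- OS::TripleO::Services::TrilioDatamover
```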
This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:
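Typically (verify against your copy of the scripts):

```yaml
- OS::TripleO::Services::TrilioHorizon
```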
All commands need to be run as user 'stack'
Trilio containers are pushed to the 'Red Hat Container Registry'. Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
Please note that using the hotfix containers requires that the Trilio Appliance is upgraded to the desired hotfix level as well.
The placeholder <HOTFIX-TAG-VERSION> refers to 4.2.8 in the sections below.
There are three registry methods available in RedHat Openstack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on the overcloud nodes during the overcloud deploy/update command execution. The user can set the remote registry to the Red Hat registry or any other private registry of choice. The user needs to provide the credentials of the registry in the 'containers-prepare-parameter.yaml' file.
Make sure the other OpenStack service images also use the same method to pull container images. If that is not the case, you cannot use this method.
Populate 'containers-prepare-parameter.yaml' with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true', and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials for the 'registry.redhat.io' registry will work for the 'registry.connect.redhat.com' registry too.
Note: The file 'containers-prepare-parameter.yaml' gets created as the output of the command 'openstack tripleo container image prepare'. Refer to the above document by Red Hat.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
4. The user needs to manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:
trilio_env.yaml file path:
At this step, you have configured Trilio image urls in the necessary environment file.
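The pull URLs follow this pattern, shown here for a rhosp16.1 cloud; the release suffix is a placeholder to adapt:

```
registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
```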
Follow this section when 'local registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that pull the containers from 'registry.connect.redhat.com', push them to the undercloud, and update the trilio_env.yaml.
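A hedged sketch of running such a helper; the script name and arguments are assumptions, so check the scripts/ directory of your release for the actual helper:

```bash
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./prepare_trilio_images.sh <undercloud-hostname> <HOTFIX-TAG-VERSION>-rhosp16.1   # name and arguments are assumptions
```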
At this step, you have downloaded the triliovault container images and configured the triliovault image URLs in the necessary environment file.
Follow this section when a Satellite Server is used for the container registry. Pull the Trilio containers into the Red Hat Satellite using the given registry URLs, then populate the trilio_env.yaml with the container URLs.
At this step, you have downloaded the triliovault container images into the Red Hat Satellite server and configured the triliovault image URLs in the necessary environment file.
This section is only required when the multi-IP feature for NFS is required.
This feature allows setting the IP used to access the NFS volume per datamover instead of globally.
On Undercloud node, change directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file. Run this command on the undercloud by sourcing 'stackrc'.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only. If pip isn't available, please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the nfs shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated in the preparation of the container images. Still, it is recommended to verify the container URLs.
The following information is required additionally:
Network for the datamover api
datamover password
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
MultiIPNfsEnabled
NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
In case of S3
S3 type {amazon_s3/ceph_s3}
S3 Access key
S3 Secret key
S3 Region name
S3 Bucket
S3 Endpoint URL
S3 Signature Version
S3 Auth Version
S3 SSL Enabled {true/false}
S3 SSL Cert
Use ceph_s3 for any non-AWS S3 backup targets.
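As an illustration, an NFS-backed fragment of trilio_env.yaml could look like this; MultiIPNfsEnabled appears earlier in this document, while BackupTargetType, NfsShares, and NfsOptions are assumed parameter names, so verify them against the shipped trilio_env.yaml:

```yaml
parameter_defaults:
  BackupTargetType: 'nfs'                                    # assumed parameter name
  NfsShares: '192.168.122.101:/nfs/tvault'                   # assumed parameter name
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'  # assumed parameter name
  MultiIPNfsEnabled: false
```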
The existing default haproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the dmapi are observed or other reasons are known.
The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file.
The Trilio datamover haproxy default configuration from the above file looks as follows:
The user can change the following configuration parameter values.
To change these default values, you need to do the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release; valid values are rhosp13, rhosp16.1, rhosp16.2, and rhosp17.0):
For RHOSP13
For RHOSP16.1
For RHOSP16.2
For RHOSP17.0
ii) Search the following entries and edit as required
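The entries in question are the standard haproxy tuning knobs; a hedged example of values one might adjust (the defaults vary by release):

```
timeout client 300s
timeout server 300s
maxconn 10000
balance roundrobin
```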
iii) Save the changes.
If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover service container, they can use the following heat environment file. A variable named 'TrilioDatamoverOptVolumes' is available in this file; it accepts a list of volume/bind mounts. The user needs to edit this file and add the volume mounts in the below format.
For example:
In this volume mount "/mnt/dir2:/var/dir2", "/mnt/dir2" is a directory available on the compute host and "/var/dir2" is the mount point inside the datamover container.
Next, the user needs to pass this file to the overcloud deploy command with the '-e' option, like below.
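Illustratively, with a hypothetical file name trilio_env_extra_volumes.yaml and the variable named above:

```yaml
# trilio_env_extra_volumes.yaml (file name is hypothetical)
parameter_defaults:
  TrilioDatamoverOptVolumes:
    - /mnt/dir2:/var/dir2
```

The file is then passed to the overcloud deploy command with '-e /home/stack/templates/trilio_env_extra_volumes.yaml'.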
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration:
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
If activated, use the correct trilio_nfs_map.yaml file
To include the new environment files, use the '-e' option, and for the roles data file, use the '-r' option. An example overcloud deploy command is shown below:
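A representative command, assuming rhosp16.1 paths and a TLS public-DNS endpoint configuration; adapt the '-e' list to your cloud's existing environment files:

```bash
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml
```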
Post deployment, for a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, then restart the datamover container.
If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.
Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check on the respective nodes according to the configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container has the latest OpenStack Horizon plus Trilio's Horizon plugin.
If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, use the below workaround.
Trilio components will be deployed using puppet scripts.
Backup target storage is used to store backup images taken by Trilio and details needed for configuration:
The following backup target types are supported by Trilio
a) NFS
Need NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (Like, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on 'undercloud' node on an already installed RHOSP environment. The overcloud-deploy command has to be run successfully already and the overcloud should be available.
All commands need to be run as user 'stack' on undercloud node
TripleO CentOS8 is not supported anymore, as CentOS Linux 8 reached End of Life on December 31st, 2021.
The following command clones the triliovault-cfg-scripts github repository.
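For example (the repository URL is an assumption based on the repository name):

```bash
cd /home/stack
git clone -b hotfix-4-TVO/4.2 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
```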
Please note that the Trilio Appliance needs to get updated to the latest HF as well.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, rename the CA chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files
Trilio contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services. In the case of the pre-defined roles, these services run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, Horizon service runs on the role Controller. Add the following line to the identified role:
All commands need to be run as user 'stack'
The placeholder <HOTFIX-TAG-VERSION> refers to 4.2.8 in the sections below.
Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.
There are two registry methods available in TripleO Openstack Platform.
Remote Registry
Local Registry
Follow this section when 'Remote Registry' is used.
For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments
Follow this section when 'local registry' is used on the undercloud.
Run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the URLs.
The changes can be verified using the following commands.
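For example, a quick check that the image URLs were substituted:

```bash
grep -i 'trilio' /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml
```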
This section is only required when the multi-IP feature for NFS is required.
This feature allows setting the IP to access the NFS Volume per datamover instead of globally.
On Undercloud node, change the directory
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file. Run this command on the undercloud by sourcing 'stackrc'.
Edit the input map file and fill in all the details. Refer to this page for details about the structure.
vi triliovault_nfs_map_input.yml
Update pyyaml on the undercloud node only. If pip isn't available, please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be in the file 'triliovault_nfs_map_output.yml'.
Validate the output map file: open 'triliovault_nfs_map_output.yml' (vi triliovault_nfs_map_output.yml) in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Validate the changes in the file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'.
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.
Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.
Fill in the Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml; the triliovault environment file is self-explanatory. Fill in the details of the backup target, verify the image URLs, and fill in the other details.
NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
Use the following heat environment file and roles data file in overcloud deploy command
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains overcloud roles data with Trilio roles added.
Use the correct Trilio endpoint map file as per your Keystone endpoint configuration:
- Instead of the tls-endpoints-public-dns.yaml file, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
- Instead of the tls-endpoints-public-ip.yaml file, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
- Instead of the tls-everywhere-endpoints-dns.yaml file, use 'environments/trilio_env_tls_everywhere_dns.yaml'
Deploy command with triliovault environment file looks like the following.
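A representative command under those assumptions:

```bash
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -r /home/stack/templates/roles_data.yaml
```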
Post deployment, for a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, then restart the datamover container.
If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.
Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check on the respective nodes according to the configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container has the latest OpenStack Horizon plus Trilio's Horizon plugin.
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.
Those JuJu Charms are publicly available as Open Source Charms.
Trilio provides the JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack from the Yoga release onwards only. JuJu Charms to deploy Trilio 4.2 in Canonical OpenStack up to the Wallaby release are developed and maintained by Canonical.
Canonical OpenStack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.
The documentation of the charms can be found here:
Prerequisite
Have a Canonical OpenStack base setup deployed for a required release like Jammy Zed/Yoga, Focal Yoga/Wallaby/Victoria/Ussuri, or Bionic Ussuri/Queens.
Steps to install the Trilio charms
1. Export the OpenStack base bundle
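For example:

```bash
juju export-bundle > openstack-base.yaml
```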
2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.
Some sample Trilio overlay bundles can be found here.
NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
The Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as LXD container(s).
3. If the File Search functionality is required, provision any additional node(s) required for deploying the Trilio Workload manager (trilio-wlm) as a VM instead of LXD container(s).
4. Commission the additional node from MAAS UI.
5. Do a dry run to check if the Trilio bundle is working
6. Do the deployment
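Steps 5 and 6 as commands, assuming the overlay bundle was saved as trilio-overlay.yaml:

```bash
# Dry run first
juju deploy ./openstack-base.yaml --overlay ./trilio-overlay.yaml --dry-run

# Actual deployment
juju deploy ./openstack-base.yaml --overlay ./trilio-overlay.yaml
```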
7. Wait till all the Trilio units are deployed successfully. Check the status via the juju status command.
8. Once the deployment is complete, perform the below operations:
Create cloud admin trust
Add license
Note: Reach out to the Trilio support team for the license file.
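A hedged sketch of these operations; the action and resource names are assumptions based on the trilio-wlm charm, so verify them with 'juju actions trilio-wlm':

```bash
# Create the cloud admin trust (password is the Keystone admin password)
juju run-action --wait trilio-wlm/leader create-cloud-admin-trust password=<admin-password>

# Add the license obtained from the Trilio support team
juju attach-resource trilio-wlm license=<path-to-license-file>
juju run-action --wait trilio-wlm/leader create-license
```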
For multipath enabled environments, perform the following actions
log into each nova compute node
add uxsock_timeout with value as 60000 (i.e. 60 sec) in /etc/multipath.conf
restart tvault-contego service
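As commands on each compute node:

```bash
juju ssh nova-compute/0        # repeat for each nova-compute unit
sudo vi /etc/multipath.conf    # add: uxsock_timeout 60000 (in the defaults section)
sudo systemctl restart tvault-contego
```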
Sample Trilio overlay bundles
For bionic-queens, the openstack-origin parameter value for the trilio-dm-api charm must be cloud:bionic-train
For the AWS S3 storage backend, we need to use `http://s3.amazonaws.com` as the S3 endpoint URL.
A few sample overlay bundles for different OpenStack versions can be found here.
| Charm name | Channel | Supported releases |
| --- | --- | --- |
| trilio-data-mover | latest/edge | Jammy (Ubuntu 22.04) |
| trilio-dm-api | latest/edge | Jammy (Ubuntu 22.04) |
| trilio-horizon-plugin | latest/edge | Jammy (Ubuntu 22.04) |
| trilio-wlm | latest/edge | Jammy (Ubuntu 22.04) |
| trilio-data-mover | latest/edge | Focal (Ubuntu 20.04) |
| trilio-dm-api | latest/edge | Focal (Ubuntu 20.04) |
| trilio-horizon-plugin | latest/edge | Focal (Ubuntu 20.04) |
| trilio-wlm | latest/edge | Focal (Ubuntu 20.04) |
| Charm name | Channel | Supported releases |
| --- | --- | --- |
| trilio-data-mover | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
| trilio-dm-api | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
| trilio-horizon-plugin | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
| trilio-wlm | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |