Trilio Installation on RHOSO

Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 is the supported and recommended way to deploy and maintain a RHOSO installation.

Trilio integrates natively with RHOSO. Manual deployment methods are not supported on RHOSO.

1] Prepare for deployment

Refer to the Resources link to get the release-specific values of the placeholders used in this document (container URLs, trilio_branch, RHOSO version, and CONTAINER-TAG-VERSION) for your OpenStack environment.

1.1] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the bastion node of an already installed RHOSO environment.

The following command clones the triliovault-cfg-scripts GitHub repository.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18

1.2] Create namespace for Trilio control plane services

oc create namespace trilio-openstack

1.3] Create the trilio-openstack-secret in the trilio-openstack namespace.

This secret holds passwords and S3 keys in Base64-encoded form.

In the trilio-openstack-secret.yaml file, fill in the Base64-encoded strings for the container registry password, the S3 keys, and the other passwords.

If you want to use S3 as backup storage, provide the S3 access keys and secret keys for all the S3 buckets in Base64-encoded format.

If you do not want to use S3 as backup storage, remove all S3 key parameters from the secret file.

To produce a Base64-encoded string, use the following Linux command:
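For example, with a placeholder value:

echo -n 'MyPassw0rd' | base64    # 'MyPassw0rd' is a placeholder; -n avoids encoding a trailing newline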

Parameter descriptions:

DmapiDatabasePassword
Set a value of your choice; it must be Base64 encoded.

DmapiKeystonePassword
Set a value of your choice; it must be Base64 encoded.

DmapiRabbitPassword
Set a value of your choice; it must be Base64 encoded.

WlmDatabasePassword
Set a value of your choice; it must be Base64 encoded.

WlmKeystonePassword
Set a value of your choice; it must be Base64 encoded.

WlmRabbitPassword
Set a value of your choice; it must be Base64 encoded.

ContainerRegistryPassword
Set the correct registry password; it must be Base64 encoded.

BT1_S3_s3_access_key
The parameter name must follow the <backup-target-name>_s3_access_key format. Encode the actual value with Base64 and set it here. If you do not want to use S3 as backup storage, remove this access key parameter from the secret file.

BT1_S3_s3_secret_key
The parameter name must follow the <backup-target-name>_s3_secret_key format. Encode the actual value with Base64 and set it here. If you do not want to use S3 as backup storage, remove this secret key parameter from the secret file.
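A minimal sketch of trilio-openstack-secret.yaml, assuming a single S3 backup target named BT1_S3; all values are hypothetical placeholders, and any keys beyond those listed above should follow the template shipped in triliovault-cfg-scripts:

cat <<'EOF' > trilio-openstack-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: trilio-openstack-secret
  namespace: trilio-openstack
type: Opaque
data:
  DmapiDatabasePassword: bXlwYXNzd29yZA==      # base64 of 'mypassword' (placeholder)
  DmapiKeystonePassword: bXlwYXNzd29yZA==
  DmapiRabbitPassword: bXlwYXNzd29yZA==
  WlmDatabasePassword: bXlwYXNzd29yZA==
  WlmKeystonePassword: bXlwYXNzd29yZA==
  WlmRabbitPassword: bXlwYXNzd29yZA==
  ContainerRegistryPassword: bXlwYXNzd29yZA==
  BT1_S3_s3_access_key: YWNjZXNza2V5           # base64 of 'accesskey' (placeholder)
  BT1_S3_s3_secret_key: c2VjcmV0a2V5           # base64 of 'secretkey' (placeholder)
EOF
oc apply -f trilio-openstack-secret.yaml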

1.4] Create image pull secret

The following script creates an image pull secret for the Trilio container images. Pass the image registry URL and the authentication user name for that registry; the script reads the registry user password from the trilio-openstack-secret secret.

By default, the Trilio container images are available in the Red Hat Connect registry at registry.connect.redhat.com. Provide this registry URL and its user name to the script.

If you want to keep the Trilio container images in a different registry, pull the Trilio images from registry.connect.redhat.com and push them to your registry; in that case, provide your registry URL and user name to the command below.
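The helper script ships with triliovault-cfg-scripts and its name is not reproduced here; as an illustration, the equivalent manual step looks like the following sketch (the secret name trilio-image-pull-secret is a placeholder):

oc -n trilio-openstack create secret docker-registry trilio-image-pull-secret \
  --docker-server=registry.connect.redhat.com \
  --docker-username=<registry-user> \
  --docker-password=<registry-password>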

2] Install Trilio Control Plane Services

2.1] Install operator - tvo-operator

Get the value of the TVO_OPERATOR_CONTAINER_IMAGE_URL parameter from the release artifact documentation; this is the Trilio for OpenStack Operator container image tag.

  • Verify that the tvo-operator pod was created

  • Verify that the operator CRDs are installed (example checks below)
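A quick check, assuming the operator runs in the trilio-openstack namespace; the exact pod and CRD names may differ:

oc -n trilio-openstack get pods | grep tvo-operator
oc get crd | grep -i trilio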

2.2] Attaching Additional Networks to Trilio Pods (Optional)

Trilio pods can attach additional networks using the networks parameter in tvo-operator-inputs.yaml. This enables Trilio components to connect to custom or isolated networks.


2.2.1] Copy NetworkAttachmentDefinition (NAD) to trilio-openstack

If a network exists in the openstack namespace, it must be copied to trilio-openstack.

Example: copying the storage network.


  • Export NAD from openstack

  • Remove namespace-specific metadata

  • Apply NAD in trilio-openstack

  • Verify NADs

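The steps above, combined into one sketch that assumes the NAD is named 'storage':

oc -n openstack get net-attach-def storage -o yaml > storage-nad.yaml
# Edit storage-nad.yaml and remove the namespace-specific metadata:
# metadata.namespace, metadata.resourceVersion, metadata.uid, metadata.creationTimestamp
oc -n trilio-openstack apply -f storage-nad.yaml
oc -n trilio-openstack get net-attach-def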


2.2.2] Configure networks in tvo-operator-inputs.yaml (edited in step 2.3)

Each Trilio pod includes a networks field where you can specify additional networks.


2.2.3] Specify networks for each pod

Provide a comma-separated list of network names. These networks must exist in the trilio-openstack namespace.

Example:
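A sketch of the relevant fragment of tvo-operator-inputs.yaml; the exact nesting around the pod entry is an assumption:

triliovault_wlm_api:
  networks: storage,tenant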

This attaches the storage and tenant networks to the triliovault_wlm_api pod.


Note:

  • All networks listed in networks must exist as NADs in trilio-openstack.

  • If a network exists in another namespace, export and recreate it in trilio-openstack.

2.3] Edit the tvo-operator-inputs.yaml file

The tvo-operator-inputs.yaml file is used to create a CR of kind TVOControlPlane. This CR is responsible for deploying the Trilio control plane services.

Run the script below to fill in some of the details in tvo-operator-inputs.yaml automatically.

The script automatically fills in details such as the Trilio image tags, memcached_servers, and the Keystone, database, and RabbitMQ settings.

Manually fill in the backup target details in tvo-operator-inputs.yaml.

Review tvo-operator-inputs.yaml and make sure all the details are correct. Edit the file if any change is required; the script only applies default parameters during execution.

Operator parameters from tvo-operator-inputs.yaml that may need editing:

Each parameter below is shown with its edit mode (Automated or Manual) in parentheses.

images (Automated)
Please refer to Resources.

common (Automated)
- trustee_role should be creator,member if Barbican is enabled; otherwise it should be member. Any OpenStack user that wants to create backup jobs and take backups needs this role in the respective OpenStack project.
- memcached_servers should be fetched using:
  oc -n openstack get memcached -o jsonpath='{.items[*].status.serverList[*]}' | tr ' ' ','

triliovault_backup_targets (Manual)
- Choose which backup targets (where backups taken by TVO will be stored) to use for this TVO deployment; see the sketch after this table.
- Multiple backup targets of type 'NFS' or 'S3' can be used, such as an NFS share, an Amazon S3 bucket, or a Ceph S3 bucket.
- For an Amazon S3 backup target, set s3_type: 'amazon_s3'; for all other S3 backup targets, set s3_type: 'other_s3'.
- For Amazon S3, the s3_endpoint_url value is an empty string; the correct endpoint is picked internally.
- For Amazon S3, s3_self_signed_cert is always 'false'.

openstack (Automated)
- 'namespace': Namespace in which the OpenStack control plane services are installed.
- 'osp_secret_name': Name of the OpenStack secret that holds all the admin passwords for OpenStack services.
- 'rabbitmq_admin_secret_name': Name of the RabbitMQ secret that holds the admin user credentials.

keystone.common (Automated)
- 'keystone_interface': Set to 'internal' or 'public'. This interface is used for communication between TVO and the OpenStack services.
- 'service_project_name': Project in which all services are registered.
- 'service_project_domain_name': The service project's domain name.
- 'admin_role_name': Admin role name.
- 'cloud_admin_user_name': OpenStack cloud admin user name.
- 'cloud_admin_project_name': Cloud admin project name.
- 'auth_url': Keystone auth URL of the interface given in keystone_interface.
- 'auth_uri': auth_url with '/v3' appended.
- 'keystone_auth_protocol': https or http; the auth protocol of the Keystone endpoint URL for the given keystone_interface.
- 'keystone_auth_host': Full host name from the Keystone auth_url.

keystone.common.is_self_signed_ssl_cert (Manual)
True/False; whether the TLS certificate used by the Keystone endpoint URL given in auth_url is self-signed.

keystone.datamover_api and keystone.wlm_api (Automated)
Both components, datamover_api and wlm_api, have the same set of parameters.
- 'user': OpenStack user used by the service; do not change.
- 'service_name': No change needed.
- 'service_type': No change needed.
- 'service_desc': No change needed.
- 'internal_endpoint': Trilio service internal endpoint; refer to the other OpenStack service endpoints and set this one accordingly.
- 'public_endpoint': Only replace the 'PUBLIC_ENDPOINT_DOMAIN' placeholder; refer to the public endpoint URLs of the other OpenStack services.
- 'public_auth_host': FQDN used in the 'public_endpoint' parameter.

database.common
- 'root_user_name': OpenStack database root user name. Keep this as it is unless you know the root user name was changed.
- 'host': Database host/FQDN; fetch it with:
  oc -n openstack get secret nova-api-config-data -o jsonpath='{.data.01-nova\.conf}' | base64 --decode | awk '/connection =/ {match($0, /@([^?/]+)/, arr); print arr[1]; exit}'
- 'port': Database port.

database.datamover_api and database.wlm_api (Automated)
- 'user': Do not change.
- 'database': Do not change.

rabbitmq.common (Automated)
- 'admin_user': RabbitMQ admin user name; fetch it with:
  oc -n openstack exec -it rabbitmq-server-0 -- bash
  rabbitmqctl list_users
- 'host': RabbitMQ cluster host name; fetch it with:
  oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.host}' | base64 -d
- 'port': RabbitMQ management API port used for rabbitmqadmin connections. This is generally 15671 for RHOSO, so you can keep it as it is; 5671 is not the management API port. Refer to:
  oc -n openstack get cm rabbitmq-server-conf -o jsonpath='{.data.userDefinedConfiguration\.conf}' | grep management.ssl.port
- 'driver': RabbitMQ driver.
- 'ssl': Boolean; set to true if SSL/TLS is enabled on RabbitMQ, otherwise false.

rabbitmq.datamover_api and rabbitmq.wlm_api (Automated)
- 'user': Do not change.
- 'vhost': Do not change.
- 'transport_url': You must set this. Replace '${PASSWORD}' and '${RABBITMQ_HOST}' in the given default URL; edit the SSL and port settings if necessary.
  oc describe secret rabbitmq-default-user -n openstack
  oc get secret rabbitmq-default-user -n openstack -o jsonpath='{.data.username}' | base64 --decode

pod (Manual)
These parameters set additional networks on pods.

pod.replicas (Automated)
These parameters set the number of replicas for the Trilio components. The default values are standard; change them only if needed. Note that the number of replicas for the triliovault_wlm_cron pod must always be 1.
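A hedged sketch of an S3 backup-target entry for tvo-operator-inputs.yaml; s3_type, s3_endpoint_url, and s3_self_signed_cert appear in the table above, while the remaining key names and the list layout are assumptions for illustration only (follow the template shipped in triliovault-cfg-scripts):

triliovault_backup_targets:
  - name: BT1_S3                                # assumed key; matches the BT1_S3_* secret parameters
    type: s3                                    # assumed key
    s3_type: other_s3                           # 'amazon_s3' for Amazon S3
    s3_endpoint_url: https://s3.example.local   # empty string for Amazon S3
    s3_self_signed_cert: false                  # always 'false' for Amazon S3
    s3_bucket: trilio-backups                   # assumed key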

2.4] Set the correct labels on the OpenShift nodes

Trilio control plane services are deployed on OpenShift nodes that have the label trilio-control-plane=enabled. Using three nodes for the Trilio control plane services is recommended. Use the following commands to assign the label.

Get the list of OpenShift nodes.

Assign the 'trilio-control-plane=enabled' label to any three nodes of your choice where you want to deploy the TVO control plane services.

Verify the list of nodes that have the 'trilio-control-plane=enabled' label.
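For example, with placeholder node names:

oc get nodes
oc label node <node-1> <node-2> <node-3> trilio-control-plane=enabled
oc get nodes -l trilio-control-plane=enabled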

2.5] Create TLS certificate secrets

The following script creates TLS certificates for the Trilio services and defines secrets that hold these certificates.

Edit the '$PUBLIC_ENDPOINT_DOMAIN' parameter in the utils/certificate.yaml file and set it to the correct value; refer to the OpenStack Keystone service public endpoint.

Create the certificates and secrets.
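The repository provides a helper script for this step (its name is not reproduced here); applying the edited manifest directly may be equivalent, as in this sketch:

oc -n trilio-openstack apply -f utils/certificate.yaml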

Verify that the certificate secrets were created in the 'trilio-openstack' namespace.
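For example:

oc -n trilio-openstack get secrets | grep -i cert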

2.6] Run deploy command

2.7] Check logs
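A generic way to follow the operator logs; the pod name is an assumption, so look it up first:

oc -n trilio-openstack get pods
oc -n trilio-openstack logs -f <tvo-operator-pod-name>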

2.8] Check deployment status
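A generic status check; the lowercase resource name for the TVOControlPlane CR is an assumption:

oc -n trilio-openstack get pods
oc -n trilio-openstack get tvocontrolplane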

2.9] Verify successful deployment of the TVO control plane services

Verify that the WLM cloud trust was created successfully.

3] Install Trilio Data Plane Services

Set the context to the 'openstack' namespace. All Trilio data plane resources will be created in the 'openstack' namespace.
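For example:

oc project openstack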

3.1] Create the trilio-openstack-secret in the openstack namespace. This secret is needed by the Trilio data plane.

3.2] Review the parameters below and fill in all the input parameters needed by the Trilio data plane services in the cm-trilio-datamover.yaml file.

Run the script below to fill in some of the details in cm-trilio-datamover.yaml automatically.

The script automatically fills in details such as rabbit_host, rabbit_ssl, database_host, database_port, and triliovault_backup_targets.

Manually fill in the Trilio container image tags for the datamover and WLM images, as well as trilio_container_registry_username, trilio_container_registry_password, and trilio_container_registry_url, in the cm-trilio-datamover.yaml file.

Review cm-trilio-datamover.yaml and make sure all the details are correct. Edit the file if any change is required.

Each parameter below is shown with its edit mode in parentheses.

rabbit_host, rabbit_port, rabbit_ssl (Semi-automated)
You do not need to change these parameters.

database_host, database_port (Automated)
You do not need to change these parameters.

cinder_backend_ceph, libvirt_images_rbd_ceph_conf, ceph_cinder_user and oslomsg_rpc_use_ssl (Manual)
You do not need to change these parameters.

images (Manual)
Please refer to Resources.

trilio_container_registry_username, trilio_container_registry_password and trilio_container_registry_url (Manual)
Update these fields with either the Docker or the Red Hat registry details, as required.

triliovault_backup_targets (Semi-automated)
- Choose which backup targets (where backups taken by TVO will be stored) to use for this TVO deployment.
- Multiple backup targets of type 'NFS' or 'S3' can be used, such as an NFS share, an Amazon S3 bucket, or a Ceph S3 bucket.
- For an Amazon S3 backup target, set s3_type: 'amazon_s3'; for all other S3 backup targets, set s3_type: 'other_s3'.
- For Amazon S3, the s3_endpoint_url value is an empty string; the correct endpoint is picked internally.
- For Amazon S3, s3_self_signed_cert is always 'false'.
- Note: provide the same backup target details that are used in the tvo-operator-inputs.yaml file.

Create a config map holding all the input parameters for the Trilio data plane services deployment.

3.3] Create cm-trilio-datamover config map
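Assuming cm-trilio-datamover.yaml is a ConfigMap manifest (a sketch):

oc -n openstack apply -f cm-trilio-datamover.yaml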

3.4] Edit the 'trilio-datamover-service.yaml' file and set the correct tag for the 'openStackAnsibleEERunnerImage' container image.

3.5] The following script creates the 'OpenStackDataPlaneService' resource for Trilio.
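The repository's helper script performs this step; applying the manifest from step 3.4 directly may be equivalent, as in this sketch:

oc -n openstack apply -f trilio-datamover-service.yaml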

3.6] Trigger Deployment of Trilio data plane services

In this step we trigger the Ansible execution that deploys the Trilio data plane components. Get the data plane NodeSet names using the following command.
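For example:

oc -n openstack get openstackdataplanenodesets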

Edit two things in the following file:

  • Set a unique 'name' for every Ansible execution of 'OpenStackDataPlaneDeployment'.

  • Set the correct value for the 'nodeSets' parameter, taking the NodeSet name from the previous step.

To check the list of deployment names already used, run the following command.
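For example:

oc -n openstack get openstackdataplanedeployments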

Trigger the Trilio data plane deployment execution.
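A sketch, using the template file named in step 3.7:

oc -n openstack create -f trilio-data-plane-deployment.yaml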

3.7] Check deployment logs.

Edit the <OpenStackDataPlaneDeployment_NAME> parameter, using the name from the steps above.
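A generic way to follow the Ansible execution pods for the deployment (the pod naming is an assumption, so look the pods up first):

oc -n openstack get pods | grep <OpenStackDataPlaneDeployment_NAME>
oc -n openstack logs -f <ansible-execution-pod-name>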

If the deployment fails, or completes and you want to run it again, change the name of the 'OpenStackDataPlaneDeployment' CR in the 'trilio-data-plane-deployment.yaml' template to a new, unique value and create it again using the oc create command.

3.8] Verify that the deployment completed successfully

Log in to one of the compute nodes and check the Trilio compute service containers.
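A sketch; the container name filter is an assumption:

ssh <compute-node>
sudo podman ps | grep -i trilio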

4] Install Trilio Horizon Plugin

Prerequisite: you should already have created the image pull secret for the Trilio container images.

  1. Get the openstackversion CR
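For example:

oc -n openstack get openstackversion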

  2. Edit the openstackversion CR and change horizonImage under customContainerImages: set 'horizonImage:' to the Trilio Horizon plugin container image URL, as shown below.

For example, if the resource name is 'openstack-controlplane':
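A sketch; the horizonImage value shown is a placeholder to be replaced with the Trilio Horizon plugin image URL from Resources:

oc -n openstack edit openstackversion openstack-controlplane
# In the editor, under spec, set:
#   customContainerImages:
#     horizonImage: registry.connect.redhat.com/trilio/<horizon-plugin-image>:<CONTAINER-TAG-VERSION>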

  3. Save the changes and exit (press Esc, then type :wq, as in the Linux vi editor).

  4. Verify that the changes were applied correctly using the command below.
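For example, assuming the resource name from above:

oc -n openstack get openstackversion openstack-controlplane -o jsonpath='{.spec.customContainerImages.horizonImage}'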

  5. Increase the HAProxy route timeout

Run the following command to set the route timeout to 180 seconds:
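A sketch, assuming the Horizon route in the openstack namespace is named 'horizon':

oc -n openstack annotate route horizon haproxy.router.openshift.io/timeout=180s --overwrite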

This annotation configures the HAProxy router to allow backend requests up to the specified timeout duration.

Verify the HAProxy route timeout

Confirm that the timeout annotation has been applied successfully:
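Assuming the same route name:

oc -n openstack get route horizon -o jsonpath='{.metadata.annotations.haproxy\.router\.openshift\.io/timeout}'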

You should see the configured timeout value in the output, for example 180s.

  6. Access OpenStack Horizon at the same URL and log in with the same credentials. Horizon will now include the Trilio UI components, which you can verify in the dashboard.
