Trilio Installation on RHOSO18.0

The operator-based approach is the supported and recommended method to deploy and maintain any RHOSO installation.

Trilio integrates natively into RHOSO. Manual deployment methods are not supported for RHOSO.

1. Prepare for deployment

Refer to the Resources page to obtain the release-specific values of the placeholders used in this document (Container URLs, trilio_branch, RHOSO version, and CONTAINER-TAG-VERSION) as per your OpenStack environment.

1.1] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the bastion node of an already installed RHOSO environment.

The following commands clone the triliovault-cfg-scripts GitHub repository and change into the RHOSP18 director scripts directory.

git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18

1.2] Create image pull secret

cd triliovault-cfg-scripts/redhat-director-scripts/rhosp18/ctlplane-scripts/
chmod +x create-image-pull-secret.sh
./create-image-pull-secret.sh <TRILIO_IMAGE_REGISTRY_USERNAME> <TRILIO_IMAGE_REGISTRY_PASSWORD>

2] Install Trilio for OpenStack Operator

2.1] Install operator - tvo-operator

Get the value of the CONTAINER-TAG-VERSION parameter from the install document; this is the Trilio for OpenStack Operator container image tag.

cd redhat-director-scripts/rhosp18/ctlplane-scripts
chmod +x install_operator.sh
./install_operator.sh <CONTAINER-TAG-VERSION>

2.2] Verify that the tvo-operator pod was created

oc get pods -A | grep tvo-operator

2.3] Verify that operator CRD is installed

oc get crds | grep tvo

3] Install Trilio OpenStack Control Plane Services

3.1] Create namespace for Trilio control plane services

oc create namespace trilio-openstack

3.2] Provide the necessary Trilio inputs (backup target details, OpenStack details, etc.) in YAML format

vi tvo-operator-inputs.yaml

Operator parameters in the file tvo-operator-inputs.yaml that the user needs to edit:

images

common
  • trustee_role should be 'creator,member' if Barbican is enabled; otherwise trustee_role should be 'member'. Any OpenStack user that wants to create backup jobs and take backups needs this role in the respective OpenStack project.
  • memcached_servers value should be fetched using the command: oc -n openstack get memcached -o jsonpath='{.items[*].status.serverList[*]}' | tr ' ' ','

triliovault_backup_targets
  • Choose which backup targets (where backups taken by TVO will be stored) to use for this TVO deployment.
  • Multiple backup targets of type 'NFS' or 'S3' can be used, such as an NFS share, an Amazon S3 bucket, a Ceph S3 bucket, etc.
  • For an Amazon S3 backup target, set s3_type: 'amazon_s3'. For all other S3 backup targets, set s3_type: 'other_s3'.
  • For Amazon S3, the s3_endpoint_url value will be an empty string; it is resolved correctly internally.
  • For Amazon S3, s3_self_signed_cert is always 'false'.

keystone.common
  • keystone_interface: set it to one of the values 'internal' or 'public'. This interface will be used for communication between TVO and OpenStack services.
  • service_project_name: the project name where all services are registered.
  • service_project_domain_name: the service project's domain name.
  • admin_role_name: admin role name.
  • cloud_admin_user_name: OpenStack cloud admin user name.
  • cloud_admin_user_password: OpenStack cloud admin user password.
  • cloud_admin_project_name: cloud admin project name.
  • auth_url: Keystone auth URL of the interface provided in the keystone_interface parameter.
  • auth_uri: just append '/v3' to auth_url.
  • keystone_auth_protocol: https or http; the auth protocol of the Keystone endpoint URL for the provided keystone_interface.
  • keystone_auth_host: full host name from the Keystone auth_url.
  • is_self_signed_ssl_cert: True/False; whether the TLS certificates used by the Keystone endpoint URL mentioned in auth_url are self-signed.

keystone.datamover_api and keystone.wlm_api
Both components, datamover_api and wlm_api, have the same set of parameters.
  • user: the OpenStack user used by the respective service. Please do not change this.
  • password: can be set to any value. This is the password for the OpenStack user given in the 'user' parameter.
  • service_name: does not need to change.
  • service_type: does not need to change.
  • service_desc: does not need to change.
  • internal_endpoint: Trilio service internal endpoint. Refer to other OpenStack service endpoints and set this one accordingly.
  • public_endpoint: only the 'PUBLIC_ENDPOINT_DOMAIN' placeholder needs to be replaced. Refer to the public endpoint URLs of other OpenStack services.
  • public_auth_host: the FQDN mentioned in the 'public_endpoint' parameter.

database.common
  • root_user_name: OpenStack database root user name. Keep this as it is; it does not need to change unless you know that the root username was changed.
  • root_password: database root user password, obtained using the command: oc -n openstack get secret osp-secret -o jsonpath='{.data.DbRootPassword}' | base64 --decode
  • host: database host/FQDN name, obtained using the command: oc -n openstack get secret nova-api-config-data -o jsonpath='{.data.01-nova\.conf}' | base64 --decode | awk '/connection =/ {match($0, /@([^?/]+)/, arr); print arr[1]; exit}'
  • port: database port.

database.datamover_api and database.wlm_api
  • user: do not change.
  • password: set any password for the Trilio database users.
  • database: do not change.

rabbitmq.common
  • admin_user: the RabbitMQ admin user name, obtained by running: oc -n openstack exec -it rabbitmq-server-0 bash and then rabbitmqctl list_users
  • admin_password: the RabbitMQ admin user's password. Generally, in an RHOSO 18 cloud, default_user… is the admin user. If the default user is not an administrator, you need to use other secrets or commands to find that user's password. Refer to the command: oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d
  • host: the RabbitMQ cluster host name, obtained using the command: oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.host}' | base64 -d
  • port: the RabbitMQ management API port that can be connected to with the rabbitmqadmin command. Generally this is 15671 for RHOSO, so you can keep it as it is; 5671 is not a management API port. Refer to the command: oc -n openstack get cm rabbitmq-server-conf -o jsonpath='{.data.userDefinedConfiguration.conf}' | grep management.ssl.port
  • driver: RabbitMQ driver.
  • ssl: boolean; if SSL/TLS is enabled on RabbitMQ, set this to true, otherwise set it to false.

rabbitmq.datamover_api and rabbitmq.wlm_api
  • user: do not change this.
  • password: set this as per your choice.
  • vhost: do not change this.
  • transport_url: needs to be set. Edit '${PASSWORD}' and '${RABBITMQ_HOST}' in the given default URL. You can edit the SSL and port settings if necessary. Refer to the commands: oc describe secret rabbitmq-default-user -n openstack and oc get secret rabbitmq-default-user -n openstack -o jsonpath='{.data.username}' | base64 --decode

pod.replicas
  • These parameters set the number of replicas for the Trilio components. The default values are standard; you do not need to change them unless required. Note that the number of replicas for the triliovault_wlm_cron pod should always be set to 1.
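For convenience, the oc lookups referenced in the descriptions above can be run together before editing tvo-operator-inputs.yaml. This sketch simply reuses the commands documented above; adjust the secret and config map names if your cloud deviates from the defaults.

## Gather OpenStack-side values referenced above before editing tvo-operator-inputs.yaml
# common.memcached_servers
oc -n openstack get memcached -o jsonpath='{.items[*].status.serverList[*]}' | tr ' ' ','
# database.common.root_password
oc -n openstack get secret osp-secret -o jsonpath='{.data.DbRootPassword}' | base64 --decode
# database.common.host
oc -n openstack get secret nova-api-config-data -o jsonpath='{.data.01-nova\.conf}' | base64 --decode | awk '/connection =/ {match($0, /@([^?/]+)/, arr); print arr[1]; exit}'
# rabbitmq.common.admin_password and rabbitmq.common.host (default user)
oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.password}' | base64 -d
oc -n openstack get secret rabbitmq-default-user -o jsonpath='{.data.host}' | base64 -d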

3.3] Set correct labels to Kubernetes nodes.

Trilio control plane services will be deployed on OpenShift nodes having the label trilio-control-plane=enabled. It is recommended to use three Kubernetes nodes for the Trilio control plane services. Use the following commands to assign the correct labels to the nodes.

Get list of OpenShift nodes

oc get nodes

Assign the 'trilio-control-plane=enabled' label to any three nodes of your choice where you want to deploy the TVO control plane services.

oc label nodes <Openshift_node_name1> trilio-control-plane=enabled
oc label nodes <Openshift_node_name2> trilio-control-plane=enabled
oc label nodes <Openshift_node_name3> trilio-control-plane=enabled

Verify list of nodes having 'trilio-control-plane=enabled' label

oc get nodes -l trilio-control-plane=enabled

3.4] Create TLS certificate secrets

The following script creates TLS certificates for the Trilio services and creates secrets containing these certificates.

Edit the '$PUBLIC_ENDPOINT_DOMAIN' parameter in the utils/certificate.yaml file and set it to the correct value. Refer to the OpenStack Keystone service public endpoint.

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
vi certificate.yaml
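Optionally, the placeholder can be substituted non-interactively. The command below is only a convenience sketch; <PUBLIC_ENDPOINT_DOMAIN_VALUE> is a placeholder for the public endpoint domain taken from your Keystone public endpoint URL.

## Optional: replace the placeholder in one step
sed -i 's/\$PUBLIC_ENDPOINT_DOMAIN/<PUBLIC_ENDPOINT_DOMAIN_VALUE>/g' certificate.yaml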

Create certificates and secrets

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
./create_cert_secrets.sh

You can verify that these cert secrets were created in the 'trilio-openstack' namespace.

oc -n trilio-openstack describe secret cert-triliovault-datamover-public-svc 
oc -n trilio-openstack describe secret cert-triliovault-datamover-internal-svc
oc -n trilio-openstack describe secret cert-triliovault-wlm-public-svc
oc -n trilio-openstack describe secret cert-triliovault-wlm-internal-svc

3.5] Run deploy command

cd redhat-director-scripts/rhosp18/ctlplane-scripts/
./deploy_tvo_control_plane.sh

3.6] Check logs

oc logs -f tvo-operator-controller-manager-846f46787-5qnm2 -n tvo-operator-system
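The operator pod name carries a generated suffix that will differ in your environment; you can look it up first and substitute it into the logs command (standard oc usage, shown here only for convenience).

oc -n tvo-operator-system get pods
oc -n tvo-operator-system logs -f <tvo-operator-pod-name>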

3.7] Check deployment status

oc get tvocontrolplane -n trilio-openstack
oc describe tvocontrolplane <TVO_CONTROL_PLANE_OBJECT_NAME> -n trilio-openstack

3.8] Verify successful deployment of T4O control plane services.

[root@localhost ctlplane-scripts]# oc get pods -n trilio-openstack
NAME                                                READY   STATUS      RESTARTS   AGE
job-triliovault-datamover-api-db-init-ttq46         0/1     Completed   0          6m27s
job-triliovault-datamover-api-keystone-init-2ddh6   0/1     Completed   0          9m6s
job-triliovault-datamover-api-rabbitmq-init-27sx9   0/1     Completed   0          8m51s
job-triliovault-wlm-cloud-trust-lcncp               0/1     Completed   0          4m59s
job-triliovault-wlm-db-init-c48z4                   0/1     Completed   0          6m22s
job-triliovault-wlm-keystone-init-gxlmc             0/1     Completed   0          8m7s
job-triliovault-wlm-rabbitmq-init-6g94w             0/1     Completed   0          6m31s
triliovault-datamover-api-6f5fc957c9-j426z          1/1     Running     0          5m
triliovault-datamover-api-6f5fc957c9-sn9z8          1/1     Running     0          5m
triliovault-datamover-api-6f5fc957c9-xmvs5          1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-5pqqh      1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-g6g7m      1/1     Running     0          5m
triliovault-object-store-bt1-s3-9bfdf45d-tc7zb      1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-9kc9s    1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-cj94f    1/1     Running     0          5m
triliovault-object-store-bt2-s3-68ff4548cc-wfkgf    1/1     Running     0          5m
triliovault-object-store-bt3-s3-6bf58b9f77-6sqp7    1/1     Running     0          15m
triliovault-object-store-bt3-s3-6bf58b9f77-9tlh6    1/1     Running     0          15m
triliovault-object-store-bt3-s3-6bf58b9f77-g67cl    1/1     Running     0          15m
triliovault-wlm-api-66b46c7b6-7kjw4                 1/1     Running     0          5m
triliovault-wlm-api-66b46c7b6-rsknb                 1/1     Running     0          5m
triliovault-wlm-api-66b46c7b6-s8r92                 1/1     Running     0          5m
triliovault-wlm-cron-59f8ccfd-fhn8p                 1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-bphrd           1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-jjhjh           1/1     Running     0          5m
triliovault-wlm-scheduler-569ccc654-klbn4           1/1     Running     0          5m
triliovault-wlm-workloads-6869cff4b8-8spmz          1/1     Running     0          4m59s
triliovault-wlm-workloads-6869cff4b8-sf652          1/1     Running     0          4m59s
triliovault-wlm-workloads-6869cff4b8-x8h7z          1/1     Running     0          4m59s

Verify that the wlm cloud trust was created successfully

oc logs <job-triliovault-wlm-cloud-trust> -n trilio-openstack
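The cloud-trust job pod name also carries a random suffix (for example, job-triliovault-wlm-cloud-trust-lcncp in the listing above); you can look it up before fetching its logs.

oc -n trilio-openstack get pods | grep wlm-cloud-trust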

4] Install Trilio Data Plane Services

Set the context to the 'openstack' namespace. All the Trilio data plane resources will be created in the 'openstack' namespace.

oc config set-context --current --namespace=openstack

4.1] Fill in all input parameters needed for the Trilio data plane services in the following config map YAML file.

Create a config map containing all input parameters for the Trilio data plane services deployment.

cd redhat-director-scripts/rhosp18/dataplane-scripts/
vi cm-trilio-datamover.yaml

To get the 'dmapi_database_connection' value, you can use the following command:

oc -n trilio-openstack get secret triliovault-datamover-api-etc -o jsonpath='{.data.triliovault-datamover-api\.conf}' | base64 -d | grep connection

4.2] Create cm-trilio-datamover config map

## Create config map
oc -n openstack apply -f cm-trilio-datamover.yaml
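To confirm the config map was created, a generic check is enough; the exact name should match whatever is defined in cm-trilio-datamover.yaml.

oc -n openstack get cm | grep -i trilio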

4.3] Edit the file 'trilio-datamover-service.yaml' and set the correct tag for the container image 'openStackAnsibleEERunnerImage'

vi trilio-datamover-service.yaml

4.4] The following command creates the 'OpenStackDataPlaneService' resource for Trilio

oc -n openstack apply -f trilio-datamover-service.yaml
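To confirm the OpenStackDataPlaneService resource for Trilio was created, you can list the data plane services; the resource name is whatever is defined in trilio-datamover-service.yaml.

oc -n openstack get openstackdataplaneservice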

4.5] Trigger Deployment of Trilio data plane services

In this step we trigger the Ansible execution that deploys the Trilio data plane components. Get the Data Plane NodeSet names using the following command:

oc -n openstack get OpenStackDataPlaneNodeSet 

Edit two things in the following file:

  • Set a unique 'name' for every Ansible execution of the 'OpenStackDataPlaneDeployment'.

  • Set the correct name for the 'nodeSets' parameter; take the nodeSet name from the previous step.

vi trilio-data-plane-deployment.yaml
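For orientation only, a minimal 'trilio-data-plane-deployment.yaml' could look like the sketch below; the template shipped in the repository is authoritative, and the name and nodeSets values here are placeholders to be replaced with your own.

apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: trilio-datamover-deployment-1   # must be unique for every deployment run
  namespace: openstack
spec:
  nodeSets:
    - <NODESET_NAME>                    # from 'oc -n openstack get OpenStackDataPlaneNodeSet'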

To check the list of deployment names already used, use the following command:

## List deployment names already used
oc -n openstack get OpenStackDataPlaneDeployment

Trigger Trilio Data plane deployment execution

## Trigger deployment
oc -n openstack apply -f trilio-data-plane-deployment.yaml 

4.6] Check deployment logs.

Replace <OpenStackDataPlaneDeployment_NAME> with the deployment name used in the steps above.

oc -n openstack get pod -l openstackdataplanedeployment=<OpenStackDataPlaneDeployment_NAME>
oc -n openstack logs -f <trilio-datamover-pod-name>

If the deployment fails, or if it completes and you want to run it again, change the name of the 'OpenStackDataPlaneDeployment' CR resource to a new, unique value in the template 'trilio-data-plane-deployment.yaml' and create it again using the oc create command.

4.7] Verify that the deployment completed successfully

Log in to one of the compute nodes and check the Trilio compute service containers.

podman ps | grep trilio

4.8] Load necessary Linux drivers on all Compute hosts

For Trilio functionality to work, the nbd Linux kernel module must be loaded on all compute nodes where the Trilio Datamover services are installed. Load the nbd module using the following commands:

modprobe nbd nbds_max=128
lsmod | grep nbd
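Optionally, to keep the module loaded across compute node reboots (a common Linux approach, not part of the documented procedure above), persist the configuration as root:

echo "nbd" > /etc/modules-load.d/nbd.conf
echo "options nbd nbds_max=128" > /etc/modprobe.d/nbd.conf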

5] Install Trilio Horizon Plugin

Prerequisite: You should have already created the image pull secret for the Trilio container images (step 1.2).

  1. Get the openstackversion CR

[kni@localhost ~]$ oc get openstackversion -n openstack
NAME                     TARGET VERSION      AVAILABLE VERSION   DEPLOYED VERSION
openstack-controlplane   18.0.2-20240923.2   18.0.2-20240923.2   18.0.2-20240923.2
  2. Edit the openstackversion CR resource/object and change 'horizonImage' under 'customContainerImages'. Set 'horizonImage:' to the Trilio Horizon Plugin container image URL as shown below.

oc edit openstackversion <OPENSTACKVERSION_RESOURCE_NAME> -n openstack

For example: if resource name is 'openstack-controlplane'

oc edit openstackversion openstack-controlplane -n openstack
apiVersion: core.openstack.org/v1beta1
kind: OpenStackVersion
metadata:
  name: openstack-controlplane
spec:
  customContainerImages:
    horizonImage: docker.io/trilio/trilio-horizon-plugin:<IMAGE_TAG>

[...]
  3. Save the changes and exit (press Esc, then type :wq, as in the vi editor).

  4. Verify that the changes were applied correctly.
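One way to verify (a suggested check, assuming the resource name openstack-controlplane from the example above) is to read back the image reference from the CR and confirm that the Horizon pods are redeployed with it.

oc -n openstack get openstackversion openstack-controlplane -o jsonpath='{.spec.customContainerImages.horizonImage}'
oc -n openstack get pods | grep horizon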

Please refer to:
  • Red Hat OpenStack Services on OpenShift 18.0
  • Resources