Trilio Installation on RHOCP (with RHOSP17.1)

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio integrates natively with the RHOSP Director. Manual deployment methods are not supported for RHOSP.

1. Prepare for deployment

Use the following values for the placeholders triliovault_tag, trilio_branch, RHOSP_version, and CONTAINER-TAG-VERSION throughout this document, according to your OpenStack environment:

Trilio Release   triliovault_tag          trilio_branch   RHOSP_version   CONTAINER-TAG-VERSION
6.0.0            6.0.0-beta-1-rhosp17.1   6.0.0-beta-1    RHOSP17.1       6.0.0-beta-1
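For convenience, the table values can be captured as shell variables and reused in the later commands. The variable names below are illustrative and not required by any Trilio tooling:

```shell
# Values from the release table above; names are illustrative.
trilio_branch="6.0.0-beta-1"
triliovault_tag="6.0.0-beta-1-rhosp17.1"
CONTAINER_TAG_VERSION="6.0.0-beta-1"
echo "branch=${trilio_branch} container-tag=${CONTAINER_TAG_VERSION}"
```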

1.1] Select 'backup target' type

Backup target storage is used to store the backup images taken by Trilio. The following details are needed to configure it, depending on the backup target type.

Trilio supports the following backup target types:

a) NFS

- NFS share path

b) Amazon S3

- S3 Access Key
- Secret Key
- Region
- Bucket name

c) Other S3-compatible storage (e.g., Ceph-based S3)

- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be performed on the workstation node of an already installed RHOSP environment. The overcloud deployment command must already have run successfully, and the overcloud must be available.

The following command clones the triliovault-cfg-scripts github repository.

cd /home/stack
source stackrc
git clone -b {{ trilio_branch }} https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17

1.3] Set executable permissions for all shell scripts

cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/scripts/
chmod +x *.sh

1.4] If the backup target type is 'Ceph based S3' with SSL:

If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you must provide a CA chain certificate so that SSL requests can be validated. For every S3 backup target with a self-signed TLS certificate, copy the CA chain file into the Trilio puppet module at the following location, using the given file-name format. Replace the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> placeholders in the following command.

 cp <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-<S3_BACKUP_TARGET_NAME>.pem

For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE = 's3-ca.pem', then the command to copy this CA chain file into the Trilio puppet module would be:

cp s3-ca.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/s3-cert-BT2_S3.pem
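As an optional sanity check, you can confirm that the CA chain file parses as PEM with openssl before copying it. The sketch below generates a throwaway self-signed certificate to stand in for s3-ca.pem so it is runnable as-is; run the final x509 check against your real file instead:

```shell
# Generate a throwaway self-signed cert as a stand-in for s3-ca.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example-ca" \
  -keyout /tmp/s3-ca.key -out /tmp/s3-ca.pem 2>/dev/null
# The file should parse and print the expected subject.
openssl x509 -noout -subject -in /tmp/s3-ca.pem
```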

1.5] If the VM migration feature needs to be enabled:

More about this feature can be found here.

Please refer to this page to collect the necessary artifacts before continuing further.

1.5.1] Copy the VMware vSphere Virtual Disk Development Kit 7.0

Rename the file as vddk.tar.gz and place it at

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/vddk.tar.gz

1.5.2] For secure connection with the vCenter

Copy the vCenter SSL cert file to

/home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/

2] Update the overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.

Locate the roles_data.yaml file used for your OpenStack deployment. You will find it in the 'custom-templates' directory on the workstation node, where the cloud administrator keeps all custom heat templates (the directory name can be anything).

Add all Trilio services to this roles_data.yaml file.

2.1] Add Trilio Datamover Api and Trilio Workload Manager services to role data file

These services need to share the same role as the keystone and database services. In the case of the pre-defined roles, these services will run on the Controller role. In the case of custom-defined roles, use the same role where the OS::TripleO::Services::Keystone service is installed.

Add the following lines to the identified role:

- OS::TripleO::Services::TrilioDatamoverApi
- OS::TripleO::Services::TrilioWlmApi
- OS::TripleO::Services::TrilioWlmWorkloads
- OS::TripleO::Services::TrilioWlmScheduler
- OS::TripleO::Services::TrilioWlmCron
- OS::TripleO::Services::TrilioObjectStore
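As an illustration only (your roles_data.yaml will contain many more services), the Controller role entry might end up looking like this after the addition; the non-Trilio services shown are examples of what is typically already present:

```yaml
# Excerpt of roles_data.yaml (illustrative; most existing services omitted)
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::TrilioDatamoverApi
    - OS::TripleO::Services::TrilioWlmApi
    - OS::TripleO::Services::TrilioWlmWorkloads
    - OS::TripleO::Services::TrilioWlmScheduler
    - OS::TripleO::Services::TrilioWlmCron
    - OS::TripleO::Services::TrilioObjectStore
```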

2.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the Compute role. In the case of custom-defined roles, use the role that the nova-compute service uses.

Add the following lines to the identified role:

- OS::TripleO::Services::TrilioDatamover
- OS::TripleO::Services::TrilioObjectStore

This change requires re-creating the config map tripleo-tarball-config due to the change in custom templates; see section 9.
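A quick way to confirm the services were added is to grep the roles file. The sketch below runs against a minimal stand-in file so it is self-contained; point the same grep at your real roles_data.yaml:

```shell
# Stand-in roles file; replace with the path to your real roles_data.yaml.
ROLES_FILE=$(mktemp)
cat > "$ROLES_FILE" <<'EOF'
  - OS::TripleO::Services::TrilioDatamoverApi
  - OS::TripleO::Services::TrilioDatamover
EOF
# Count Trilio service entries; should match the number of lines you added.
grep -c 'OS::TripleO::Services::Trilio' "$ROLES_FILE"
```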

3] Prepare Trilio container images

Trilio containers are pushed to the Red Hat Container Registry (registry.connect.redhat.com). The container pull URLs are given below.

Trilio Datamover Container:       registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio Horizon Plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1
Trilio WLM Container:             registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
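For example, with CONTAINER-TAG-VERSION set to 6.0.0-beta-1 (from the table in section 1), the datamover pull URL expands as follows. The podman pull line is shown commented out because it requires a registry login:

```shell
TAG="6.0.0-beta-1"   # CONTAINER-TAG-VERSION from the release table
echo "registry.connect.redhat.com/trilio/trilio-datamover:${TAG}-rhosp17.1"
# To pre-pull manually after 'podman login registry.connect.redhat.com':
# podman pull registry.connect.redhat.com/trilio/trilio-datamover:${TAG}-rhosp17.1
```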

There are two registry methods available in the RedHat OpenStack Platform 17.1 on RHOCP.

  1. Remote Registry

  2. Image Registry on Red Hat Satellite Server

3.1] Remote Registry

1] Follow this section when 'Remote Registry' is used.

In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution.

Add 'registry.connect.redhat.com' Red Hat registry credentials to the containers-prepare-parameter.yaml environment file, as in the following example.

File Name: containers-prepare-parameter.yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
      ...
  ...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <REDHAT_REGISTRY_USERNAME> : <REDHAT_REGISTRY_PASSWORD>
    registry.connect.redhat.com:
      <REDHAT_REGISTRY_USERNAME> : <REDHAT_REGISTRY_PASSWORD>
  ContainerImageRegistryLogin: true

If you want to use TrilioVault images from Docker Hub, add the 'docker.io' registry to the containers-prepare-parameter.yaml environment file, as in the following example.

File Name: containers-prepare-parameter.yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
      ...
  ...
  ContainerImageRegistryCredentials:
    docker.io:
      <DOCKERHUB_REGISTRY_USERNAME> : <DOCKERHUB_REGISTRY_PASSWORD>
    registry.connect.redhat.com:
      <REDHAT_REGISTRY_USERNAME> : <REDHAT_REGISTRY_PASSWORD>
  ContainerImageRegistryLogin: true

Redhat document for remote registry method: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters

Note: The file 'containers-prepare-parameter.yaml' is created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat document above.

2] Make sure you have network connectivity to the above registries from all overcloud nodes; otherwise, the image pull operation will fail.

3] The user needs to manually populate the trilio_env.yaml file with the Trilio container image URLs, as given below:

trilio_env.yaml file path:

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/

RedHat Connect Registry URL: registry.connect.redhat.com
Dockerhub Registry URL: docker.io

You can use either registry, but make sure to use the same registry you configured in step 1 of this section.

 ## TrilioVault container Images.
   ContainerTriliovaultDatamoverImage: <REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: <REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: <REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ## If you do not want Trilio's horizon plugin to replace your horizon container, comment out the following line.
   ContainerHorizonImage: <REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have configured the Trilio image URLs in the necessary environment file. You can skip step 3.2.
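Filling in the placeholders can also be scripted with sed. A sketch, shown on a stand-in file so it is runnable as-is; run the same sed against your real trilio_env.yaml:

```shell
# Stand-in for trilio_env.yaml; run the sed against your real file instead.
F=$(mktemp)
echo 'ContainerTriliovaultWlmImage: <REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1' > "$F"
# Substitute the registry and container tag placeholders in one pass.
sed -i 's|<REGISTRY_URL>|registry.connect.redhat.com|g; s|<CONTAINER-TAG-VERSION>|6.0.0-beta-1|g' "$F"
cat "$F"
```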

3.2] Image Registry on Red Hat Satellite Server

Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

Populate the trilio_env.yaml with container URLs.

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments

$ grep '<CONTAINER-TAG-VERSION>-rhosp17.1' trilio_env.yaml
   ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp17.1
   ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp17.1

At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.

4] Provide environment details in trilio-env.yaml

Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide the backup target details and other necessary information. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated during the preparation of the container images; still, it is recommended to verify the container URLs.

You don't need to change anything under resource_registry; keep it as it is.

Parameters and their descriptions:

CloudAdminUserName
  Default value is admin. Provide the cloud admin user name of your overcloud.

CloudAdminProjectName
  Default value is admin. Provide the cloud admin project name of your overcloud.

CloudAdminDomainName
  Default value is default. Provide the cloud admin domain name of your overcloud.

CloudAdminPassword
  Provide the cloud admin user's password of your overcloud.

ContainerTriliovaultDatamoverImage
  The Trilio Datamover container image name has already been populated during the preparation of the container images. Still, it is recommended to verify the container URL.

ContainerTriliovaultDatamoverApiImage
  The Trilio Datamover API container image name has already been populated during the preparation of the container images. Still, it is recommended to verify the container URL.

ContainerTriliovaultWlmImage
  The Trilio WLM container image name has already been populated during the preparation of the container images. Still, it is recommended to verify the container URL.

ContainerHorizonImage
  The Horizon container image name has already been populated during the preparation of the container images. Still, it is recommended to verify the container URL.

TrilioBackupTargets
  List of backup targets for TrilioVault. These backup targets will be used to store backups taken by TrilioVault. Backup target examples and the format for the NFS and S3 types are already provided in the trilio_env.yaml file.

TrilioDatamoverOptVolumes
  List of extra volumes to mount on the 'triliovault_datamover' container. Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this document.

The following are the valid keys for defining any backup target:

NfsShares
  Provide the NFS share you want to use as the backup target for snapshots taken by TrilioVault.

NfsOptions
  This parameter sets the NFS mount options. Keep the default values, unless a special requirement exists.

S3Type
  If your backup target is S3, provide either amazon_s3 or ceph_s3, depending on the S3 type.

S3AccessKey
  Provide the S3 access key.

S3SecretKey
  Provide the S3 secret key.

S3RegionName
  Provide the S3 region. If your S3 type does not have a region parameter, keep the parameter as it is.

S3Bucket
  Provide the S3 bucket name.

S3EndpointUrl
  Provide the S3 endpoint URL; if your S3 type does not require it, keep it as it is. Not required for Amazon S3.

S3SignatureVersion
  Provide the S3 signature version.

S3AuthVersion
  Provide the S3 auth version.

S3SslEnabled
  Default value is False. If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to 'True'; otherwise keep it 'False'.
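Using the keys above, a TrilioBackupTargets entry might look like the following sketch. The exact structure (including whether targets are expressed as a list and any additional naming keys) should be copied from the examples already shipped in trilio_env.yaml; the values here (share path, bucket, endpoint) are placeholders:

```yaml
# Illustrative only; copy the real structure from trilio_env.yaml.
TrilioBackupTargets:
  - NfsShares: '192.168.1.34:/mnt/tvault'    # placeholder NFS share
    NfsOptions: 'nolock,soft,timeo=180'      # keep defaults unless required
  - S3Type: ceph_s3
    S3AccessKey: <ACCESS_KEY>
    S3SecretKey: <SECRET_KEY>
    S3RegionName: us-east-1
    S3Bucket: trilio-backups                 # placeholder bucket name
    S3EndpointUrl: https://s3.example.internal:443
    S3SslEnabled: True
```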

4.1] Update triliovault object store yaml

After you fill in the backup target details in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. The script updates the 'services/triliovault-object-store.yaml' file; you do not need to verify that file manually.

cd redhat-director-scripts/rhosp17/scripts/
dnf install python3-ruamel-yaml
python3 update_object_store_yaml.py

4.2] For enabling the VM migration feature, populate the below mentioned parameters in trilio_env.yaml

More about this feature can be found here.

Parameters and their descriptions:

VmwareToOpenstackMigrationEnabled
  Set it to True if this feature is required; otherwise keep it False. Populate all the parameters below if it is set to True.

VcenterUrl
  vCenter access URL, for example: https://vcenter-1.infra.trilio.io/

VcenterUsername
  Access user name (check out the privilege requirements here)

VcenterPassword
  Access user's password

VcenterNoSsl
  Set it to False if the connection is to be established securely; set it to True if SSL verification is to be ignored.

VcenterCACertFileName
  If VcenterNoSsl is set to False, provide the name of the SSL certificate file uploaded in step 1.5.2; otherwise, keep it blank.
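Put together, the migration parameters in trilio_env.yaml might look like the following sketch. The user name, password placeholder, and certificate file name are illustrative values; the URL is the example from the table above:

```yaml
# Illustrative values only.
VmwareToOpenstackMigrationEnabled: True
VcenterUrl: https://vcenter-1.infra.trilio.io/
VcenterUsername: migration-svc@vsphere.local   # placeholder account
VcenterPassword: <VCENTER_PASSWORD>
VcenterNoSsl: False
VcenterCACertFileName: vcenter-ca.pem          # file uploaded in step 1.5.2
```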

5] Generate random passwords for TrilioVault OpenStack resources

The user needs to generate random passwords for Trilio resources using the following script.

This script will generate random passwords for all Trilio resources that are going to get created in OpenStack cloud.

5.1] Change the directory and run the script

cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./generate_passwords.sh

5.2] The output will be written to the below file.

Include this file in your overcloud deploy command as an environment file with the "-e" option:

-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/passwords.yaml

6] Fetch ids of required OpenStack resources [Run from OpenStackClient Pod]

Run all steps of this section from 'openstackclient' pod.

Login to openstackclient pod.

oc rsh -n openstack openstackclient

6.1] Source the 'overcloudrc' file of the cloud admin user. This is needed to run the OpenStack CLI and to clone the repo.

For this section only, source the overcloudrc file of the overcloud:

source <OVERCLOUD_RC_FILE>
git clone https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/
git checkout {{ trilio_branch }}

6.2] You must have filled in the cloud admin user details of the overcloud in the 'trilio_env.yaml' file, in the 'Provide environment details in trilio-env.yaml' section. If not, please do so now.

cd redhat-director-scripts/rhosp17/environments/
vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml

6.3] Cloud admin user should have admin role on cloud admin domain

openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin

# Example
openstack role add --user admin --domain default admin

6.4] After this, user can run the following script.

cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts
./create_wlm_ids_conf.sh

The output will be written to

cat triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/triliovault_wlm_ids.conf

7] Load necessary Linux drivers on all Controller and Compute nodes

For TrilioVault functionality to work, the following Linux kernel modules must be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).

7.1] Load the nbd module

modprobe nbd nbds_max=128
lsmod | grep nbd

7.2] Load the fuse module

modprobe fuse
lsmod | grep fuse
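modprobe loads the modules only until the next reboot. To make the loads persistent, you can use systemd's modules-load.d mechanism (standard on RHEL). The sketch below writes the config files to a temporary directory so it is runnable as-is; on the nodes, they belong in /etc/modules-load.d/ and /etc/modprobe.d/:

```shell
# Written to a temp dir here; install as /etc/modules-load.d/trilio.conf
# and /etc/modprobe.d/trilio-nbd.conf on each node.
DIR=$(mktemp -d)
printf 'nbd\nfuse\n' > "$DIR/trilio.conf"             # modules to load at boot
printf 'options nbd nbds_max=128\n' > "$DIR/trilio-nbd.conf"  # nbd option
cat "$DIR/trilio.conf"
```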

8] Upload Trilio puppet module [From inside openstackclient pod]

Use ansible ad hoc commands or playbooks to copy the Trilio puppet module from openstack client pod to all overcloud nodes.

8.1] Login to openstackclient pod

Login to openstackclient pod.

oc rsh -n openstack openstackclient

8.2] Create an Ansible playbook file at path '/home/cloud-admin/copy_trilio_puppet_module.yaml' and copy the following content into it.

---
- hosts: all
  become: yes
  tasks:
    - name: Copy trilio puppet module
      ansible.builtin.copy:
        src: /home/cloud-admin/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio
        dest: /etc/puppet/modules/
        owner: root
        group: root
        mode: '0777'

8.3] Run ansible playbook to copy Trilio puppet module to all controller and compute nodes

This step copies the Trilio puppet module from '/home/cloud-admin/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio' to '/etc/puppet/modules/trilio' on all controller and compute nodes.

ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/copy_trilio_puppet_module.yaml --limit Controller,Compute

9] Recreate config map tripleo-tarball-config due to change in custom templates [On workstation node]

In this step we copy all TrilioVault heat service templates to the custom templates folder and re-create custom-config.tar.gz. This section applies to the following TrilioVault heat templates:

services/triliovault-datamover-api.yaml
services/triliovault-datamover.yaml
services/triliovault-horizon.yaml
services/triliovault-wlm-api.yaml
services/triliovault-wlm-cron-pacemaker.yaml
services/triliovault-wlm-scheduler.yaml
services/triliovault-wlm-workloads.yaml

All heat template paths should be relative to the following directory:

/usr/share/openstack-tripleo-heat-templates

9.1] Identify the custom_templates folder location (the folder from which the tar file custom-config.tar.gz was created) and navigate to it. If you are not using any custom heat templates, you can skip this step.

cd PATH_TO/custom_templates/
mkdir -p deployment/triliovault
cd deployment/triliovault/
cp ~/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/services/*.yaml .

9.2] If you are not using a custom_templates tar (i.e., you are using all default heat templates), create one for TrilioVault. [Skip this step if you performed the previous step]

cd ~
mkdir -p custom_templates/deployment/triliovault
cd custom_templates/deployment/triliovault/
cp ~/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/services/*.yaml .

9.3] Copy the endpoint_map.yaml heat template, which has the TrilioVault endpoint data appended to the OpenStack endpoint map data.

cd PATH_TO/custom_templates/
mkdir -p network/endpoints
cd network/endpoints/
cp ~/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/endpoint_map.yaml .

9.4] Create the tarball custom-config.tar.gz

cd PATH_TO/custom_templates/
tar -cvzf ../custom-config.tar.gz .
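You can confirm the tarball carries the templates at the expected relative paths with tar -t. The sketch below builds a stand-in directory so it runs anywhere; against your real tarball, just run the final tar -tzf line:

```shell
# Stand-in custom_templates directory with one Trilio template.
D=$(mktemp -d)
mkdir -p "$D/deployment/triliovault"
touch "$D/deployment/triliovault/triliovault-datamover.yaml"
tar -C "$D" -czf "$D.tar.gz" .
# Paths inside the tarball must be relative (./deployment/triliovault/...).
tar -tzf "$D.tar.gz" | grep triliovault-datamover.yaml
```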

9.5] Create a config map named tripleo-tarball-config from the above tarball. If the config map already exists on RHOCP, you may need to delete the existing config map or use another oc command to update it. Check the config map name in "openstack-config-generator.yaml" and use the same name.

First we need to delete the existing config map:

oc delete cm tripleo-tarball-config -n openstack
cd ~/
oc create configmap tripleo-tarball-config --from-file=custom-config.tar.gz -n openstack
oc get configmap/tripleo-tarball-config -n openstack

10] Recreate config map 'heat-env-config' due to change in environment files [On workstation node]

RHOSP Reference document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_adding-custom-environment-files-to-the-overcloud-configuration_OSPdO-with-HCI

10.1] On the workstation node, identify the custom environment files directory that the overcloud deployment is using to create the heat-env-config config map.

In this step, copy the Trilio environment files to the custom environment files directory. This directory name can be anything.

Let's say this directory path is "PATH_TO/dir_custom_environment_files"

cd <PATH_TO>/dir_custom_environment_files/
cp <PATH_TO_TRILIO_CFG_SCRIPTS>/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/defaults.yaml .
cp <PATH_TO_TRILIO_CFG_SCRIPTS>/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/trilio_env.yaml .
cp <PATH_TO_TRILIO_CFG_SCRIPTS>/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/passwords.yaml .
cp <PATH_TO_TRILIO_CFG_SCRIPTS>/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/environments/<TRILIO_ENV_TLS_YAML_FILE> .

10.2] Create the heat-env-config ConfigMap object again. You can delete and recreate the existing config map, or update it. Check the config map name in "openstack-config-generator.yaml" and use the same name.

In the following commands, replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.

oc delete cm heat-env-config -n openstack

oc create configmap -n openstack heat-env-config \
 --from-file=$PATH_TO/<dir_custom_environment_files>/ \
 --dry-run=client -o yaml | oc apply -f -

oc get configmap/heat-env-config -n openstack

11] Creating Ansible playbooks for overcloud configuration with the OpenStackConfigGenerator CRD [On Workstation node]

RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_creating-ansible-playbooks-for-overcloud-configuration-with-the-openstackplaybookgenerator-CRD_OSPdO-overcloud-deploy

11.1] Delete the existing OpenStackConfigGenerator CRD resource 'default'. This step does not delete the overcloud.

Before running this step, please get approval from the Red Hat deployment team.

oc delete osconfiggenerator default -n openstack

11.2] Create the OpenStackConfigGenerator resource again. Make sure you are in the directory where the 'openstack-config-generator.yaml' file exists. This YAML file must have been used to deploy the RHOSP cloud.

oc create -f openstack-config-generator.yaml -n openstack

Check the status of the resource and wait until its status changes to "Finished":

oc get openstackconfiggenerator/default -n openstack

[kni@localhost heat-config]$ oc get openstackconfiggenerator/default -n openstack
NAME      STATUS
default   Finished

12] Deploy overcloud with Trilio environment [On Workstation node]

RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_applying-overcloud-configuration-with-director-operator_OSPdO-overcloud-deploy

In this step we apply the new Ansible playbooks generated in the previous step to the overcloud. Use the config version of the playbooks generated in the previous step, and update that config version in the openstack-deployment.yaml file:

vi openstack-deployment.yaml

Apply the updated definition.

oc apply -f openstack-deployment.yaml -n openstack

Check the deployment progress and logs:

oc logs -f jobs/deploy-openstack-default -n openstack

13] Verify deployment

  • Make sure you see a successful deployment message at the bottom of the following logs. You may need to adjust the deployment name 'deploy-openstack-default' as per your environment.

oc logs -f jobs/deploy-openstack-default -n openstack
