
Installing Trilio Components

Once the Trilio VM or the cluster of Trilio VMs has been spun up, the actual installation process can begin. This process consists of the following steps:

  1. Install the Trilio dm-api service on the control plane

  2. Install the Trilio datamover service on the compute plane

  3. Install the Trilio Horizon plugin into the Horizon service

How these steps look in detail depends on the OpenStack distribution Trilio is installed in. Each supported OpenStack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from beginning to end.


Installing on RHOSP

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.

1] Prepare for deployment

1.1] Select 'backup target' type

The backup target storage is used to store the backup images taken by Trilio. The details needed to configure it depend on the target type.

The following backup target types are supported by Trilio:

a) NFS

  • NFS share path

b) Amazon S3

  • S3 Access Key

  • Secret Key

  • Region

  • Bucket name

c) Other S3 compatible storage (like Ceph based S3)

  • S3 Access Key

  • Secret Key

  • Region

  • Endpoint URL (valid for S3 other than Amazon S3)

  • Bucket name

1.2] Clone triliovault-cfg-scripts repository

The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud deploy command must already have run successfully and the overcloud must be available.

All commands need to be run as user 'stack' on the undercloud node.

The following command clones the triliovault-cfg-scripts GitHub repository.

Next, access the Red Hat Director scripts for the RHOSP version in use.

RHOSP 13

RHOSP 16.1

RHOSP 16.2

RHOSP 17.0


The remaining documentation will use the following path for examples:

1.3] If backup target type is 'Ceph based S3' with SSL:

If the backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the CA chain certificate needs to be provided to validate the SSL requests. Rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'.
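As an illustrative sketch of that step, assuming the CA chain file is named 'ca-chain.pem' (a placeholder name; use your actual CA chain file), with the target directory shown for RHOSP 16.1 under /tmp for demonstration:

```shell
# Placeholder CA chain file; in practice this is the CA chain for your Ceph S3 endpoint.
printf -- '-----BEGIN CERTIFICATE-----\nplaceholder\n-----END CERTIFICATE-----\n' > /tmp/ca-chain.pem

# Target directory inside the deployment scripts (real path is under the
# triliovault-cfg-scripts checkout for your RHOSP release).
TARGET_DIR=/tmp/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files
mkdir -p "$TARGET_DIR"

# Rename the CA chain cert to the file name Trilio expects and copy it over.
cp /tmp/ca-chain.pem /tmp/s3-cert.pem
cp /tmp/s3-cert.pem "$TARGET_DIR/"
```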

2] Upload Trilio puppet module

The following commands upload the Trilio puppet module to the overcloud registry. The actual upload happens upon the next deployment.

The Trilio puppet module is uploaded to the overcloud as a swift deploy artifact with the heat resource name 'DeployArtifactURLs'. Trilio's puppet module artifact file looks like the following:

Note: If your overcloud deploy command uses any other deploy artifact through an environment file, you need to merge the Trilio deploy artifact URL and your URL into a single file.

  • How to check whether your overcloud deploy environment files use deploy artifacts: search for the string 'DeployArtifactURLs' in your environment files (only those mentioned in the overcloud deploy command with the '-e' option). If any such environment file contains it, your deploy command is using a deploy artifact.

  • In that case you need to merge all deploy artifacts into a single file. Refer to the following steps.

Let's say your artifact file path is "/home/stack/templates/user-artifacts.yaml"; then follow these steps to merge both URLs into a single file and pass that new file to the overcloud deploy command with the '-e' option.
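As a runnable sketch of the merge (the file contents and URLs below are made-up placeholders, mirroring the grep-and-append approach shown later in this document):

```shell
# Illustrative Trilio-generated artifact file; the URL is a placeholder.
cat > /tmp/puppet-modules-url.yaml <<'EOF'
parameter_defaults:
    DeployArtifactURLs:
    - 'http://172.25.0.103:8080/v1/AUTH_example1/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=sig1'
EOF

# Illustrative pre-existing user artifact file.
cat > /tmp/user-artifacts.yaml <<'EOF'
parameter_defaults:
    DeployArtifactURLs:
    - 'http://172.25.0.103:8080/v1/AUTH_example2/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=sig2'
EOF

# Append Trilio's artifact URL line(s) to the user's file, so a single
# DeployArtifactURLs list carries both entries.
grep http /tmp/puppet-modules-url.yaml >> /tmp/user-artifacts.yaml

# The merged file now lists both URLs.
grep -c http /tmp/user-artifacts.yaml
```

Pass the merged file (here /tmp/user-artifacts.yaml) to the overcloud deploy command with the '-e' option.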

3] Update overcloud roles data file to include Trilio services

Trilio contains multiple services. Add these services to your roles_data.yaml.

In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:

/usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Add the following services to the roles_data.yaml


All commands need to be run as user 'stack'

3.1] Add Trilio Datamover Api Service to role data file

This service needs to share the same role as the keystone and database services. In the case of the pre-defined roles, these services run on the role Controller. In the case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.

Add the following line to the identified role:

3.2] Add Trilio Datamover Service to role data file

This service needs to share the same role as the nova-compute service. In the case of the pre-defined roles, the nova-compute service runs on the role Compute. In the case of custom-defined roles, it is necessary to use the role the nova-compute service is using.

Add the following line to the identified role:

3.3] Add Trilio Horizon Service to role data file

This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:
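Taken together, the three service additions above can be sketched against a minimal roles file; the layout below is illustrative only, not a complete TripleO roles_data.yaml:

```shell
# Minimal illustrative roles file with pre-defined Controller and Compute roles.
cat > /tmp/roles_data.yaml <<'EOF'
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::Horizon
    - OS::TripleO::Services::TrilioDatamoverApi   # 3.1: same role as Keystone
    - OS::TripleO::Services::TrilioHorizon        # 3.3: same role as Horizon
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::TrilioDatamover      # 3.2: same role as nova-compute
EOF

# Verify all three Trilio services are present.
grep -c 'OS::TripleO::Services::Trilio' /tmp/roles_data.yaml
```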

4] Prepare Trilio container images


All commands need to be run as user 'stack'

Trilio containers are pushed to the Red Hat Container Registry at 'registry.connect.redhat.com'. The container pull URLs are given below.

Please note that using the hotfix containers requires that the Trilio Appliance is upgraded to the desired hotfix level as well.

Read <HOTFIX-TAG-VERSION> as 4.3.2 in the sections below.

RHOSP 13

RHOSP 16.1

RHOSP 16.2

RHOSP 17.0

There are three registry methods available in Red Hat OpenStack Platform.

  1. Remote Registry

  2. Local Registry

  3. Satellite Server

4.1] Remote Registry

Follow this section when 'Remote Registry' is used.

In this method, container images get downloaded directly onto the overcloud nodes during execution of the overcloud deploy/update command. The remote registry can be set to the Red Hat registry or any other private registry. The registry credentials need to be provided in the 'containers-prepare-parameter.yaml' file.

  1. Make sure the other OpenStack service images also use the same method to pull container images. If that is not the case, you cannot use this method.

  2. Populate 'containers-prepare-parameter.yaml' with content like the following. The important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true', and the registry credentials. Trilio container images are published to the registry 'registry.connect.redhat.com'. Credentials for the registry 'registry.redhat.io' will work for 'registry.connect.redhat.com' as well.

Red Hat document for the remote registry method:

Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat document above.
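A quick sanity check for the two important parameters can be sketched as follows; the file content mirrors the example in this section, with illustrative credentials:

```shell
# Illustrative containers-prepare-parameter.yaml for the remote-registry method.
cat > /tmp/containers-prepare-parameter.yaml <<'EOF'
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      namespace: registry.redhat.io/...
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      myuser: 'p@55w0rd!'
    registry.connect.redhat.com:
      myuser: 'p@55w0rd!'
  ContainerImageRegistryLogin: true
EOF

# Both settings must be present for this method to work as described.
grep -q 'push_destination: false' /tmp/containers-prepare-parameter.yaml && \
grep -q 'ContainerImageRegistryLogin: true' /tmp/containers-prepare-parameter.yaml && \
echo "remote registry parameters OK"
```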

  3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise the image pull operation will fail.

  4. Manually populate the trilio_env.yaml file with the Trilio container image URLs as given below:


trilio_env.yaml file path:

At this step, you have configured the Trilio image URLs in the necessary environment file.

4.2] Local Registry

Follow this section when a local registry is used on the undercloud.

In this case it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts which pull the containers from 'registry.connect.redhat.com', push them to the undercloud registry, and update the trilio_env.yaml.

RHOSP 13

RHOSP 16.1

RHOSP 16.2

RHOSP 17.0

At this step, you have downloaded the Trilio container images and configured the image URLs in the necessary environment file.

4.3] Red Hat Satellite Server

Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

Populate the trilio_env.yaml with the container URLs.

RHOSP 13

RHOSP 16.1

RHOSP 16.2

RHOSP 17.0

At this step, you have downloaded the Trilio container images into the Red Hat Satellite server and configured the image URLs in the necessary environment file.

5] Configure multi-IP NFS

This section is only required when the multi-IP feature for NFS is required.

This feature allows setting the IP used to access the NFS volume per datamover instead of globally.

On the undercloud node, change the directory:

Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.

Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.
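The 'Name' column can be pulled out of the `openstack server list` table with a small awk filter. The sample table below is trimmed from the example output later in this document; in practice, pipe the real command into the same filter:

```shell
# Sample 'openstack server list' output (trimmed, illustrative).
cat > /tmp/server-list.txt <<'EOF'
+----+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name                          | Status | Networks             | Image          | Flavor  |
+----+-------------------------------+--------+----------------------+----------------+---------+
| a1 | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| b2 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
+----+-------------------------------+--------+----------------------+----------------+---------+
EOF

# Print only the Name column, skipping the header and border rows.
awk -F'|' '/overcloud/ {gsub(/ /, "", $3); print $3}' /tmp/server-list.txt
```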

Run this command on the undercloud after sourcing 'stackrc'.

Edit the input map file and fill in all the details. Refer to the for details about the structure.

vi triliovault_nfs_map_input.yml

Update pyyaml on the undercloud node only:

If pip isn't available, please install pip on the undercloud.

Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

The result will be in the file 'triliovault_nfs_map_output.yml'.

Validate the output map file: open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.

vi triliovault_nfs_map_output.yml

Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'
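The append step filters only the host-to-share mapping lines (those with two colons) out of the generated file, as in the `grep ':.*:'` command shown later in this document. A local sketch with made-up hostnames and shares:

```shell
# Illustrative generated map file; real hostnames come from 'openstack server list'.
cat > /tmp/triliovault_nfs_map_output.yml <<'EOF'
# generated one-to-one map
overcloudtrain1-novacompute-0: 192.168.122.101:/opt/tvault
overcloudtrain1-novacompute-1: 192.168.122.102:/opt/tvault
EOF

: > /tmp/trilio_nfs_map.yaml
# Only lines with a 'host: ip:/share' shape carry the mapping; comments are skipped.
grep ':.*:' /tmp/triliovault_nfs_map_output.yml >> /tmp/trilio_nfs_map.yaml
cat /tmp/trilio_nfs_map.yaml
```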

Validate the changes in file 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below.

Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as the backup target.

6] Provide environment details in trilio_env.yaml

Provide the backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated during the preparation of the container images. It is still recommended to verify the container URLs.

The following information is required additionally:

  • Network for the datamover api

  • datamover password

  • Backup target type {nfs/s3}

  • In case of NFS:

    • List of NFS Shares

    • NFS options

    • MultiIPNfsEnabled

NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

  • In case of S3:

    • S3 type {amazon_s3/ceph_s3}

    • S3 Access Key

    • S3 Secret Key

    • S3 Region name

    • S3 Bucket

    • S3 Endpoint URL

    • S3 Signature Version

    • S3 Auth Version

    • S3 SSL Enabled {true/false}

    • S3 SSL Cert

Use ceph_s3 for any non-AWS S3 backup targets.

7] Advanced Settings/Configuration

7.1] Customized haproxy configuration for the Trilio dmapi service

The existing default haproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the dmapi are observed or other reasons are known.

The following is the haproxy conf file location on the haproxy nodes of the overcloud. The Trilio datamover api service haproxy configuration gets added to this file.

The Trilio datamover haproxy default configuration from the above file looks as follows:

The user can change the following configuration parameter values.

To change these default values, do the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release; valid values are rhosp13, rhosp16.1, rhosp16.2, rhosp17.0):

For RHOSP13

For RHOSP16.1

For RHOSP16.2

For RHOSP17.0

ii) Search for the following entries and edit them as required.

iii) Save the changes.
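The edit in steps i-iii can be sketched with sed on a sample listener block; the parameter names and values below are illustrative examples only, not Trilio's full haproxy template:

```shell
# Illustrative haproxy defaults for the dmapi listener (placeholder values).
cat > /tmp/haproxy-dmapi.cfg <<'EOF'
listen trilio_datamover_api
  timeout client 10m
  timeout server 10m
  maxconn 50000
EOF

# Raise both timeouts to 20 minutes (example change only).
sed 's/timeout client 10m/timeout client 20m/; s/timeout server 10m/timeout server 20m/' \
  /tmp/haproxy-dmapi.cfg > /tmp/haproxy-dmapi.cfg.new
mv /tmp/haproxy-dmapi.cfg.new /tmp/haproxy-dmapi.cfg

grep 'timeout' /tmp/haproxy-dmapi.cfg
```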

7.2] Configure Custom Volume/Directory Mounts for the Trilio Datamover Service

  • To add one or more extra volume/directory mounts to the Trilio Datamover Service container, use the following heat environment file. A variable named 'TrilioDatamoverOptVolumes' is available in this file.

  • The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts.

  • Edit this file and add the volume mounts in the format below.

  • For example, in the volume mount "/mnt/dir2:/var/dir2", "/mnt/dir2" is a directory available on the compute host and "/var/dir2" is the mount point inside the datamover container.

  • Next, pass this file to the overcloud deploy command with the '-e' option as shown below.
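A minimal sketch of such an environment file, using the example mount from this section (the exact file layout may differ from Trilio's shipped template):

```shell
# Illustrative heat environment file carrying one extra bind mount for the
# datamover container: host dir /mnt/dir2, container mount point /var/dir2.
cat > /tmp/trilio_datamover_opt_volumes.yaml <<'EOF'
parameter_defaults:
  TrilioDatamoverOptVolumes:
    - /mnt/dir2:/var/dir2
EOF

grep -q 'TrilioDatamoverOptVolumes' /tmp/trilio_datamover_opt_volumes.yaml && echo "mount list present"
```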

8] Deploy overcloud with trilio environment

Use the following heat environment files and roles data file in the overcloud deploy command:

  1. trilio_env.yaml

  2. roles_data.yaml

  3. The correct Trilio endpoint map file as per the available Keystone endpoint configuration:

    • Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml

    • Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml

    • Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml

  4. If the multi-IP NFS feature is activated, the correct trilio_nfs_map.yaml file

To include new environment files use the '-e' option, and for the roles data file use the '-r' option. An example overcloud deploy command is shown below:
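An invocation along these lines is expected; the exact environment file set depends on your cloud (the endpoint-map file below assumes the public-DNS TLS case, and paths are illustrative), so the command is only assembled and printed here rather than executed:

```shell
# Assemble an example deploy command as a string; on a real undercloud you
# would run it directly instead of echoing it.
SCRIPTS=/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1
DEPLOY_CMD="openstack overcloud deploy --templates \
  -e $SCRIPTS/environments/trilio_env.yaml \
  -e $SCRIPTS/environments/trilio_env_tls_endpoints_public_dns.yaml \
  -r /home/stack/templates/roles_data.yaml"

# Record and show the command for review.
printf '%s\n' "$DEPLOY_CMD" | tee /tmp/deploy-cmd.txt
```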

Post deployment, for a multipath-enabled environment, log into the respective datamover container and add uxsock_timeout with value 60000 (i.e. 60 seconds) in /etc/multipath.conf. Then restart the datamover container.

9] Verify deployment

If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck that you followed the complete documentation.

9.1] On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check on the respective nodes according to the configured roles_data.yaml.

Verify the haproxy configuration under:

9.2] On Compute node

Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

9.3] On the node with Horizon service

Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack Horizon plus Trilio's Horizon plugin.

If the Trilio Horizon container is in a restarting state on RHOSP 16.1.8/RHOSP 16.2.4, then use the below workaround.

10] Troubleshooting for overcloud deployment failures

Trilio components are deployed using puppet scripts.

In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html



    cd /home/stack
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp13/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/
    cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP13, RHOSP16.1 and RHOSP16.2
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Command is same for RHOSP17.0 but the command output and file content would be different
    
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    ./upload_puppet_module.sh
    
    ## Output of above command looks like following for RHOSP17.0
    Creating tarball...
    Tarball created.
    renamed '/tmp/puppet-modules-P3duCg9/puppet-modules.tar.gz' -> '/var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz'
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## Above command creates following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    ## For RHOSP13, RHOSP16.1 and RHOSP16.2
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    ## For RHOSP17.0
    (undercloud) [stack@undercloud17-3 scripts]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
    parameter_defaults:
      DeployArtifactFILEs:
      - /var/lib/tripleo/artifacts/overcloud-artifacts/puppet-modules.tar.gz
    
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml | grep http >> /home/stack/templates/user-artifacts.yaml
    (undercloud) [stack@ucloud161 ~]$ cat /home/stack/templates/user-artifacts.yaml
    # Heat environment to deploy artifacts via Swift Temp URL(s)
    parameter_defaults:
        DeployArtifactURLs:
        - 'http://172.25.0.103:8080/v1/AUTH_57ba596219d143c8b076e9fcc4139f3g/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=dc972b7ce75226c278ab3fa8237d31cc1f2115sc&temp_url_expires=3446738365'
        - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    OS::TripleO::Services::TrilioHorizon
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    Trilio Datamover container:        registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio Datamover Api Container:   registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
    Trilio horizon plugin:            registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    File Name: containers-prepare-parameter.yaml
    parameter_defaults:
      ContainerImagePrepare:
      - push_destination: false
        set:
          namespace: registry.redhat.io/...
          ...
      ...
      ContainerImageRegistryCredentials:
        registry.redhat.io:
          myuser: 'p@55w0rd!'
        registry.connect.redhat.com:
          myuser: 'p@55w0rd!'
      ContainerImageRegistryLogin: true
    cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
    # For RHOSP13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    # For RHOSP16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    # For RHOSP16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    # For RHOSP17.0
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/scripts/
    
    ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp13
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    
    $ docker image list | grep <HOTFIX-TAG-VERSION>-rhosp13
    172.30.5.101:8787/trilio/trilio-datamover                  <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    registry.connect.redhat.com/trilio/trilio-datamover        <HOTFIX-TAG-VERSION>-rhosp13        f2dfb36bb176        8 weeks ago         3.61 GB
    172.30.5.101:8787/trilio/trilio-datamover-api              <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-datamover-api    <HOTFIX-TAG-VERSION>-rhosp13        5d62f572a00c        8 weeks ago         2.24 GB
    registry.connect.redhat.com/trilio/trilio-horizon-plugin   <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    172.30.5.101:8787/trilio/trilio-horizon-plugin             <HOTFIX-TAG-VERSION>-rhosp13        27c4de28e5ae        2 months ago        2.27 GB
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.1
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp16.1
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1  |
    -----------------------------------------------------------------------------------------------------
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp16.2
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp16.2
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2  |
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_ip/hostname> <HOTFIX-TAG-VERSION>-rhosp17.0
    
    ## Verify changes
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    
    $ openstack tripleo container image list | grep <HOTFIX-TAG-VERSION>-rhosp17.0
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0 |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0      |
    | docker://tlsundercloud.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0  |
    $ grep '<HOTFIX-TAG-VERSION>-rhosp13' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp13
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp13
       DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp13
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.1' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    $ grep '<HOTFIX-TAG-VERSION>-rhosp16.2' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2
    $ grep '<HOTFIX-TAG-VERSION>-rhosp17.0' trilio_env.yaml
       DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp17.0
       DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp17.0
       ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp17.0
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
    resource_registry:
      OS::TripleO::Services::TrilioDatamover: ../services/trilio-datamover.yaml
      OS::TripleO::Services::TrilioDatamoverApi: ../services/trilio-datamover-api.yaml
      OS::TripleO::Services::TrilioHorizon: ../services/trilio-horizon.yaml
    
      # NOTE: If there are additional customizations to the endpoint map (e.g. for
      # other integrations), this will need to be regenerated.
      OS::TripleO::EndpointMap: endpoint_map.yaml
    
    parameter_defaults:
    
       ## Enable Trilio's quota functionality on horizon
       ExtraConfig:
         horizon::customization_module: 'dashboards.overrides'
    
       ## Define network map for trilio datamover api service
       ServiceNetMap:
           TrilioDatamoverApiNetwork: internal_api
    
       ## Trilio Datamover Password for keystone and database
       TrilioDatamoverPassword: "test1234"
    
       ## Trilio container pull urls
       DockerTrilioDatamoverImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.1
       DockerTrilioDmApiImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## If you do not want Trilio's horizon plugin to replace your horizon container, just comment out the following line.
       ContainerHorizonImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.1
    
       ## Backup target type nfs/s3, used to store snapshots taken by triliovault
       BackupTargetType: 'nfs'
    
       ## If the backup target NFS share supports multiple IPs and you want to use more than one of those IPs, then
       ## set this parameter to True. Otherwise keep it False.
       MultiIPNfsEnabled: False
    
       ## For backup target 'nfs'
       NfsShares: '192.168.122.101:/opt/tvault'
       NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
    
       ## For backup target 's3'
       ## S3 type: amazon_s3/ceph_s3
       S3Type: 'amazon_s3'
    
       ## S3 access key
       S3AccessKey: ''
    
       ## S3 secret key
       S3SecretKey: ''
    
       ## S3 region, if your s3 does not have any region, just keep the parameter as it is
       S3RegionName: ''
    
       ## S3 bucket name
       S3Bucket: ''
    
       ## S3 endpoint url, not required for Amazon S3, keep it as it is
       S3EndpointUrl: ''
    
       ## S3 signature version
       S3SignatureVersion: 'default'
    
       ## S3 Auth version
       S3AuthVersion: 'DEFAULT'
    
       ## If S3 backend is not Amazon S3 and SSL is enabled on S3 endpoint url then change it to 'True', otherwise keep it as 'False'
       S3SslEnabled: False
    
       ## If the S3 backend is not Amazon S3, SSL is enabled on the S3 endpoint URL, and the SSL certificates are self-signed, then
       ## the user needs to set this parameter value to '/etc/tvault-contego/s3-cert.pem'; otherwise keep its value as an empty string.
       S3SslCert: ''
    
       ## Configure 'dmapi_workers' parameter of '/etc/dmapi/dmapi.conf' file
       ## This parameter value used to spawn the number of dmapi processes to handle the incoming api requests.
       ## If your dmapi node has 'n' cpu cores, it is recommended to set this parameter to '4*n'.
       ## If the dmapi_workers field is not present in the config file, the default value equals the number of cores present on the node.
       DmApiWorkers: 16
    
       ## Don't edit following parameter
       EnablePackageInstall: True
    
    
       ## Load 'rbd' kernel module on all compute nodes
       ComputeParameters:
         ExtraKernelModules:
           rbd: {}
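The DmApiWorkers sizing rule above (4 workers per CPU core) can be sanity-checked directly on the node that will run the dmapi service. A minimal sketch, assuming `nproc` is available:

```shell
# Recommended DmApiWorkers = 4 x CPU cores of the dmapi node.
cores=$(nproc)
echo "DmApiWorkers: $((cores * 4))"
```

For a 4-core node this prints `DmApiWorkers: 16`, matching the value used in the sample environment file above.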
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    listen trilio_datamover_api
      bind 172.25.0.107:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
      bind 172.25.0.107:8784 transparent
      balance roundrobin
      http-request set-header X-Forwarded-Proto https if { ssl_fc }
      http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
      http-request set-header X-Forwarded-Port %[dst_port]
      maxconn 50000
      option httpchk
      option httplog
      retries 5
      timeout check 10m
      timeout client 10m
      timeout connect 10m
      timeout http-request 10m
      timeout queue 10m
      timeout server 10m
      server overcloud-controller-0.internalapi.localdomain 172.25.0.106:8784 check fall 5 inter 2000 rise 2
    
    retries 5
    timeout http-request 10m
    timeout queue 10m
    timeout connect 10m
    timeout client 10m
    timeout server 10m
    timeout check 10m
    balance roundrobin
    maxconn 50000
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/services/trilio-datamover-api.yaml
    /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17.0/services/trilio-datamover-api.yaml
              tripleo::haproxy::trilio_datamover_api::options:
                 'retries': '5'
                 'maxconn': '50000'
                 'balance': 'roundrobin'
                 'timeout http-request': '10m'
                 'timeout queue': '10m'
                 'timeout connect': '10m'
                 'timeout client': '10m'
                 'timeout server': '10m'
                 'timeout check': '10m'
    triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE>/environments/trilio_datamover_opt_volumes.yaml
    parameter_defaults:
      TrilioDatamoverOptVolumes:
        - /opt/dir1:/opt/dir1
        - /mnt/dir2:/var/dir2
    openstack overcloud deploy --templates \
    -e <> \
    .
    .
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_datamover_opt_volumes.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env_tls_endpoints_public_dns.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_nfs_map.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-rhosp16.2        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-rhosp16.2                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-rhosp16.2       kolla_start           5 days ago  Up 5 days ago         horizon
    ## Either of the below workarounds should be performed on all the controller nodes where issue occurs for horizon pod.
    
    option-1: Restart the memcached service on controller using systemctl (command: systemctl restart tripleo_memcached.service)
    
    option-2: Restart the memcached pod (command: podman restart memcached)
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
    => If the trilio datamover api container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_dmapi
    
    tail -f /var/log/containers/trilio-datamover-api/dmapi.log
    
    
    
    => If the trilio datamover container does not start or is stuck in a restarting state, use the following logs to debug.
    
    
    docker logs trilio_datamover
    
    tail -f /var/log/containers/trilio-datamover/tvault-contego.log

    Installing on Canonical OpenStack

    Trilio and Canonical have partnered to provide a native deployment of Trilio using Juju charms.

    Those Juju charms are publicly available as open-source charms.

    circle-exclamation

    Trilio provides the Juju charms to deploy Trilio 4.2 in Canonical OpenStack from the Yoga release onwards only. Juju charms to deploy Trilio 4.2 in Canonical OpenStack up to the Wallaby release are developed and maintained by Canonical.

    circle-check

    Canonical OpenStack doesn't require the Trilio Cluster. The required services are installed and managed via Juju charms.

    The documentation of the charms can be found here:

    hashtag
    Juju charms for OpenStack Yoga release onwards

    hashtag
    Juju charms for other supported OpenStack releases up to Wallaby

    Prerequisite

    Have a Canonical OpenStack base setup deployed for a required release, such as Jammy Zed/Yoga, Focal Yoga/Wallaby/Victoria/Ussuri, or Bionic Ussuri/Queens.

    Steps to install the Trilio charms

    1. Export the OpenStack base bundle

    2. Create a Trilio overlay bundle as per the OpenStack setup release using the charms given above.

    Some sample Trilio overlay bundles can be found .

    circle-info

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    triangle-exclamation

    Trilio File Search functionality requires that the Trilio Workload manager (trilio-wlm) be deployed as a virtual machine. File Search will not function if the Trilio Workload manager (trilio-wlm) is running as LXD container(s).

    3. If File Search functionality is required, provision any additional node(s) needed to deploy the Trilio Workload manager (trilio-wlm) as a VM instead of LXD container(s).

    4. Commission the additional node from MAAS UI.

    5. Do a dry run to check if the Trilio bundle is working

    6. Do the deployment

    7. Wait till all the Trilio units are deployed successfully. Check the status via juju status command.

    8. Once the deployment is complete, perform the below operations:

    1. Create cloud admin trust

    1. Add license

    Note: Reach out to the Trilio support team for the license file.

    For multipath enabled environments, perform the following actions:

    1. Log into each nova compute node

    2. Add uxsock_timeout with value 60000 (i.e. 60 sec) in /etc/multipath.conf

    3. Restart the tvault-contego service
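A minimal sketch of the /etc/multipath.conf change from step 2. The defaults section shown here is illustrative only; keep your existing settings and just add the uxsock_timeout line:

```
defaults {
    uxsock_timeout 60000
}
```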

    Sample Trilio overlay bundles

    circle-info

    For bionic-queens, the openstack-origin parameter value for the trilio-dm-api charm must be cloud:bionic-train

    circle-info

    For the AWS S3 storage backend, use http://s3.amazonaws.com as the S3 endpoint URL.

    A few Sample overlay bundles for different OpenStack versions can be found .

    Juju charms for OpenStack Yoga release onwards:

    | Charm names | Channel | Supported releases |
    | --- | --- | --- |
    | trilio-charmers-trilio-wlm-jammy | latest/edge | Jammy (Ubuntu 22.04) |
    | trilio-charmers-trilio-dm-api-jammy | latest/edge | Jammy (Ubuntu 22.04) |
    | trilio-charmers-trilio-data-mover-jammy | latest/edge | Jammy (Ubuntu 22.04) |
    | trilio-charmers-trilio-horizon-plugin-jammy | latest/edge | Jammy (Ubuntu 22.04) |
    | trilio-charmers-trilio-wlm-focal | latest/edge | Focal (Ubuntu 20.04) |
    | trilio-charmers-trilio-dm-api-focal | latest/edge | Focal (Ubuntu 20.04) |
    | trilio-charmers-trilio-data-mover-focal | latest/edge | Focal (Ubuntu 20.04) |
    | trilio-charmers-trilio-horizon-plugin-focal | latest/edge | Focal (Ubuntu 20.04) |

    Juju charms for other supported OpenStack releases up to Wallaby:

    | Charm names | Channel | Supported releases |
    | --- | --- | --- |
    | trilio-wlm | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
    | trilio-dm-api | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
    | trilio-data-mover | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |
    | trilio-horizon-plugin | 4.2/stable | Focal (Ubuntu 20.04), Bionic (Ubuntu 18.04) |

    juju export-bundle --filename openstack_base_file.yaml
    juju deploy --dry-run ./openstack_base_file.yaml --overlay <Trilio bundle path>
    juju deploy ./openstack_base_file.yaml --overlay <Trilio bundle path>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    Juju 3.x
    juju run --wait trilio-wlm/leader create-cloud-admin-trust password=<openstack admin password>
    juju attach-resource trilio-wlm license=<Path to trilio license file>
    Juju 2.x
    juju run-action --wait trilio-wlm/leader create-license
    Juju 3.x
    juju run --wait trilio-wlm/leader create-license

    Installing on TripleO Train

    hashtag
    1. Prepare for deployment

    hashtag
    1.1] Select 'backup target' type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

    The following backup target types are supported by Trilio

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key - Secret Key - Region - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name

    hashtag
    1.2] Clone triliovault-cfg-scripts repository

    The following steps are to be done on the 'undercloud' node of an already installed TripleO Train environment. The overcloud-deploy command must already have been run successfully and the overcloud should be available.

    circle-exclamation

    All commands need to be run as user 'stack' on undercloud node

    circle-exclamation

    TripleO on CentOS 8 is not supported anymore, as CentOS Linux 8 reached End of Life on December 31st, 2021.

    The following command clones the triliovault-cfg-scripts github repository.

    circle-exclamation

    Please note that the Trilio Appliance needs to be updated to the latest HF as well.

    hashtag
    1.3] If the backup target type is 'Ceph based S3' with SSL:

    If your backup target is Ceph S3 with SSL and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. For that, the user needs to rename the CA chain cert file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files

    hashtag
    2] Upload Trilio puppet module

    hashtag
    3] Update overcloud roles data file to include Trilio services

    Trilio contains multiple services. Add these services to your roles_data.yaml.

    circle-info

    If roles_data.yaml has not been customized, the default file can be found on the undercloud at:

    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml

    Add the following services to the roles_data.yaml

    circle-exclamation

    All commands need to be run as user 'stack'

    hashtag
    3.1] Add Trilio Datamover Api Service to role data file

    This service needs to share the same role as the keystone and database services. With the pre-defined roles, these services run on the role Controller. With custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.

    Add the following line to the identified role:
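As a sketch, the entry fits into roles_data.yaml like this (the Controller role shown here is abridged; a real file lists many more services under ServicesDefault):

```yaml
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::MySQL
    - OS::TripleO::Services::TrilioDatamoverApi
```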

    hashtag
    3.2] Add Trilio Datamover Service to role data file

    This service needs to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the role Compute. With custom-defined roles, it is necessary to use the role where the nova-compute service runs.

    Add the following line to the identified role:

    hashtag
    3.3] Add Trilio Horizon Service to role data file

    This service needs to share the same role as the OpenStack Horizon server. In the case of the pre-defined roles, the Horizon service runs on the role Controller. Add the following line to the identified role:

    hashtag
    4] Prepare Trilio container images

    circle-exclamation

    All commands need to be run as user 'stack'

    circle-info

    Replace <HOTFIX-TAG-VERSION> with 4.3.2 in the sections below

    Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.

    hashtag
    CentOS7

    There are two registry methods available in TripleO Openstack Platform.

    1. Remote Registry

    2. Local Registry

    hashtag
    4.1] Remote Registry

    Follow this section when 'Remote Registry' is used.

    For this method, it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from the Dockerhub registry.

    Populate the trilio_env.yaml with container URLs for:

    • Trilio Datamover container

    • Trilio Datamover api container

    • Trilio Horizon Plugin

    circle-info

    trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments

    hashtag
    4.2] Local Registry

    Follow this section when 'local registry' is used on the undercloud.

    Run the following script. The script pulls the triliovault containers and updates the triliovault environment file with the URLs.

    The changes can be verified using the following commands.

    hashtag
    5] Configure multi-IP NFS

    circle-info

    This section is only required when the multi-IP feature for NFS is required.

    This feature allows setting the IP to access the NFS Volume per datamover instead of globally.

    On Undercloud node, change the directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/IP map.

    Get the compute hostnames from the following command. Check the 'Name' column. Use the exact hostnames in the 'triliovault_nfs_map_input.yml' file.

    circle-info

    Run this command on undercloud by sourcing 'stackrc'.

    Edit the input map file and fill in all the details. Refer to this page for details about the structure.

    vi triliovault_nfs_map_input.yml

    Update pyYAML on the undercloud node only

    circle-exclamation

    If pip isn't available please install pip on the undercloud.

    Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.

    The result will be in file - 'triliovault_nfs_map_output.yml'

    Validate the output map file: open 'triliovault_nfs_map_output.yml' (available in the current directory) and validate that all compute nodes are covered with all the necessary NFS shares.

    vi triliovault_nfs_map_output.yml

    Append this output map file to 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml'

    Validate the changes in file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml

    Include the environment file (trilio_nfs_map.yaml) in overcloud deploy command with '-e' option as shown below.

    Ensure that MultiIPNfsEnabled is set to true in trilio_env.yaml file and that NFS is used as the backup target.
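The append step above uses a simple two-colon filter (`grep ':.*:'`) to pick out only the host-to-share lines from the generated file. A minimal sketch with hypothetical hostnames and shares (your environment will differ):

```shell
# Hypothetical sample of generate_nfs_map.py output; real hostnames and
# shares will differ in your environment.
cat > triliovault_nfs_map_output.yml <<'EOF'
compute_nfs_map:
  overcloudtrain1-novacompute-0: 192.168.122.101:/opt/tvault
  overcloudtrain1-novacompute-1: 192.168.122.102:/opt/tvault
EOF

# Only lines containing a 'host: share' pair (two colons) are appended to
# trilio_nfs_map.yaml; the top-level key already exists there.
grep ':.*:' triliovault_nfs_map_output.yml
```

The filter drops the top-level `compute_nfs_map:` key (single colon) and keeps the per-compute entries, which is exactly what the append into trilio_nfs_map.yaml expects.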

    hashtag
    6] Fill in Trilio environment details

    Fill in the Trilio details in the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml. The triliovault environment file is self-explanatory: fill in the details of the backup target, verify the image URLs, and check the other details.

    circle-info

    NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10

    hashtag
    7] Install Trilio on Overcloud

    Use the following heat environment files and roles data file in the overcloud deploy command:

    1. trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations

    2. roles_data.yaml: This file contains overcloud roles data with Trilio roles added.

    3. Use the correct trilio endpoint map file as per your keystone endpoint configuration.
    - Instead of tls-endpoints-public-dns.yaml, use 'environments/trilio_env_tls_endpoints_public_dns.yaml'
    - Instead of tls-endpoints-public-ip.yaml, use 'environments/trilio_env_tls_endpoints_public_ip.yaml'
    - Instead of tls-everywhere-endpoints-dns.yaml, use 'environments/trilio_env_tls_everywhere_dns.yaml'

    The deploy command with the triliovault environment files looks like the following.

    circle-info

    Post deployment, for a multipath enabled environment, log into the respective datamover container and add uxsock_timeout with value 60000 (i.e. 60 sec) in /etc/multipath.conf. Then restart the datamover container.

    hashtag
    8] Verify the deployment

    triangle-exclamation

    If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.

    hashtag
    8.1] On the Controller node

    Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes. When the role for these containers is not "controller", check the respective nodes according to the configured roles_data.yaml.

    Verify the haproxy configuration under:

    hashtag
    8.2] On Compute node

    Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

    hashtag
    8.3] On the node with Horizon service

    Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container has the latest OpenStack horizon plus Trilio's horizon plugin.

    hashtag
    10] Troubleshooting for overcloud deployment failures

    Trilio components are deployed using puppet scripts.

    If the overcloud deployment fails, run the following commands to get the list of errors. The following document also provides valuable insights:

    Installing on Ansible Openstack

    circle-info

    Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.

    hashtag
    Change the nova user id on the Trilio Nodes

    https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
    The 'nova' user ID and group ID on the Trilio nodes need to be set the same as on the compute node(s). Trilio by default uses the nova user ID (UID) and group ID (GID) 162:162. Ansible OpenStack does not always use nova user ID 162 on the compute nodes. Perform the following steps on all Trilio nodes if the nova UID and GID are not in sync with the compute node(s):
    1. Download the shell script that will change the user-id

    2. Assign executable permissions

    3. Edit script to use the correct nova id

    4. Execute the script

    5. Verify that 'nova' user and group id has changed to the desired value
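For illustration only, the UID/GID change boils down to rewriting nova's passwd entry. The sketch below runs against a scratch file so it is safe to try; the real script operates on /etc/passwd and /etc/group and also chowns nova-owned paths:

```shell
nova_uid=162   # must match the compute node's nova UID/GID
# Scratch passwd-style entry standing in for the real /etc/passwd line.
printf 'nova:x:999:999::/var/lib/nova:/bin/false\n' > passwd.sample
# Rewrite the UID and GID fields of the nova entry.
sed -i "s/^nova:x:[0-9]*:[0-9]*:/nova:x:${nova_uid}:${nova_uid}:/" passwd.sample
grep '^nova' passwd.sample
# -> nova:x:162:162::/var/lib/nova:/bin/false
```

After the real script runs, `id nova` on the Trilio node should report the same UID/GID as on the compute nodes.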

    hashtag
    Prepare deployment host

    Clone triliovault-cfg-scripts from github repository on Ansible Host.

    Available values for <branch>: hotfix-4-TVO/4.2

    Copy Ansible roles and vars to required places.

    circle-info

    In case of installing on OSA Victoria or OSA Wallaby, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria or wallaby respectively

    Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.

    Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml

    Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml

    Add the following content to the created file.

    Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.

    Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml

    Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.

    circle-info

    Note:

    1. From 4.2HF4 onwards, the default prefilled value, i.e. 4.2.64, will be used for TVAULT_PACKAGE_VERSION.

    2. In case of more than one nova virtual environment, if the user wants to install the tvault-contego service in a specific nova virtual environment on the compute node(s), they need to uncomment the var nova_virtual_env and then set its value.

    3. In case of more than one horizon plugin configured on openstack, the user can specify under which horizon plugins to install the Trilio Horizon Plugin by setting the horizon_virtual_env parameter. The default value of horizon_virtual_env is '/openstack/venvs/horizon*'

    4. NFS options for Cohesity NFS : nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10
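A hypothetical excerpt of /etc/openstack_deploy/user_tvault_vars.yml covering notes 1 to 3 above. The values are examples only; verify the parameter names and defaults against the shipped file:

```yaml
TVAULT_PACKAGE_VERSION: 4.2.64
# Uncomment to target a specific nova virtual environment (path is an example):
#nova_virtual_env: /openstack/venvs/nova-<version>
horizon_virtual_env: '/openstack/venvs/horizon*'
```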

    hashtag
    Configure Multi-IP NFS

    circle-info

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    A new parameter is added to the /etc/openstack_deploy/user_tvault_vars.yml file for Multi-IP NFS

    Change the Directory

    Edit file 'triliovault_nfs_map_input.yml' in current directory and provide compute host and NFS share/IP map.

    Please take a look at this page to learn about the format of the file.

    Update pyyaml on the Openstack Ansible server node only

    Execute the generate_nfs_map.py file to create a one-to-one mapping of compute nodes and NFS shares.

    The result will be in the file 'triliovault_nfs_map_output.yml' in the current directory.

    Validate the output map file: open 'triliovault_nfs_map_output.yml' (available in the current directory) and validate that all compute nodes are mapped with all the necessary NFS shares.

    Append the content of triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml

    hashtag
    Deploy Trilio components

    Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.

    If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:

    hashtag
    Verify the Trilio deployment

    Verify that the triliovault datamover api service deployed and started well. Run the below commands on the controller node(s).

    Verify that the triliovault datamover service deployed and started well on the compute node(s). Run the following command on the compute node(s).

    Verify that the triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.

    Run the following command on the Horizon container.

    Verify the haproxy settings on the controller node(s) using the below commands.

    hashtag
    Update to the latest hotfix

    After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.

    To update the environment follow this procedure.

    cd /home/stack
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/
    cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/puppet/trilio/files/
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    chmod +x *.sh
    ./upload_puppet_module.sh
    
    ## Output of the above command looks like the following.
    Creating tarball...
    Tarball created.
    Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
    Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
    +-----------------------+---------------------+----------------------------------+
    | object                | container           | etag                             |
    +-----------------------+---------------------+----------------------------------+
    | puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
    +-----------------------+---------------------+----------------------------------+
    
    ## Above command creates the following file.
    ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
    
    'OS::TripleO::Services::TrilioDatamoverApi'
    'OS::TripleO::Services::TrilioDatamover'
    'OS::TripleO::Services::TrilioHorizon'
    Trilio Datamover container:       docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
    Trilio Datamover Api Container:   docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
    Trilio horizon plugin:            docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    # For TripleO Train Centos7
    $ grep 'Image' trilio_env.yaml
       DockerTrilioDatamoverImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: docker.io/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: docker.io/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/scripts/
    
    sudo ./prepare_trilio_images.sh <undercloud_registry_hostname_or_ip> <OS_platform> <HOTFIX-TAG-VERSION>-tripleo <container_tool_available_on_undercloud>
    
    Options OS_platform: [centos7]
    Options container_tool_available_on_undercloud: [docker, podman]
    
    ## To get undercloud registry hostname/ip, we have two approaches. Use either one.
    1. openstack tripleo container image list
    
    2. find your 'containers-prepare-parameter.yaml' (from overcloud deploy command) and search for 'push_destination'
    cat /home/stack/containers-prepare-parameter.yaml | grep push_destination
     - push_destination: "undercloud.ctlplane.ooo.prod1:8787"
    
    Here, 'undercloud.ctlplane.ooo.prod1' is the undercloud registry hostname. Use it in the command as shown in the following example.
    
    # Command Example:
    sudo ./prepare_trilio_images.sh undercloud.ctlplane.ooo.prod1 centos7 <HOTFIX-TAG-VERSION>-tripleo podman
    
    ## Verify changes
    # For TripleO Train Centos7
    $ grep '<HOTFIX-TAG-VERSION>-tripleo' ../environments/trilio_env.yaml
       DockerTrilioDatamoverImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo
       DockerTrilioDmApiImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo
       DockerHorizonImage: prod1-undercloud.demo:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo
    ## For Centos7 Train
    
    (undercloud) [stack@undercloud redhat-director-scripts/tripleo-train]$ openstack tripleo container image list | grep trilio
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                       |                        |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo                  |                   |
    | docker://undercloud.ctlplane.localdomain:8787/trilio/tripleo-train-centos7-trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo                  |
    cd triliovault-cfg-scripts/common/
    (undercloud) [stack@ucqa161 ~]$ openstack server list
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | ID                                   | Name                          | Status | Networks             | Image          | Flavor  |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    | 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2  | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
    | 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0  | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
    | a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1  | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
    | 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
    | c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
    +--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
    ## On Python3 env
    sudo pip3 install PyYAML==5.1
    
    ## On Python2 env
    sudo pip install PyYAML==5.1
    ## On Python3 env
    python3 ./generate_nfs_map.py
    
    ## On Python2 env
    python ./generate_nfs_map.py
    grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_nfs_map.yaml
    openstack overcloud deploy --templates \
      -e /home/stack/templates/node-info.yaml \
      -e /home/stack/templates/overcloud_images.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
      -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
      -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env_tls_endpoints_public_dns.yaml \
      --ntp-server 192.168.1.34 \
      --libvirt-type qemu \
      --log-file overcloud_deploy.log \
      -r /home/stack/templates/roles_data.yaml
    [root@overcloud-controller-0 heat-admin]# podman ps | grep trilio
    26fcb9194566  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:<HOTFIX-TAG-VERSION>-tripleo        kolla_start           5 days ago  Up 5 days ago         trilio_dmapi
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
    [root@overcloud-novacompute-0 heat-admin]# podman ps | grep trilio
    b1840444cc59  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:<HOTFIX-TAG-VERSION>-tripleo                    kolla_start           5 days ago  Up 5 days ago         trilio_datamover
    [root@overcloud-controller-0 heat-admin]# podman ps | grep horizon
    094971d0f5a9  rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:<HOTFIX-TAG-VERSION>-tripleo       kolla_start           5 days ago  Up 5 days ago         horizon
    openstack stack failures list overcloud
    heat stack-list --show-nested -f "status=FAILED"
    heat resource-list --nested-depth 5 overcloud | grep FAILED
    
     ##=> If the trilio datamover api container does not start or stays in a restarting state, use the following logs to debug.
    docker logs trilio_dmapi
    tailf /var/log/containers/trilio-datamover-api/dmapi.log
    
     ##=> If the trilio datamover container does not start or stays in a restarting state, use the following logs to debug.
    docker logs trilio_datamover
    tailf /var/log/containers/trilio-datamover/tvault-contego.log
    curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    chmod +x nova_userid.sh
     vi nova_userid.sh  # change the nova user_id and group_id to the uid & gid present on the compute nodes.
    ./nova_userid.sh
    id nova
    git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/
    cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
    cp ansible/main-install.yml   /opt/openstack-ansible/playbooks/os-tvault-install.yml
    cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml
    cp ansible/tvault_pre_install.yml /opt/openstack-ansible/playbooks/
    - import_playbook: os-tvault-install.yml
    # Datamover haproxy setting
    haproxy_extra_services:
      - service:
          haproxy_service_name: datamover_service
          haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
          haproxy_ssl: "{{ haproxy_ssl }}"
          haproxy_port: 8784
          haproxy_balance_type: http
          haproxy_balance_alg: roundrobin
          haproxy_timeout_client: 10m
          haproxy_timeout_server: 10m
          haproxy_backend_options:
            - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
    cat > /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml 
    component_skel:
      dmapi_api:
        belongs_to:
          - dmapi_all
    
    container_skel:
      dmapi_container:
        belongs_to:
          - tvault-dmapi_containers
        contains:
          - dmapi_api
    
    physical_skel:
      tvault-dmapi_containers:
        belongs_to:
          - all_containers
      tvault-dmapi_hosts:
        belongs_to:
          - hosts
    #tvault-dmapi
     #tvault-dmapi
     tvault-dmapi_hosts:   # Add controller details in this section, as the Trilio dmapi service resides on controller nodes.
       infra-1:            # controller host name
         ip: 172.26.0.3    # IP address of the controller
       infra-2:            # If there are multiple controllers, add their details in the same manner as shown for infra-2
         ip: 172.26.0.4    
         
     #tvault-datamover
     tvault_compute_hosts: # Add compute details in this section, as the Trilio datamover resides on compute nodes.
       infra-1:            # compute host name
         ip: 172.26.0.7    # IP address of the compute node
       infra-2:            # If there are multiple compute nodes, add their details in the same manner as shown for infra-2
         ip: 172.26.0.8
    ##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
    #ip address of TVM
    IP_ADDRESS: sample_tvault_ip_address
    
    ##Time Zone
    TIME_ZONE: "Etc/UTC"
    
    ## Don't update or modify the value of TVAULT_PACKAGE_VERSION
     ## The default value is '4.2.64'
    TVAULT_PACKAGE_VERSION: 4.2.64
    
    # Update Openstack dist code name like ussuri etc.
    OPENSTACK_DIST: ussuri
    
     #The following statement needs to be added to the nova sudoers file:
     #nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
     #These changes are required for the Datamover; otherwise the Datamover will not work.
     #Are you sure? Please set the variable to
     #  UPDATE_NOVA_SUDOERS_FILE: proceed
     #otherwise the ansible tvault-contego installation will exit
    UPDATE_NOVA_SUDOERS_FILE: proceed
    
    ##### Select snapshot storage type #####
     #Details for NFS as snapshot storage; NFS_SHARES entries should begin with "-".
    ##True/False
    NFS: False
    NFS_SHARES: 
              - sample_nfs_server_ip1:sample_share_path
              - sample_nfs_server_ip2:sample_share_path
    
    #if NFS_OPTS is empty then default value will be "nolock,soft,timeo=180,intr,lookupcache=none"
    NFS_OPTS: ""
    
     ## Valid for 'nfs' backup target only.
     ## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend,
     ## then set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'.
    multi_ip_nfs_enabled: False
    
    #### Details for S3 as snapshot storage
    ##True/False
    S3: False
    VAULT_S3_ACCESS_KEY: sample_s3_access_key
    VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
    VAULT_S3_REGION_NAME: sample_s3_region_name
    VAULT_S3_BUCKET: sample_s3_bucket
    VAULT_S3_SIGNATURE_VERSION: default
    #### S3 Specific Backend Configurations
     #### Provide one of the following two values in the s3_type variable; the string's case must match
    #Amazon/Other_S3_Compatible
    s3_type: sample_s3_type
    #### Required field(s) for all S3 backends except Amazon
    VAULT_S3_ENDPOINT_URL: ""
    #True/False
    VAULT_S3_SECURE: True
    VAULT_S3_SSL_CERT: ""
    
    ###details of datamover API
    ##If SSL is enabled "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
    #DMAPI_ENABLED_SSL_APIS: dmapi
    ##If SSL is disabled "DMAPI_ENABLED_SSL_APIS" value should be empty.
    DMAPI_ENABLED_SSL_APIS: ""
    DMAPI_SSL_CERT: ""
    DMAPI_SSL_KEY: ""
    
    ## Trilio dmapi_workers count
    ## Default value of dmapi_workers is 16
    dmapi_workers: 16
    
    #### Any service is using Ceph Backend then set ceph_backend_enabled value to True
    #True/False
    ceph_backend_enabled: False
    
    ## Provide Horizon Virtual Env path from Horizon_container
    ## e.g. '/openstack/venvs/horizon-23.1.0'
    horizon_virtual_env: '/openstack/venvs/horizon*'
    
    ## When More Than One Nova Virtual Env. On Compute Node(s) and
    ## User Wants To Specify Specific Nova Virtual Env. From Existing
    ## Then Only Uncomment the var nova_virtual_env and pass value like 'openstack/venvs/nova-23.2.0'
    
    #nova_virtual_env: 'openstack/venvs/nova-23.2.0'
    
    #Set verbosity level and run playbooks with -vvv option to display custom debug messages
    verbosity_level: 3
    
    #******************************************************************************************************************************************************************
    ###static fields for tvault contego extension ,Please Do not Edit Below Variables
    #******************************************************************************************************************************************************************
    #SSL path
    DMAPI_SSL_CERT_DIR: /opt/config-certs/dmapi
    VAULT_S3_SSL_CERT_DIR: /opt/config-certs/s3
    RABBITMQ_SSL_DIR: /opt/config-certs/rabbitmq
    DMAPI_SSL_CERT_PATH: /opt/config-certs/dmapi/dmapi-ca.pem
    DMAPI_SSL_KEY_PATH: /opt/config-certs/dmapi/dmapi.key
    VAULT_S3_SSL_CERT_PATH: /opt/config-certs/s3/ca_cert.pem
    RABBITMQ_SSL_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq.pem
    RABBITMQ_SSL_KEY_PATH: /opt/config-certs/rabbitmq/rabbitmq.key
    RABBITMQ_SSL_CA_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq-ca.pem
    
    PORT_NO: 8085
    PYPI_PORT: 8081
    DMAPI_USR: dmapi
    DMAPI_GRP: dmapi
    #tvault contego file path
    TVAULT_CONTEGO_CONF: /etc/tvault-contego/tvault-contego.conf
    TVAULT_OBJECT_STORE_CONF: /etc/tvault-object-store/tvault-object-store.conf
    NOVA_CONF_FILE: /etc/nova/nova.conf
    #Nova distribution specific configuration file path
    NOVA_DIST_CONF_FILE: /usr/share/nova/nova-dist.conf
    TVAULT_CONTEGO_EXT_USER: nova
    TVAULT_CONTEGO_EXT_GROUP: nova
    TVAULT_DATA_DIR_MODE: 0775
    TVAULT_DATA_DIR_OLD: /var/triliovault
    TVAULT_DATA_DIR: /var/triliovault-mounts
    TVAULT_CONTEGO_VIRTENV: /home/tvault
    TVAULT_CONTEGO_VIRTENV_PATH: "{{TVAULT_CONTEGO_VIRTENV}}/.virtenv"
    TVAULT_CONTEGO_EXT_BIN: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/tvault-contego"
    TVAULT_CONTEGO_EXT_PYTHON: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/python"
    TVAULT_CONTEGO_EXT_OBJECT_STORE: ""
    TVAULT_CONTEGO_EXT_BACKEND_TYPE: ""
    TVAULT_CONTEGO_EXT_S3: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/lib/python2.7/site-packages/contego/nova/extension/driver/s3vaultfuse.py"
    privsep_helper_file: /home/tvault/.virtenv/bin/privsep-helper
    pip_version: 7.1.2
    virsh_version: "1.2.8"
    contego_service_file_path: /etc/systemd/system/tvault-contego.service
    contego_service_ulimits_count: 65536
    contego_service_debian_path: /etc/init/tvault-contego.conf
    objstore_service_file_path:  /etc/systemd/system/tvault-object-store.service
    objstore_service_debian_path: /etc/init/tvault-object-store.conf
    ubuntu: "Ubuntu"
    centos: "CentOS"
    redhat: "RedHat"
    Amazon: "Amazon"
    Other_S3_Compatible: "Other_S3_Compatible"
    tvault_datamover_api: tvault-datamover-api
    datamover_service_file_path: /etc/systemd/system/tvault-datamover-api.service
    datamover_service_debian_path: /etc/init/tvault-datamover.conf
    datamover_log_dir: /var/log/dmapi
    trilio_yum_repo_file_path: /etc/yum.repos.d/trilio.repo
    
    
    verbosity_level: 3
     ## Valid for 'nfs' backup target only.
     ## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend,
     ## then set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'.
    multi_ip_nfs_enabled: False
    cd triliovault-cfg-scripts/common/
    vi triliovault_nfs_map_input.yml
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    vi triliovault_nfs_map_output.yml
    cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml
    cd /opt/openstack-ansible/playbooks
    
    ## Run tvault_pre_install.yml to install lxc packages
    ansible-playbook tvault_pre_install.yml
    
    # To create Dmapi container
    openstack-ansible lxc-containers-create.yml 
    
    #To Deploy Trilio Components
    openstack-ansible os-tvault-install.yml
    
    #To configure Haproxy for Dmapi
    openstack-ansible haproxy-install.yml
    openstack-ansible setup-infrastructure.yml --syntax-check
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml
    lxc-ls                                           # Check the dmapi container is present on controller node.
    lxc-info -s controller_dmapi_container-a11984bf  # Confirm running status of the container
    systemctl status tvault-contego.service
    systemctl status tvault-object-store  # If Storage backend is S3
    df -h                                 # Verify the mount point is mounted on compute node(s)
    lxc-attach -n controller_horizon_container-1d9c055c                                   # To login on horizon container
    apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'              # For ubuntu based container
    dnf list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'     # For CentOS based container
     haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.

    Installing on Kolla Openstack

     This page lists all steps required to deploy Trilio components on a Kolla-Ansible deployed OpenStack cloud.

    hashtag
    1] Plan for Deployment

    circle-info

    Please ensure that the Trilio Appliance has been updated to the latest maintenance release before continuing the installation.

     Refer to the below-mentioned acceptable values for the placeholders triliovault_tag and kolla_base_distro, used throughout this document, as per the OpenStack environment:

    Openstack Version
    triliovault_tag
    kolla_base_distro

    hashtag
    1.1] Select backup target type

    Backup target storage is used to store backup images taken by Trilio and details needed for configuration:

     The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step.

    a) NFS

    Need NFS share path

    b) Amazon S3

    - S3 Access Key - Secret Key - Region - Bucket name

    c) Other S3 compatible storage (Like, Ceph based S3)

    - S3 Access Key - Secret Key - Region - Endpoint URL (Valid for S3 other than Amazon S3) - Bucket name
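For NFS, the share is later entered in the form '<ip-or-fqdn>:/<path>'. As a minimal sanity check before copying the value into the deployment files, the string format can be validated; this helper is illustrative and not part of Trilio's tooling, and the sample share value is an assumption:

```shell
# Hypothetical helper: check that a backup-target string looks like 'host:/path'.
is_valid_nfs_share() {
  case "$1" in
    ?*:/*) return 0 ;;   # non-empty host, colon, absolute path
    *)     return 1 ;;
  esac
}

is_valid_nfs_share "192.168.122.101:/nfs/tvault" && echo "share string looks valid"
```

This only checks the shape of the string; whether the share is actually exported and mountable must still be verified against the NFS server.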

    hashtag
    2] Clone Trilio Deployment Scripts

     Clone the triliovault-cfg-scripts GitHub repository on the Kolla-Ansible server at '/root' or any other directory of your preference. Afterwards, copy the Trilio Ansible role into the Kolla-Ansible roles directory.

    hashtag
    3] Hook Trilio deployment scripts to Kolla-ansible deploy scripts

    hashtag
    3.1] Add Trilio global variables to globals.yml

    hashtag
    3.2] Add Trilio passwords to kolla passwords.yaml

     Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The passwords are empty by default; set them manually in /etc/kolla/passwords.yml.

    hashtag
    3.3] Append Trilio site.yml content to kolla ansible’s site.yml

    hashtag
    3.4] Append triliovault_inventory.txt to your cloud’s kolla-ansible inventory file.

    hashtag
    3.5] Configure multi-IP NFS

    circle-info

    This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs

    On kolla-ansible server node, change directory

    Edit file 'triliovault_nfs_map_input.yml' in the current directory and provide compute host and NFS share/ip map.

    circle-info

     If IP addresses are used in the kolla-ansible inventory file, then use the same IP addresses in the 'triliovault_nfs_map_input.yml' file too. If hostnames are used in the inventory, use the same hostnames in the NFS map input file.

     The compute host names or IP addresses used in the NFS map input file must match the kolla-ansible inventory file entries.

    vi triliovault_nfs_map_input.yml

     The triliovault_nfs_map_input.yml format is explained in the Trilio documentation.

    Update PyYAML on the kolla-ansible server node only

     Expand the map file to create a one-to-one mapping of compute nodes and NFS shares.

    Result will be in file - 'triliovault_nfs_map_output.yml'

     Validate the output map file: open 'triliovault_nfs_map_output.yml', available in the current directory, and validate that all compute nodes are covered with all the necessary NFS shares.

     vi triliovault_nfs_map_output.yml
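A scripted cross-check can complement the manual review. This is a sketch, assuming a standard INI-style kolla-ansible inventory with a `[compute]` section; the inventory path and map file name in the example are assumptions, adjust them to your cloud:

```shell
# List hosts under [compute] in the inventory and report any that are
# missing from the generated NFS map output file.
check_map_covers_computes() {  # usage: check_map_covers_computes <inventory> <map_file>
  awk '/^\[compute\]/{f=1; next} /^\[/{f=0} f && NF {print $1}' "$1" |
  while read -r host; do
    grep -q "$host" "$2" || echo "missing from map: $host"
  done
}

# Example (paths are assumptions):
# check_map_covers_computes /root/multinode triliovault_nfs_map_output.yml
```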

     Append this output map file to 'triliovault_globals.yml'. File path: '/home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml'

     Ensure that multi_ip_nfs_enabled is set to yes in the triliovault_globals.yml file.
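As a small optional check (the helper is illustrative, not part of the Trilio scripts), the flag can be verified after editing:

```shell
# Return success if multi_ip_nfs_enabled is set to yes in the given file.
multi_ip_nfs_is_enabled() {
  grep -Eq "^[[:space:]]*multi_ip_nfs_enabled:[[:space:]]*['\"]?yes" "$1"
}

# Example (path as used in this guide):
# multi_ip_nfs_is_enabled /home/stack/triliovault-cfg-scripts/kolla-ansible/ansible/triliovault_globals.yml \
#   && echo "multi-IP NFS enabled"
```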

    hashtag
    4] Edit globals.yml to set Trilio parameters

     Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. You will find the Trilio related parameters at the end of the globals.yml file. Details like the Trilio build version, backup target type, and backup target details need to be filled out.

     Following is the list of parameters that the user needs to edit.

    Parameter
    Defaults/choices
    comments

     In the case of a registry other than Docker Hub, the Trilio containers need to be pulled from docker.io and pushed to the preferred registry first.

     Following are the triliovault container image URLs for the 4.3 releases. Replace the kolla_base_distro and triliovault_tag variables with their values.

     The {{ kolla_base_distro }} variable can be either 'centos' or 'ubuntu', depending on your base OpenStack distro.

    circle-info

     Trilio supports source-based containers from the OpenStack Yoga release onwards.

    Below are the Source-based OpenStack deployment images

    Below are the Binary-based OpenStack deployment images
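When a private registry is used, the pull/tag/push round trip for one image can be sketched as follows; the private registry name and the chosen image/tag here are assumptions, and the same steps are repeated for all three Trilio images:

```shell
# Mirror one Trilio container image from docker.io to a private registry.
SRC_REGISTRY=docker.io
DST_REGISTRY=registry.example.com:5000   # hypothetical private registry
IMAGE=trilio/kolla-ubuntu-trilio-datamover
TAG=4.3.2-yoga

SRC="${SRC_REGISTRY}/${IMAGE}:${TAG}"
DST="${DST_REGISTRY}/${IMAGE}:${TAG}"
echo "mirroring ${SRC} -> ${DST}"

# Uncomment to perform the actual transfer:
# docker pull "$SRC"
# docker tag  "$SRC" "$DST"
# docker push "$DST"
```

Afterwards, point triliovault_docker_registry in globals.yml at the registry the images were pushed to.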

    hashtag
    5] Enable Trilio Snapshot mount feature

    To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.

     Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.

     For a default Kolla installation, the variable will look as follows afterward:

     Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.

     After the change, the variable for a default Kolla installation will look as follows:

     In case of Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.

     After the changes, the variable will look like the following:
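After editing the defaults file, a quick grep can confirm the bind was appended to each list. The counting helper below is illustrative:

```shell
# Count occurrences of the Trilio mount bind in a kolla defaults file.
count_trilio_binds() {
  grep -c '/var/trilio:/var/trilio:shared' "$1"
}

# Example: expect 2 for a default install, 3 when the Ironic list was edited too.
f=/usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml
if [ -f "$f" ]; then count_trilio_binds "$f"; fi
```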

    hashtag
    6] Pull Trilio container images

     Activate the login into Dockerhub for the Trilio tagged containers.

     Please get the Dockerhub login credentials from the Trilio Sales/Support team.

     Pull the Trilio container images from Dockerhub based on the existing inventory file. In the example, the inventory file is named multinode.

    hashtag
    7] Deploy Trilio

     All that is left is to run the deploy command using the existing inventory file. In the example, the inventory file is named 'multinode'.

    This is just an example command. You need to use your cloud deploy command.

    circle-info

     Post deployment, for a multipath enabled environment, log into the respective datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 sec) in /etc/multipath.conf, and restart the datamover container.
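A possible way to script that change inside the container is sketched below; it assumes a 'defaults {' section exists in /etc/multipath.conf and that the container is named triliovault_datamover, so adjust both to your deployment:

```shell
# Append 'uxsock_timeout 60000' after the 'defaults {' line, once only.
add_uxsock_timeout() {
  local conf="$1"
  grep -q 'uxsock_timeout' "$conf" || \
    sed -i '/^defaults {/a uxsock_timeout 60000' "$conf"
}

# Inside the datamover container:
# add_uxsock_timeout /etc/multipath.conf
# Then, on the host, restart the container:
# docker restart triliovault_datamover
```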

    hashtag
    8] Verify Trilio deployment

    Verify on the controller and compute nodes that the Trilio containers are in UP state.

     Following is a sample output of the commands from controller and compute nodes. triliovault_tag will have the value corresponding to the OpenStack release where the deployment is done.

    hashtag
    9] Troubleshooting Tips

    hashtag
    9.1 ] Check Trilio containers and their startup logs

     To see all Trilio containers running on a specific node, use the docker ps command.

    To check the startup logs use the docker logs <container name> command.

    hashtag
    9.2] Trilio Horizon tabs are not visible in Openstack

     Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.

    Verify that the Trilio horizon container is installed and in a running state.

    hashtag
    9.3] Trilio Service logs

    • Trilio datamover api service logs on datamover api node

    • Trilio datamover service logs on datamover node

    hashtag
    10. Change the nova user id on the Trilio Nodes

     Note: This step needs to be done on the Trilio Appliance node, not on an OpenStack node.

    Pre-requisite: You should have already launched Trilio appliance VM

     In the Kolla OpenStack distribution, the 'nova' user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:

    1. Download the shell script that will change the user id

    2. Assign executable permissions

    3. Execute the script

    hashtag
    11. Advanced configurations - [Optional]

     11.1] Trilio uses cinder's ceph user for interacting with the Ceph cinder storage. This user name is defined via the parameter 'ceph_cinder_user' in the file '/etc/kolla/globals.yml'.

     If users want to edit this parameter value, they can do so. The impact is that cinder's ceph user and the triliovault datamover's ceph user will be updated upon the next kolla-ansible deploy command.
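For illustration, the parameter is a single key in globals.yml; the value shown here is an assumption, so verify the actual user name in your own deployment before changing it:

```yaml
# /etc/kolla/globals.yml (illustrative snippet; value is an assumption)
ceph_cinder_user: "cinder"
```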

     The OpenStack version determines the triliovault_tag and kolla_base_distro values:

     | Openstack Version | triliovault_tag | kolla_base_distro |
     | --- | --- | --- |
     | Victoria | 4.3.2-victoria | ubuntu, centos |
     | Wallaby | 4.3.2-wallaby | ubuntu, centos |
     | Yoga | 4.3.2-yoga | ubuntu, centos |
     | Zed | 4.3.2-zed | ubuntu, rocky |

     Parameters to edit in /etc/kolla/globals.yml:

     | Parameter | Defaults/choices | Comments |
     | --- | --- | --- |
     | triliovault_tag | <triliovault_tag> | Use the triliovault tag as per your Kolla openstack version. The exact tag is mentioned in the 1st step. |
     | triliovault_docker_username | <dockerhub-login-username> | Default docker user of Trilio (read permission only). Get the Dockerhub login credentials from the Trilio Sales/Support team. |
     | triliovault_docker_password | <dockerhub-login-password> | Password for the default docker user of Trilio. Get the Dockerhub login credentials from the Trilio Sales/Support team. |
     | triliovault_docker_registry | Default value: docker.io | Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to the chosen registry first. |
     | triliovault_backup_target | nfs / amazon_s3 / other_s3_compatible | nfs if the backup target is NFS; amazon_s3 if the backup target is Amazon S3; other_s3_compatible if the backup target type is S3 but not Amazon S3. |
     | multi_ip_nfs_enabled | yes / no (default: no) | This parameter is valid only if you want to use multiple IP/endpoint based NFS share(s) as the backup target for Trilio. |
     | triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | NFS share path example: '192.168.122.101:/nfs/tvault' |
     | triliovault_nfs_options | 'nolock,soft,timeo=180,intr,lookupcache=none'; for Cohesity NFS: 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10' | These parameters set the NFS mount options. Keep the default values unless a special requirement exists. |
     | triliovault_s3_access_key | S3 Access Key | Valid for amazon_s3 and other_s3_compatible. |
     | triliovault_s3_secret_key | S3 Secret Key | Valid for amazon_s3 and other_s3_compatible. |
     | triliovault_s3_region_name | Default value: us-east-1 | S3 region name. Valid for amazon_s3 and other_s3_compatible. If the S3 storage has no region parameter, keep the default. |
     | triliovault_s3_bucket_name | S3 Bucket name | Valid for amazon_s3 and other_s3_compatible. |
     | triliovault_s3_endpoint_url | S3 Endpoint URL | Valid for other_s3_compatible only. |
     | triliovault_s3_ssl_enabled | True / False | Valid for other_s3_compatible only. Set True for an SSL enabled S3 endpoint URL. |
     | triliovault_s3_ssl_cert_file_name | s3-cert.pem | Valid for other_s3_compatible only, with SSL enabled and certificates that are self signed or issued by a private authority. In this case, copy the ceph s3 ca chain file to the /etc/kolla/config/triliovault/ directory on the ansible server. Create this directory if it does not exist already. |
     | triliovault_copy_ceph_s3_ssl_cert | True / False | Valid for other_s3_compatible only. Set to True when SSL is enabled with self-signed certificates or certificates issued by a private authority. |
     | horizon_image_full | Uncomment | By default, the Trilio Horizon container would not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container. |

     Verify that the 'nova' user and group id has changed to '42436'. After this step, you can proceed to the 'Configuring Trilio' section.
    git clone -b 4.3.2 https://github.com/trilioData/triliovault-cfg-scripts.git
    cd triliovault-cfg-scripts/kolla-ansible/
    
    # For Centos and Ubuntu
    cp -R ansible/roles/triliovault /usr/local/share/kolla-ansible/ansible/roles/
    ## For Centos and Ubuntu
    - Take backup of globals.yml
    cp /etc/kolla/globals.yml /opt/
    
     - If the OpenStack release is other than 'zed', append the below Trilio global variables to globals.yml
     cat ansible/triliovault_globals.yml >> /etc/kolla/globals.yml
     
     - If the OpenStack release is 'zed', append the below Trilio global variables to globals.yml
     cat ansible/triliovault_globals_zed.yml >> /etc/kolla/globals.yml
    ## For Centos and Ubuntu
    - Take backup of passwords.yml
    cp /etc/kolla/passwords.yml /opt/
    
    - Append Trilio global variables to passwords.yml 
    cat ansible/triliovault_passwords.yml >> /etc/kolla/passwords.yml
    
    - Edit '/etc/kolla/passwords.yml', go to end of the file and set trilio passwords.
    # For Centos and Ubuntu
    - Take backup of site.yml
    cp /usr/local/share/kolla-ansible/ansible/site.yml /opt/
    
     # If the OpenStack release is 'yoga', append the below Trilio code to site.yml
    cat ansible/triliovault_site_yoga.yml >> /usr/local/share/kolla-ansible/ansible/site.yml    
    
     # If the OpenStack release is other than 'yoga', append the below Trilio code to site.yml
    cat ansible/triliovault_site.yml >> /usr/local/share/kolla-ansible/ansible/site.yml                                                            
     For example:
     If your inventory file path is '/root/multinode', then use the following command.
    
    cat ansible/triliovault_inventory.txt >> /root/multinode
    cd triliovault-cfg-scripts/common/
    pip3 install -U pyyaml
    python ./generate_nfs_map.py
    cat triliovault_nfs_map_output.yml >> ../kolla-ansible/ansible/triliovault_globals.yml
    
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu source based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-horizon-plugin:{{ triliovault_tag }}
    1. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover:{{ triliovault_tag }}
    2. docker.io/trilio/kolla-{{ kolla_base_distro }}-trilio-datamover-api:{{ triliovault_tag }}
    3. docker.io/trilio/{{ kolla_base_distro }}-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    
    ## EXAMPLE from Kolla Ubuntu binary based OpenStack
    docker.io/trilio/kolla-ubuntu-trilio-datamover:{{ triliovault_tag }}
    docker.io/trilio/kolla-ubuntu-trilio-datamover-api:{{ triliovault_tag }}
    docker.io/trilio/ubuntu-binary-trilio-horizon-plugin:{{ triliovault_tag }}
    nova_libvirt_default_volumes:
      - "{{ node_config_directory }}/nova-libvirt/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run/:/run/:shared"
      - "/dev:/dev"
      - "/sys/fs/cgroup:/sys/fs/cgroup"
      - "kolla_logs:/var/log/kolla/"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
       - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
       - "nova_libvirt_qemu:/etc/libvirt/qemu"
       - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_default_volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "/lib/modules:/lib/modules:ro"
      - "/run:/run:shared"
      - "/dev:/dev"
      - "kolla_logs:/var/log/kolla/"
       - "{% if enable_iscsid | bool %}iscsi_info:/etc/iscsi{% endif %}"
      - "libvirtd:/var/lib/libvirt"
      - "{{ nova_instance_datadir_volume }}:/var/lib/nova/"
       - "{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    nova_compute_ironic_default_volumes:
      - "{{ node_config_directory }}/nova-compute-ironic/:{{ container_config_directory }}/:ro"
      - "/etc/localtime:/etc/localtime:ro"
      - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_os_family == 'Debian' else '' }}"
      - "kolla_logs:/var/log/kolla/"
      - "{{ kolla_dev_repos_directory ~ '/nova/nova:/var/lib/kolla/venv/lib/python' ~ distro_python_version ~ '/site-packages/nova' if nova_dev_mode | bool else '' }}"
      - "/var/trilio:/var/trilio:shared"
    ansible -i multinode control -m shell -a "docker login -u <docker-login-username> -p <docker-login-password> docker.io"
    kolla-ansible -i multinode pull --tags triliovault
    kolla-ansible -i multinode deploy
    [controller] docker ps  | grep "trilio-"
    a2a3593f76db   trilio/kolla-centos-trilio-datamover-api:<triliovault_tag>       "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover_api
    5f573caa7b02   trilio/kolla-centos-trilio-horizon-plugin:<triliovault_tag>      "dumb-init --single-…"   23 hours ago    Up 23 hours              horizon
    
    [compute] docker ps | grep "trilio-"
    f6d443c2942c   trilio/kolla-centos-trilio-datamover:<triliovault_tag>          "dumb-init --single-…"   23 hours ago    Up 23 hours    triliovault_datamover
    docker ps -a | grep trilio
    docker logs triliovault_datamover_api
    docker logs triliovault_datamover
    docker ps | grep horizon
    /var/log/kolla/triliovault-datamover-api/dmapi.log
    /var/log/kolla/triliovault-datamover/tvault-contego.log
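The log files above follow the standard OpenStack log format (timestamp, PID, level, module, message). As a quick troubleshooting aid, here is a minimal sketch for pulling ERROR-level lines out of a dmapi or datamover log; the helper name and the sample log lines are illustrative, not real Trilio output:

```python
# Minimal sketch: filter an OpenStack-style log for ERROR-level lines.
# The sample text below is illustrative; real log content will differ.
import re

def find_errors(log_text):
    """Return the lines whose log level field is ERROR."""
    return [line for line in log_text.splitlines()
            if re.search(r"\bERROR\b", line)]

sample = (
    "2024-01-01 10:00:00.123 12 INFO dmapi.service [-] Starting dmapi\n"
    "2024-01-01 10:00:01.456 12 ERROR dmapi.service [-] NFS mount failed\n"
)
for line in find_errors(sample):
    print(line)
```

In practice the same filter can be applied to the files listed above, e.g. by reading `/var/log/kolla/triliovault-datamover-api/dmapi.log` and passing its contents to the helper.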
    ## Download the shell script
    $ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
    
    ## Assign executable permissions
    $ chmod +x nova_userid.sh
    
    ## Execute the shell script to change 'nova' user and group id to '42436'
    $ ./nova_userid.sh
    
    ## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
    $ id nova
       uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
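Every compute node must end up with the same 'nova' uid and gid of 42436, so it can be worth checking the `id nova` output on all nodes. A small sketch of how that output could be validated programmatically; the helper name and parsing are illustrative, not part of Trilio's tooling:

```python
# Sketch: verify that `id nova` output shows uid/gid 42436, the id Trilio expects.
# The helper name and parsing are illustrative, not part of Trilio's tooling.
import re

EXPECTED_NOVA_ID = 42436

def nova_id_ok(id_output):
    """Parse output like 'uid=42436(nova) gid=42436(nova) ...' and check both ids."""
    m = re.match(r"uid=(\d+)\(nova\)\s+gid=(\d+)\(nova\)", id_output.strip())
    return bool(m) and int(m.group(1)) == EXPECTED_NOVA_ID \
        and int(m.group(2)) == EXPECTED_NOVA_ID

print(nova_id_ok("uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)"))  # -> True
```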