Getting started with Trilio on Red Hat OpenStack Platform (RHOSP)
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively into the RHOSP Director.
Manual deployment methods are not supported for RHOSP.
Backup target storage is used to store backup images taken by Trilio. The following backup target types are supported by Trilio, together with the details needed for configuration:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3-compatible storage (like Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (Valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be performed on the undercloud node of an already installed RHOSP environment. The overcloud deploy command must already have completed successfully, and the overcloud must be available. All commands need to be run as the 'stack' user on the undercloud node.
The following command clones the triliovault-cfg-scripts github repository.
cd /home/stack
git clone -b 5.1.0 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then the user needs to provide a CA chain certificate to validate the SSL requests. Rename the CA chain certificate file to s3-cert.pem and copy it into the directory triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files:
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files
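Before copying, it can help to confirm that s3-cert.pem is a readable PEM certificate. A minimal sketch (assumes openssl is available on the undercloud, which it normally is):

```shell
# Sketch: confirm s3-cert.pem parses as a PEM certificate and show
# its subject and expiry before copying it into the puppet files dir.
openssl x509 -in s3-cert.pem -noout -subject -enddate
```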
Trilio contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at /usr/share/openstack-tripleo-heat-templates/roles_data.yaml.
Add the following services to the roles_data.yaml.
All commands need to be run as the 'stack' user.
These services need to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services will run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following lines to the identified role:
'OS::TripleO::Services::TrilioDatamoverApi'
'OS::TripleO::Services::TrilioWlmApi'
'OS::TripleO::Services::TrilioWlmWorkloads'
'OS::TripleO::Services::TrilioWlmScheduler'
'OS::TripleO::Services::TrilioWlmCron'
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamover'
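As a sanity check after editing, you can grep the roles file to confirm all Trilio service entries were added (a sketch; the path below is the default uncustomized roles_data.yaml):

```shell
# Sketch: count the Trilio service entries in roles_data.yaml.
# A correctly edited file contains six entries in total: the five
# services on the Keystone role plus TrilioDatamover on the
# nova-compute role.
grep -c 'OS::TripleO::Services::Trilio' \
    /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
```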
All commands need to be run as the 'stack' user.
Trilio containers are pushed to the Red Hat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below. Replace <CONTAINER-TAG-VERSION> with 5.1.0 in the sections below.
For RHOSP 16.1:
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
For RHOSP 16.2:
Trilio Datamover Container: registry.connect.redhat.com/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Datamover Api Container: registry.connect.redhat.com/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio Horizon Plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
Trilio WLM Container: registry.connect.redhat.com/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
There are three registry methods available in the Red Hat OpenStack Platform:
1. Remote Registry
2. Local Registry
3. Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to a redhat registry or any other private registry that they want to use.
The user needs to provide credentials for the registry in the containers-prepare-parameter.yaml file.
1. Make sure other OpenStack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.
2. Populate containers-prepare-parameter.yaml with content like the following. The important parameters are push_destination: false, ContainerImageRegistryLogin: true, and the registry credentials. Trilio container images are published to the registry registry.connect.redhat.com. Credentials for the registry 'registry.redhat.io' will work for the registry.connect.redhat.com registry too.
File name: containers-prepare-parameter.yaml
parameter_defaults:
ContainerImagePrepare:
- push_destination: false
set:
namespace: registry.redhat.io/...
...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
myuser: 'p@55w0rd!'
registry.connect.redhat.com:
myuser: 'p@55w0rd!'
ContainerImageRegistryLogin: true
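A quick sanity check on the populated file (a sketch, run from the directory containing it) is to grep for the two settings this method depends on:

```shell
# Sketch: verify the two required settings are present in
# containers-prepare-parameter.yaml; expect two matching lines.
grep -E 'push_destination: false|ContainerImageRegistryLogin: true' \
    containers-prepare-parameter.yaml
```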
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the Red Hat documentation for details.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
4. Manually populate the trilio_env.yaml file with the Trilio container image URLs as given below. trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16/
# For RHOSP16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
# For RHOSP16.2
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
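Since the file ships with the <CONTAINER-TAG-VERSION> placeholder, the substitution can be done in one pass with sed (a sketch; 5.1.0 is the release tag this guide uses):

```shell
# Sketch: replace the <CONTAINER-TAG-VERSION> placeholder with the
# release tag in trilio_env.yaml, then verify the result.
TAG=5.1.0
sed -i "s/<CONTAINER-TAG-VERSION>/${TAG}/g" trilio_env.yaml
grep "${TAG}-rhosp16" trilio_env.yaml
```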
At this step, you have configured Trilio image URLs in the necessary environment file.
Follow this section when the 'Local Registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry.
Trilio provides shell scripts that pull the containers from registry.connect.redhat.com, push them to the undercloud registry, and update trilio_env.yaml.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.1
## Run the following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'. In the example below, 'undercloud2-161.ctlplane.trilio.local' is the undercloud registry hostname.
$ openstack tripleo container image list | grep keystone
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.1 |
| docker://undercloud2-161.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.1 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloud2-161.ctlplane.trilio.local 5.1.0-rhosp16.1
## Verify changes
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ openstack tripleo container image list | grep '<CONTAINER-TAG-VERSION>-rhosp16.1'
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1 |
| docker://undercloudqa161.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1 |
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER-TAG-VERSION>-rhosp16.2
## Run the following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'. In the example below, 'undercloudqa162.ctlplane.trilio.local' is the undercloud registry hostname.
$ openstack tripleo container image list | grep keystone
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/rhosp-rhel8/openstack-keystone:16.2 |
## Example of running the script with parameters
sudo ./prepare_trilio_images.sh undercloudqa162.ctlplane.trilio.local 5.1.0-rhosp16.2
## Verify changes
grep '<CONTAINER-TAG-VERSION>-rhosp16.2' ../environments/trilio_env.yaml
ContainerTriliovaultDatamoverImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
$ openstack tripleo container image list | grep '<CONTAINER-TAG-VERSION>-rhosp16.2'
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2 |
| docker://undercloudqa162.ctlplane.trilio.local:8787/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2 |
At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.
Follow this section when a Red Hat Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container URLs:
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.1' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.1
$ grep '<CONTAINER-TAG-VERSION>-rhosp16.2' trilio_env.yaml
ContainerTriliovaultDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultDatamoverApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerTriliovaultWlmImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-wlm:<CONTAINER-TAG-VERSION>-rhosp16.2
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:<CONTAINER-TAG-VERSION>-rhosp16.2
At this step, you have downloaded the Trilio container images into the Red Hat Satellite server and configured the Trilio image URLs in the necessary environment file.
Edit the file /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml and provide the backup target details and other necessary details. This environment file will be used in the overcloud deployment to configure the Trilio components. The container image names have already been populated in the preparation of the container images; still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry; keep it as it is.
Parameter | Comments |
---|---|
CloudAdminUserName | Default value is admin. Provide the cloud admin user name of your overcloud. |
CloudAdminProjectName | Default value is admin. Provide the cloud admin project name of your overcloud. |
CloudAdminDomainName | Default value is default. Provide the cloud admin domain name of your overcloud. |
CloudAdminPassword | Provide the cloud admin user's password of your overcloud. |
ContainerTriliovaultDatamoverImage | The Trilio Datamover container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerTriliovaultDatamoverApiImage | The Trilio Datamover API container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerTriliovaultWlmImage | The Trilio WLM container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerHorizonImage | The Horizon container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
BackupTargetType | Default value is nfs. Set either 'nfs' or 's3' as the target backend for snapshots taken by TrilioVault. |
MultiIPNfsEnabled | Default value is False. Set to True only if you want to use one or more multi-IP/endpoint-based NFS shares as the backup target for TrilioVault. |
NfsShares | Provide the NFS share you want to use as the backup target for snapshots taken by TrilioVault. |
NfsOptions | This parameter sets the NFS mount options. Keep the default values unless a special requirement exists. |
S3Type | If your backup target is S3, provide either amazon_s3 or ceph_s3, depending on the S3 type. |
S3AccessKey | Provide the S3 access key. |
S3SecretKey | Provide the S3 secret key. |
S3RegionName | Provide the S3 region. If your S3 type does not have a region parameter, keep the parameter as it is. |
S3Bucket | Provide the S3 bucket name. |
S3EndpointUrl | Provide the S3 endpoint URL. Not required for Amazon S3; if your S3 type does not require it, keep it as it is. |
S3SignatureVersion | Provide the S3 signature version. |
S3AuthVersion | Provide the S3 auth version. |
S3SslEnabled | Default value is False. If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to 'True'; otherwise keep it as 'False'. |
TrilioDatamoverOptVolumes | The user can specify a list of extra volumes to mount on the 'triliovault_datamover' container. Refer to the 'Configure Custom Volume/Directory Mounts for the Trilio Datamover Service' section in this doc. |
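For an NFS backup target, the relevant part of trilio_env.yaml might look like the following sketch. The share path and mount options are placeholders, not values from this guide; substitute your own.

```yaml
parameter_defaults:
  # Placeholder values -- substitute your own NFS share and options.
  BackupTargetType: 'nfs'
  MultiIPNfsEnabled: False
  NfsShares: '192.168.1.34:/mnt/tvault'
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'
```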
The user needs to generate random passwords for Trilio resources using the following script.
This script will generate random passwords for all Trilio resources that are going to be created in the OpenStack cloud.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./generate_passwords.sh
Include this file in your overcloud deploy command as an environment file with the option -e:
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml
For this section only, the user needs to source the rc file of the overcloud:
source <OVERCLOUD_RC_FILE>
vi /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml
openstack role add --user <cloud-Admin-UserName> --domain <Cloud-Admin-DomainName> admin
# Example
openstack role add --user admin --domain default admin
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts
./create_wlm_ids_conf.sh
The output will be written to the following file:
cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/puppet/trilio/files/triliovault_wlm_ids.conf
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
modprobe nbd nbds_max=128
lsmod | grep nbd
modprobe fuse
lsmod | grep fuse
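The modprobe commands above do not survive a reboot. A common way to persist them on systemd-based hosts is a modules-load.d entry; this is a sketch (run as root), and the file names are assumptions, not part of the Trilio scripts:

```shell
# Sketch: load nbd and fuse at boot and keep the nbds_max option.
mkdir -p /etc/modules-load.d /etc/modprobe.d
printf 'nbd\nfuse\n' > /etc/modules-load.d/trilio.conf
echo 'options nbd nbds_max=128' > /etc/modprobe.d/trilio-nbd.conf
cat /etc/modules-load.d/trilio.conf
```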
All commands need to be run as the 'stack' user on the undercloud node.
source stackrc
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/scripts/
./upload_puppet_module.sh
## Output of the above command looks like following
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-B1bp1Bk/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object | container | etag |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 17ed9cb7a08f67e1853c610860b8ea99 |
+-----------------------+---------------------+----------------------------------+
Upload complete
## Above command creates the following file
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
Include the following file in your overcloud deploy command as an environment file with the option -e:
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml
The following files need to be included in the overcloud deploy command:
1. trilio_env.yaml
2. roles_data.yaml
3. passwords.yaml
4. defaults.yaml
5. Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration:
   1. Instead of the tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_endpoints_public_dns.yaml
   2. Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_endpoints_public_ip.yaml
   3. Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml
To include new environment files, use the -e option; for roles data files, use the -r option.
Below is an example of an overcloud deploy command with the Trilio environment:
openstack overcloud deploy --stack overcloudtrain5 --templates \
--libvirt-type qemu \
--ntp-server 192.168.1.34 \
-e /home/stack/templates/node-info.yaml \
-e /home/stack/containers-prepare-parameter.yaml \
-e /home/stack/templates/ceph-config.yaml \
-e /home/stack/templates/cinder_size.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/barbican.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/barbican-backend-simple-crypto.yaml \
-e /home/stack/templates/configure-barbican.yaml \
-e /home/stack/templates/multidomain_horizon.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-internal-tls.yaml \
-e /home/stack/templates/tls-parameters.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/haproxy-public-tls-certmonger.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env_tls_everywhere_dns.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/defaults.yaml \
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/passwords.yaml \
-r /usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Post-deployment, for multipath-enabled environments, log into the respective Datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, and restart the Datamover container.
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following commands provide the list of errors.
The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED
If any Trilio containers do not start properly or are stuck in a restarting state on the Controller/Compute nodes, use the following logs to debug.
podman logs <trilio-container-name>
tail -f /var/log/containers/<trilio-container-name>/<trilio-container-name>.log
This section is only required when the Multi-IP feature for NFS is used.
This feature allows setting the IP used to access the NFS volume per Datamover instead of globally.
cd triliovault-cfg-scripts/common/
Get the overcloud Controller and Compute hostnames from the following command; check the Name column. Use the exact host names in the triliovault_nfs_map_input.yml file.
Run this command on the undercloud after sourcing stackrc.
(undercloud) [stack@ucqa161 ~]$ openstack server list
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
| 8c3d04ae-fcdd-431c-afa6-9a50f3cb2c0d | overcloudtrain1-controller-2 | ACTIVE | ctlplane=172.30.5.18 | overcloud-full | control |
| 103dfd3e-d073-4123-9223-b8cf8c7398fe | overcloudtrain1-controller-0 | ACTIVE | ctlplane=172.30.5.11 | overcloud-full | control |
| a3541849-2e9b-4aa0-9fa9-91e7d24f0149 | overcloudtrain1-controller-1 | ACTIVE | ctlplane=172.30.5.25 | overcloud-full | control |
| 74a9f530-0c7b-49c4-9a1f-87e7eeda91c0 | overcloudtrain1-novacompute-0 | ACTIVE | ctlplane=172.30.5.30 | overcloud-full | compute |
| c1664ac3-7d9c-4a36-b375-0e4ee19e93e4 | overcloudtrain1-novacompute-1 | ACTIVE | ctlplane=172.30.5.15 | overcloud-full | compute |
+--------------------------------------+-------------------------------+--------+----------------------+----------------+---------+
Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to this page for details about the structure.
Below is an example of how you can set the multi-IP NFS details:
You cannot configure different IPs for the Controller/WLM nodes; you need to use the same share on all controller nodes.
You can configure different IPs for the Compute/Datamover nodes.
$ cat /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml
# TriliovaultMultiIPNfsMap maps the Datamover/WLM nodes (compute and controller nodes) to their NFS shares.
parameter_defaults:
TriliovaultMultiIPNfsMap:
overcloudtrain4-controller-0: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-1: 172.30.1.11:/rhospnfs
overcloudtrain4-controller-2: 172.30.1.11:/rhospnfs
overcloudtrain4-novacompute-0: 172.30.1.12:/rhospnfs
overcloudtrain4-novacompute-1: 172.30.1.13:/rhospnfs
If pip isn't available, please install pip on the undercloud.
sudo pip3 install PyYAML==5.1
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
python3 ./generate_nfs_map.py
The result will be stored in the triliovault_nfs_map_output.yml file.
Open the file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares. Then append the mappings to the trilio_nfs_map.yaml environment file:
grep ':.*:' triliovault_nfs_map_output.yml >> ../redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml.
Include this environment file in the overcloud deploy command with the option -e:
-e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_nfs_map.yaml
Also ensure MultiIPNfsEnabled is set to True in the following file:
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml
The existing default HAProxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the Trilio Datamover API are observed, or other reasons are known.
The following is the HAProxy conf file location on the HAProxy nodes of the overcloud; the Trilio Datamover API service HAProxy configuration gets added to this file:
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
The Trilio Datamover HAProxy default configuration in the above file looks as follows:
listen triliovault_datamover_api
bind 172.30.4.53:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.4.53:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
option forwardfor
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtraindev2-controller-0.internalapi.trilio.local 172.30.4.57:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtraindev2-controller-0.internalapi.trilio.local
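To inspect which values are currently active on an HAProxy node, you can extract just the triliovault_datamover_api section (a sketch using awk; the config path is the one given above):

```shell
# Sketch: print the tunable settings of the triliovault_datamover_api
# listen section from the deployed HAProxy config.
awk '/^listen triliovault_datamover_api/{f=1; next} /^listen /{f=0} f' \
    /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg \
  | grep -E 'timeout|retries|maxconn|balance'
```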
The user can change the following configuration parameter values.
retries 5
timeout http-request 10m
timeout queue 10m
timeout connect 10m
timeout client 10m
timeout server 10m
timeout check 10m
balance roundrobin
maxconn 50000
To change these default values, you need to do the following steps.
i) On the undercloud node, open the following file for editing.
/home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16/services/triliovault-datamover-api.yaml
ii) Search for the following entries and edit them as required:
tripleo::haproxy::trilio_datamover_api::options:
'retries': '5'
'maxconn': '50000'
'balance': 'roundrobin'
'timeout http-request': '10m'
'timeout queue': '10m'
'timeout connect': '10m'
'timeout client': '10m'
'timeout server': '10m'
'timeout check': '10m'
iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.
i) To add one or more extra volume/directory mounts to the Trilio Datamover Service container, use the variable 'TrilioDatamoverOptVolumes', available in the below file.
triliovault-cfg-scripts/redhat-director-scripts/rhosp16/environments/trilio_env.yaml
To mount an extra volume/directory into the Trilio Datamover Service container, the volumes/directories must already be mounted on the Compute host.
ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. Edit the file and add your volume mounts in the below format.
TrilioDatamoverOptVolumes:
- <mount-dir-on-compute-host>:<mount-dir-inside-the-datamover-container>
## For example, `/mnt/mount-on-host` is a directory mounted on the Compute host that you want to mount at `/mnt/mount-inside-container` inside the Datamover container
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-on-host
## Then provide that mount in the below format
TrilioDatamoverOptVolumes:
- /mnt/mount-on-host:/mnt/mount-inside-container
iii) Lastly, run the overcloud deploy/update again.
After a successful deployment, the volume/directory mount will be available inside the Trilio Datamover Service container.
[root@overcloudtrain5-novacompute-0 heat-admin]# podman exec -itu root triliovault_datamover bash
[root@overcloudtrain5-novacompute-0 heat-admin]# df -h | grep 172.
172.25.0.10:/mnt/tvault/42436 2.5T 2.3T 234G 91% /mnt/mount-inside-container