The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively with the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Refer to the table below for the acceptable values of the placeholders triliovault_tag, trilio_branch, RHOSP_version, and CONTAINER-TAG-VERSION used throughout this document, as per the OpenStack environment:
Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
---|---|---|---|---|
6.0.0 | 6.0.0-beta-2-rhosp17.1 | 6.0.0-beta-2 | RHOSP17.1 | 6.0.0-beta-2 |
Backup target storage is used to store the backup images taken by Trilio. The following backup target types are supported by Trilio, along with the details needed for configuration:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the undercloud node of an already installed RHOSP environment. The overcloud-deploy command must already have been run successfully and the overcloud must be available.
All commands need to be run as the stack user on the undercloud node.
In the sections below, read <RHOSP_RELEASE_DIRECTORY> as rhosp17.
The following command clones the triliovault-cfg-scripts github repository.
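For reference, a minimal sketch of that clone step, assuming the repository lives under the trilioData organization on GitHub; replace <trilio_branch> with the value from the table above:

```bash
cd /home/stack
git clone -b <trilio_branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp17/
```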
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you need to provide a CA chain certificate to validate the SSL requests. For every S3 backup target with self-signed TLS certificates, copy the CA chain file into the Trilio puppet module at the location and with the file name format shown below. Edit the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> parameters in the following command.
For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE='s3-ca.pem', then the command to copy this CA chain file into the Trilio puppet module would be:
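A hypothetical illustration of that copy, assuming the CA chain file is expected under the puppet module's files directory and is named after the backup target; verify the exact destination path and file name format in your cloned repository before running it:

```bash
# Hypothetical path and naming format - confirm against the trilio puppet module in the cloned repository
cp s3-ca.pem \
  /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio/files/BT2_S3-ca-chain.pem
```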
More about this feature can be found here.
Please refer to this page to collect the necessary artifacts before continuing further.
Rename the file as vddk.tar.gz and place it at
Copy the vCenter SSL cert file to
Trilio contains multiple services. Add these services to your roles_data.yaml.
If roles_data.yaml has not been customized, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml.
All commands need to be run as a 'stack' user
This service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services will run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
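As an illustration only, the entry goes into the ServicesDefault list of that role; the exact OS::TripleO::Services::... name must be taken from the heat templates shipped in the cloned triliovault-cfg-scripts repository:

```yaml
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    # ... existing services stay unchanged ...
    # Placeholder - substitute the exact Trilio service name from the cloned repository
    - OS::TripleO::Services::<TrilioWlmServiceName>
```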
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
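Again as an illustration only, with the exact service name to be taken from the cloned triliovault-cfg-scripts repository:

```yaml
- name: Compute
  ServicesDefault:
    - OS::TripleO::Services::NovaCompute
    # ... existing services stay unchanged ...
    # Placeholder - substitute the exact Trilio Datamover service name from the cloned repository
    - OS::TripleO::Services::<TrilioDatamoverServiceName>
```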
All commands need to be run as a 'stack' user
Trilio containers are pushed to the RedHat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
There are three registry methods available in the RedHat OpenStack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. Users can set the remote registry to a redhat registry or any other private registry that they want to use.
The user needs to provide credentials for the registry in the containers-prepare-parameter.yaml file.
Make sure the other OpenStack service images also use the same method to pull container images. If that is not the case, you cannot use this method.
Populate containers-prepare-parameter.yaml with content like the following. The important parameters are push_destination: false, ContainerImageRegistryLogin: true, and the registry credentials.
Trilio container images are published to the registry registry.connect.redhat.com. Credentials for the registry 'registry.redhat.io' will work for the registry.connect.redhat.com registry too.
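A minimal sketch of the relevant parts of containers-prepare-parameter.yaml, assuming the standard TripleO layout; keep your existing ContainerImagePrepare 'set' entries unchanged and substitute your own credentials:

```yaml
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: false
      set:
        # ... your existing 'set' entries stay unchanged ...
        namespace: registry.redhat.io/rhosp-rhel9
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      '<registry_username>': '<registry_password>'
    registry.connect.redhat.com:
      '<registry_username>': '<registry_password>'
```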
Redhat document for remote registry method: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/director_installation_and_usage/preparing-for-director-installation#container-image-preparation-parameters
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the RedHat document above.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
4. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below.
trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
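The entries to verify look like the following; the image names below are placeholders, since the exact repository paths are already pre-populated in trilio_env.yaml, and <CONTAINER-TAG-VERSION> comes from the release table above:

```yaml
parameter_defaults:
  ContainerTriliovaultDatamoverImage: registry.connect.redhat.com/trilio/<datamover-image>:<CONTAINER-TAG-VERSION>
  ContainerTriliovaultDatamoverApiImage: registry.connect.redhat.com/trilio/<datamover-api-image>:<CONTAINER-TAG-VERSION>
  ContainerTriliovaultWlmImage: registry.connect.redhat.com/trilio/<wlm-image>:<CONTAINER-TAG-VERSION>
  ContainerHorizonImage: registry.connect.redhat.com/trilio/<horizon-image>:<CONTAINER-TAG-VERSION>
```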
At this step, you have configured Trilio image URLs in the necessary environment file.
Follow this section when 'local registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry.
Trilio provides shell scripts that pull the containers from registry.connect.redhat.com, push them to the undercloud registry, and update the trilio_env.yaml.
At this step, you have downloaded Trilio container images and configured Trilio image URLs in the necessary environment file.
Follow this section when a Satellite Server is used for the container registry.
Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.
Populate the trilio_env.yaml with container URLs.
At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.
Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images; still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry; keep it as it is.
T4O supports setting up multiple target backends for storing snapshots. Users can define any number of storage backends as required. At a high level, NFS and S3 are supported.
The parameters to be set in the trilio_env.yaml file for each S3 target backend, and for each NFS target backend, are listed in the parameter reference tables later in this document.
After you fill in the details of the backup targets in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script updates the 'services/triliovault-object-store.yaml' file; you do not need to verify that file.
More about this feature can be found here.
The user needs to generate random passwords for Trilio resources using the following script.
This script will generate random passwords for all Trilio resources that are going to get created in OpenStack cloud.
Include this file in your overcloud deploy command as an environment file with the option "-e"
For this section only, the user needs to source the cloudrc file of the overcloud.
The output will be written to
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
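For example, you can check whether a required module is loaded on a node and load it if needed (nbd is shown here as an assumed example; use the module names listed for your release):

```bash
# Run on each controller/compute node; replace nbd with the module name in question
lsmod | grep -w nbd || sudo modprobe nbd
```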
All commands need to be run as a 'stack' user on undercloud node
Include defaults.yaml in the overcloud deploy command with the -e option as shown below. This YAML file holds the default values, like the default Trustee Role, creator, and the Keystone endpoint interface, Internal. There are some other parameters as well that users can update as per their requirements.
trilio_env.yaml
roles_data.yaml
passwords.yaml
defaults.yaml
Use the correct Trilio endpoint map file as per the available Keystone endpoint configuration. You have to remove your OpenStack endpoint map file from the overcloud deploy command and use the Trilio endpoint map file instead.
Instead of the tls-endpoints-public-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_everywhere_dns.yaml
Instead of the no-tls-endpoints-public-ip.yaml file, use triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_non_tls_endpoints_ip.yaml
To include new environment files, use the -e option; for roles data files, use the -r option.
Below is an example of an overcloud deploy command with Trilio environment:
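A sketch of such a command, with placeholder paths; keep all environment files from your existing deploy command and append the Trilio files, choosing the endpoint map that matches your Keystone endpoint configuration:

```bash
openstack overcloud deploy --templates \
  -r <path_to>/roles_data.yaml \
  -e <path_to>/containers-prepare-parameter.yaml \
  -e triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e <path_to>/passwords.yaml \
  -e <path_to>/defaults.yaml \
  -e triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml
```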
Post-deployment, for a multipath-enabled environment, log into the respective Datamover container, add uxsock_timeout with a value of 60000 (i.e. 60 seconds) in /etc/multipath.conf, and restart the Datamover container.
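The relevant setting sits in the defaults section of /etc/multipath.conf inside the Datamover container; leave the rest of the file as it is:

```
defaults {
    # existing options stay as they are; add the following line
    uxsock_timeout 60000
}
```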
Please follow this documentation to verify the deployment.
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following command provides the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
=> If any Trilio containers do not start well or are in a restarting state on the Controller/Compute node, use the following logs to debug.
This section is only required when the Multi-IP feature for NFS is required.
This feature allows us to set the IP to access the NFS Volume per datamover instead of globally.
Create the file triliovault_nfs_map_input.yml in the current directory and provide the compute host and NFS share/IP map. Get the overcloud Controller and Compute hostnames from the following command; check the Name column and use the exact host names in the triliovault_nfs_map_input.yml file.
Run this command on the undercloud after sourcing stackrc.
Edit the input map file triliovault_nfs_map_input.yml and fill in all the details. Refer to this page for details about the structure.
Below is an example of how you can set the multi-IP NFS details:
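A hypothetical illustration of such a map, reusing the sample share values from the parameter tables in this document; the exact keys must follow the structure described on the referenced page:

```yaml
# Hypothetical per-host map - host names must match the 'Name' column from the command above
overcloud-controller-0: 192.168.2.3:/var/nfsshare
overcloud-novacompute-0: 192.168.3.2:/var/nfsshare
overcloud-novacompute-1: 192.168.3.4:/var/nfsshare
```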
You cannot configure different IPs for the Controller/WLM nodes; you need to use the same share on all the controller nodes. You can configure different IPs for the Compute/Datamover nodes.
If pip isn't available please install pip on the undercloud.
Expand the map file to create a one-to-one mapping of the compute nodes and the NFS shares.
The result will be stored in the triliovault_nfs_map_output.yml file.
Open the file triliovault_nfs_map_output.yml available in the current directory and validate that all compute nodes are covered with all the necessary NFS shares.
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Validate the changes in the file triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_nfs_map.yaml
Include the environment file (trilio_nfs_map.yaml) in the overcloud deploy command with the '-e' option as shown below. Ensure that MultiIPNfsEnabled is set to true in the trilio_env.yaml file and that NFS is used as a backup target.
The existing default HAproxy configuration works fine with most environments. Change the configuration as described here only when timeout issues with the Trilio Datamover API are observed or other reasons are known.
Following is the HAproxy conf file location on HAproxy nodes of the overcloud. Trilio Datamover API service HAproxy configuration gets added to this file.
Trilio Datamover HAproxy default configuration from the above file looks as follows:
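The block will resemble the following illustrative listener (the listener name, bind addresses, port, and timeout values may differ in your release):

```
listen triliovault_datamover_api
  bind <internal_api_vip>:<datamover_api_port>
  balance roundrobin
  retries 5
  timeout client 10m
  timeout server 10m
  timeout check 10m
```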
The user can change the following configuration parameter values.
To change these default values, do the following steps.
i) On the undercloud node, open the following file for editing.
ii) Search for the following entries and edit them as required.
iii) Save the changes and run the overcloud deployment again to apply these changes to the overcloud nodes.
i) If the user wants to add one or more extra volume/directory mounts to the Trilio Datamover Service container, a variable named 'TrilioDatamoverOptVolumes' is available in the below file.
To add an extra volume/directory mount to the Trilio Datamover Service container, the volumes/directories must already be mounted on the Compute host.
ii) The variable 'TrilioDatamoverOptVolumes' accepts a list of volume/bind mounts. The user needs to edit the file and add their volume mounts in the below format.
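For illustration, mounting a hypothetical /mnt/extra-share directory from the compute host into the container at the same path would look like this:

```yaml
TrilioDatamoverOptVolumes:
  - /mnt/extra-share:/mnt/extra-share
```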
iii) Lastly, you need to run the overcloud deploy/update.
After successful deployment, you will see that the volume/directory mount is mounted inside the Trilio Datamover Service container.
We use Cinder's Ceph user for interacting with the Ceph Cinder storage. This user name is defined using the parameter 'ceph_cinder_user'.
Details about multiple Ceph configurations can be found here.
After the installation and configuration of Trilio for OpenStack has succeeded, the following steps can be done to verify that the Trilio installation is healthy.
Make sure the below containers are in a running state; triliovault-wlm-cron would be running on only one of the controllers in case of a multi-controller setup.
triliovault_datamover_api
triliovault_wlm_api
triliovault_wlm_scheduler
triliovault_wlm_workloads
triliovault-wlm-cron
If the containers are in a restarting state or not listed by the following command, then your deployment was not done correctly. Please recheck whether you followed the complete documentation.
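For example, on a controller or compute node you might list them with podman (the default container runtime on RHOSP 17.1):

```bash
# All Trilio containers should report a status of "Up"
sudo podman ps --format "{{.Names}} {{.Status}}" | grep -i trilio
```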
After successful deployment, the triliovault-wlm-cron service will be added to the pcs cluster as a cluster resource; you can verify this through the pcs status command.
Verify the HAproxy configuration under:
Make sure the Trilio Datamover container is in running state and no other Trilio container is deployed on compute nodes.
Check if provided backup target is mounted well on Compute host.
Make sure the horizon container is in a running state. Please note that the Horizon container is replaced with Trilio's Horizon container. This container will have the latest OpenStack Horizon + Trilio Horizon plugin.
trilio_env.yaml parameter | Description |
---|---|
CloudAdminUserName | Default value is admin. Provide the cloudadmin user name of your overcloud. |
CloudAdminProjectName | Default value is admin. Provide the cloudadmin project name of your overcloud. |
CloudAdminDomainName | Default value is default. Provide the cloudadmin domain name of your overcloud. |
CloudAdminPassword | Provide the cloudadmin user's password of your overcloud. |
ContainerTriliovaultDatamoverImage | Trilio Datamover container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerTriliovaultDatamoverApiImage | Trilio Datamover API container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerTriliovaultWlmImage | Trilio WLM container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
ContainerHorizonImage | Horizon container image name has already been populated in the preparation of the container images. Still, it is recommended to verify the container URL. |
TrilioBackupTargets | List of backup targets for TrilioVault. These backup targets will be used to store backups taken by TrilioVault. Backup target examples and the format of the NFS and S3 types are already provided in the trilio_env.yaml file. Details of the respective parameters under TrilioBackupTargets are given in the following tables. |
TrilioDatamoverOptVolumes | User can specify a list of extra volumes that they want to mount on the 'triliovault_datamover' container. Refer to the "Configure Custom Volume/Directory Mounts for the Trilio Datamover Service" section in this document. |
S3 backup target parameter | Description |
---|---|
backup_target_name | User-defined name of the target backend. Can be any name that helps to quickly identify the respective target. |
backup_target_type | s3 |
is_default | Can be true or false. One of the multiple target backends specified in the trilio_env.yaml file must be marked as true. |
s3_type | Could be either amazon s3 or ceph_s3, depending upon which S3 is to be configured with T4O. |
s3_access_key | S3 access key |
s3_secret_key | S3 secret key |
s3_region_name | S3 region name |
s3_bucket | S3 bucket |
s3_endpoint_url | S3 endpoint URL |
s3_signature_version | Provide the S3 signature version |
s3_auth_version | Provide the S3 auth version |
s3_ssl_enabled | true |
s3_ssl_verify | true |
s3_self_signed_cert | true |
s3_bucket_object_lock_enabled | If the S3 bucket has object lock enabled, set this to true, else false. |
NFS backup target parameter | Description |
---|---|
backup_target_name | User-defined name of the target backend. Can be any name that helps to quickly identify the respective target. |
backup_target_type | nfs |
is_default | Can be true or false. One of the multiple target backends specified in the trilio_env.yaml file must be marked as true. |
nfs_options | 'nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10' These parameters set the NFS mount options. Keep the default values unless a special requirement exists. |
is_multi_ip_nfs | true or false, depending upon whether the NFS storage backend has a single IP or multiple IPs. |
nfs_shares | NFS IP and share path. To be set in case of a single-IP NFS. E.g. 11.30.1.10:/mnt/share |
multi_ip_nfs_map | NFS IPs and share paths. To be set in case of multiple NFS IPs. Sample: multi_ip_nfs_map: controller1: 192.168.2.3:/var/nfsshare, controller2: 192.168.2.4:/var/nfsshare, compute0: 192.168.3.2:/var/nfsshare, compute1: 192.168.3.4:/var/nfsshare |
VMware migration parameter | Description |
---|---|
VmwareToOpenstackMigrationEnabled | Set it to True if this feature is required to be enabled, otherwise keep it as False. Populate all the below-mentioned parameters if it is set to True. |
VcenterUrl | vCenter access URL, for example https://vcenter-1.infra.trilio.io/ |
VcenterUsername | Access username (check out the privilege requirements here) |
VcenterPassword | Access user's password |
VcenterNoSsl | If the connection is to be established securely, set it to False. Set it to True if SSL verification is to be ignored. |
VcenterCACertFileName | If VcenterNoSsl is set to False, provide the name of the SSL certificate file uploaded at step 1.5.2. Otherwise, keep it blank. |
The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
Trilio integrates natively into the RHOSP Director. Manual deployment methods are not supported for RHOSP.
Refer to the table below for the acceptable values of the placeholders triliovault_tag, trilio_branch, RHOSP_version, and CONTAINER-TAG-VERSION used throughout this document, as per the OpenStack environment:
Trilio Release | triliovault_tag | trilio_branch | RHOSP_version | CONTAINER-TAG-VERSION |
---|---|---|---|---|
6.0.0 | 6.0.0-beta-1-rhosp17.1 | 6.0.0-beta-1 | RHOSP17.1 | 6.0.0-beta-1 |
Backup target storage is used to store the backup images taken by Trilio. The following backup target types are supported by Trilio, along with the details needed for configuration:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the workstation node of an already installed RHOSP environment. The overcloud-deploy command must already have been run successfully and the overcloud must be available.
The following command clones the triliovault-cfg-scripts github repository.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you need to provide a CA chain certificate to validate the SSL requests. For every S3 backup target with self-signed TLS certificates, copy the CA chain file into the Trilio puppet module at the location and with the file name format shown below. Edit the <S3_BACKUP_TARGET_NAME> and <S3_SELF_SIGNED_CERT_CA_CHAIN_FILE> parameters in the following command.
For example, if S3_BACKUP_TARGET_NAME = BT2_S3 and S3_SELF_SIGNED_CERT_CA_CHAIN_FILE='s3-ca.pem', then the command to copy this CA chain file into the Trilio puppet module would be:
Rename the file as vddk.tar.gz and place it at
Copy the vCenter SSL cert file to
Trilio contains multiple services. Add these services to your roles_data.yaml.
You need to find the roles_data.yaml file that is used for the OpenStack deployment. You will find it in the 'custom-templates' directory on the workstation node, where the cloud administrator has kept all custom heat templates; this directory name can be anything.
Please add all Trilio services to this roles_data.yaml file.
This service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services will run on the role Controller.
In the case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
This service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role Compute.
In the case of custom-defined roles, it is necessary to use the role that the nova-compute service uses.
Add the following line to the identified role:
This change will trigger step X: "Recreate config map tripleo-tarball-config due to change in custom templates"
Trilio containers are pushed to the RedHat Container Registry.
Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.
There are two registry methods available in the RedHat OpenStack Platform 17.1 on RHOCP.
Remote Registry
Image Registry on Red Hat Satellite Server
1] Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution.
Add 'registry.connect.redhat.com' RedHat Connect registry credentials to the containers-prepare-parameter.yaml env file. Please refer to the example below.
If you want to use TrilioVault images from Dockerhub, use the following approach: add the 'docker.io' registry to the containers-prepare-parameter.yaml env file. Please refer to the example below.
Note: The file 'containers-prepare-parameter.yaml' gets created as output of the command 'openstack tripleo container image prepare'. Refer to the RedHat document above.
2. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, the image pull operation will fail.
3. The user needs to manually populate the trilio_env.yaml file with Trilio container image URLs as given below.
trilio_env.yaml file path:
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
RedHat Connect Registry URL: registry.connect.redhat.com
Dockerhub Registry URL: docker.io
You can use either registry from the above, but make sure to use the same registry as you configured in step 1 of this section.
At this step, you have configured Trilio image URLs in the necessary environment file. You can skip step 3.2.
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container URLs.
At this step, you have downloaded Trilio container images into the RedHat satellite server and configured Trilio image URLs in the necessary environment file.
Edit the /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml file and provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images; still, it is recommended to verify the container URLs.
You don't need to provide anything for resource_registry; keep it as it is.
After you fill in the details of the backup targets in trilio_env.yaml, run the following script from the 'scripts' directory on the undercloud node. This script updates the 'services/triliovault-object-store.yaml' file; you do not need to verify that file.
The user needs to generate random passwords for Trilio resources using the following script.
This script will generate random passwords for all Trilio resources that are going to get created in OpenStack cloud.
Include this file in your overcloud deploy command as an environment file with the option "-e"
Run all steps of this section from 'openstackclient' pod.
Login to openstackclient pod.
For this section only, the user needs to source the cloudrc file of the overcloud.
The output will be written to
For TrilioVault functionality to work, the following Linux kernel modules need to be loaded on all controller and compute nodes (where the Trilio WLM and Datamover services are going to be installed).
Use ansible ad hoc commands or playbooks to copy the Trilio puppet module from openstack client pod to all overcloud nodes.
Login to openstackclient pod.
This step copies the Trilio puppet module from the path '/home/cloud-admin/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio' to all controller and compute nodes at the path '/etc/puppet/modules/trilio'.
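A hedged sketch of such an ad hoc copy, run inside the openstackclient pod as cloud-admin; the inventory path and host group are assumptions, so verify them against your environment first:

```bash
ansible -i <path_to_overcloud_inventory> allovercloud --become -m copy \
  -a "src=/home/cloud-admin/triliovault-cfg-scripts/redhat-director-scripts/rhosp17/puppet/trilio dest=/etc/puppet/modules/"
```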
In this step, we will copy all TrilioVault heat service templates to the custom templates folder and re-create custom-config.tar.gz. This section is valid for the following TrilioVault heat templates.
All heat template paths should be relative to the following directory:
First, we need to delete the existing config map.
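A sketch of the delete-and-recreate sequence, assuming the tarball ConfigMap is named tripleo-tarball-config (as referenced earlier) in the openstack namespace and that <path_to_custom_templates> is your custom templates directory:

```bash
# Verify the actual name/namespace first: oc get configmap -n openstack
oc delete configmap tripleo-tarball-config -n openstack

# Re-create the tarball from the custom templates (now including the Trilio templates)
tar -czf custom-config.tar.gz -C <path_to_custom_templates> .
oc create configmap tripleo-tarball-config -n openstack --from-file=custom-config.tar.gz
```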
RHOSP Reference document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_adding-custom-environment-files-to-the-overcloud-configuration_OSPdO-with-HCI
In this step, the user needs to copy the Trilio environment files to the custom environment files directory. This directory name can be anything.
Let's say this directory path is "PATH_TO/dir_custom_environment_files".
In the following commands, replace <dir_custom_environment_files> with the directory that contains the environment files you want to use in your overcloud deployment. The ConfigMap object stores these as individual data entries.
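A sketch of that ConfigMap creation, assuming the director-operator convention of a heat-env-config ConfigMap in the openstack namespace (adjust the name and namespace to your deployment):

```bash
oc create configmap -n openstack heat-env-config \
  --from-file=<dir_custom_environment_files>/ \
  --dry-run=client -o yaml | oc apply -f -
```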
RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_creating-ansible-playbooks-for-overcloud-configuration-with-the-openstackplaybookgenerator-CRD_OSPdO-overcloud-deploy
Before running this step, please take permission from the RedHat deployment team.
Check the status of the resource and wait till its status changes to "Finished".
RHOSP Reference Document: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_applying-overcloud-configuration-with-director-operator_OSPdO-overcloud-deploy
In this step, we will apply the new ansible playbooks generated in the previous step to the overcloud. Use the config version of the playbooks generated in the previous step, and update the config version in the openstack-deployment.yaml file.
Apply the updated definition.
Check the deployment progress and logs.
Make sure you see a successful deployment message at the bottom of the following logs. You may need to adjust the deployment name 'deploy-openstack-default' as per your environment.
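For example, with the oc client you can locate the deployment pod and follow its logs until completion (pod naming assumed from the deployment name above):

```bash
oc get pods -n openstack | grep deploy-openstack-default
oc logs -f -n openstack <deploy-openstack-default-pod-name>
```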
Following are the valid keys to define any backup target:
Parameter | Description |
---|---|
NfsShares | Provide the NFS share you want to use as the backup target for snapshots taken by TrilioVault. |
NfsOptions | This parameter sets the NFS mount options. Keep the default values unless a special requirement exists. |
S3Type | If your backup target is S3, provide the corresponding S3 type. |
S3AccessKey | Provide the S3 access key. |
S3SecretKey | Provide the S3 secret key. |
S3RegionName | Provide the S3 region. If your S3 type does not have a region parameter, keep the parameter as it is. |
S3Bucket | Provide the S3 bucket name. |
S3EndpointUrl | Provide the S3 endpoint URL. Not required for Amazon S3; if your S3 type does not require it, keep it as it is. |
S3SignatureVersion | Provide the S3 signature version. |
S3AuthVersion | Provide the S3 auth version. |
S3SslEnabled | Default value is False. If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint URL, change it to 'True'; otherwise keep it as 'False'. |