The installation of Trilio for OpenStack on Ansible OpenStack Victoria with Trilio 4.1 follows this procedure:
Deploy the T4O 4.1 GA appliance
Upgrade the appliance to the 4.1 HF5 packages
Deploy the Trilio components on OpenStack Victoria
Update the Trilio components on OpenStack Victoria
Configure the Trilio appliance
Please follow this deployment guide to spin up the base Trilio 4.1GA appliance.
Trilio supports Ansible OpenStack Victoria from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 Hotfix.
Run the deployment of the components following this guide using the following values:
Variable | Value |
---|---|
Branch | hotfix-13-TVO/4.1 |

Change the parameter OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria.
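A quick way to confirm the change, using the file path given above:

```bash
# The exact line format is an assumption about the variable file's YAML syntax:
grep -n "OPENSTACK_DIST" /etc/openstack_deploy/user_tvault_vars.yml
# expected to show the value: victoria
```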
Follow this guide to update the packages on the OpenStack environment.
Please follow this guide to configure the upgraded Trilio 4.1 appliance.
Trilio and Canonical have started a partnership to ensure a native deployment of Trilio using JuJu Charms.
These JuJu Charms are publicly available as open-source charms.
Trilio does not provide the JuJu Charms used to deploy Trilio 4.1 in Canonical OpenStack; they are developed and maintained by Canonical.
Canonical Openstack doesn't require the Trilio Cluster. The required services are installed and managed via JuJu Charms.
The following charms exist:
trilio-wlm Installs and manages Trilio Controller services.
trilio-dm-api Installs and manages the Trilio Datamover API service.
trilio-data-mover Installs and manages the Trilio Datamover service.
trilio-horizon-plugin Installs and manages the Trilio Horizon Plugin.
The documentation of the charms can be found here:
The installation of Trilio for OpenStack on Kolla Victoria with Trilio 4.1 follows this procedure:
Deploy the T4O 4.1 GA appliance
Upgrade to 4.1 HF5 or higher on the appliance
Deploy Trilio components of 4.1 HF5 or higher on the Kolla OpenStack Victoria environment
Configure the Trilio appliance
Please follow this deployment guide to spin up the base Trilio 4.1 GA appliance.
Trilio supports Kolla Victoria from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make the deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 Hotfix.
Run the deployment of the components following this guide, using the following values:
Variable | Value |
---|---|
Branch | hotfix-13-TVO/4.1 |
Tag | 4.1.94-hotfix-12-victoria |

Please follow this guide to configure the upgraded Trilio 4.1 appliance.
Once the Trilio VM or the Cluster of Trilio VMs has been spun up, the installation process can begin. This process contains the following steps:
Install the Trilio dm-api service on the control plane.
Install the Trilio datamover service on the compute plane.
Install the Trilio Horizon plugin into the Horizon service.
How these steps look in detail is dependent on the Openstack distribution that Trilio is installed in. Each supported Openstack distribution has its own deployment tools. Trilio integrates into these deployment tools to provide a native integration from the beginning to the end.
Backup target storage is used to store backup images taken by Trilio and details needed for configuration:
The following backup target types are supported by Trilio:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
The following steps are to be done on the 'undercloud' node of an already installed RHOSP environment. The overcloud-deploy command must already have been run successfully and the overcloud should be available.
All commands need to be run as user 'stack' on undercloud node
The following command clones the triliovault-cfg-scripts github repository.
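A sketch of the clone step, assuming the public trilioData GitHub repository and the branch documented for this release:

```bash
# Run as the 'stack' user on the undercloud:
cd /home/stack
git clone -b hotfix-13-TVO/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
```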
Please note that the Trilio Appliance needs to get updated to hf3 as well.
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, the CA chain certificate must be provided to validate the SSL requests. To do this, rename the CA chain cert file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_Directory>/puppet/trilio/files'.
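For illustration, the copy step for the Ceph S3 with SSL case can look like this (the release directory is a placeholder):

```bash
# Only needed for Ceph-based S3 with SSL and self-signed / private-CA certificates:
cp s3-cert.pem triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_Directory>/puppet/trilio/files/
```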
Trilio contains multiple services. Add these services to your roles_data.yaml.
If roles_data.yaml has not been customized, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services.
In case of the pre-defined roles, these services run on the role Controller.
In case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following line to the identified role:
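A minimal sketch of the expected addition, assuming the service name defined by the Trilio heat templates is OS::TripleO::Services::TrilioDatamoverApi (verify the exact name against the services shipped in your branch of triliovault-cfg-scripts):

```bash
# Append the Trilio Datamover API service to the role that also runs Keystone
# (Controller by default). The service name below is an assumption:
#   - OS::TripleO::Services::TrilioDatamoverApi
# After editing, confirm the entry exists in your copy of roles_data.yaml
# (adjust the path to wherever you keep the edited file):
grep -n "TrilioDatamoverApi" roles_data.yaml
```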
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role that the nova-compute service is using.
Add the following line to the identified role:
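As a sketch, assuming the service name defined by the Trilio heat templates is OS::TripleO::Services::TrilioDatamover (verify against your branch of triliovault-cfg-scripts):

```bash
# Append the Trilio Datamover service to the role that runs nova-compute
# (Compute by default). The service name below is an assumption:
#   - OS::TripleO::Services::TrilioDatamover
grep -n "TrilioDatamover" roles_data.yaml   # confirm the entry was added
```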
All commands need to be run as user 'stack'
Refer to the below-mentioned value of the respective placeholder in this document. HOTFIX-TAG-VERSION : 4.1.94-hotfix-12-tripleo
Trilio containers are pushed to 'Dockerhub'. Registry URL: 'docker.io'. Container pull URLs are given below.
There are two registry methods available in TripleO Openstack Platform.
Remote Registry
Local Registry
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the Trilio container URLs from Dockerhub registry.
Populate the trilio_env.yaml with container URLs for:
Trilio Datamover container
Trilio Datamover api container
Trilio Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments
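After filling in the URLs, a quick sanity check can be done with grep; the docker.io/trilio namespace shown here is an assumption, so compare against the pull URLs listed for your hotfix:

```bash
# Confirm the three Trilio image parameters reference docker.io with the expected tag
# (HOTFIX-TAG-VERSION = 4.1.94-hotfix-12-tripleo in this document):
grep -n -i "docker.io/trilio" triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml
```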
Follow this section when 'local registry' is used on the undercloud.
Run the following script. Script pulls the triliovault containers and updates the triliovault environment file with URLs.
Acceptable values for the below two parameters:
OS_platform: [centos7, centos8]
container_tool_available_on_undercloud: [docker, podman]
The changes can be verified using the following commands.
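A possible verification, assuming podman on a CentOS 8 undercloud (docker on CentOS 7):

```bash
# Confirm the Trilio images were pushed to the undercloud registry and that the
# environment file now points at them:
podman images | grep -i trilio
grep -n -i "trilio" /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml
```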
Fill in the triliovault details in the file '/home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml'. The triliovault environment file is self-explanatory: fill in the backup target details, verify the image URLs, and review the other details.
Use the following heat environment file and roles data file in overcloud deploy command
trilio_env.yaml: This environment file contains Trilio backup target details and Trilio container image locations
roles_data.yaml: This file contains overcloud roles data with Trilio roles added.
Use the correct trilio endpoint map file as per your keystone endpoint configuration.
- Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
- Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
- Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
Deploy command with triliovault environment file looks like following.
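An illustrative deploy command; the file paths are assumptions, so extend your own existing deploy command and swap in the matching trilio_env_tls_* endpoint map file if TLS is configured:

```bash
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/tripleo-train/environments/trilio_env.yaml \
  -r /home/stack/templates/roles_data.yaml
```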
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
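Example checks (container names are assumptions and may differ slightly between releases; use docker or podman as available on your nodes):

```bash
# Controller nodes: dmapi and the Trilio-enabled horizon container
docker ps | grep -E -i "trilio_dmapi|horizon"
# Compute nodes: the datamover container
docker ps | grep -i trilio_datamover
```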
Please follow this deployment guide to spin up the base Trilio 4.1GA appliance.
Trilio supports TripleO Train from 4.1HF5 onwards, so it is recommended to upgrade to the latest available hotfix on 4.1 to make deployment successful. Please follow this upgrade guide to upgrade the appliance to the latest 4.1 Hotfix.
Please follow this guide to configure the upgraded Trilio 4.1 appliance.
Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.
Trilio is by default using the nova user id and group id 997:998. Ansible OpenStack does not always use the 'nova' user id 162 on the nova-compute containers. The 'nova' user id on the Trilio nodes needs to be set to the same value as in the nova-compute containers. Do the following steps on all Trilio nodes in case the nova id is not 162:162:
Download the shell script that will change the user-id
Assign executable permissions
Edit script to use the correct nova id
Execute the script
Verify that 'nova' user and group id has changed to the desired value
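A short verification sketch for each Trilio node (162 is used here as the example id from the nova-compute containers):

```bash
id nova
# expected output similar to: uid=162(nova) gid=162(nova) groups=162(nova)
```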
Clone triliovault-cfg-scripts from github repository on Ansible Host.
Available values for <branch>:
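A sketch of the clone step, assuming the public trilioData GitHub repository; <branch> is one of the values from the branch table at the end of this page (hotfix-13-TVO/4.1 for Ussuri and Victoria):

```bash
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
```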
Copy Ansible roles and vars to required places.
In case of installing on OSA Victoria, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria.
Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.
Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml
Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.
Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml and append the required details like the Trilio Appliance IP address, Trilio package version, OpenStack distribution, snapshot storage backend, SSL related information, etc.
The possible package versions are:
GA Trilio 4.1: 4.1.94
Run the following commands to deploy only Trilio components in case of an already deployed Ansible Openstack.
If Ansible Openstack is not already deployed then run the native Openstack deployment commands to deploy Openstack and Trilio Components together. An example for the native deployment command is given below:
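An example of the native workflow, assuming a standard openstack-ansible playbook layout; adapt it to your own deployment procedure:

```bash
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml   # now includes the Trilio playbook added earlier
```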
Verify that the triliovault datamover api service is deployed and started correctly. Run the below commands on the controller node(s).
Verify that the triliovault datamover service is deployed and started correctly on the compute node(s). Run the following command on the compute node(s).
Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.
Run the following command on Horizon container.
Verify the haproxy settings on the controller node using the below commands.
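Illustrative checks only; container, process, and backend names can differ per environment:

```bash
# Controller: the dmapi normally runs in its own LXC container in an OSA deployment
lxc-ls -f | grep -i dmapi
# Controller: haproxy should have a datamover backend configured
grep -i datamover /etc/haproxy/haproxy.cfg
# Compute: the datamover process should be running
ps aux | grep -i "[t]vault"
```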
After the deployment has been verified it is recommended to update to the latest hotfix to ensure the best possible experience.
To update the environment follow this procedure.
This page lists all steps required to deploy Trilio components on Kolla-ansible deployed OpenStack cloud.
Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.
Refer to the below-mentioned acceptable values for the placeholders in this document, as per the OpenStack environment:
kolla_base_distro : ubuntu / centos
triliovault_tag : 4.1.94-hotfix-13-ussuri / 4.1.94-hotfix-12-victoria
Backup target storage is used to store backup images taken by Trilio and details needed for configuration:
The following backup target types are supported by Trilio. Select one of them and have it ready before proceeding to the next step.
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (like Ceph-based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
Clone triliovault-cfg-scripts GitHub repository on Kolla ansible server at '/root' or any other directory of your preference. Afterward, copy the Trilio Ansible role into the Kolla-ansible roles directory
Append triliovault_passwords.yml to /etc/kolla/passwords.yml. The passwords are empty and must be set manually: edit /etc/kolla/passwords.yml, go to the end of the file, and set the Trilio passwords.
Edit the /etc/kolla/globals.yml file to fill in the Trilio backup target and build details. You will find the Trilio related parameters at the end of the globals.yml file. Details like the Trilio build version, backup target type, backup target details, etc. need to be filled in.
Following is the list of parameters that the user needs to edit.
In the case of a different registry than docker hub, Trilio containers need to be pulled from docker.io and pushed to preferred registries.
Following are the triliovault container image URLs. Replace kolla_base_distro and triliovault_tag variables with their values.
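If a private registry is used, the images can be mirrored as sketched below; <image-name> stands for the datamover, datamover-api, and horizon-plugin image names listed in this section, the docker.io/trilio namespace is an assumption, and the registry hostname is a placeholder:

```bash
docker pull docker.io/trilio/<image-name>:<triliovault_tag>
docker tag  docker.io/trilio/<image-name>:<triliovault_tag> myregistry.example.com:5000/trilio/<image-name>:<triliovault_tag>
docker push myregistry.example.com:5000/trilio/<image-name>:<triliovault_tag>
```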
To enable Trilio's Snapshot mount feature it is necessary to make the Trilio Backup target available to the nova-compute and nova-libvirt containers.
Edit /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml and find the nova_libvirt_default_volumes variable. Append the Trilio mount bind /var/trilio:/var/trilio:shared to the list of already existing volumes.
For a default Kolla installation, the variable will look as follows afterward:
Next, find the variable nova_compute_default_volumes in the same file and append the mount bind /var/trilio:/var/trilio:shared to the list.
After the change, the variable will look as follows for a default Kolla installation:
In case of using Ironic compute nodes, one more entry needs to be adjusted in the same file. Find the variable nova_compute_ironic_default_volumes and append the Trilio mount /var/trilio:/var/trilio:shared to the list.
After the changes, the variable will look like the following:
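A sketch of the intended end state (existing entries abbreviated) and a quick check that the bind mount is present:

```bash
# Intended result, shown as a comment; keep all existing entries and only append
# the Trilio bind mount to each of the three variables:
#
#   nova_libvirt_default_volumes:
#     - ...existing volume entries...
#     - "/var/trilio:/var/trilio:shared"
#
grep -n "var/trilio" /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml
```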
Pull the Trilio container images from Dockerhub based on the existing inventory file. In this example, the inventory file is named multinode.
All that is left is to run the deploy command using the existing inventory file. In this example, the inventory file is named 'multinode'.
This is just an example command. You need to use your cloud deploy command.
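For example, with an inventory file named multinode (substitute your own inventory and deploy command):

```bash
kolla-ansible -i ./multinode pull
kolla-ansible -i ./multinode deploy
```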
Verify on the nodes that are supposed to run the Trilio containers, that those are available and healthy.
To see all TrilioVault containers running on a specific node use the docker ps command.
To check the startup logs use the docker logs <container name> command.
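For example (container names are assumptions and can vary by release):

```bash
docker ps | grep -i trilio
docker logs triliovault_datamover_api
```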
Verify that the Trilio Appliance is configured. The Horizon tabs are only shown when a configured Trilio appliance is available.
Verify that the Trilio horizon container is installed and in a running state.
Trilio datamover api service logs on datamover api node
Trilio datamover service logs on datamover node
Note: This step needs to be done on Trilio Appliance node. Not on OpenStack node.
Pre-requisite: You should have already launched Trilio appliance VM
In the Kolla OpenStack distribution, the nova user id on the nova-compute docker container is set to '42436'. The nova user id on the Trilio nodes needs to be set to the same value. Do the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that the nova user and group id have changed to '42436'
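A quick verification sketch for each Trilio node:

```bash
id nova
# expected output similar to: uid=42436(nova) gid=42436(nova) groups=42436(nova)
```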
After this step, you can proceed to the 'Configuring Trilio' section.
The RHOSP Director is the supported and recommended method to deploy and maintain any RHOSP installation. Trilio integrates natively into the RHOSP Director, and manual deployment methods are not supported for RHOSP.
Backup target storage is used to store backup images taken by Trilio and details needed for configuration. The following backup target types are supported by Trilio:

Backup Target Types | Required Configuration |
---|---|
NFS | NFS share path |
Amazon S3 | S3 Access Key, Secret Key, Region, Bucket name |
Other S3 compatible storage (like Ceph-based S3) | S3 Access Key, Secret Key, Region, Endpoint URL, Bucket name |
The overcloud-deploy command must already have been run successfully prior to this point and the overcloud should be available. Perform the following steps on the 'undercloud' node of an existing RHOSP environment:
All commands need to be run as user 'stack' on undercloud node
RHOSP 16.0 is not supported anymore as RedHat has officially stopped supporting it. However, Trilio maintained it for some time and stopped the support from 4.1HF11 onwards. The latest hotfix available for RHOSP 16.0 is 4.1 HF10. Reach out to the Support team for any help.
Ensure that the Trilio appliance connected to this installation is on the latest Hotfix version. Failure to ensure this may lead to your installation not working as expected.
Refer to this doc :
Run the following command to clone the triliovault-cfg-scripts github repository:
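A sketch of the clone, assuming the public trilioData GitHub repository; check out the branch that matches your Trilio 4.1 hotfix level:

```bash
cd /home/stack
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
```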
If your backup target type is 'Ceph-based S3' with SSL, skip this step. Otherwise, access the Red Hat Director scripts according to the RHOSP version being used:
RHOSP 13 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp13/
RHOSP 16.1 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
RHOSP 16.2 - cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/
If your backup target is Ceph S3 with SSL and your SSL certificates are self-signed or authorized by private CA, you must provide the CA chain certificate to validate the SSL requests. Otherwise, skip this step. To do this:
Rename your CA chain cert file to s3-cert.pem.
Copy the renamed file into the following directory:
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_Directory>/puppet/trilio/files
If your overcloud deploy command uses any other deploy artifact through an environment file, then you must merge Trilio deploy artifact url and your url in a single file.
Then access the Red Hat Director scripts according to the version being used:
RHOSP 13 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp13/puppet/trilio/files/
RHOSP 16.1 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/puppet/trilio/files/
RHOSP 16.2 - cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.2/puppet/trilio/files/
From this point onwards in the documentation, only the following path will be used for examples: cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/
The following commands upload the Trilio puppet module to the overcloud registry. The upload only happens upon the next deployment.
Step 1: -
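The upload is typically done with a helper script shipped in triliovault-cfg-scripts; the script path below is an assumption, so check the scripts directory of your branch before running it:

```bash
# Hypothetical invocation -- verify the script name/location in your branch first:
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/scripts
./upload_puppet_module.sh
```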
The output of the above command looks like the following.
Trilio puppet module is uploaded to overcloud as a swift deploy artifact with heat resource name 'DeployArtifactURLs'.
Step 2: - Check Trilio's Puppet module artifact file and ensure that it looks like the following:
Step 3: First, check whether your overcloud deploy environment files already use deploy artifacts. To do this, search for the string DeployArtifactURLs in your environment files (only those mentioned in the overcloud deploy command with the -e option). If you find the string in any of those environment files, then your deploy command is already using deploy artifacts.
If your deploy command is using deploy artifact, you must merge all deploy artifacts in a single file. For example, if your artifact file path is /home/stack/templates/user-artifacts.yaml, then perform the following steps to merge both urls in single file, which is passed to the overcloud deploy command with the -e option.
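A sketch of what the merged artifacts file can look like, assuming both URLs are simply collected under a single DeployArtifactURLs list (the URLs themselves are placeholders):

```bash
# Write a single merged artifacts file and pass only this one with -e:
cat > /home/stack/templates/user-artifacts.yaml <<'EOF'
parameter_defaults:
  DeployArtifactURLs:
    - "<existing artifact URL from your current artifacts file>"
    - "<Trilio puppet module artifact URL from the previous step>"
EOF
```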
Trilio contains multiple services. Add these services to your roles_data.yaml.
If roles_data.yaml has not been customized, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
This service needs to share the same role as the keystone and database services.
In case of the pre-defined roles, these services run on the role Controller.
In case of custom-defined roles, it is necessary to use the same role where the OS::TripleO::Services::Keystone service is installed.
Add the following line to the identified role:
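As an illustration only, assuming the Trilio heat templates define the service as OS::TripleO::Services::TrilioDatamoverApi (confirm against the templates in your triliovault-cfg-scripts branch):

```bash
# Append to the role that runs Keystone (Controller by default); the service
# name below is an assumption:
#   - OS::TripleO::Services::TrilioDatamoverApi
grep -n "TrilioDatamoverApi" roles_data.yaml   # confirm the entry was added
```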
This service needs to share the same role as the nova-compute service.
In case of the pre-defined roles, the nova-compute service runs on the role Compute.
In case of custom-defined roles, it is necessary to use the role that the nova-compute service is using.
Add the following line to the identified role:
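As an illustration only, assuming the Trilio heat templates define the service as OS::TripleO::Services::TrilioDatamover (confirm against your branch):

```bash
# Append to the role that runs nova-compute (Compute by default); the service
# name below is an assumption:
#   - OS::TripleO::Services::TrilioDatamover
grep -n "TrilioDatamover" roles_data.yaml   # confirm the entry was added
```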
All commands need to be run as user 'stack'
Trilio containers are pushed to 'RedHat Container Registry'.
Registry URL: registry.connect.redhat.com
Container pull URLs are given below.
Please note that using the hotfix containers requires that the Trilio Appliance is getting upgraded to the desired hotfix level as well.
In the sections below, the placeholder <HOTFIX-TAG-VERSION> refers to 4.1.94-hotfix-16.
There are three registry methods available in the RedHat OpenStack Platform.
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
In this method, container images get downloaded directly on overcloud nodes during overcloud deploy/update command execution. The user can set the remote registry to the RedHat registry or any other private registry.
The user needs to provide credentials for the registry in the containers-prepare-parameter.yaml file.
Make sure other OpenStack service images are also using the same method to pull container images. If that is not the case, you cannot use this method.
Populate containers-prepare-parameter.yaml with content like the following. Important parameters are 'push_destination: false', 'ContainerImageRegistryLogin: true', and the registry credentials.
Trilio container images are published to the registry registry.connect.redhat.com. The credentials of the registry 'registry.redhat.io' will work for the registry.connect.redhat.com registry too.
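For orientation only, a fragment of what containers-prepare-parameter.yaml can contain; the credentials are placeholders and the fragment must be merged with the existing ContainerImagePrepare configuration rather than replacing it:

```bash
# Print an illustrative fragment (do not blindly overwrite your existing file):
cat <<'EOF'
parameter_defaults:
  ContainerImagePrepare:
    - push_destination: false
  ContainerImageRegistryLogin: true
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      <username>: <password>
    registry.connect.redhat.com:
      <username>: <password>
EOF
```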
Note: The file containers-prepare-parameter.yaml gets created as the output of the command 'openstack tripleo container image prepare'. Refer to the above RedHat document.
3. Make sure you have network connectivity to the above registries from all overcloud nodes. Otherwise, image pull operation will fail.
4. Populate the trilio_env.yaml with Trilio container image URLs:
Trilio Datamover container
Trilio Datamover API container
Trilio Horizon Plugin
trilio_env.yaml will be available in triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments
Follow this section when 'local registry' is used on the undercloud.
In this case, it is necessary to push the Trilio containers to the undercloud registry. Trilio provides shell scripts that will pull the containers from 'registry.connect.redhat.com' and push them to the undercloud and update the trilio_env.yaml.
The changes can be verified using the following commands.
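A possible way to verify, assuming podman on RHOSP 16.x (docker on RHOSP 13) and the rhosp16.1 example path used in this document:

```bash
# Check that the Trilio images landed in the undercloud registry and that the
# environment file now references them:
podman images | grep -i trilio
grep -n -i "trilio" /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml
```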
Follow this section when a Satellite Server is used for the container registry.
Populate the trilio_env.yaml with container urls.
Provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated in the preparation of the container images. Still it is recommended to verify the container URLs.
The following additional information is required:
Network for the datamover api
datamover password
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
In case of S3
S3 type {amazon_s3/ceph_s3}
S3 Access key
S3 Secret key
S3 Region name
S3 Bucket
S3 Endpoint URL
S3 Signature Version
S3 Auth Version
S3 SSL Enabled {true/false}
S3 SSL Cert
Use ceph_s3 for any non-aws S3 backup targets.
The existing default haproxy configuration works fine with most environments. Change the configuration as described here only if timeout issues with the dmapi are observed or other specific reasons require it.
Following is the haproxy conf file location on haproxy nodes of the overcloud. Trilio datamover api service haproxy configuration gets added to this file.
Trilio datamover haproxy default configuration from the above file looks as follows:
The user can change the following configuration parameter values.
To change these default values, you need to do the following steps. i) On the undercloud node, open the following file for editing (replace <RHOSP_RELEASE> with your cloud's release information; valid values are rhosp13, rhosp16, rhosp16.1, rhosp16.2)
For RHOSP13
For RHOSP16.0
For RHOSP16.1
For RHOSP16.2
ii) Search the following entries and edit as required
iii) Save the changes.
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use the correct Trilio endpoint map file as per available Keystone endpoint configuration
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:
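An illustrative command only; extend your existing overcloud deploy command rather than copying this one, and swap in the TLS endpoint map variant listed above if your cloud uses TLS:

```bash
openstack overcloud deploy --templates \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/environments/trilio_env.yaml \
  -r /home/stack/templates/roles_data.yaml
```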
If the containers are in restarting state or not listed by the following command then your deployment is not done correctly. Please recheck if you followed the complete documentation.
Make sure Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on controller nodes. When the role for these containers is not "controller" check on respective nodes according to configured roles_data.yaml.
Verify the haproxy configuration under:
Make sure Trilio datamover container is in running state and no other Trilio container is deployed on compute nodes.
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container will have the latest OpenStack horizon + Trilio's horizon plugin.
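Example checks, assuming the usual container names (they may differ slightly between releases) and podman on RHOSP 16.x / docker on RHOSP 13:

```bash
# Controller nodes: dmapi and the Trilio-enabled horizon container
podman ps | grep -E -i "trilio_dmapi|horizon"
# Compute nodes: the datamover container
podman ps | grep -i trilio_datamover
```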
In RHOSP, the nova user id on the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set the same. Do the following steps on all Trilio nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that the nova user and group id have changed to '42436'
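A quick verification sketch for each Trilio node:

```bash
# 42436 is the uid/gid used by the nova-compute containers in RHOSP:
id nova
# expected output similar to: uid=42436(nova) gid=42436(nova) groups=42436(nova)
```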
Trilio components will be deployed using puppet scripts.
Openstack Version | Branch |
---|---|
Ussuri | hotfix-13-TVO/4.1 |
Victoria | hotfix-13-TVO/4.1 |

Redhat document for remote registry method:
Pull the Trilio containers on the Red Hat Satellite using the given
In case the overcloud deployment fails, run the following command to get the list of errors. The following document also provides valuable insights:

Parameter | Defaults/choices | comments |
---|---|---|
triliovault_tag | <triliovault_tag> | Container tags. Use ussuri tagged containers for Ussuri and victoria tagged containers for Victoria |
horizon_image_full | Keep Default | By default the Trilio Horizon container will not get deployed. Uncomment this parameter to deploy the Trilio Horizon container instead of the Openstack Horizon container. |
triliovault_docker_username | triliodocker | default docker user of Trilio (read permission only) |
triliovault_docker_password | triliopassword | password for default docker user of Trilio |
triliovault_docker_registry | Default value: docker.io | Edit this value if a different container registry for Trilio containers is to be used. Containers need to be pulled from docker.io and pushed to the chosen registry first. |
triliovault_backup_target | nfs / amazon_s3 / ceph_s3 | nfs if the backup target is NFS; amazon_s3 if the backup target is Amazon S3; ceph_s3 if the backup target type is S3 but not Amazon S3. |
triliovault_nfs_shares | <NFS-IP/FQDN>:/<NFS path> | NFS share path example: '192.168.122.101:/nfs/tvault' |
triliovault_nfs_options | 'nolock,soft,timeo=180,intr,lookupcache=none' | These parameters set NFS mount options. Keep the default values, unless a special requirement exists. |
triliovault_s3_access_key | S3 Access Key | Valid for amazon_s3 and ceph_s3 |
triliovault_s3_secret_key | S3 Secret Key | Valid for amazon_s3 and ceph_s3 |
triliovault_s3_region_name | Default value: us-east-1 | S3 Region name. Valid for amazon_s3 and ceph_s3. If the S3 storage doesn't have a region parameter, keep the default. |
triliovault_s3_bucket_name | S3 Bucket name | Valid for amazon_s3 and ceph_s3 |
triliovault_s3_endpoint_url | S3 Endpoint URL | Valid for ceph_s3 only |
triliovault_s3_ssl_enabled | True / False | Valid for ceph_s3 only. Set true for an SSL enabled S3 endpoint URL. |
triliovault_s3_ssl_cert_file_name | s3-cert.pem | Valid for ceph_s3 only, with SSL enabled and self-signed certificates or certificates issued by a private authority. In this case, copy the ceph s3 ca chain file to the /etc/kolla/config/triliovault/ directory on the ansible server. Create this directory if it does not exist already. |
triliovault_copy_ceph_s3_ssl_cert | True / False | Valid for ceph_s3 only. Set to True when SSL is enabled with self-signed certificates or certificates issued by a private authority. |