The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.
TrilioVault integrates natively with the RHOSP Director. Manual deployment methods are not supported for RHOSP.
The backup target storage is used to store the backup images taken by TrilioVault. The details needed for configuration depend on the backup target type.
The following backup target types are supported by TrilioVault:
a) NFS
- NFS share path
b) Amazon S3
- S3 Access Key
- Secret Key
- Region
- Bucket name
c) Other S3 compatible storage (for example, Ceph based S3)
- S3 Access Key
- Secret Key
- Region
- Endpoint URL (valid for S3 other than Amazon S3)
- Bucket name
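As an illustration, the gathered details might look like the following; every value shown here is a placeholder, not a default:

# NFS backup target: only the share path is needed
192.168.122.101:/opt/tvault

# Amazon S3 backup target
S3 Access Key: AKIAXXXXXXXXXXXXXXXX
S3 Secret Key: <secret-key>
Region:        us-east-1
Bucket name:   triliovault-backups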
The following steps are to be done on the undercloud node of an already installed RHOSP environment. The overcloud deploy command must already have completed successfully and the overcloud must be available.
All commands need to be run as the user 'stack' on the undercloud node.
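Undercloud CLI commands used later in this guide (for example 'openstack tripleo container image list' and 'openstack overcloud deploy') also assume that the undercloud credentials have been loaded; a typical session would therefore start with something like the following (paths are the RHOSP defaults):

# log in as the 'stack' user and load the undercloud credentials
su - stack
source ~/stackrc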
The following commands clone the triliovault-cfg-scripts GitHub repository and change into the directory for your RHOSP release.
cd /home/stack
git clone -b stable/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/
Available RHOSP_RELEASE_DIRECTORY values are:
rhosp13
rhosp16
rhosp16.1
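For example, on an RHOSP 16.1 undercloud the clone and directory change would look like this:

cd /home/stack
git clone -b stable/4.1 https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/rhosp16.1/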
If your backup target is Ceph S3 with SSL, and the SSL certificates are self-signed or authorized by a private CA, then you need to provide the CA chain certificate so that the SSL requests can be validated. Rename the CA chain certificate file to 's3-cert.pem' and copy it into the directory 'triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files'.
cp s3-cert.pem /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/puppet/trilio/files/
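If the chain file needs to be assembled first, or you want to confirm what it contains before copying it, the following sketch shows one common way to do this with standard openssl tooling; the input certificate filenames are placeholders:

# Build the chain file from your CA certificates (filenames are examples)
cat intermediate-ca.pem root-ca.pem > s3-cert.pem

# Optional sanity check: list the certificates contained in the chain file
openssl crl2pkcs7 -nocrl -certfile s3-cert.pem | openssl pkcs7 -print_certs -noout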
The following commands upload the Trilio puppet module to the overcloud as a Swift deploy artifact. The module is applied to the overcloud nodes during the next deployment.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/
./upload_puppet_module.sh

## Output of the above command looks like the following.
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## The above command creates the following file.
ls -ll /home/stack/.tripleo/environments/puppet-modules-url.yaml
The Trilio puppet module is uploaded to the overcloud as a Swift deploy artifact with the heat resource name 'DeployArtifactURLs'. The generated artifact file looks like the following:
(undercloud) [[email protected] ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
Note: If your overcloud deploy command already uses another deploy artifact through an environment file, then you need to merge the Trilio deploy artifact URL and your existing URL into a single file.
To check whether your overcloud deploy command already uses deploy artifacts, search for the string 'DeployArtifactURLs' in the environment files passed to the overcloud deploy command with the '-e' option. If any of these files contain it, your deploy command is already using a deploy artifact.
In that case you need to merge all deploy artifact URLs into a single file, as shown in the following steps.
For example, if your existing artifact file is "/home/stack/templates/user-artifacts.yaml", the following steps merge both URLs into that single file, which is then passed to the overcloud deploy command with the '-e' option.
(undercloud) [[email protected] ~]$ cat /home/stack/.tripleo/environments/puppet-modules-url.yaml | grep http >> /home/stack/templates/user-artifacts.yaml
(undercloud) [[email protected] ~]$ cat /home/stack/templates/user-artifacts.yaml
# Heat environment to deploy artifacts via Swift Temp URL(s)
parameter_defaults:
  DeployArtifactURLs:
    - 'http://172.25.0.103:8080/v1/AUTH_57ba596219d143c8b076e9fcc4139f3g/overcloud-artifacts/some-artifact.tar.gz?temp_url_sig=dc972b7ce75226c278ab3fa8237d31cc1f2115sc&temp_url_expires=3446738365'
    - 'http://172.25.0.103:8080/v1/AUTH_46ba596219d143c8b076e9fcc4139fed/overcloud-artifacts/puppet-modules.tar.gz?temp_url_sig=c3972b7ce75226c278ab3fa8237d31cc1f2115bd&temp_url_expires=1646738377'
TrilioVault contains multiple services. Add these services to your roles_data.yaml.
In the case of an uncustomized roles_data.yaml, the default file can be found on the undercloud at:
/usr/share/openstack-tripleo-heat-templates/roles_data.yaml
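If you are starting from this default file, a common approach is to copy it into your templates directory and edit the copy; the destination path below is an assumption matching the roles data file used in the example deploy command later in this guide:

cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml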
Add the following services to the roles_data.yaml
All commands need to be run as user 'stack'
The TrilioVault Datamover API service needs to share the same role as the keystone and database services.
In the case of the pre-defined roles, these services run on the role 'Controller'.
In the case of custom-defined roles, it is necessary to use the same role where the 'OS::TripleO::Services::Keystone' service is installed.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamoverApi'
The TrilioVault Datamover service needs to share the same role as the nova-compute service.
In the case of the pre-defined roles, the nova-compute service runs on the role 'Compute'.
In the case of custom-defined roles, it is necessary to use the role the nova-compute service is using.
Add the following line to the identified role:
'OS::TripleO::Services::TrilioDatamover'
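A minimal sketch of how the two additions look in a roles_data.yaml that uses the default Controller and Compute roles; only the tail of each ServicesDefault list is shown and the surrounding entries are illustrative:

- name: Controller
  ServicesDefault:
    # ... existing controller services ...
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::TrilioDatamoverApi
- name: Compute
  ServicesDefault:
    # ... existing compute services ...
    - OS::TripleO::Services::NovaCompute
    - OS::TripleO::Services::TrilioDatamover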
All commands need to be run as user 'stack'
Trilio containers are pushed to the 'Red Hat Container Registry'. Registry URL: 'registry.connect.redhat.com'. The container pull URLs are given below.
For RHOSP13:
TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp13
TrilioVault Datamover API container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp13
TrilioVault Horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp13

For RHOSP16:
TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp16
TrilioVault Datamover API container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp16
TrilioVault Horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp16

For RHOSP16.1:
TrilioVault Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp16.1
TrilioVault Datamover API container: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp16.1
TrilioVault Horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp16.1
There are three registry methods available in Red Hat OpenStack Platform:
Remote Registry
Local Registry
Satellite Server
Follow this section when 'Remote Registry' is used.
For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml file with the TrilioVault container URLs from the Red Hat registry.
Populate the trilio_env.yaml with container URLs for:
TrilioVault Datamover container
TrilioVault Datamover api container
TrilioVault Horizon Plugin
trilio_env.yaml will be available in
triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments
# For RHOSP13
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp13
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp13
DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp13

# For RHOSP16
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp16
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp16
ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp16

# For RHOSP16.1
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:4.1.94-rhosp16.1
DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:4.1.94-rhosp16.1
ContainerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:4.1.94-rhosp16.1
Follow this section when 'local registry' is used on the undercloud.
In this case it is necessary to push the TrilioVault containers to the undercloud registry. TrilioVault provides a shell script which pulls the containers from 'registry.connect.redhat.com', pushes them to the undercloud registry, and updates the trilio_env.yaml.
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/

./prepare_trilio_images.sh <undercloud_ip> <container_tag>

# Example:
./prepare_trilio_images.sh 192.168.13.34 4.1.94-rhosp13

## Verify changes
# For RHOSP13
$ grep '4.1.94-rhosp13' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: 172.25.2.2:8787/trilio/trilio-datamover:4.1.94-rhosp13
DockerTrilioDmApiImage: 172.25.2.2:8787/trilio/trilio-datamover-api:4.1.94-rhosp13
DockerHorizonImage: 172.25.2.2:8787/trilio/trilio-horizon-plugin:4.1.94-rhosp13
cd /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/scripts/

sudo ./prepare_trilio_images.sh <UNDERCLOUD_REGISTRY_HOSTNAME> <CONTAINER_TAG>

## Run the following command to find 'UNDERCLOUD_REGISTRY_HOSTNAME'.
## In the below example 'trilio-undercloud.ctlplane.localdomain' is <UNDERCLOUD_REGISTRY_HOSTNAME>
$ openstack tripleo container image list | grep keystone
| docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-keystone:16.0-82 |
| docker://trilio-undercloud.ctlplane.localdomain:8787/rhosp-rhel8/openstack-barbican-keystone-listener:16.0-84

## 'CONTAINER_TAG' format for RHOSP16: <TRILIOVAULT_VERSION>-rhosp16
## 'CONTAINER_TAG' format for RHOSP16.1: <TRILIOVAULT_VERSION>-rhosp16.1
## For example, if the TrilioVault version is '4.1.94', then 'CONTAINER_TAG' is 4.1.94-rhosp16 for RHOSP16
## and 4.1.94-rhosp16.1 for RHOSP16.1

## Example
sudo ./prepare_trilio_images.sh trilio-undercloud.ctlplane.localdomain 4.1.94-rhosp16
The changes can be verified using the following commands.
(undercloud) [[email protected] redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>]$ openstack tripleo container image list | grep trilio
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:4.1.94-rhosp16 | |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.1.94-rhosp16 | |
| docker://undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-rhosp16 |

(undercloud) [[email protected] redhat-director-scripts]$ grep 'Image' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:4.1.94-rhosp16
DockerTrilioDmApiImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.1.94-rhosp16
ContainerHorizonImage: undercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-rhosp16
Follow this section when a Satellite Server is used for the container registry.
Pull the TrilioVault containers on the Red Hat Satellite using the Red Hat registry URLs given above.
Populate the trilio_env.yaml with the container URLs.
## For RHOSP13
$ grep '4.1.94-rhosp13' ../environments/trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.1.94-rhosp13
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.1.94-rhosp13
DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.1.94-rhosp13
## For RHOSP16
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.1.94-rhosp16
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.1.94-rhosp16
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.1.94-rhosp16

## For RHOSP16.1
$ grep 'Image' trilio_env.yaml
DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.1.94-rhosp16.1
DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.1.94-rhosp16.1
ContainerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.1.94-rhosp16.1
Provide the backup target details and other necessary details in the provided environment file. This environment file is used in the overcloud deployment to configure Trilio components. The container image names have already been populated during the preparation of the container images. Still, it is recommended to verify the container URLs.
The following information is additionally required:
Network for the datamover api
datamover password
Backup target type {nfs/s3}
In case of NFS
list of NFS Shares
NFS options
In case of S3
S3 type {amazon_s3/ceph_s3}
S3 Access key
S3 Secret key
S3 Region name
S3 Bucket
S3 Endpoint URL
S3 Signature Version
S3 Auth Version
S3 SSL Enabled {true/false}
S3 SSL Cert
Use ceph_s3 for any non-AWS S3 backup targets.
resource_registry:
  OS::TripleO::Services::TrilioDatamover: ../services/trilio-datamover.yaml
  OS::TripleO::Services::TrilioDatamoverApi: ../services/trilio-datamover-api.yaml
  # NOTE: If there are additional customizations to the endpoint map (e.g. for
  # other integrations), this will need to be regenerated.
  OS::TripleO::EndpointMap: endpoint_map.yaml

parameter_defaults:

  ## Enable TrilioVault's quota functionality on horizon
  ExtraConfig:
    horizon::customization_module: 'dashboards.overrides'

  ## Define network map for trilio datamover api service
  ServiceNetMap:
    TrilioDatamoverApiNetwork: internal_api

  ## TrilioVault Datamover Password for keystone and database
  TrilioDatamoverPassword: "test1234"

  ## TrilioVault container pull urls
  DockerTrilioDatamoverImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover:4.0.41-rhosp16
  DockerTrilioDmApiImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.0.41-rhosp16

  ## If you do not want Trilio's horizon plugin to replace your horizon container, just comment the following line.
  ContainerHorizonImage: devundercloud.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.0.41-rhosp16

  ## Backup target type nfs/s3, used to store snapshots taken by triliovault
  BackupTargetType: 'nfs'

  ## For backup target 'nfs'
  NfsShares: '192.168.122.101:/opt/tvault'
  NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'

  ## For backup target 's3'
  ## S3 type: amazon_s3/ceph_s3
  S3Type: 'amazon_s3'

  ## S3 access key
  S3AccessKey: ''

  ## S3 secret key
  S3SecretKey: ''

  ## S3 region, if your s3 does not have any region, just keep the parameter as it is
  S3RegionName: ''

  ## S3 bucket name
  S3Bucket: ''

  ## S3 endpoint url, not required for Amazon S3, keep it as it is
  S3EndpointUrl: ''

  ## S3 signature version
  S3SignatureVersion: 'default'

  ## S3 Auth version
  S3AuthVersion: 'DEFAULT'

  ## If the S3 backend is not Amazon S3 and SSL is enabled on the S3 endpoint url then change it to 'True', otherwise keep it as 'False'
  S3SslEnabled: False

  ## If the S3 backend is not Amazon S3, SSL is enabled on the S3 endpoint URL and the SSL certificates are self signed, then
  ## set this parameter value to '/etc/tvault-contego/s3-cert.pem', otherwise keep its value as an empty string.
  S3SslCert: ''

  ## Don't edit the following parameter
  EnablePackageInstall: True
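For comparison, the following is a sketch of the same backup target parameters filled in for an Amazon S3 target instead of NFS; the credential, region, and bucket values are placeholders:

parameter_defaults:
  ## Backup target type nfs/s3
  BackupTargetType: 's3'

  ## S3 details (placeholder values)
  S3Type: 'amazon_s3'
  S3AccessKey: 'AKIAXXXXXXXXXXXXXXXX'
  S3SecretKey: '<secret-key>'
  S3RegionName: 'us-east-1'
  S3Bucket: 'triliovault-backups'
  S3EndpointUrl: ''
  S3SignatureVersion: 'default'
  S3AuthVersion: 'DEFAULT'
  S3SslEnabled: False
  S3SslCert: ''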
Use the following heat environment file and roles data file in overcloud deploy command:
trilio_env.yaml
roles_data.yaml
Use the correct Trilio endpoint map file according to the available Keystone endpoint configuration:
Instead of the tls-endpoints-public-dns.yaml file, use environments/trilio_env_tls_endpoints_public_dns.yaml
Instead of the tls-endpoints-public-ip.yaml file, use environments/trilio_env_tls_endpoints_public_ip.yaml
Instead of the tls-everywhere-endpoints-dns.yaml file, use environments/trilio_env_tls_everywhere_dns.yaml
To include the new environment files use the '-e' option, and for the roles data file use the '-r' option. An example overcloud deploy command is shown below:
openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud_images.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
  -e /home/stack/triliovault-cfg-scripts/redhat-director-scripts/<RHOSP_RELEASE_DIRECTORY>/environments/trilio_env_tls_endpoints_public_dns.yaml \
  --ntp-server 192.168.1.34 \
  --libvirt-type qemu \
  --log-file overcloud_deploy.log \
  -r /home/stack/templates/roles_data.yaml
If the containers are in a restarting state or are not listed by the following commands, then the deployment was not done correctly. Please recheck whether you followed the complete documentation.
Make sure the Trilio dmapi and horizon containers are in a running state and that no other Trilio container is deployed on the controller nodes. When the role for these containers is not 'Controller', check the respective nodes according to the configured roles_data.yaml.
[[email protected] heat-admin]# podman ps | grep trilio
26fcb9194566 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover-api:4.1.94-rhosp16 kolla_start 5 days ago Up 5 days ago trilio_dmapi
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-rhosp16 kolla_start 5 days ago Up 5 days ago horizon
Make sure the Trilio datamover container is in a running state and that no other Trilio container is deployed on the compute nodes.
[[email protected] heat-admin]# podman ps | grep trilio
b1840444cc59 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-datamover:4.1.94-rhosp16 kolla_start 5 days ago Up 5 days ago trilio_datamover
Make sure the horizon container is in a running state. Please note that the 'Horizon' container is replaced with the Trilio Horizon container. This container contains the latest OpenStack Horizon plus TrilioVault's Horizon plugin.
[[email protected] heat-admin]# podman ps | grep horizon
094971d0f5a9 rhosptrainqa.ctlplane.localdomain:8787/trilio/trilio-horizon-plugin:4.1.94-rhosp16 kolla_start 5 days ago Up 5 days ago horizon
In RHOSP, the 'nova' user id in the nova-compute docker container is set to '42436'. The 'nova' user id on the TrilioVault nodes needs to be set to the same value. Do the following steps on all TrilioVault nodes:
Download the shell script that will change the user id
Assign executable permissions
Execute the script
Verify that 'nova' user and group id has changed to '42436'
## Download the shell script
$ curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh

## Assign executable permissions
$ chmod +x nova_userid.sh

## Execute the shell script to change 'nova' user and group id to '42436'
$ ./nova_userid.sh

## Ignore any errors and verify that 'nova' user and group id has changed to '42436'
$ id nova
uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)
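If you want to cross-check the id actually used inside the nova-compute container, you can inspect it directly on a compute node; 'nova_compute' is the usual container name on RHOSP 16, so adjust it if your deployment differs:

## Expect uid=42436(nova) as described above
$ sudo podman exec -it nova_compute id nova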
Trilio components will be deployed using puppet scripts.
In case the overcloud deployment fails, the following commands provide the list of errors. The following document also provides valuable insights: https://docs.openstack.org/tripleo-docs/latest/install/troubleshooting/troubleshooting-overcloud.html
openstack stack failures list overcloud
heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

=> If the trilio datamover api container does not start correctly or is in a restarting state, use the following logs to debug.

docker logs trilio_dmapi

tailf /var/log/containers/trilio-datamover-api/dmapi.log

=> If the trilio datamover container does not start correctly or is in a restarting state, use the following logs to debug.

docker logs trilio_datamover

tailf /var/log/containers/trilio-datamover/tvault-contego.log
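On RHOSP 16 and 16.1 the overcloud containers are managed by podman rather than docker (as the verification output above shows), so the equivalent log commands would be:

podman logs trilio_dmapi
tail -f /var/log/containers/trilio-datamover-api/dmapi.log

podman logs trilio_datamover
tail -f /var/log/containers/trilio-datamover/tvault-contego.log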