Installing on RHOSP13

The Red Hat OpenStack Platform Director is the supported and recommended method to deploy and maintain any RHOSP installation.

Trilio integrates natively with the RHOSP Director. Manual deployment methods are not supported for RHOSP.

Prepare for deployment

Depending on whether the RHOSP environment is already installed or is being installed for the first time, different steps are required to deploy Trilio.

Clone triliovault-cfg-scripts repository and upload Trilio puppet module

All commands need to be run as user 'stack'

The following commands upload the Trilio puppet module to the overcloud. The actual upload happens upon the next deployment.

GitHub branches to use:

  • Trilio 4.0 GA: stable/4.0

  • Trilio 4.0 SP1: v4.0maintenance
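The release-to-branch mapping above can be captured in a small helper before running the clone command. This is only a sketch; the function name and the version labels are illustrative, the branch names are the two listed above:

```shell
# Map a Trilio release to its triliovault-cfg-scripts branch.
# Hypothetical helper -- only the branch names come from the documentation.
trilio_branch() {
    case "$1" in
        4.0)    echo "stable/4.0" ;;        # Trilio 4.0 GA
        4.0SP1) echo "v4.0maintenance" ;;   # Trilio 4.0 SP1
        *)      echo "unknown" ;;
    esac
}

trilio_branch 4.0      # -> stable/4.0
trilio_branch 4.0SP1   # -> v4.0maintenance
```

The result can then be substituted for `<branch>` in the git clone command below.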

cd /home/stack
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
cd triliovault-cfg-scripts/redhat-director-scripts/
./upload_puppet_module.sh
Creating tarball...
Tarball created.
Creating heat environment file: /home/stack/.tripleo/environments/puppet-modules-url.yaml
Uploading file to swift: /tmp/puppet-modules-8Qjya2X/puppet-modules.tar.gz
+-----------------------+---------------------+----------------------------------+
| object                | container           | etag                             |
+-----------------------+---------------------+----------------------------------+
| puppet-modules.tar.gz | overcloud-artifacts | 368951f6a4d39cfe53b5781797b133ad |
+-----------------------+---------------------+----------------------------------+

## Verify that the heat environment file has been created
ls -l /home/stack/.tripleo/environments/puppet-modules-url.yaml

Update overcloud roles data file to include Trilio services

Trilio consists of multiple services. Add these services to your roles_data.yaml.

If roles_data.yaml has not been customized, the default file can be found on the undercloud at:

/usr/share/openstack-tripleo-heat-templates/roles_data.yaml

Add the following services to the roles_data.yaml

Trilio Datamover Api Service

This service needs to share the same role as the nova-api service. With the pre-defined roles, the nova-api service runs on the Controller role. With custom-defined roles, use the role that runs the nova-api service.

Add the following line to the identified role:

'OS::TripleO::Services::TrilioDatamoverApi'

Trilio Datamover Service

This service needs to share the same role as the nova-compute service. With the pre-defined roles, the nova-compute service runs on the Compute role. With custom-defined roles, use the role that runs the nova-compute service.

Add the following line to the identified role:

'OS::TripleO::Services::TrilioDatamover' 
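After both edits, the service lists in roles_data.yaml should contain entries like the following. This is an abbreviated sketch assuming the default Controller and Compute roles; the omitted entries are the distribution defaults:

```yaml
- name: Controller
  # ... other role attributes unchanged ...
  ServicesDefault:
    # ... default controller services unchanged ...
    - OS::TripleO::Services::TrilioDatamoverApi

- name: Compute
  # ... other role attributes unchanged ...
  ServicesDefault:
    # ... default compute services unchanged ...
    - OS::TripleO::Services::TrilioDatamover
```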

Prepare Trilio container images

All commands need to be run as user 'stack'

Trilio containers are pushed to the Red Hat Container Registry. Registry URL: 'registry.connect.redhat.com'. Container pull URLs are given below.

  • Trilio Datamover container: registry.connect.redhat.com/trilio/trilio-datamover:<container-tag>

  • Trilio Datamover Api container: registry.connect.redhat.com/trilio/trilio-datamover-api:<container-tag>

  • Trilio Horizon plugin: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<container-tag>

  • '4.0.92' is the Trilio 4.0 build version. Container tag: 4.0.92-rhosp13

  • '4.0.115' is the Trilio 4.0 SP1 build version. Container tag: 4.0.115-rhosp13

There are three registry methods available in Red Hat OpenStack Platform:

  1. Remote Registry

  2. Local Registry

  3. Satellite Server

Remote Registry

Follow this section when 'Remote Registry' is used.

For this method it is not necessary to pull the containers in advance. It is only necessary to populate the trilio_env.yaml and overcloud_images.yaml with the Red Hat registry URLs of the right containers.

Populate the trilio_env.yaml with container urls for:

  • Trilio Datamover container

  • Trilio Datamover api container

$ grep '<container-tag>' trilio_env.yaml
   DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<container-tag>
   DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<container-tag>
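The `<container-tag>` placeholder can be filled in with sed. The following sketch runs against a sample copy of the two lines so the substitution can be inspected safely; on the undercloud the same sed expression would be applied to the real trilio_env.yaml (the 4.0.92-rhosp13 tag is the Trilio 4.0 GA tag listed above):

```shell
# Sample copy of the two image lines, placeholder included.
cat > /tmp/trilio_env_sample.yaml <<'EOF'
   DockerTrilioDatamoverImage: registry.connect.redhat.com/trilio/trilio-datamover:<container-tag>
   DockerTrilioDmApiImage: registry.connect.redhat.com/trilio/trilio-datamover-api:<container-tag>
EOF

# Replace every <container-tag> placeholder with the actual tag.
sed -i 's/<container-tag>/4.0.92-rhosp13/g' /tmp/trilio_env_sample.yaml
grep 'rhosp13' /tmp/trilio_env_sample.yaml
```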

In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.

$ grep 'trilio' /home/stack/templates/overcloud_images.yaml
   DockerHorizonImage: registry.connect.redhat.com/trilio/trilio-horizon-plugin:<container-tag>

Local Registry

Follow this section when 'local registry' is used on the undercloud.

In this case it is necessary to pull the Trilio containers to the undercloud registry. Trilio provides a shell script which pulls the containers from 'registry.connect.redhat.com' to the undercloud and updates the trilio_env.yaml with the values for the datamover and datamover api containers.

## Script argument details: 
./prepare_trilio_images.sh <undercloud_ip> <container_tag>

## Example execution:
./prepare_trilio_images.sh 192.168.13.34 4.0.92-rhosp13

The script assumes that the undercloud container registry is running on port 8787. If the registry is running on a different port, the script needs to be adjusted manually.
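One way to perform that manual adjustment is to rewrite the port in the image URLs the script wrote into trilio_env.yaml. The following is a sketch on a sample copy, assuming the registry was moved from the default 8787 to a hypothetical port 9787:

```shell
# Sample copy of the image lines as written by prepare_trilio_images.sh.
cat > /tmp/trilio_env_port.yaml <<'EOF'
   DockerTrilioDatamoverImage: 192.168.122.10:8787/trilio/trilio-datamover:4.0.92-rhosp13
   DockerTrilioDmApiImage: 192.168.122.10:8787/trilio/trilio-datamover-api:4.0.92-rhosp13
EOF

# Rewrite the registry port in every image URL (8787 -> 9787 here).
sed -i 's|:8787/|:9787/|g' /tmp/trilio_env_port.yaml
grep ':9787' /tmp/trilio_env_port.yaml
```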

The changes can be verified in the trilio_env.yaml.

$ grep '4.0.92-rhosp13' trilio_env.yaml
   DockerTrilioDatamoverImage: 192.168.122.10:8787/trilio/trilio-datamover:4.0.92-rhosp13
   DockerTrilioDmApiImage: 192.168.122.10:8787/trilio/trilio-datamover-api:4.0.92-rhosp13

In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.

$ grep 'trilio' /home/stack/templates/overcloud_images.yaml
  DockerHorizonImage: 192.168.122.10:8787/trilio/trilio-horizon-plugin:4.0.92-rhosp13

Satellite Server

Follow this section when a Satellite Server is used for the container registry.

Pull the Trilio containers on the Red Hat Satellite using the given Red Hat registry URLs.

Populate the trilio_env.yaml with container urls for:

  • Trilio Datamover container

  • Trilio Datamover api container

$ grep '4.0.92-rhosp13' trilio_env.yaml
   DockerTrilioDatamoverImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover:4.0.92-rhosp13
   DockerTrilioDmApiImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-datamover-api:4.0.92-rhosp13

In the overcloud_images.yaml replace the standard Horizon Container with the Trilio Horizon container. This is necessary to gain access to the Trilio Horizon Dashboard.

$ grep 'trilio' /home/stack/templates/overcloud_images.yaml
   DockerHorizonImage: <SATELLITE_REGISTRY_URL>/trilio/trilio-horizon-plugin:4.0.92-rhosp13

Provide environment details in trilio_env.yaml

Provide backup target details and other necessary details in the provided environment file. This environment file will be used in the overcloud deployment to configure Trilio components. Container image names have already been populated during the preparation of the container images. It is still recommended to verify the container URLs.

The following additional information is required:

  • Network for the datamover api

  • Backup target type {nfs/s3}

  • In case of NFS

    • list of NFS Shares

    • NFS options

  • In case of S3

    • s3 type {amazon_s3/ceph_s3}

    • Access key

    • Secret key

    • S3 Region name

    • S3 Bucket name

    • S3 Endpoint URL

    • S3 SSL Enabled {true/false}

resource_registry:
  OS::TripleO::Services::TrilioDatamover: docker/services/trilio-datamover.yaml
  OS::TripleO::Services::TrilioDatamoverApi: docker/services/trilio-datamover-api.yaml


parameter_defaults:

   ## Define network map for trilio datamover api service
   ServiceNetMap:
       TrilioDatamoverApiNetwork: internal_api

   ## Container locations
   DockerTrilioDatamoverImage: 192.168.122.10:8787/trilio/trilio-datamover:3.1.58-queens

   DockerTrilioDmApiImage: 192.168.122.10:8787/trilio/trilio-datamover-api:3.1.58-queens

   ## Backup target type nfs/s3, used to store snapshots taken by triliovault
   BackupTargetType: 'nfs'

   ## For backup target 'nfs'
   NfsShares: '192.168.122.101:/opt/tvault'
   NfsOptions: 'nolock,soft,timeo=180,intr,lookupcache=none'

   ## For backup target 's3'
   ## S3 type: amazon_s3/ceph_s3
   S3Type: 'amazon_s3'

   ## S3 access key
   S3AccessKey: ''
  
   ## S3 secret key
   S3SecretKey: ''

   ## S3 region, if your s3 does not have any region, just keep the parameter as it is
   S3RegionName: ''

   ## S3 bucket name
   S3Bucket: ''

   ## S3 endpoint url, not required for Amazon S3, keep it as it is
   S3EndpointUrl: ''

   ## If SSL enabled on S3 url, not required for Amazon S3, just keep it as it is
   S3SslEnabled: false

   ## Don't edit following parameter
   EnablePackageInstall: True

Deploy overcloud with trilio environment

Use the following heat environment file and roles data file in overcloud deploy command:

  1. trilio_env.yaml

  2. roles_data.yaml

To include new environment files use '-e' option and for roles data file use '-r' option. An example overcloud deploy command is shown below:

openstack overcloud deploy --templates \
-e /home/stack/templates/overcloud_images.yaml \
-e ${basedir}/trilio_env.yaml \
-r ${basedir}/roles_data.yaml \
--control-scale 1 --compute-scale 1 --control-flavor control --compute-flavor compute \
--ntp-server 0.north-america.pool.ntp.org

Verify deployment

On Controller node

Make sure the Trilio dmapi and horizon containers are in a running state and no other Trilio container is deployed on the controller nodes.

[root@overcloud-controller-0 trilio]# docker container ls | grep trilio
cad4b68a6436 192.168.122.151:8787/trilio/trilio-datamover-api:4.0.92-rhosp13  "kolla_start"  2 days ago  Up 2 days  trilio_dmapi
10b95b501092 192.168.122.151:8787/trilio/trilio-horizon-plugin:4.0.92-rhosp13  "kolla_start"  2 days ago  Up 2 days  horizon

If the containers are in a restarting state or are not listed by the above command, the deployment did not complete correctly. Please recheck that you followed the complete documentation.

On Compute node

Make sure the Trilio datamover container is in a running state and no other Trilio container is deployed on the compute nodes.

# docker container ls | grep trilio
2598963695c7  192.168.122.151:8787/trilio/trilio-datamover:4.0.92-rhosp13  "kolla_start"  2 days ago  Up 2 days  trilio_datamover

Configure Trilio Appliance

Once the RHOSP13 installation steps have completed successfully, follow the instructions below to configure the Trilio Appliance.

Change the nova user id on the Trilio Nodes

In RHOSP, the 'nova' user id in the nova-compute docker container is set to '42436'. The 'nova' user id on the Trilio nodes needs to be set to the same value. Perform the following steps on all Trilio nodes:

  1. Download the shell script that will change the user id

  2. Assign executable permissions

  3. Execute the script

  4. Verify that 'nova' user and group id has changed to '42436'

# curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
# chmod +x nova_userid.sh
# ./nova_userid.sh
# id nova
  uid=42436(nova) gid=42436(nova) groups=42436(nova),990(libvirt),36(kvm)

Run the Trilio configurator

Follow this documentation to configure Trilio Appliance.

Change the workloadmgr.conf to use the right mountpoint

It is necessary to configure the Trilio appliance first, before the steps in this section can be performed.

RHOSP13 uses a different mount point in its datamover containers than other OpenStack distributions. It is necessary to adjust the mountpoint on the Trilio nodes to match it. If the mountpoints are not aligned, files created by the datamover and read by the Trilio appliance will not match in their paths, and backup and restore processes will fail.

Please follow these steps to align the mountpoints:

  1. Edit /etc/workloadmgr/workloadmgr.conf file

  2. Set parameter 'vault_data_directory' to '/var/lib/nova/triliovault-mounts'

  3. Create the directory for the mountpoint

  4. Assign the created directory to nova:nova

  5. Unmount the old mountpoint

  6. Update the Trilio configurator

  7. Restart the Trilio services

  8. Verify the mountpoint has been configured correctly
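Steps 1 to 4 can be sketched as follows. The commands are shown against a scratch copy so they can be dry-run first; on the appliance the file is /etc/workloadmgr/workloadmgr.conf, the real mountpoint is /var/lib/nova/triliovault-mounts, and the chown requires root:

```shell
# Scratch paths for a dry run -- substitute the real paths on the appliance.
WM_CONF=/tmp/workloadmgr.conf
MOUNTPOINT=/tmp/triliovault-mounts   # real path: /var/lib/nova/triliovault-mounts

# Stand-in for the existing configuration file.
printf 'vault_data_directory = /var/triliovault-mounts\n' > "$WM_CONF"

# Steps 1+2: point vault_data_directory at the RHOSP13 mountpoint.
sed -i 's|^vault_data_directory = .*|vault_data_directory = /var/lib/nova/triliovault-mounts|' "$WM_CONF"

# Step 3: create the mountpoint directory.
mkdir -p "$MOUNTPOINT"

# Step 4 (appliance only, as root): chown nova:nova /var/lib/nova/triliovault-mounts
grep vault_data_directory "$WM_CONF"
```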

# cat /etc/workloadmgr/workloadmgr.conf | grep vault_data_directory
vault_data_directory = /var/lib/nova/triliovault-mounts
vault_data_directory_old = /var/triliovault

# mkdir -p /var/lib/nova/triliovault-mounts

# chown nova:nova /var/lib/nova/triliovault-mounts

# umount /var/triliovault-mounts

# cat /home/stack/myansible/lib/python3.6/site-packages/workloadmgr/tvault_configurator/ansible-play/roles/ansible-workloadmgr/templates/workloadmgr.conf.j2 | grep vault_data_directory
vault_data_directory = /var/lib/nova/triliovault-mounts

# pcs resource restart wlm-cron
# pcs resource restart wlm-scheduler
# systemctl restart wlm-api
# systemctl restart wlm-workloads

# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               3.8G     0  3.8G   0% /dev
tmpfs                  3.8G   38M  3.8G   1% /dev/shm
tmpfs                  3.8G  427M  3.4G  12% /run
tmpfs                  3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/vda1               40G  8.8G   32G  22% /
tmpfs                  773M     0  773M   0% /run/user/996
tmpfs                  773M     0  773M   0% /run/user/0
10.10.2.20:/upstream  1008G  704G  254G  74% /var/lib/nova/triliovault-mounts/MTAuMTAuMi4yMDovdXBzdHJlYW0= 

Troubleshooting for overcloud deployment failures

Trilio components are deployed using puppet scripts.

If the overcloud deployment fails, the following command provides the list of errors:

openstack stack failures list overcloud

Further commands that can help identify errors:

heat stack-list --show-nested -f "status=FAILED"
heat resource-list --nested-depth 5 overcloud | grep FAILED

If the Trilio datamover api container fails to start or is stuck in a restart loop, check these logs:

docker logs trilio_dmapi
tailf /var/log/containers/trilio-datamover-api/dmapi.log

If the Trilio datamover container fails to start or is stuck in a restart loop, check these logs:

docker logs trilio_datamover
tailf /var/log/containers/trilio-datamover/tvault-contego.log

Cinder backend is Ceph - additional steps

Add Ceph details to configuration file

If the Cinder backend is Ceph, it is necessary to manually add the Ceph details to tvault-contego.conf on all compute nodes.

The file can be found here: /var/lib/config-data/puppet-generated/triliodm/etc/tvault-contego/tvault-contego.conf

Add the following information:

[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = <rbd_user>

[ceph] 
keyring_ext = .<rbd_user>.keyring

The same block of information can be found in the nova.conf file.
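The rbd_user value can be read out of the [libvirt] section of nova.conf before copying it over. The following is a sketch on a sample file; the path of nova.conf inside the compute node's config-data tree and the 'openstack' user name are assumptions for illustration:

```shell
# Sample nova.conf fragment -- on a compute node the real file would be
# read instead (e.g. under /var/lib/config-data/puppet-generated/...).
cat > /tmp/nova_sample.conf <<'EOF'
[libvirt]
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = openstack
EOF

# awk: remember the current INI section, print rbd_user only from [libvirt].
rbd_user=$(awk -F' *= *' '/^\[/{s=$0} s=="[libvirt]" && $1=="rbd_user"{print $2}' /tmp/nova_sample.conf)
echo "$rbd_user"   # -> openstack
```

The extracted value is what goes into both the `rbd_user` line and the `keyring_ext` suffix shown above.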

Restart the datamover container

sudo docker restart trilio_datamover
