Please ensure that the Trilio Appliance has been updated to the latest hotfix before continuing the installation.
Change the nova user id on the Trilio Nodes
The 'nova' user ID (UID) and group ID (GID) on the Trilio nodes must match those on the compute node(s). By default, Trilio uses UID and GID 162:162 for the 'nova' user, but Ansible OpenStack does not always assign UID 162 to the 'nova' user on compute nodes. If the nova UID and GID are not in sync with the compute node(s), perform the following steps on all Trilio nodes:
Download the shell script that will change the user ID
Assign executable permissions
Edit the script to use the correct nova ID
Execute the script
Verify that the 'nova' user and group IDs have changed to the desired values
curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
chmod +x nova_userid.sh
vi nova_userid.sh # change nova user_id and group_id to uid & gid present on compute nodes.
./nova_userid.sh
id nova
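For example, if the compute nodes use the default 162:162 mapping, the output should resemble the following (the exact group list may vary by distribution):
uid=162(nova) gid=162(nova) groups=162(nova)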
Prepare deployment host
Clone triliovault-cfg-scripts from github repository on Ansible Host.
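A typical clone command, assuming the same repository referenced by the script above, looks like the following (add -b <branch> to check out a specific release branch if required):
git clone https://github.com/trilioData/triliovault-cfg-scripts.git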
When installing on OSA Victoria or OSA Wallaby, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml to victoria or wallaby respectively.
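For example, on OSA Victoria the entry would read:
OPENSTACK_DIST: victoria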
Add the Trilio playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.
- import_playbook: os-tvault-install.yml
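One way to append this line, assuming the default playbook location, is:
echo "- import_playbook: os-tvault-install.yml" >> /opt/openstack-ansible/playbooks/setup-openstack.yml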
Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml
Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for Trilio components.
#tvault-dmapi
tvault-dmapi_hosts: # Add controller details in this section, as the Trilio DMAPI resides on the controller nodes.
  infra-1: # controller host name
    ip: 172.26.0.3 # IP address of the controller
  infra-2: # If there are multiple controllers, add their details in the same manner as shown for infra-2
    ip: 172.26.0.4
#tvault-datamover
tvault_compute_hosts: # Add compute details in this section, as the Trilio datamover resides on the compute nodes.
  infra-1: # compute host name
    ip: 172.26.0.7 # IP address of the compute node
  infra-2: # If there are multiple compute nodes, add their details in the same manner as shown for infra-2
    ip: 172.26.0.8
Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml
Append the required details like Trilio Appliance IP address, Openstack distribution, snapshot storage backend, SSL related information, etc.
Note:
From 4.2HF4 onwards, the default prefilled value of 4.2.64 will be used for TVAULT_PACKAGE_VERSION.
If there is more than one nova virtual environment and the user wants to install the tvault-contego service in a specific nova virtual environment on the compute node(s), uncomment the variable nova_virtual_env and set its value.
If more than one horizon virtual environment is configured on OpenStack, the user can specify under which one to install the Trilio Horizon plugin by setting the horizon_virtual_env parameter. The default value of horizon_virtual_env is '/openstack/venvs/horizon*'.
NFS options for Cohesity NFS: nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10 (see the example after these notes).
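For example, when the backup target is a Cohesity NFS share, the NFS_OPTS parameter in /etc/openstack_deploy/user_tvault_vars.yml would be set to those options:
NFS_OPTS: "nolock,soft,timeo=600,intr,lookupcache=none,nfsvers=3,retrans=10"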
##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
#ip address of TVM
IP_ADDRESS: sample_tvault_ip_address
##Time Zone
TIME_ZONE: "Etc/UTC"
## Don't update or modify the value of TVAULT_PACKAGE_VERSION
## The default value is '4.2.64'
TVAULT_PACKAGE_VERSION: 4.2.64
# Update Openstack dist code name like ussuri etc.
OPENSTACK_DIST: ussuri
#The following statement needs to be added to the nova sudoers file:
#nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
#This change is required for the Datamover; otherwise the Datamover will not work.
#To confirm, set the variable to
# UPDATE_NOVA_SUDOERS_FILE: proceed
#otherwise the ansible tvault-contego installation will exit
UPDATE_NOVA_SUDOERS_FILE: proceed
##### Select snapshot storage type #####
#Details for NFS as snapshot storage; NFS_SHARES entries should begin with "-".
##True/False
NFS: False
NFS_SHARES:
- sample_nfs_server_ip1:sample_share_path
- sample_nfs_server_ip2:sample_share_path
#If NFS_OPTS is empty, the default value "nolock,soft,timeo=180,intr,lookupcache=none" will be used
NFS_OPTS: ""
## Valid for 'nfs' backup target only.
## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend, then
## set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'.
multi_ip_nfs_enabled: False
#### Details for S3 as snapshot storage
##True/False
S3: False
VAULT_S3_ACCESS_KEY: sample_s3_access_key
VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
VAULT_S3_REGION_NAME: sample_s3_region_name
VAULT_S3_BUCKET: sample_s3_bucket
VAULT_S3_SIGNATURE_VERSION: default
#### S3 Specific Backend Configurations
#### Provide one of the following two values in the s3_type variable; the string's case must match
#Amazon/Other_S3_Compatible
s3_type: sample_s3_type
#### Required field(s) for all S3 backends except Amazon
VAULT_S3_ENDPOINT_URL: ""
#True/False
VAULT_S3_SECURE: True
VAULT_S3_SSL_CERT: ""
###details of datamover API
##If SSL is enabled "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
#DMAPI_ENABLED_SSL_APIS: dmapi
##If SSL is disabled "DMAPI_ENABLED_SSL_APIS" value should be empty.
DMAPI_ENABLED_SSL_APIS: ""
DMAPI_SSL_CERT: ""
DMAPI_SSL_KEY: ""
## Trilio dmapi_workers count
## Default value of dmapi_workers is 16
dmapi_workers: 16
#### If any service is using a Ceph backend, then set the ceph_backend_enabled value to True
#True/False
ceph_backend_enabled: False
## Provide Horizon Virtual Env path from Horizon_container
## e.g. '/openstack/venvs/horizon-23.1.0'
horizon_virtual_env: '/openstack/venvs/horizon*'
## When more than one nova virtual environment exists on the compute node(s) and
## the user wants to specify a particular one,
## uncomment the var nova_virtual_env and pass a value like 'openstack/venvs/nova-23.2.0'
#nova_virtual_env: 'openstack/venvs/nova-23.2.0'
#Set verbosity level and run playbooks with -vvv option to display custom debug messages
verbosity_level: 3
#******************************************************************************************************************************************************************
###Static fields for the tvault contego extension. Please do not edit the variables below.
#******************************************************************************************************************************************************************
#SSL path
DMAPI_SSL_CERT_DIR: /opt/config-certs/dmapi
VAULT_S3_SSL_CERT_DIR: /opt/config-certs/s3
RABBITMQ_SSL_DIR: /opt/config-certs/rabbitmq
DMAPI_SSL_CERT_PATH: /opt/config-certs/dmapi/dmapi-ca.pem
DMAPI_SSL_KEY_PATH: /opt/config-certs/dmapi/dmapi.key
VAULT_S3_SSL_CERT_PATH: /opt/config-certs/s3/ca_cert.pem
RABBITMQ_SSL_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq.pem
RABBITMQ_SSL_KEY_PATH: /opt/config-certs/rabbitmq/rabbitmq.key
RABBITMQ_SSL_CA_CERT_PATH: /opt/config-certs/rabbitmq/rabbitmq-ca.pem
PORT_NO: 8085
PYPI_PORT: 8081
DMAPI_USR: dmapi
DMAPI_GRP: dmapi
#tvault contego file path
TVAULT_CONTEGO_CONF: /etc/tvault-contego/tvault-contego.conf
TVAULT_OBJECT_STORE_CONF: /etc/tvault-object-store/tvault-object-store.conf
NOVA_CONF_FILE: /etc/nova/nova.conf
#Nova distribution specific configuration file path
NOVA_DIST_CONF_FILE: /usr/share/nova/nova-dist.conf
TVAULT_CONTEGO_EXT_USER: nova
TVAULT_CONTEGO_EXT_GROUP: nova
TVAULT_DATA_DIR_MODE: 0775
TVAULT_DATA_DIR_OLD: /var/triliovault
TVAULT_DATA_DIR: /var/triliovault-mounts
TVAULT_CONTEGO_VIRTENV: /home/tvault
TVAULT_CONTEGO_VIRTENV_PATH: "{{TVAULT_CONTEGO_VIRTENV}}/.virtenv"
TVAULT_CONTEGO_EXT_BIN: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/tvault-contego"
TVAULT_CONTEGO_EXT_PYTHON: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/bin/python"
TVAULT_CONTEGO_EXT_OBJECT_STORE: ""
TVAULT_CONTEGO_EXT_BACKEND_TYPE: ""
TVAULT_CONTEGO_EXT_S3: "{{TVAULT_CONTEGO_VIRTENV_PATH}}/lib/python2.7/site-packages/contego/nova/extension/driver/s3vaultfuse.py"
privsep_helper_file: /home/tvault/.virtenv/bin/privsep-helper
pip_version: 7.1.2
virsh_version: "1.2.8"
contego_service_file_path: /etc/systemd/system/tvault-contego.service
contego_service_ulimits_count: 65536
contego_service_debian_path: /etc/init/tvault-contego.conf
objstore_service_file_path: /etc/systemd/system/tvault-object-store.service
objstore_service_debian_path: /etc/init/tvault-object-store.conf
ubuntu: "Ubuntu"
centos: "CentOS"
redhat: "RedHat"
Amazon: "Amazon"
Other_S3_Compatible: "Other_S3_Compatible"
tvault_datamover_api: tvault-datamover-api
datamover_service_file_path: /etc/systemd/system/tvault-datamover-api.service
datamover_service_debian_path: /etc/init/tvault-datamover.conf
datamover_log_dir: /var/log/dmapi
trilio_yum_repo_file_path: /etc/yum.repos.d/trilio.repo
Configure Multi-IP NFS
This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
A new parameter has been added to the /etc/openstack_deploy/user_tvault_vars.yml file for Multi-IP NFS:
## Valid for 'nfs' backup target only.
## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend, then
## set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'.
multi_ip_nfs_enabled: False
Change the Directory
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Please take a look at this page to learn about the format of the file.
vi triliovault_nfs_map_input.yml
Update pyyaml on the Openstack Ansible server node only
pip3 install -U pyyaml
Execute the generate_nfs_map.py script to create a one-to-one mapping of compute nodes and NFS shares.
python ./generate_nfs_map.py
The result will be written to the file 'triliovault_nfs_map_output.yml' in the current directory.
Validate output map file
Open the file 'triliovault_nfs_map_output.yml' available in the current directory and validate that all compute nodes are mapped with all necessary NFS shares.
vi triliovault_nfs_map_output.yml
Append the content of triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml
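One way to append the generated map, assuming the default file locations shown above, is:
cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml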
If Ansible OpenStack is already deployed, run the following commands to deploy only the Trilio components.
cd /opt/openstack-ansible/playbooks
## Run tvault_pre_install.yml to install lxc packages
ansible-playbook tvault_pre_install.yml
# To create Dmapi container
openstack-ansible lxc-containers-create.yml
#To Deploy Trilio Components
openstack-ansible os-tvault-install.yml
#To configure Haproxy for Dmapi
openstack-ansible haproxy-install.yml
If Ansible OpenStack is not already deployed, run the native OpenStack deployment commands to deploy OpenStack and the Trilio components together.
An example of the native deployment commands is given below:
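The following is a sketch of the standard OpenStack-Ansible deployment sequence; the exact playbooks and order should be taken from the OpenStack-Ansible documentation for your release:
cd /opt/openstack-ansible/playbooks
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml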
Verify that the triliovault datamover api service has been deployed and started correctly. Run the below commands on the controller node(s).
lxc-ls # Check the dmapi container is present on controller node.
lxc-info -s controller_dmapi_container-a11984bf # Confirm running status of the container
Verify that the triliovault datamover service has been deployed and started correctly on the compute node(s). Run the following commands on the compute node(s).
systemctl status tvault-contego.service
systemctl status tvault-object-store # If Storage backend is S3
df -h # Verify the mount point is mounted on compute node(s)
Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.
Run the following commands on the Horizon container.
lxc-attach -n controller_horizon_container-1d9c055c # To login on horizon container
apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient' # For ubuntu based container
dnf list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient' # For CentOS based container
Verify the haproxy settings on the controller node using the below command.
haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.
Update to the latest hotfix
After the deployment has been verified, it is recommended to update to the latest hotfix to ensure the best possible experience.