Installing on Ansible Openstack Ussuri

Change the nova user id on the TrilioVault Nodes

TrilioVault by default uses user id and group id 162:162 for the nova user. Ansible Openstack does not always assign user id 162 to the 'nova' user in the nova-compute containers. The 'nova' user id on the TrilioVault nodes needs to be set to the same value as in the nova-compute containers. In case the nova id is not 162:162, do the following steps on all TrilioVault nodes:

  1. Download the shell script that will change the user-id

  2. Assign executable permissions

  3. Edit script to use the correct nova id

  4. Execute the script

  5. Verify that 'nova' user and group id has changed to the desired value

curl -O
chmod +x
vi # change nova user_id and group_id to uid & gid present on compute nodes.
id nova
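The steps above can be sketched as follows. This is a hypothetical illustration of what the uid-change script does, not the vendor script itself; the target ids (162 here) and the `/var/lib/nova` path are assumptions to adapt to your environment:

```shell
#!/bin/sh
# Hypothetical sketch of the nova uid/gid change, not the vendor script itself.
# NOVA_UID/NOVA_GID must match the ids used inside the nova-compute containers.
NOVA_UID=${NOVA_UID:-162}
NOVA_GID=${NOVA_GID:-162}
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 to actually apply the changes

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run groupmod -g "$NOVA_GID" nova
run usermod -u "$NOVA_UID" -g "$NOVA_GID" nova
# Files owned by the old ids must be re-owned afterwards, e.g.:
run chown -R nova:nova /var/lib/nova
```

With DRY_RUN=1 (the default) the script only prints the commands it would run; after applying the change, run `id nova` to confirm the new ids.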

Prepare deployment host

Clone the triliovault-cfg-scripts repository from GitHub on the Ansible host.

git clone -b master

Copy Ansible roles and vars to required places.

cd triliovault-cfg-scripts/
cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
cp ansible/main-install.yml /opt/openstack-ansible/playbooks/os-tvault-install.yml
cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml

Add the TrilioVault playbook at the end of the file /opt/openstack-ansible/playbooks/setup-openstack.yml.

- import_playbook: os-tvault-install.yml
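After the edit, the tail of setup-openstack.yml should look similar to the following. The surrounding entries are examples from a stock setup-openstack.yml and may differ in your version; only the last line is the addition:

```yaml
# ...existing playbook imports...
- import_playbook: os-horizon-install.yml
- import_playbook: os-tempest-install.yml
- import_playbook: os-tvault-install.yml   # TrilioVault entry appended last
```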

Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml

# Datamover haproxy setting
haproxy_extra_services:
  - service:
      haproxy_service_name: datamover_service
      haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
      haproxy_ssl: "{{ haproxy_ssl }}"
      haproxy_port: 8784
      haproxy_balance_type: http
      haproxy_backend_options:
        - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"

Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml

Add the following content to the created file.

component_skel:
  dmapi_api:
    belongs_to:
      - dmapi_all
container_skel:
  dmapi_container:
    belongs_to:
      - tvault-dmapi_containers
    contains:
      - dmapi_api
physical_skel:
  tvault-dmapi_containers:
    belongs_to:
      - all_containers
  tvault-dmapi_hosts:
    belongs_to:
      - hosts

Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for TrilioVault components.

tvault-dmapi_hosts:     # Add controller details in this section, as the TrilioVault DMAPI resides on the controller nodes.
  infra-1:              # Controller host name.
    ip:                 # IP address of the controller.
  infra-2:              # For multiple controllers, add their details in the same manner as infra-1.
tvault_compute_hosts:   # Add compute details in this section, as the TrilioVault datamover resides on the compute nodes.
  infra-1:              # Compute host name.
    ip:                 # IP address of the compute node.
  infra-2:              # For multiple compute nodes, add their details in the same manner as infra-1.
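For illustration, a filled-in fragment could look like the following; all host names and addresses here are hypothetical sample values, to be replaced with your own management IPs:

```yaml
tvault-dmapi_hosts:
  infra-1:
    ip: 172.26.1.2      # sample controller management IP
  infra-2:
    ip: 172.26.1.3
tvault_compute_hosts:
  compute-1:
    ip: 172.26.2.2      # sample compute management IP
  compute-2:
    ip: 172.26.2.3
```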

Edit the common editable parameter section in the file /etc/openstack_deploy/user_tvault_vars.yml

Append the required details like TrilioVault Appliance IP address, TrilioVault package version, Openstack distribution, snapshot storage backend, SSL related information, etc.

The possible package versions are:

GA TrilioVault 4.1: 4.1.94

##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
#ip address of TVM
IP_ADDRESS: sample_tvault_ip_address
##Time Zone
#Update the TVAULT package version here; the mentioned version of the plugins will be installed. Example: TVAULT_PACKAGE_VERSION: 3.3.36
TVAULT_PACKAGE_VERSION: 4.1.94 #GA build version
# Update Openstack dist code name like ussuri etc.
#The following statement needs to be added to the nova sudoers file:
#nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
#This change is required for the Datamover, otherwise the Datamover will not work.
#Are you sure? Please set the corresponding variable accordingly,
#otherwise the ansible tvault-contego installation will exit.
##### Select snapshot storage type #####
#Details for NFS as snapshot storage; NFS_SHARES entries should begin with "-".
NFS: False
NFS_SHARES:
  - sample_nfs_server_ip1:sample_share_path
  - sample_nfs_server_ip2:sample_share_path
#If NFS_OPTS is empty then the default value "nolock,soft,timeo=180,intr,lookupcache=none" will be used.
NFS_OPTS: ""
#### Details for S3 as snapshot storage
S3: False
VAULT_S3_ACCESS_KEY: sample_s3_access_key
VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
VAULT_S3_REGION_NAME: sample_s3_region_name
VAULT_S3_BUCKET: sample_s3_bucket
#### S3 Specific Backend Configurations
#### Provide one of the following two values in the s3_type variable; the string's case should match.
s3_type: sample_s3_type
#### Required field(s) for all S3 backends except Amazon
###details of datamover API
##If SSL is enabled, the "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
##If SSL is disabled, the "DMAPI_ENABLED_SSL_APIS" value should be empty.
#### If any service is using a Ceph backend, then set ceph_backend_enabled to True
ceph_backend_enabled: False
#Set verbosity level and run playbooks with -vvv option to display custom debug messages
verbosity_level: 3

Deploy TrilioVault components

Run the following commands to deploy only the TrilioVault components on an already deployed Ansible Openstack.

cd /opt/openstack-ansible/playbooks
# To create Dmapi container
openstack-ansible lxc-containers-create.yml
#To Deploy Trilio Components
openstack-ansible os-tvault-install.yml
#To configure Haproxy for Dmapi
openstack-ansible haproxy-install.yml

If Ansible Openstack is not yet deployed, run the native Openstack deployment commands to deploy Openstack and the Trilio components together. An example of the native deployment commands is given below:

openstack-ansible setup-infrastructure.yml --syntax-check
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml

Verify the TrilioVault deployment

Verify that the triliovault datamover api service is deployed and started properly. Run the below commands on the controller node(s).

lxc-ls # Check the dmapi container is present on controller node.
lxc-info -s controller_dmapi_container-a11984bf # Confirm running status of the container

Verify that the triliovault datamover service is deployed and started properly on the compute node(s). Run the following commands on the compute node(s).

systemctl status tvault-contego.service
systemctl status tvault-object-store # If the storage backend is S3
df -h # Verify the mount point is mounted on compute node(s)
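The mount check can also be scripted. This is a hypothetical helper, not part of the TrilioVault packages; the 'triliovault-mounts' path fragment is an assumption based on the usual TrilioVault NFS mount naming and may differ in your deployment:

```shell
#!/bin/sh
# Hypothetical helper: report whether a TrilioVault backend mount is present.
# "triliovault-mounts" is an assumed path fragment; adjust to your deployment.
check_tvault_mount() {
    mounts_file=${1:-/proc/mounts}   # pass a different file for testing
    if grep -q "triliovault-mounts" "$mounts_file"; then
        echo "tvault mount present"
    else
        echo "tvault mount missing"
    fi
}
```

Run `check_tvault_mount` on each compute node; by default it inspects /proc/mounts.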

Verify that triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.

Run the following commands on the Horizon container.

lxc-attach -n controller_horizon_container-1d9c055c # To login on horizon container
apt list --installed | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient' # For Ubuntu based containers
yum list installed |egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient' # For CentOS based container

Verify the haproxy settings on the controller node using the below command.

haproxy -c -V -f /etc/haproxy/haproxy.cfg # Verify the keyword datamover_service-back is present in output.