TVO-4.2
Installing on Ansible Openstack Victoria + Wallaby
Please ensure that the TrilioVault Appliance has been updated to the latest hotfix before continuing the installation.

Change the nova user id on the TrilioVault Nodes

By default, TrilioVault uses the 'nova' user id and group id 162:162. On Ansible Openstack, the 'nova' user id on the nova-compute containers is not always 162. The 'nova' user id on the TrilioVault nodes needs to be set to the same value as in the nova-compute containers. Perform the following steps on all TrilioVault nodes if the nova ids are not 162:162:
  1. Download the shell script that will change the user-id
  2. Assign executable permissions
  3. Edit the script to use the correct nova id
  4. Execute the script
  5. Verify that the 'nova' user and group id have changed to the desired value
curl -O https://raw.githubusercontent.com/trilioData/triliovault-cfg-scripts/master/common/nova_userid.sh
chmod +x nova_userid.sh
vi nova_userid.sh   # change nova user_id and group_id to the uid & gid present on the compute nodes
./nova_userid.sh
id nova
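The check in step 5 can also be scripted. Below is a minimal sketch (a hypothetical helper, not part of the Trilio tooling) that parses /etc/passwd and /etc/group style entries for 'nova' and compares the ids against those observed in the nova-compute container:

```python
# Hypothetical helper: compare the 'nova' uid/gid on a TrilioVault node
# against the ids seen inside the nova-compute container.

def parse_nova_ids(passwd_line, group_line):
    """Return (uid, gid) from /etc/passwd and /etc/group style lines."""
    # e.g. passwd_line = "nova:x:162:162::/var/lib/nova:/bin/false"
    #      group_line  = "nova:x:162:"
    uid = int(passwd_line.split(":")[2])
    gid = int(group_line.split(":")[2])
    return uid, gid

def ids_match(tvm_ids, container_ids):
    """True when the TrilioVault node already uses the container's ids."""
    return tvm_ids == container_ids
```

If `ids_match` returns False for a node, the nova_userid.sh script above still needs to be run there.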

Prepare deployment host

Clone triliovault-cfg-scripts from github repository on Ansible Host.
git clone -b <branch> https://github.com/trilioData/triliovault-cfg-scripts.git
Available values for <branch>:

Openstack Version    Branch
Victoria             stable/4.2
Copy Ansible roles and vars to required places.
cd triliovault-cfg-scripts/
cp -R ansible/roles/* /opt/openstack-ansible/playbooks/roles/
cp ansible/main-install.yml /opt/openstack-ansible/playbooks/os-tvault-install.yml
cp ansible/environments/group_vars/all/vars.yml /etc/openstack_deploy/user_tvault_vars.yml
In case of installing on OSA Victoria, edit OPENSTACK_DIST in the file /etc/openstack_deploy/user_tvault_vars.yml and set it to victoria.
Add the TrilioVault playbook to /opt/openstack-ansible/playbooks/setup-openstack.yml at the end of the file.
- import_playbook: os-tvault-install.yml
Add the following content at the end of the file /etc/openstack_deploy/user_variables.yml
# Datamover haproxy setting
haproxy_extra_services:
  - service:
      haproxy_service_name: datamover_service
      haproxy_backend_nodes: "{{ groups['dmapi_all'] | default([]) }}"
      haproxy_ssl: "{{ haproxy_ssl }}"
      haproxy_port: 8784
      haproxy_balance_type: http
      haproxy_balance_alg: roundrobin
      haproxy_timeout_client: 10m
      haproxy_timeout_server: 10m
      haproxy_backend_options:
        - "httpchk GET / HTTP/1.0\\r\\nUser-agent:\\ osa-haproxy-healthcheck"
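Before running the haproxy playbook later, it can be worth sanity-checking the parsed user_variables.yml. A hedged sketch (the helper name is an assumption, not part of openstack-ansible) that inspects the datamover entry once the YAML has been loaded into a dict, for example with pyyaml:

```python
def datamover_haproxy_ok(user_variables):
    """Check that haproxy_extra_services carries a well-formed datamover entry."""
    for entry in user_variables.get("haproxy_extra_services", []):
        svc = entry.get("service", {})
        if svc.get("haproxy_service_name") == "datamover_service":
            # The dmapi is balanced over http on port 8784, as configured above.
            return (svc.get("haproxy_port") == 8784
                    and svc.get("haproxy_balance_type") == "http")
    return False
```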
Create the following file /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml
Add the following content to the created file.
cat > /opt/openstack-ansible/inventory/env.d/tvault-dmapi.yml << EOF
component_skel:
  dmapi_api:
    belongs_to:
      - dmapi_all

container_skel:
  dmapi_container:
    belongs_to:
      - tvault-dmapi_containers
    contains:
      - dmapi_api

physical_skel:
  tvault-dmapi_containers:
    belongs_to:
      - all_containers
  tvault-dmapi_hosts:
    belongs_to:
      - hosts
EOF
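The skeleton above wires the dmapi_api component into the dmapi_all group and places it into containers on the tvault-dmapi hosts. A small sketch (hypothetical helper, assuming the file has been parsed into a dict, e.g. with pyyaml) that confirms this wiring:

```python
def dmapi_skeleton_ok(env):
    """Verify the dmapi component/container wiring in the env.d skeleton."""
    comp = env.get("component_skel", {}).get("dmapi_api", {})
    cont = env.get("container_skel", {}).get("dmapi_container", {})
    return (
        "dmapi_all" in comp.get("belongs_to", [])
        and "dmapi_api" in cont.get("contains", [])
        and "tvault-dmapi_containers" in cont.get("belongs_to", [])
    )
```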
Edit the file /etc/openstack_deploy/openstack_user_config.yml according to the example below to set host entries for TrilioVault components.
#tvault-dmapi
tvault-dmapi_hosts:     # Add controller details in this section, as the TrilioVault DMAPI resides on the controller nodes.
  infra-1:              # controller host name
    ip: 172.26.0.3      # IP address of the controller
  infra-2:              # If there are multiple controllers, add their details in the same manner as infra-1
    ip: 172.26.0.4

#tvault-datamover
tvault_compute_hosts:   # Add compute details in this section, as the TrilioVault datamover resides on the compute nodes.
  infra-1:              # compute host name
    ip: 172.26.0.7      # IP address of the compute node
  infra-2:              # If there are multiple compute nodes, add their details in the same manner
    ip: 172.26.0.8
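A quick sanity sketch for these host entries (a hypothetical helper, not part of openstack-ansible): every host under the two Trilio groups needs an "ip" key, or the inventory generation will be incomplete.

```python
def trilio_hosts_ok(user_config):
    """Every host under the Trilio host groups must carry an 'ip' entry."""
    for group in ("tvault-dmapi_hosts", "tvault_compute_hosts"):
        hosts = user_config.get(group) or {}
        if not hosts:
            return False  # the group is missing or empty
        for attrs in hosts.values():
            if not (attrs or {}).get("ip"):
                return False  # a host entry has no ip address
    return True
```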
Edit the common editable parameters section in the file /etc/openstack_deploy/user_tvault_vars.yml.
Append the required details, such as the TrilioVault Appliance IP address, the TrilioVault package version, the Openstack distribution, the snapshot storage backend, SSL-related information, etc.
The possible package versions are:
GA TrilioVault 4.2: 4.2.64
##common editable parameters required for installing tvault-horizon-plugin, tvault-contego and tvault-datamover-api
#ip address of TVM
IP_ADDRESS: sample_tvault_ip_address

##Time Zone
TIME_ZONE: "Etc/UTC"

#Update the TVAULT package version here; the mentioned version of the plugins will be installed. Example# TVAULT_PACKAGE_VERSION: 3.3.36
TVAULT_PACKAGE_VERSION: 4.2.64 #GA build version

# Update Openstack dist code name like ussuri etc.
OPENSTACK_DIST: ussuri

#The following statement needs to be added to the nova sudoers file
#nova ALL = (root) NOPASSWD: /home/tvault/.virtenv/bin/privsep-helper *
#This change is required for the Datamover, otherwise the Datamover will not work
#Are you sure? Please set the variable to
# UPDATE_NOVA_SUDOERS_FILE: proceed
#otherwise the ansible tvault-contego installation will exit
UPDATE_NOVA_SUDOERS_FILE: proceed

##### Select snapshot storage type #####
#Details for NFS as snapshot storage; NFS_SHARES entries should begin with "-".
##True/False
NFS: False
NFS_SHARES:
  - sample_nfs_server_ip1:sample_share_path
  - sample_nfs_server_ip2:sample_share_path

#if NFS_OPTS is empty then the default value "nolock,soft,timeo=180,intr,lookupcache=none" will be used
NFS_OPTS: ""

## Valid for the 'nfs' backup target only.
## If the backup target NFS share supports multiple endpoints/IPs but is a single share in the backend,
## set the 'multi_ip_nfs_enabled' parameter to 'True'. Otherwise its value should be 'False'.
multi_ip_nfs_enabled: False

#### Details for S3 as snapshot storage
##True/False
S3: False
VAULT_S3_ACCESS_KEY: sample_s3_access_key
VAULT_S3_SECRET_ACCESS_KEY: sample_s3_secret_access_key
VAULT_S3_REGION_NAME: sample_s3_region_name
VAULT_S3_BUCKET: sample_s3_bucket
VAULT_S3_SIGNATURE_VERSION: default
#### S3 Specific Backend Configurations
#### Provide one of the following two values in the s3_type variable; the string's case must match
#Amazon/Other_S3_Compatible
s3_type: sample_s3_type
#### Required field(s) for all S3 backends except Amazon
VAULT_S3_ENDPOINT_URL: ""
#True/False
VAULT_S3_SECURE: True
VAULT_S3_SSL_CERT: ""

###details of datamover API
##If SSL is enabled, the "DMAPI_ENABLED_SSL_APIS" value should be dmapi.
#DMAPI_ENABLED_SSL_APIS: dmapi
##If SSL is disabled, the "DMAPI_ENABLED_SSL_APIS" value should be empty.
DMAPI_ENABLED_SSL_APIS: ""
DMAPI_SSL_CERT: ""
DMAPI_SSL_KEY: ""

#### If any service is using a Ceph backend, set ceph_backend_enabled to True
#True/False
ceph_backend_enabled: False

#Set verbosity level and run playbooks with the -vvv option to display custom debug messages
verbosity_level: 3
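A common mistake in this file is enabling both (or neither) of the NFS and S3 backends, or leaving a placeholder share in NFS_SHARES. A hedged sketch (the helper is an assumption, mirroring the variable names above) that checks the storage selection once the vars file has been parsed into a dict:

```python
def storage_selection_ok(tvault_vars):
    """Exactly one of NFS/S3 should be enabled, with plausible details."""
    nfs = bool(tvault_vars.get("NFS"))
    s3 = bool(tvault_vars.get("S3"))
    if nfs == s3:   # neither, or both, is a misconfiguration
        return False
    if nfs:
        shares = tvault_vars.get("NFS_SHARES") or []
        # each share should look like "server_ip:/share_path"
        return bool(shares) and all(":" in str(s) for s in shares)
    # for S3, at least credentials and a bucket must be set
    return (bool(tvault_vars.get("VAULT_S3_ACCESS_KEY"))
            and bool(tvault_vars.get("VAULT_S3_BUCKET")))
```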

Configure Multi-IP NFS

This step is only required when the multi-IP NFS feature is used to connect different datamovers to the same NFS volume through multiple IPs.
Change the directory:
cd triliovault-cfg-scripts/common/
Edit the file 'triliovault_nfs_map_input.yml' in the current directory and provide the compute host and NFS share/IP map.
Please take a look at this page to learn about the format of the file.
vi triliovault_nfs_map_input.yml
Update pyyaml on the Openstack Ansible server node only
pip3 install -U pyyaml
Execute generate_nfs_map.py to create a one-to-one mapping of compute nodes and NFS shares.
python ./generate_nfs_map.py
The result will be written to the file 'triliovault_nfs_map_output.yml' in the current directory.
Validate the output map file: open 'triliovault_nfs_map_output.yml'

vi triliovault_nfs_map_output.yml

and validate that all compute nodes are mapped with all the necessary NFS shares.
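That validation can be partially automated. A minimal sketch (the map structure, hostname to list of shares, is an assumption; check it against your output file) that lists compute hosts which received no share:

```python
def unmapped_computes(nfs_map, compute_hosts):
    """Return the compute hosts that received no NFS share in the map."""
    # nfs_map: {hostname: [nfs_share, ...]} parsed from the output file
    return [host for host in compute_hosts if not nfs_map.get(host)]
```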
Append the content of triliovault_nfs_map_output.yml file to /etc/openstack_deploy/user_tvault_vars.yml
cat triliovault_nfs_map_output.yml >> /etc/openstack_deploy/user_tvault_vars.yml

Deploy TrilioVault components

If Ansible Openstack is already deployed, run the following commands to deploy only the TrilioVault components.
cd /opt/openstack-ansible/playbooks

# To create the Dmapi container
openstack-ansible lxc-containers-create.yml

# To deploy the Trilio components
openstack-ansible os-tvault-install.yml

# To configure Haproxy for the Dmapi
openstack-ansible haproxy-install.yml
If Ansible Openstack is not already deployed, run the native Openstack deployment commands to deploy Openstack and the Trilio components together. An example of the native deployment commands is given below:
openstack-ansible setup-infrastructure.yml --syntax-check
openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml
openstack-ansible setup-openstack.yml

Verify the TrilioVault deployment

Verify that the triliovault datamover api service is deployed and started correctly. Run the below commands on the controller node(s).
lxc-ls                                           # Check that the dmapi container is present on the controller node
lxc-info -s controller_dmapi_container-a11984bf  # Confirm the running status of the container
Verify that the triliovault datamover service is deployed and started correctly on the compute node(s). Run the following commands on the compute node(s).
systemctl status tvault-contego.service
systemctl status tvault-object-store   # If the storage backend is S3
df -h                                  # Verify the mount point is mounted on the compute node(s)
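For an NFS backend, the df check can be scripted. A hedged sketch (a hypothetical helper, not Trilio tooling) that scans /proc/mounts-style output for the configured NFS export:

```python
def nfs_share_mounted(mounts_text, share):
    """Scan /proc/mounts-style output for the given NFS export."""
    # share: the NFS export as configured in NFS_SHARES, e.g. "192.168.1.34:/upstream"
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[0] == share and fields[2].startswith("nfs"):
            return True
    return False
```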
Verify that the triliovault horizon plugin, contegoclient, and workloadmgrclient are installed on the Horizon container.
Run the following commands on the Horizon container.
lxc-attach -n controller_horizon_container-1d9c055c   # To log in to the horizon container
apt list | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'             # For Ubuntu based containers
dnf list installed | egrep 'tvault-horizon-plugin|workloadmgrclient|contegoclient'   # For CentOS based containers
Verify the haproxy settings on the controller node using the below command.
haproxy -c -V -f /etc/haproxy/haproxy.cfg   # Verify that the keyword datamover_service-back is present in the output
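If the haproxy configuration needs to be checked on several controllers, the keyword search can be sketched in code as well (a hypothetical helper operating on the contents of haproxy.cfg):

```python
def has_datamover_backend(haproxy_cfg):
    """True if a 'datamover_service-back' backend section is present."""
    return any(
        line.strip().startswith("backend") and "datamover_service-back" in line
        for line in haproxy_cfg.splitlines()
    )
```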

Update to the latest hotfix

After the deployment has been verified, it is recommended to update to the latest hotfix to ensure the best possible experience.
To update the environment, follow this procedure.