Clean Trilio Datamover API and Workloadmanager services
The following steps need to be run on all nodes that have the Trilio Datamover API & Workloadmanager services running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entries below.
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron
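To locate the role quickly, the service entries can be grepped out of the roles file; a minimal sketch, assuming the roles file lives at /home/stack/roles_data.yaml (adjust the path to your deployment). The role name is the "- name:" entry above the matched lines.
# Show every Trilio service entry together with its line number
grep -n 'OS::TripleO::Services::Trilio' /home/stack/roles_data.yaml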
Once the role that runs the Trilio Datamover API & Workloadmanager services has been identified, run the following commands on all nodes of that role to remove the services.
Run all commands as root or as a user with sudo permissions.
Remove the triliovault_datamover_api containers.
podman rm -f triliovault_datamover_api
podman rm -f triliovault_datamover_api_db_sync
podman rm -f triliovault_datamover_api_init_log
Clean Trilio Datamover API service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultdmapi
rm /var/lib/config-data/puppet-generated/triliovaultdmapi.md5sum
rm -rf /var/lib/config-data/triliovaultdmapi*
rm -f /var/lib/config-data/triliovault_datamover_api*
Clean Trilio Datamover API service log directory.
rm -rf /var/log/containers/triliovault-datamover-api/
Remove the triliovault_wlm_api containers.
podman rm -f triliovault_wlm_api
podman rm -f triliovault_wlm_api_cloud_trust_init
podman rm -f triliovault_wlm_api_db_sync
podman rm -f triliovault_wlm_api_config_dynamic
podman rm -f triliovault_wlm_api_init_log
Clean Trilio Workloadmanager API service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmapi
rm /var/lib/config-data/puppet-generated/triliovaultwlmapi.md5sum
rm -rf /var/lib/config-data/triliovaultwlmapi*
rm -f /var/lib/config-data/triliovault_wlm_api*
Clean Trilio Workloadmanager API service log directory.
rm -rf /var/log/containers/triliovault-wlm-api/
Remove the triliovault_wlm_workloads containers.
podman rm -f triliovault_wlm_workloads
podman rm -f triliovault_wlm_workloads_config_dynamic
podman rm -f triliovault_wlm_workloads_init_log
Clean Trilio Workloadmanager Workloads service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmworkloads
rm /var/lib/config-data/puppet-generated/triliovaultwlmworkloads.md5sum
rm -rf /var/lib/config-data/triliovaultwlmworkloads*
Clean Trilio Workloadmanager Workloads service log directory.
rm -rf /var/log/containers/triliovault-wlm-workloads/
Remove the triliovault_wlm_scheduler containers.
podman rm -f triliovault_wlm_scheduler
podman rm -f triliovault_wlm_scheduler_config_dynamic
podman rm -f triliovault_wlm_scheduler_init_log
Clean Trilio Workloadmanager Scheduler service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmscheduler
rm /var/lib/config-data/puppet-generated/triliovaultwlmscheduler.md5sum
rm -rf /var/lib/config-data/triliovaultwlmscheduler*
Clean Trilio Workloadmanager Scheduler service log directory.
rm -rf /var/log/containers/triliovault-wlm-scheduler/
Remove the triliovault-wlm-cron-podman-0 container and its helper containers from the controller.
podman rm -f triliovault-wlm-cron-podman-0
podman rm -f triliovault_wlm_cron_config_dynamic
podman rm -f triliovault_wlm_cron_init_log
Clean Trilio Workloadmanager Cron service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultwlmcron
rm /var/lib/config-data/puppet-generated/triliovaultwlmcron.md5sum
rm -rf /var/lib/config-data/triliovaultwlmcron*
Clean Trilio Workloadmanager Cron service log directory.
rm -rf /var/log/containers/triliovault-wlm-cron/
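As an optional sanity check (not part of the official procedure), the node can be scanned for anything Trilio-related that survived the steps above; the container, config, and log names are assumed to follow the patterns used in this section.
# List any remaining Trilio containers, config directories, or log directories
podman ps -a --format '{{.Names}}' | grep -i triliovault || echo "no Trilio containers left"
ls -d /var/lib/config-data/*trilio* /var/lib/config-data/puppet-generated/*trilio* 2>/dev/null || echo "no Trilio config directories left"
ls -d /var/log/containers/triliovault-* 2>/dev/null || echo "no Trilio log directories left"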
Clean Trilio Datamover Service
The following steps need to be run on all nodes that have the Trilio Datamover service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::TrilioDatamover.
Once the role that runs the Trilio Datamover service has been identified, run the following commands on all nodes of that role to remove the service.
Run all commands as root or as a user with sudo permissions.
Remove the triliovault_datamover container.
podman rm -f triliovault_datamover
Unmount the Trilio Backup Target on the compute host.
## Following steps are applicable for all supported RHOSP releases.
# Check triliovault backup target mount point
mount | grep trilio
# Unmount it
# If it's NFS (copy the UUID_DIR from your compute host using the above command)
umount /var/lib/nova/triliovault-mounts/<UUID_DIR>
# If it's S3
umount /var/lib/nova/triliovault-mounts
# Verify that it's unmounted
mount | grep trilio
df -h | grep trilio
# Remove mount point directory after verifying that backup target unmounted successfully.
# Otherwise actual data from backup target may get cleaned.
rm -rf /var/lib/nova/triliovault-mounts
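Since removing the directory while the backup target is still mounted would delete actual backup data, the final rm can be guarded with the same mount check; a small defensive sketch of the last step above:
# Only delete the mount point directory when nothing under it is still mounted
if ! mount | grep -q /var/lib/nova/triliovault-mounts; then
    rm -rf /var/lib/nova/triliovault-mounts
else
    echo "Backup target still mounted - not removing /var/lib/nova/triliovault-mounts"
fi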
Clean Trilio Datamover service conf directory.
rm -rf /var/lib/config-data/puppet-generated/triliovaultdm/
rm /var/lib/config-data/puppet-generated/triliovaultdm.md5sum
rm -rf /var/lib/config-data/triliovaultdm*
Clean log directory of Trilio Datamover service.
rm -rf /var/log/containers/triliovault-datamover/
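A quick, optional check on the compute node can confirm that the datamover cleanup is complete; a sketch based on the names used in this section:
# Verify that no datamover container, config, log directory, or backup mount remains
podman ps -a --format '{{.Names}}' | grep -i triliovault_datamover || echo "no datamover container left"
ls -d /var/lib/config-data/triliovaultdm* /var/log/containers/triliovault-datamover 2>/dev/null || echo "no datamover config or log directories left"
mount | grep trilio || echo "backup target is unmounted"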
Clean wlm-cron resource from pcs cluster
Remove the wlm-cron resource from the pcs cluster on the controller node.
pcs resource delete triliovault-wlm-cron
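To confirm the resource is gone from the cluster, the resource listing can be checked; a quick verification sketch:
# No output from the grep means the wlm-cron resource has been removed
pcs status resources | grep -i triliovault || echo "no Trilio resources in the pcs cluster"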
Clean Trilio HAproxy resources
The following steps need to be run on all nodes that have the HAproxy service running. Those nodes can be identified by checking the roles_data.yaml for the role that contains the entry OS::TripleO::Services::HAproxy.
Once the role that runs the HAproxy service has been identified, run the following steps on all nodes of that role to remove all Trilio resources.
Run all commands as root or as a user with sudo permissions.
Edit the following file inside the HAproxy container and remove all Trilio entries.
/var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
An example of these entries is given below.
listen triliovault_datamover_api
bind 172.30.5.23:13784 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8784 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8784 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
listen triliovault_wlm_api
bind 172.30.5.23:13781 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
bind 172.30.5.23:8781 transparent ssl crt /etc/pki/tls/certs/haproxy/overcloud-haproxy-internal_api.pem
balance roundrobin
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-Port %[dst_port]
maxconn 50000
option httpchk
option httplog
retries 5
timeout check 10m
timeout client 10m
timeout connect 10m
timeout http-request 10m
timeout queue 10m
timeout server 10m
server overcloudtrain1-controller-0.internalapi.trilio.local 172.30.5.28:8780 check fall 5 inter 2000 rise 2 verifyhost overcloudtrain1-controller-0.internalapi.trilio.local
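Before restarting, the edited configuration can be syntax-checked from inside the container; a minimal sketch, assuming the file is mounted at /etc/haproxy/haproxy.cfg inside the container named below:
# Validate the edited HAproxy configuration; it should report the configuration as valid
podman exec haproxy-bundle-podman-0 haproxy -c -f /etc/haproxy/haproxy.cfg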
Restart the HAproxy container once all edits have been done.
podman restart haproxy-bundle-podman-0
Clean Trilio Keystone resources
Trilio registers services and users in Keystone. Those need to be unregistered and deleted.
openstack service delete dmapi
openstack user delete dmapi
openstack service delete TrilioVaultWLM
openstack user delete triliovault
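With the overcloud credentials sourced (e.g. an overcloudrc file), the removal can be verified; a small sketch using the service and user names from the commands above:
# Both greps should return nothing once the Trilio entries are deleted
openstack service list | grep -i -e dmapi -e trilio || echo "no Trilio services registered"
openstack user list | grep -i -e dmapi -e triliovault || echo "no Trilio users registered"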
Clean Trilio database resources
Trilio creates databases for dmapi and workloadmgr services. These databases need to be cleaned.
Log in to the database cluster.
podman exec -it galera-bundle-podman-0 mysql -u root
Run the following SQL statements to clean the database.
## Clean the dmapi database
DROP DATABASE dmapi;
## Clean dmapi user
=> List 'dmapi' user accounts
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
+-------+-----------------------------------------+
| user | host |
+-------+-----------------------------------------+
| dmapi | % |
| dmapi | 172.30.5.28 |
| dmapi | overcloudtrain1internalapi.trilio.local |
+-------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER dmapi@'%';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> DROP USER dmapi@172.30.5.28;
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER dmapi@overcloudtrain1internalapi.trilio.local;
Query OK, 0 rows affected (0.005 sec)
=> Verify that dmapi user got cleaned
MariaDB [mysql]> select user, host from mysql.user where user='dmapi';
Empty set (0.00 sec)
## Clean the workloadmgr database
DROP DATABASE workloadmgr;
## Clean workloadmgr user
=> List 'workloadmgr' user accounts
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
+-------------+-----------------------------------------+
| user | host |
+-------------+-----------------------------------------+
| workloadmgr | % |
| workloadmgr | 172.30.5.28 |
| workloadmgr | overcloudtrain1internalapi.trilio.local |
+-------------+-----------------------------------------+
3 rows in set (0.000 sec)
=> Delete those user accounts
MariaDB [(none)]> DROP USER workloadmgr@'%';
Query OK, 0 rows affected (0.012 sec)
MariaDB [(none)]> DROP USER workloadmgr@172.30.5.28;
Query OK, 0 rows affected (0.006 sec)
MariaDB [(none)]> DROP USER workloadmgr@overcloudtrain1internalapi.trilio.local;
Query OK, 0 rows affected (0.005 sec)
=> Verify that workloadmgr user got cleaned
MariaDB [(none)]> select user, host from mysql.user where user='workloadmgr';
Empty set (0.000 sec)
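The same result can be verified non-interactively from the controller, reusing the galera container from the login step above; a quick sketch:
# Both commands should return no dmapi or workloadmgr entries once the cleanup is done
podman exec galera-bundle-podman-0 mysql -u root -e "SHOW DATABASES;" | grep -i -e dmapi -e workloadmgr || echo "databases removed"
podman exec galera-bundle-podman-0 mysql -u root -e "SELECT user, host FROM mysql.user WHERE user IN ('dmapi','workloadmgr');"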
Revert overcloud deploy command
Remove the following entries from the roles_data.yaml used in the overcloud deploy command:
OS::TripleO::Services::TrilioDatamoverApi
OS::TripleO::Services::TrilioWlmApi
OS::TripleO::Services::TrilioWlmWorkloads
OS::TripleO::Services::TrilioWlmScheduler
OS::TripleO::Services::TrilioWlmCron
OS::TripleO::Services::TrilioDatamover
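The entries can be located, and optionally stripped in one pass, with standard tools; a sketch assuming the roles file sits at /home/stack/roles_data.yaml, to be run only after reviewing the matches:
# List every Trilio service entry that still has to be removed
grep -n 'OS::TripleO::Services::Trilio' /home/stack/roles_data.yaml
# Remove them in place (a .bak copy of the original file is kept)
sed -i.bak '/OS::TripleO::Services::Trilio/d' /home/stack/roles_data.yaml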
If the overcloud deploy command used prior to the deployment of Trilio is still available, it can be used directly.
Otherwise, follow these steps to clean the overcloud deploy command of all Trilio entries:
Remove the trilio_env.yaml entry
Remove the Trilio endpoint map file
Replace it with the original endpoint map file, if one existed
Revert to the original RHOSP Horizon container
Run the cleaned overcloud deploy command.
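A heavily simplified sketch of what the cleaned command might look like; every path and environment file below is a placeholder for whatever the original deployment used, with only the Trilio-specific files left out:
# Cleaned deploy command: same templates and environment files as the original
# deployment, minus trilio_env.yaml and the Trilio endpoint map file
openstack overcloud deploy \
  --templates \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/templates/node-info.yaml \
  -e /home/stack/templates/overcloud-images.yaml
  # ...plus all other environment files from the original deployment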