If it exists, check whether the enableUserWorkload parameter is present under config.yaml and set to true. If it is, user workload monitoring is already enabled and you can skip the steps to enable it. If the parameter is missing or set to false, enable it using the next steps.
2.2] If user workload monitoring is not enabled, enable it
Check whether the cluster-monitoring-config ConfigMap already exists. If it does not, create it using the following YAML file; no changes to the file are required.
If the ConfigMap cluster-monitoring-config already exists, add the following parameter to it:
Edit the ConfigMap using:
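The standard OpenShift configuration for enabling user workload monitoring looks like this (the heredoc form is illustrative; you can equally save the YAML to a file and apply it):

```shell
# Create or update the cluster-monitoring-config ConfigMap
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
EOF

# Or edit the existing ConfigMap directly:
oc -n openshift-monitoring edit configmap cluster-monitoring-config
```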
3] Steps to Install Prometheus SQL Exporter
3.1] Create SQL Exporter ConfigMap
First, you need to create a ConfigMap that holds the SQL Exporter's configuration. This includes the database connection string.
Edit sql-exporter-config.yaml. Before applying, locate the data_source_name field and update it with the correct MySQL username, password, and hostname for your database.
Important: Do not change the database name or the overall format of the data_source_name string. Only update credentials and hostname.
Create the Secret.
Before applying the deployment, create the secret containing the DSN string:
Replace <username>, <password>, and <hostname> with the actual values.
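A sketch of creating the secret. The secret name sql-exporter-dsn and the key DSN are assumptions; match whatever names the deployment manifest references, and keep the database name and overall DSN format unchanged as instructed above:

```shell
# Placeholder credentials and hostname; secret/key names are hypothetical.
# 3306 is the MySQL default port; adjust if your database uses another.
oc -n openshift-user-workload-monitoring create secret generic sql-exporter-dsn \
  --from-literal=DSN='mysql://<username>:<password>@<hostname>:3306/<database>'
```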
Apply the ConfigMap:
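For example, assuming the exporter runs in the user workload monitoring namespace used elsewhere in this guide:

```shell
oc -n openshift-user-workload-monitoring apply -f sql-exporter-config.yaml
```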
3.2] Apply SQL Exporter Deployment
Deploy the SQL Exporter application using its Deployment manifest:
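A sketch of the apply step (the manifest filename is an assumption; use the file shipped with your scripts):

```shell
oc -n openshift-user-workload-monitoring apply -f sql-exporter-deployment.yaml
```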
Verify pod status after deployment:
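For example (the label selector app=sql-exporter is an assumption; match the labels in your Deployment manifest):

```shell
oc -n openshift-user-workload-monitoring get pods -l app=sql-exporter
```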
You should see a pod with the status Running.
3.3] Apply SQL Exporter Service
Create a Kubernetes Service to expose the SQL Exporter within the cluster so that other services (such as Prometheus) can discover and scrape its metrics:
Verify Service Status: Confirm that the Service has been created.
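A sketch of both steps (the manifest filename and Service name are assumptions):

```shell
oc -n openshift-user-workload-monitoring apply -f sql-exporter-service.yaml
oc -n openshift-user-workload-monitoring get svc sql-exporter
```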
3.4] Apply SQL Exporter ServiceMonitor
Finally, create a ServiceMonitor resource. This tells Prometheus (if configured to discover ServiceMonitors in this namespace) to automatically scrape metrics from your SQL Exporter Service.
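A sketch of creating and confirming the ServiceMonitor (the manifest filename is an assumption):

```shell
oc -n openshift-user-workload-monitoring apply -f sql-exporter-servicemonitor.yaml
# Confirm the ServiceMonitor resource exists so Prometheus can discover it
oc -n openshift-user-workload-monitoring get servicemonitor
```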
4] Steps to Install Prometheus OpenStack Exporter
Follow the steps below to deploy the OpenStack Exporter:
4.1] Create ConfigMap with OpenStack Cloud Credentials
First, create a ConfigMap containing your OpenStack cloud credentials. This ConfigMap will be mounted into the exporter pod.
In the following config file, replace all placeholder variables (auth URL, username, password, and so on) with the actual values from your OpenStack cloud.
4.2] Create Keystone CA Certificate Secret
Before deploying the OpenStack Exporter, create the required secret containing the Keystone CA certificate.
4.3] Deploy OpenStack Exporter
Next, deploy the OpenStack Exporter application using its Deployment manifest.
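A sketch of the apply step (the manifest filename is an assumption; use the file provided in the scripts directory):

```shell
oc -n openshift-user-workload-monitoring apply -f openstack-exporter-deployment.yaml
```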
4.4] Verify OpenStack Exporter Pod Status
After deployment, verify that the exporter pod is running correctly.
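For example (the label selector app=openstack-exporter is an assumption; match your Deployment's labels):

```shell
oc -n openshift-user-workload-monitoring get pods -l app=openstack-exporter
```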
You should see a pod with a status of Running.
4.5] Check OpenStack Exporter Pod Logs
To ensure the exporter is starting and connecting to OpenStack successfully, check its logs. Replace <openstack-exporter-pod-name> with the actual name of your exporter pod obtained from the previous step.
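For example (assuming the exporter runs in the user workload monitoring namespace):

```shell
oc -n openshift-user-workload-monitoring logs <openstack-exporter-pod-name>
```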
Look for messages indicating a successful connection to OpenStack and that it is listening for metrics requests (for example, on port 9180).
4.6] Create Service for OpenStack Exporter
Create a Kubernetes Service to expose the OpenStack Exporter within the cluster, allowing other services (like Prometheus) to discover and scrape its metrics.
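A sketch of creating and checking the Service (the manifest filename and Service name are assumptions):

```shell
oc -n openshift-user-workload-monitoring apply -f openstack-exporter-service.yaml
oc -n openshift-user-workload-monitoring get svc openstack-exporter
```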
4.7] Create ServiceMonitor for Prometheus Integration
Finally, create a ServiceMonitor resource. This tells Prometheus (if configured to discover ServiceMonitors in this namespace) to automatically scrape metrics from your OpenStack Exporter Service.
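A sketch of the apply step (the manifest filename is an assumption):

```shell
oc -n openshift-user-workload-monitoring apply -f openstack-exporter-servicemonitor.yaml
# Confirm the ServiceMonitor resource exists
oc -n openshift-user-workload-monitoring get servicemonitor
```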
5] Install and Configure Grafana
5.1] Install Grafana Operator
Go to Administrator View → Operators → OperatorHub
Search for Grafana Operator and select it
Click Install
5.2] Create Grafana service account
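A minimal sketch of this step; the service account name grafana-sa matches the token command used later in this guide:

```shell
oc -n openshift-user-workload-monitoring create serviceaccount grafana-sa
```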
5.3] Create credential secret for Grafana
Edit the following secret file and set fresh username and password values.
Fetch the service account token using the following command; the accompanying sed command then copies the token into the Grafana secret YAML file.
5.4] Create Grafana instance
For a fresh deployment, create a Grafana instance:
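A sketch of the create step (the manifest name grafana-instance.yaml is an assumption; use the Grafana custom resource file shipped with the scripts):

```shell
oc -n openshift-user-workload-monitoring apply -f grafana-instance.yaml
```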
This step applies only when updating the token on an existing Grafana instance; it is not required if you are installing Grafana for the first time:
5.5] Verify Grafana installation
5.6] Expose Grafana service if route not found
If the above command to get the Grafana route returns a NotFound error, you need to expose the Grafana service manually to create the route:
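A sketch of the expose step (the service name grafana-service is an assumption; check oc get svc for the exact name in your cluster):

```shell
# Expose the Grafana service to create a route
oc -n openshift-user-workload-monitoring expose service grafana-service
# Verify that the route now exists
oc -n openshift-user-workload-monitoring get route grafana
```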
5.7] Create Grafana datasource
Create a Grafana datasource to fetch metrics from the cluster monitoring namespace:
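A sketch of applying the datasource (the manifest name is an assumption; use the GrafanaDatasource file provided with the scripts):

```shell
oc -n openshift-user-workload-monitoring apply -f grafana-datasource.yaml
```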
cd triliovault-cfg-scripts/redhat-director-scripts/monitoring-openshift/openstack_exporter/
# Fill in OpenStack cloud admin user credentials in the following YAML file
vi openstack-exporter-clouds-cm.yaml
# Save changes
# Apply the changes
oc -n openshift-user-workload-monitoring apply -f openstack-exporter-clouds-cm.yaml
cd triliovault-cfg-scripts/redhat-director-scripts/monitoring-openshift/openstack_exporter/
./create_keystone_cacert_secret.sh
BEARER_TOKEN=$(oc create token grafana-sa --duration=87600h -n openshift-user-workload-monitoring | base64 -w0)
cp grafana-secret-creds_original.yaml grafana-secret-creds.yaml
sed -i "s/\$BASE64_ENCODED_GRAFANA_SERVICE_ACCOUNT_BEARER_TOKEN/${BEARER_TOKEN}/g" grafana-secret-creds.yaml
# The sed command above copies the token into the Grafana secret YAML file
# Set username and password and save changes
vi grafana-secret-creds.yaml
# Apply the secret
oc -n openshift-user-workload-monitoring apply -f grafana-secret-creds.yaml
$ oc -n openshift-user-workload-monitoring get grafana
NAME      VERSION   STAGE      STAGE STATUS   AGE
grafana   11.3.0    complete   success        49s
$ oc -n openshift-user-workload-monitoring get pods -l app=grafana
NAME                                  READY   STATUS    RESTARTS   AGE
grafana-deployment-645b4bbb56-5pj6x   1/1     Running   0          4m43s
$ oc -n openshift-user-workload-monitoring get route grafana -o jsonpath='{.spec.host}'
grafana-openshift-user-workload-monitoring.apps.roso18qa.prod.engineering.trilio.io