Monitoring and Logging

This section describes how to measure and monitor Trilio for Kubernetes.

Deprecated Documentation

This document is deprecated and no longer supported. For accurate, up-to-date information, please refer to the documentation for the latest version of Trilio.

Trilio Metrics Exporter for Prometheus

Trilio Metrics Exporter is a component of Trilio for Kubernetes, written in Go, that exports Trilio metrics. The project is based on the official Prometheus client library, prometheus_client.

Trilio Metrics Exporter runs as a separate deployment and is installed by the Operator as part of the installation process. A complete list of Prometheus metrics and their details is provided in the Appendix section, Trilio Prometheus Metrics.

Consuming Metrics - Inside the Cluster

There are two methods to scrape the metrics from Trilio Exporter:

Using Annotations

Prometheus can discover any in-cluster exporter using the standard annotations set on the exporter pod.

The annotations are described below:

  • prometheus.io/scrape: The default configuration scrapes all pods; if set to false, this annotation excludes the pod from the scraping process.

  • prometheus.io/path: If the metrics path is not /metrics, define it with this annotation.

  • prometheus.io/port: Scrape the pod on the indicated port instead of the pod's declared ports. For the Trilio exporter this is 8080.

Trilio for Kubernetes has pre-configured the above annotations on the Metrics Exporter pod. If you have a cluster with Prometheus already installed and configured, then your Prometheus instance should start collecting Trilio for Kubernetes metrics without any configuration change.
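For reference, this is roughly how the annotations appear in the exporter pod's metadata. This is a sketch; the exact manifest shipped with T4K may differ, and the container image name below is illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: k8s-triliovault-exporter
  annotations:
    prometheus.io/scrape: "true"   # include this pod in scraping
    prometheus.io/path: "/metrics" # default metrics path
    prometheus.io/port: "8080"     # scrape on this port
spec:
  containers:
    - name: exporter
      image: trilio/exporter       # illustrative image name
      ports:
        - containerPort: 8080
```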

Using Scrape Job

You will need to configure a Prometheus server to scrape the metrics from your newly running exporter. Add the following scrape job under scrape_configs in your prometheus.yml configuration file.

  - job_name: trilio_exporter
    scrape_interval: 30s
    static_configs:
      - targets: ['EXPORTER_ADDRESS:8080']
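After editing the file, you can validate it before reloading Prometheus. This assumes promtool, which ships with the Prometheus distribution, is on your PATH:

```shell
# Check prometheus.yml for syntax errors, including the new scrape job.
promtool check config prometheus.yml
```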

EXPORTER_ADDRESS is the IP of the Trilio exporter pod, which can be found by running the following command:

kubectl get pod -o wide | grep -i export
k8s-triliovault-exporter-787c7dd446-bcgkr            1/1   Running   0     6d4h  192.168.140.126  slave2  <none>      <none>
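Alternatively, assuming the exporter pod carries the app=k8s-triliovault-exporter label used by the Service selectors in this document, the pod IP can be extracted directly:

```shell
# Print only the exporter pod's IP address.
kubectl get pod -l app=k8s-triliovault-exporter \
  -o jsonpath='{.items[0].status.podIP}'
```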

Prometheus Operator Helm Chart Setup

If you installed Prometheus via Helm as shown below:

helm install prometheus stable/prometheus-operator

Step 1: Expose Trilio Exporter: To expose the Trilio metrics, you need to create a Kubernetes Service. If your Prometheus setup is not present in this cluster, expose the exporter via a LoadBalancer Service instead (see Consuming Metrics - Outside the cluster).

You can create the Service using the following YAML:

kind: Service
apiVersion: v1
metadata:
  name: k8s-triliovault-exporter-service
spec:
  ports:
    - name: web
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    app: k8s-triliovault-exporter
  type: NodePort

Step 2: Create ServiceMonitor: You will need to configure a ServiceMonitor for Prometheus to scrape the metrics from your k8s-triliovault-exporter-service. Apply the following YAML to the namespace where T4K is installed.

  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    name: k8s-triliovault-exporter
    labels:
      app: k8s-triliovault-exporter
  spec:
    selector:
      matchLabels:
        app: k8s-triliovault-exporter
    endpoints:
    - port: web

Once the configuration is complete, you will find k8s-triliovault-exporter among the active targets in the Prometheus UI.

Please wait a couple of minutes for Prometheus to start scraping metrics. Access the Prometheus console at http://<prometheus-server-ip>/graph

If you use a different Prometheus Operator deployment, configure the ServiceMonitor in the same way as shown above.
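As a quick sanity check once the target appears, the following PromQL query should return 1 for the exporter. The job label shown here is an assumption; match it against the job name displayed on your Prometheus Targets page.

```
up{job="k8s-triliovault-exporter-service"}
```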

Consuming Metrics - Outside the cluster

To expose T4K metrics outside the cluster, perform the following two steps.

  1. Expose Trilio Exporter: Trilio metrics can be exposed outside the cluster via a LoadBalancer Service. Use kubectl to create a Service for k8s-triliovault-exporter:

    kubectl expose deployment k8s-triliovault-exporter --type=LoadBalancer --name=exporter --namespace triliovault-integration

    Alternatively, you can create a LoadBalancer Service using a YAML manifest:

    kind: Service
    apiVersion: v1
    metadata:
      name: k8s-triliovault-exporter-service
    spec:
      ports:
        - protocol: TCP
          port: 8080
          targetPort: 8080
      selector:
        app: k8s-triliovault-exporter
        release: trilio
      type: LoadBalancer

    Once the Service is available, capture the public IP for k8s-triliovault-exporter-service; this will be used in the second step.

  2. Scrape Job: You will need to configure a Prometheus server to scrape the metrics from your newly running exporter. Add the following scrape job under scrape_configs in your prometheus.yml configuration file.

      - job_name: trilio_exporter
        scrape_interval: 30s
        static_configs:
          - targets: ['EXPORTER_PUBLIC_IP:8080']

Visualizing Metrics with Grafana

Metrics from Prometheus can be visualized using Grafana. Dashboards can be created in Grafana with the Trilio metrics exposed through Prometheus.
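For example, a Grafana panel can plot a Trilio metric with a PromQL query such as the following. The metric name here is purely illustrative; substitute a real name from the Trilio Prometheus Metrics appendix.

```
# Hypothetical metric name, for illustration only.
sum by (status) (trilio_backup_total)
```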

Trilio Grafana Dashboards

Trilio provides pre-created Grafana dashboards with the product to make monitoring and observing your backup landscape easy.

Grafana dashboards are pivoted on the following themes and provide a high-level overview, summary, and details around each theme.

  1. Backups

  2. Restores

  3. Targets

  4. BackupPlans/Application

Import Grafana Dashboards

There are two sources where T4K dashboards can be found:

  1. T4K Grafana dashboards can be found on GitHub.

    1. These dashboards can be imported into a Grafana instance by following the instructions on the Grafana project page.

  2. T4K dashboards can also be found on Grafana.com, within the Trilio org page.

    1. Instructions for downloading and importing are provided with the dashboards.
