Monitoring T4K Logs using ELK Stack

This section provides step-by-step instructions to install and configure Elasticsearch, Logstash, and Kibana to monitor T4K logs alongside other native Kubernetes resources.

Introduction

In Kubernetes, it is important for users to be able to visualize logs from the different applications running in the environment. The ELK stack can be integrated with Trilio for Kubernetes to read, parse, and visualize these logs on a dashboard.

What is ELK Stack

The ELK stack consists of Elasticsearch, Logstash, and Kibana.

  • Elasticsearch indexes and stores the logs received from Logstash

  • Logstash receives data from Beats (agents that read data from log files and forward it to Logstash) and parses it; a minimal Filebeat example is shown after this list

  • Kibana is a dashboard used to visualize the logs stored in Elasticsearch and to run queries against that data
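
The installation steps below use the ECK operator to deploy Elasticsearch and Kibana. To actually ship T4K container logs into the stack you also need a Beat such as Filebeat. The following is a minimal, illustrative sketch modelled on the ECK Beat quickstart; the resource name quickstart, the version, and the host log paths are assumptions and should be adjusted for your cluster and container runtime.

apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 7.14.0
  elasticsearchRef:
    name: quickstart               # the Elasticsearch cluster created in the steps below
  config:
    filebeat.inputs:
    - type: container
      paths:
      - /var/log/containers/*.log  # includes the T4K pods' container logs
  daemonSet:
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0             # required to read host log files
        containers:
        - name: filebeat
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods

Apply it with kubectl apply -f filebeat.yaml once the Elasticsearch cluster from the steps below is running; Filebeat then runs as a DaemonSet on every node and forwards the collected logs to Elasticsearch.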

Install and configure ELK on Kubernetes

Install CRDs and Operators

  1. Install Custom Resource Definitions

kubectl create -f https://download.elastic.co/downloads/eck/1.7.1/crds.yaml

2. Verify that the CRDs are deployed correctly on the Kubernetes cluster

kubectl get crd | grep elastic
agents.agent.k8s.elastic.co                          2021-08-30T07:36:15Z
apmservers.apm.k8s.elastic.co                        2021-08-30T07:36:15Z
beats.beat.k8s.elastic.co                            2021-08-30T07:36:16Z
elasticmapsservers.maps.k8s.elastic.co               2021-08-30T07:36:16Z
elasticsearches.elasticsearch.k8s.elastic.co         2021-08-30T07:36:16Z
enterprisesearches.enterprisesearch.k8s.elastic.co   2021-08-30T07:36:16Z
kibanas.kibana.k8s.elastic.co                        2021-08-30T07:36:16Z

3. Install the operator using its YAML definition

kubectl apply -f https://download.elastic.co/downloads/eck/1.7.1/operator.yaml

4. Verify that the elastic-system namespace is created and the elastic-operator StatefulSet is deployed. The elastic-operator-0 pod must be in the Running state

kubectl get all -n elastic-system
NAME                     READY   STATUS    RESTARTS   AGE
pod/elastic-operator-0   1/1     Running   0          1m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/elastic-webhook-server   ClusterIP   10.60.6.150   <none>        443/TCP   1m

NAME                                READY   AGE
statefulset.apps/elastic-operator   1/1     1m

Deploy an Elasticsearch Cluster

  1. Create an Elasticsearch cluster definition YAML file, for example elasticsearch.yaml

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.14.0
  nodeSets:
  - name: default
    count: 2
    config:
      node.store.allow_mmap: false

Note: You can change spec.nodeSets[0].count to 1 or more. This parameter defines the number of pods deployed in the Elasticsearch cluster.

If your Kubernetes cluster does not have at least 2 GB of free memory, the pods will be stuck in the Pending state.
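
If memory is tight, you can also set explicit resource requests and limits for the Elasticsearch containers through the nodeSet's podTemplate. The values below are illustrative assumptions; tune them to your environment:

spec:
  nodeSets:
  - name: default
    count: 2
    config:
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 2Gi          # assumed value; match your nodes' free memory
            limits:
              memory: 2Gi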

2. Apply the Elasticsearch definition YAML file

kubectl apply -f elasticsearch.yaml

3. Once the pods are deployed, it may take a few minutes until all the resources are created and the Elasticsearch cluster is ready to use

4. Monitor the cluster health and creation progress

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.14.0    Ready   1m

5. Verify that the two pods are deployed and in the Running state

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          1m
quickstart-es-default-1   1/1     Running   0          1m

6. Get the credentials.

A default user named elastic is automatically created, with its password stored in a Kubernetes secret

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
echo $PASSWORD

Note: Save the $PASSWORD value, since you will need it to log in to the Elasticsearch cluster and the Kibana dashboard

7. Request the Elasticsearch endpoint from inside the Kubernetes cluster.

curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

8. Alternatively, from your local workstation, run the port-forward command in a separate terminal and query the endpoint locally

kubectl port-forward service/quickstart-es-http 9200
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "tkOvQq_5SgCEkTyzxmJqFQ",
  "version" : {
    "number" : "7.14.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
    "build_date" : "2021-07-29T20:49:32.864135063Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Install Kibana Dashboard

  1. Create a Kibana dashboard definition YAML file, for example kibana.yaml

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.14.0
  count: 1
  elasticsearchRef:
    name: quickstart

Note: You can change spec.count to more than 1. This parameter defines the number of pods deployed for the Kibana dashboard.

2. Apply the Kibana dashboard definition YAML file

kubectl apply -f kibana.yaml

3. Once the pods are deployed, it may take a few minutes until all the resources are created and the Kibana dashboard is ready to use

4. Monitor the Kibana health and creation progress

kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.14.0    1m

5. Verify that the Kibana quickstart pod is deployed and in the Running state

kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-7966b84d57-rzcf2   1/1     Running   0          1m

6. Access the Kibana dashboard.

A ClusterIP service is automatically created for Kibana

kubectl get service quickstart-kb-http

From your local workstation, run the port-forward command in a separate terminal

kubectl port-forward service/quickstart-kb-http 5601

7. Access the Kibana dashboard from your workstation browser by navigating to the URL below

https://localhost:5601

8. Log in with the username elastic and the password generated during the Elasticsearch cluster deployment.
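
If you no longer have the password from the Elasticsearch deployment step, read it again from the same secret:

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
echo $PASSWORD

Once a log shipper such as the Filebeat sketch shown earlier is running, its indices (typically filebeat-*) can be added as an index pattern in Kibana to browse and query the T4K logs.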

[Image: Kibana Dashboard Login Page]