Monitoring TVK Logs using ELK Stack

This section provides step-by-step instructions to install and configure Elasticsearch, Logstash, and Kibana to monitor TVK logs alongside other native Kubernetes resources.

Introduction

In Kubernetes, it is important for users to be able to visualize logs from the different applications running in the environment. The ELK stack can be integrated with TrilioVault for Kubernetes to read, parse, and visualize the logs on a dashboard.

What is the ELK Stack

The ELK stack consists of Elasticsearch, Logstash, and Kibana:

  • Elasticsearch indexes and stores the logs received from Logstash

  • Logstash receives data from Beats (lightweight agents that read log files and forward them to Logstash) and parses it

  • Kibana is a dashboard for visualizing the logs stored in Elasticsearch and running queries against that data to see the desired results
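The pipeline above can be sketched as a minimal Filebeat configuration that ships container logs to Logstash. The log path and the Logstash host/port below are illustrative assumptions, not values defined by this guide:

```yaml
# Sketch only: Filebeat reads container log files and forwards them to
# Logstash, which parses them and sends them on to Elasticsearch.
filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log   # typical node-level container log path (assumed)
output.logstash:
  hosts: ["logstash:5044"]        # hypothetical Logstash service name and port
```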

Install and configure ELK on Kubernetes

Install CRDs and Operators

  1. Install Custom Resource Definitions

kubectl create -f https://download.elastic.co/downloads/eck/1.7.1/crds.yaml

2. Verify the CRDs are deployed correctly on the Kubernetes cluster

kubectl get crd | grep elastic
agents.agent.k8s.elastic.co 2021-08-30T07:36:15Z
apmservers.apm.k8s.elastic.co 2021-08-30T07:36:15Z
beats.beat.k8s.elastic.co 2021-08-30T07:36:16Z
elasticmapsservers.maps.k8s.elastic.co 2021-08-30T07:36:16Z
elasticsearches.elasticsearch.k8s.elastic.co 2021-08-30T07:36:16Z
enterprisesearches.enterprisesearch.k8s.elastic.co 2021-08-30T07:36:16Z
kibanas.kibana.k8s.elastic.co 2021-08-30T07:36:16Z

3. Install Operator using yaml definition

kubectl apply -f https://download.elastic.co/downloads/eck/1.7.1/operator.yaml

4. Verify that the elastic-system namespace is created and the elastic-operator StatefulSet is deployed. The elastic-operator-0 pod must be in the Running state

kubectl get all -n elastic-system
NAME                     READY   STATUS    RESTARTS   AGE
pod/elastic-operator-0   1/1     Running   0          1m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/elastic-webhook-server   ClusterIP   10.60.6.150   <none>        443/TCP   1m

NAME                               READY   AGE
statefulset.apps/elastic-operator   1/1     1m

Deploy an Elasticsearch Cluster

  1. Create an elasticsearch cluster definition yaml file

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.14.0
  nodeSets:
  - name: default
    count: 2
    config:
      node.store.allow_mmap: false

Note: You can change spec.nodeSets[0].count to 1 or more. This parameter defines the number of pods deployed in the Elasticsearch cluster.

If your Kubernetes cluster does not have at least 2GB of free memory, the pods will be stuck in the Pending state.
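If scheduling fails for lack of memory, one common remedy is to set explicit resource requests and limits on the nodeSet via ECK's standard podTemplate mechanism. This is a sketch; the 2Gi figure is an assumption you should size to your cluster:

```yaml
# Hypothetical sizing added to the nodeSet from the example above.
nodeSets:
- name: default
  count: 2
  config:
    node.store.allow_mmap: false
  podTemplate:
    spec:
      containers:
      - name: elasticsearch
        resources:
          requests:
            memory: 2Gi   # assumed value; match your nodes' free memory
          limits:
            memory: 2Gi
```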

2. Apply the elasticsearch definition yaml file

kubectl apply -f elasticsearch.yaml

3. Once the pods are deployed, it may take a few minutes for all the resources to be created and the Elasticsearch cluster to become ready to use

4. Monitor the cluster health and creation progress

kubectl get elasticsearch
NAME         HEALTH   NODES   VERSION   PHASE   AGE
quickstart   green    2       7.14.0    Ready   1m

5. You can see the two pods deployed and in Running state

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          1m
quickstart-es-default-1   1/1     Running   0          1m

6. Get the credentials.

A default user named elastic is automatically created, with its password stored in a Kubernetes secret

PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
echo $PASSWORD

Note: Save the $PASSWORD value since you will need it to log in to the Elasticsearch cluster
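The go-template above works because Kubernetes stores secret values base64-encoded. A self-contained illustration of that encode/decode round trip (using a made-up password, not a real secret):

```shell
# Illustration only: encode a hypothetical password the way Kubernetes
# stores secret data, then decode it the way the go-template does.
ENCODED=$(printf 'examplePassw0rd' | base64)
PASSWORD=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$PASSWORD"   # prints: examplePassw0rd
```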

7. Request the Elasticsearch endpoint from inside the Kubernetes cluster.

curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

8. From your local workstation, run the port-forward command in a separate terminal

kubectl port-forward service/quickstart-es-http 9200
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"
{
  "name" : "quickstart-es-default-1",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "tkOvQq_5SgCEkTyzxmJqFQ",
  "version" : {
    "number" : "7.14.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "dd5a0a2acaa2045ff9624f3729fc8a6f40835aa1",
    "build_date" : "2021-07-29T20:49:32.864135063Z",
    "build_snapshot" : false,
    "lucene_version" : "8.9.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
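If you prefer not to skip TLS verification with -k, ECK publishes the cluster's CA certificate in a secret following its <cluster-name>-es-http-certs-public naming convention; you can extract it and pass it to curl instead. This requires a live cluster, so it is shown as a command sketch:

```
kubectl get secret quickstart-es-http-certs-public \
  -o go-template='{{index .data "tls.crt" | base64decode}}' > tls.crt
curl --cacert tls.crt -u "elastic:$PASSWORD" "https://localhost:9200"
```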

Install Kibana Dashboard

  1. Create a Kibana instance definition yaml file

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.14.0
  count: 1
  elasticsearchRef:
    name: quickstart

Note: You can change spec.count from 1 to more than 1. This parameter defines the number of pods deployed for the Kibana dashboard.

2. Apply the Kibana dashboard definition yaml file

kubectl apply -f kibana.yaml

3. Once the pods are deployed, it may take a few minutes for all the resources to be created and the Kibana dashboard to become ready to use

4. Monitor the Kibana health and creation progress

kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   green    1       7.14.0    1m

5. You can see the Kibana quickstart pod deployed and in Running state

kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
NAME                             READY   STATUS    RESTARTS   AGE
quickstart-kb-7966b84d57-rzcf2   1/1     Running   0          1m

6. Access the Kibana dashboard.

A ClusterIP service is automatically created for Kibana

kubectl get service quickstart-kb-http

From your local workstation, run the port-forward command in a separate terminal

kubectl port-forward service/quickstart-kb-http 5601

7. Access the Kibana dashboard from your workstation browser by navigating to the URL below

https://localhost:5601

8. Log in with the username elastic and the password generated during the Elasticsearch cluster deployment

Kibana Dashboard Login Page