Monitoring T4K Logs using ELK Stack
This section provides step-by-step instructions to install and configure Elasticsearch, Logstash, and Kibana to monitor T4K logs alongside other native Kubernetes resources.
Introduction
In Kubernetes, it is important for users to be able to visualize the logs from the different applications running in the environment. The ELK stack can be integrated with Trilio for Kubernetes to read, parse, and visualize those logs on a dashboard.
What is ELK Stack
The ELK stack consists of Elasticsearch, Logstash, and Kibana.
Elasticsearch indexes and stores the logs received from Logstash
Logstash receives data from Beats (agents that read data from log files and forward it to Logstash) and parses it
Kibana is a dashboard used to visualize the logs stored in Elasticsearch and to run queries against this data to see the desired results
Install and configure ELK on Kubernetes
Install CRDs and Operators
1. Install the Custom Resource Definitions
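Assuming the ELK components are deployed via the Elastic Cloud on Kubernetes (ECK) operator, the CRDs can be installed directly from Elastic's download site. The ECK version shown (2.9.0) is an example; substitute the release you intend to use:

```shell
# Install the ECK custom resource definitions (Elasticsearch, Kibana, etc.)
kubectl create -f https://download.elastic.co/downloads/eck/2.9.0/crds.yaml
```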
2. Verify that the CRDs are deployed correctly on the Kubernetes cluster
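A quick way to check, assuming the ECK CRDs from the previous step, is to filter the CRD list for the Elastic API groups:

```shell
# List only the Elastic-related CRDs
kubectl get crds | grep elastic.co
```

You should see entries such as `elasticsearches.elasticsearch.k8s.elastic.co` and `kibanas.kibana.k8s.elastic.co`.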
3. Install the Operator using the yaml definition
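Continuing the ECK example (version 2.9.0 assumed), the operator and its RBAC resources ship as a single manifest:

```shell
# Deploy the ECK operator into the elastic-system namespace
kubectl apply -f https://download.elastic.co/downloads/eck/2.9.0/operator.yaml
```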
4. Verify that the `elastic-system` namespace is created and the `elastic-operator` statefulset is deployed. The `elastic-operator-0` pod must be in the `Running` state
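For example, assuming the ECK operator manifest created the `elastic-system` namespace:

```shell
# Check the operator statefulset and its pod
kubectl get statefulset,pods -n elastic-system
```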
Deploy an Elasticsearch Cluster
1. Create an Elasticsearch cluster definition yaml file
Note: You can set `spec.nodeSets[0].count` to 1 or more. This parameter defines the number of pods deployed in the Elasticsearch cluster.
If your Kubernetes cluster does not have at least 2GB of free memory, the pod will be stuck in the `Pending` state.
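A minimal cluster definition in the ECK quickstart style (the `quickstart` name and version 8.9.0 are assumptions; `count: 2` deploys two Elasticsearch pods):

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.9.0
  nodeSets:
  - name: default
    count: 2
    config:
      # Disable mmap so the nodes start without host-level vm.max_map_count tuning
      node.store.allow_mmap: false
```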
2. Apply the Elasticsearch definition yaml file
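Assuming the cluster definition from step 1 was saved as `elasticsearch.yaml` (the filename is arbitrary):

```shell
kubectl apply -f elasticsearch.yaml
```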
3. Once the pods are deployed, it may take a few minutes until all the resources are created and the Elasticsearch cluster is ready to use
4. Monitor the cluster health and creation progress
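For an ECK-managed cluster this is a single command; HEALTH should eventually report green and PHASE Ready:

```shell
kubectl get elasticsearch
```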
5. You can see the two pods deployed and in the `Running` state
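Assuming the cluster is named `quickstart`, the pods can be listed by their ECK label:

```shell
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
```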
6. Get the credentials.
A default user named `elastic` is automatically created, with its password stored in a Kubernetes secret
Note: Save the $PASSWORD value, since you will need it to log in to the Elasticsearch cluster
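With ECK, the password for the `elastic` user is stored in a secret named `<cluster-name>-es-elastic-user`; assuming the `quickstart` cluster name:

```shell
# Extract and decode the password for the elastic user
PASSWORD=$(kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')
echo "$PASSWORD"
```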
7. Request the Elasticsearch endpoint from inside the Kubernetes cluster.
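ECK creates a `ClusterIP` service named `<cluster-name>-es-http` for the cluster's HTTPS endpoint; with the assumed `quickstart` name:

```shell
kubectl get service quickstart-es-http
# From a pod inside the cluster; -k skips verification of the self-signed certificate
curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"
```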
8. From your local workstation, run the `port-forward` command from a different terminal
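For example, forwarding the assumed `quickstart-es-http` service to localhost:

```shell
kubectl port-forward service/quickstart-es-http 9200
```

Then, from the original terminal, the cluster answers on `https://localhost:9200` (use `-k` with curl, since the certificate is self-signed).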
Install Kibana Dashboard
1. Create a Kibana dashboard definition yaml file
Note: You can change `spec.count` from 1 to more than 1. This parameter defines the number of pods deployed for the Kibana dashboard.
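A minimal Kibana definition in the same ECK style (the `quickstart` name and version 8.9.0 are assumptions; `elasticsearchRef` must match the Elasticsearch cluster's name):

```yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 8.9.0
  count: 1
  elasticsearchRef:
    # Connect this Kibana instance to the Elasticsearch cluster deployed earlier
    name: quickstart
```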
2. Apply the Kibana dashboard definition yaml file
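Assuming the definition from step 1 was saved as `kibana.yaml` (the filename is arbitrary):

```shell
kubectl apply -f kibana.yaml
```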
3. Once the pods are deployed, it may take a few minutes until all the resources are created and the Kibana dashboard is ready to use
4. Monitor the Kibana health and creation progress
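As with Elasticsearch, health is reported on the custom resource itself:

```shell
kubectl get kibana
```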
5. You can see the Kibana quickstart pod deployed and in the `Running` state
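Assuming the Kibana instance is named `quickstart`, list its pod by the ECK label:

```shell
kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
```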
6. Access the Kibana dashboard.
A `ClusterIP` service is automatically created for Kibana
From your local workstation, run the `port-forward` command from a different terminal
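ECK names the Kibana service `<kibana-name>-kb-http`; assuming the `quickstart` name, forward its default port:

```shell
kubectl port-forward service/quickstart-kb-http 5601
```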
7. Access the Kibana Dashboard from your workstation browser by navigating to the URL below
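With the port-forward active, Kibana listens over HTTPS on its default port 5601:

```
https://localhost:5601
```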
8. Log in with the username `elastic` and the password generated during the Elasticsearch cluster deployment