# T4K Resource Usage

This page provides guidelines for estimating T4K resource needs under various circumstances.

Deprecated Documentation

This document is deprecated and no longer supported. For accurate, up-to-date information, please refer to the documentation for the latest version of Trilio.

To provide hardware recommendations to users, a series of tests was performed on T4K to measure the memory usage of the control plane, analyzer, and web-backend components. The tests varied primarily along two dimensions:

  • Number of Kubernetes resources

  • Number of backup plans

## Resource-Based Consumption

To analyze the impact of resources on T4K, a T4K instance was set up with 1000 active namespace-level backups, 50 backup plans (not running on a schedule and not creating backups), and 10k resources, which initially contained 1 deployment, 10k services, 1500 config maps, and 750 secrets. In each test iteration, the number of backups was kept at 1000 and 10k resources were added, until the cluster had 50k resources and 1000 backups.

The table and graphs below provide memory insights into the control plane, analyzer, and web-backend:

*(Metrics table and chart)*

Each addition of 10k resources increases memory usage by up to 150 MB, 52 MB, and 555 MB in the control plane, analyzer, and web-backend respectively.
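The per-component growth can be read as a rough linear trend. The sketch below (a hypothetical helper, not part of T4K; the slopes are the per-10k deltas quoted above, and baselines are not included) estimates the additional memory consumed when resources are added:

```python
# Approximate memory growth per 10k added resources, from the test results above.
# These are measured deltas, not absolute usage or limits.
MB_PER_10K_RESOURCES = {
    "control-plane": 150,
    "analyzer": 52,
    "web-backend": 555,
}

def extra_memory_mb(component: str, added_resources: int) -> float:
    """Estimate the additional memory (MB) a component consumes after
    adding resources, assuming roughly linear growth (which matches the
    test iterations of 10k resources at a time, up to 50k)."""
    return MB_PER_10K_RESOURCES[component] * added_resources / 10_000

# Example: growing the cluster from 10k to 50k resources (40k added)
print(extra_memory_mb("web-backend", 40_000))  # 2220.0
```

This is only an extrapolation of the measured trend; actual usage depends on resource types and cluster activity.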

## Resource Consumption During Backups

To analyze the impact of backups on T4K, a T4K instance was initially set up with 10k resources (1 deployment, 10k services, 1500 config maps, and 750 secrets), ~50 active backups, and 50 backup plans creating backups at the same time. In each test iteration, the number of resources was kept at 10k and 1000 backups were added, until the cluster had 6k backups and 10k resources.

The table and graphs below provide memory insights into the control plane, analyzer, and web-backend:

*(Metrics table and chart)*

Note: No spike was seen after the restart in the graph above because all schedule-based backups were paused at that time.

Each addition of 1k backups increases memory usage by up to 239 MB, 200 MB, and 330 MB in the control plane, analyzer, and web-backend respectively.

## Conclusion

The control plane and analyzer consume more memory with increasing backups than with increasing resources. Since the number of resources has only a small effect on these components, the number of backups should be weighted more heavily when setting their memory limits. The control-plane spikes appear to be related to scheduled backup plans creating 50 backups every hour; the number of backups being created at the same time also contributes to the spikes seen in the control plane. The same spikes were largely absent in the resource-based tests because those backup plans were not creating backups every hour.

The web-backend, on the other hand, consumes memory with both increasing backups and increasing resources, as it needs to cache every resource in the cluster.

Considering the metrics from the tests above, the following recommendations can be used when configuring memory limits for the control plane, analyzer, and web-backend:
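As a sketch of how such limits might be derived, the hypothetical helper below combines the resource-based and backup-based slopes measured in the tests above. The baseline and headroom values are illustrative assumptions, not figures from the original tests:

```python
# Measured memory growth from the tests above:
# MB per 10k resources and MB per 1k backups, per component.
RESOURCE_SLOPE_MB = {"control-plane": 150, "analyzer": 52, "web-backend": 555}
BACKUP_SLOPE_MB = {"control-plane": 239, "analyzer": 200, "web-backend": 330}

def suggested_limit_mb(component: str, resources: int, backups: int,
                       baseline_mb: float = 256, headroom: float = 1.2) -> int:
    """Suggest a memory limit (MB) for a T4K component.

    baseline_mb (idle footprint) and headroom (safety multiplier) are
    illustrative assumptions, not values from the original tests; tune
    them for your cluster before applying any limit.
    """
    estimate = (baseline_mb
                + RESOURCE_SLOPE_MB[component] * resources / 10_000
                + BACKUP_SLOPE_MB[component] * backups / 1_000)
    return round(estimate * headroom)

# Example: a cluster with 20k resources and 2k backups
print(suggested_limit_mb("control-plane", 20_000, 2_000))  # 1241
```

The linear model simply restates the conclusion above: backups dominate the control-plane and analyzer terms, while the web-backend grows with both inputs.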
