KubeConfig Authentication

This page describes authenticating to the Trilio Management Console via a KubeConfig file.


T4K UI supports authentication through kubeconfig files using tokens, certificates, auth-provider entries, and similar mechanisms. As a result, any user of the Kubernetes cluster can log in to the UI, view information, and perform operations according to the permissions granted by their RBAC roles.

'Exec' or 'Auth Provider' flags in Kubeconfig

Some Kubernetes clusters may contain a cloud-specific exec action or use an auth-provider configuration to fetch the authentication token within the kubeconfig file. Since the binaries for the specific cloud service may not be available on the system where the user is providing the config file, T4K may not be able to fetch the token and populate it in the kubeconfig.

To support authentication for these cloud providers, follow the steps below to create a custom kubeconfig file consisting of a service account token and cluster data.

Create a Service Account

To create a service account on Kubernetes, leverage kubectl and a service account spec. Create a YAML file named sa.yaml that looks like the one below:

   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: svcs-acct-dply #any name you'd like

Create the service account:

kubectl create -f sa.yaml
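
One way to confirm the account exists before proceeding (the default namespace matches the one used throughout these steps):

kubectl get serviceaccount svcs-acct-dply -n default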

Next, create a service account token. Create a YAML file named sa-secret.yml that looks like the one below:

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: svcs-acct-sa-token
  annotations:
    kubernetes.io/service-account.name: svcs-acct-dply

Create the secret to store the token for the service account:

kubectl create -f sa-secret.yml
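
On Kubernetes 1.24 and later, token secrets are no longer generated automatically for service accounts, which is why the Secret above declares the kubernetes.io/service-account-token type explicitly. If a short-lived token suits your use case, the TokenRequest API offers an alternative; the duration below is illustrative:

# Request a time-bound token directly, without creating a Secret
kubectl create token svcs-acct-dply --duration=24h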

Fetch the token from the secret

kubectl describe secrets svcs-acct-sa-token
   Name:           svcs-acct-sa-token
   Namespace:      default
   Labels:         <none>
   Annotations:    kubernetes.io/service-account.name=svcs-acct-dply
            kubernetes.io/service-account.uid=c2117d8e-3c2d-11e8-9ccd-42010a8a012f

   Type:   kubernetes.io/service-account-token

   Data
   ====
   ca.crt:     1115 bytes
   namespace:  7 bytes
   token:      eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby
   9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InNoaXBwYW
   JsZS1kZXBsb3ktdG9rZW4tN3Nwc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoic2hpcHBhYmxlLW
   RlcGxveSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImMyMTE3ZDhlLTNjMmQtMTFlOC05Y2NkLTQyMD
   EwYThhMDEyZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnNoaXBwYWJsZS1kZXBsb3kifQ.ZWKrKdpK7aukTRKnB5SJwwov6Pj
   aADT-FqSO9ZgJEg6uUVXuPa03jmqyRB20HmsTvuDabVoK7Ky7Uug7V8J9yK4oOOK5d0aRRdgHXzxZd2yO8C4ggqsr1KQsfdlU4xRWglaZGI4S31ohCAp
   J0MUHaVnP5WkbC4FiTZAQ5fO_LcCokapzCLQyIuD5Ksdnj5Ad2ymiLQQ71TUNccN7BMX5aM4RHmztpEHOVbElCWXwyhWr3NR1Z1ar9s5ec6iHBqfkp_s
   8TvxPBLyUdy9OjCWy3iLQ4Lt4qpxsjwE4NE7KioDPX2Snb6NWFK7lvldjYX4tdkpWdQHBNmqaD8CuVCRdEQ
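
To pull the token directly instead of copying it out of the describe output, a jsonpath query plus base64 decoding yields the same value:

# Extract and decode the token field from the secret
kubectl get secret svcs-acct-sa-token -o jsonpath='{.data.token}' | base64 --decode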

Get the certificate info for the cluster

Every cluster has a certificate that clients can use to encrypt traffic. Fetch the certificate and write it to a file by running the following command:

   kubectl config view --flatten --minify > cluster-cert.txt
   cat cluster-cert.txt
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lRZmo4VVMxNXpuaGRVbG
    15a3AvSVFqekFOQmdrcWhraUc5dzBCQVFzRkFEQXYKTVMwd0t3WURWUVFERXlSaVl6RTBOelV5WXkwMk9UTTFMVFExWldFdE9HTmlPUzFrWmpSak5tU
    XlZemd4TVRndwpIaGNOTVRnd05EQTVNVGd6TVRReVdoY05Nak13TkRBNE1Ua3pNVFF5V2pBdk1TMHdLd1lEVlFRREV5UmlZekUwCk56VXlZeTAyT1RN
    MUxUUTFaV0V0T0dOaU9TMWtaalJqTm1ReVl6Z3hNVGd3Z2dFaU1BMEdDU3FHU0liM0RRRUIKQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUURIVHFPV0ZXL09
    odDFTbDBjeUZXOGl5WUZPZHFON1lrRVFHa3E3enkzMApPUEQydUZyNjRpRXRPOTdVR0Z0SVFyMkpxcGQ2UWdtQVNPMHlNUklkb3c4eUowTE5YcmljT2
    tvOUtMVy96UTdUClI0ZWp1VDl1cUNwUGR4b0Z1TnRtWGVuQ3g5dFdHNXdBV0JvU05reForTC9RN2ZpSUtWU01SSnhsQVJsWll4TFQKZ1hMamlHMnp3W
    GVFem5lL0tsdEl4NU5neGs3U1NUQkRvRzhYR1NVRzhpUWZDNGYzTk4zUEt3Wk92SEtRc0MyZAo0ajVyc3IwazNuT1lwWDFwWnBYUmp0cTBRZTF0RzNM
    VE9nVVlmZjJHQ1BNZ1htVndtejJzd2xPb24wcldlRERKCmpQNGVqdjNrbDRRMXA2WXJBYnQ1RXYzeFVMK1BTT2ROSlhadTFGWWREZHZyQWdNQkFBR2p
    JekFoTUE0R0ExVWQKRHdFQi93UUVBd0lDQkRBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFCQwpHWWd0R043SH
    JpV2JLOUZtZFFGWFIxdjNLb0ZMd2o0NmxlTmtMVEphQ0ZUT3dzaVdJcXlIejUrZ2xIa0gwZ1B2ClBDMlF2RmtDMXhieThBUWtlQy9PM2xXOC9IRmpMQ
    VZQS3BtNnFoQytwK0J5R0pFSlBVTzVPbDB0UkRDNjR2K0cKUXdMcTNNYnVPMDdmYVVLbzNMUWxFcXlWUFBiMWYzRUM3QytUamFlM0FZd2VDUDNOdHJM
    dVBZV2NtU2VSK3F4TQpoaVRTalNpVXdleEY4cVV2SmM3dS9UWTFVVDNUd0hRR1dIQ0J2YktDWHZvaU9VTjBKa0dHZXJ3VmJGd2tKOHdxCkdsZW40Q2R
    jOXJVU1J1dmlhVGVCaklIYUZZdmIxejMyVWJDVjRTWUowa3dpbHE5RGJxNmNDUEI3NjlwY0o1KzkKb2cxbHVYYXZzQnYySWdNa1EwL24KLS0tLS1FTk
    QgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://35.203.181.169
      name: gke_jfrog-200320_us-west1-a_cluster
    contexts:
    - context:
        cluster: gke_jfrog-200320_us-west1-a_cluster
        user: gke_jfrog-200320_us-west1-a_cluster
      name: gke_jfrog-200320_us-west1-a_cluster
    current-context: gke_jfrog-200320_us-west1-a_cluster
    kind: Config
    preferences: {}
    users:
    - name: gke_jfrog-200320_us-west1-a_cluster
      user:
        auth-provider:
          config:
            access-token: ya29.Gl2YBba5duRR8Zb6DekAdjPtPGepx9Em3gX1LAhJuYzq1G4XpYwXTS_wF4cieZ8qztMhB35lFJC-DJR6xcB02oXX
    kiZvWk5hH4YAw1FPrfsZWG57x43xCrl6cvHAp40
            cmd-args: config config-helper --format=json
            cmd-path: /Users/ambarish/google-cloud-sdk/bin/gcloud
            expiry: 2018-04-09T20:35:02Z
            expiry-key: '{.credential.token_expiry}'
            token-key: '{.credential.access_token}'
          name: gcp

Copy two pieces of information from the output above: certificate-authority-data and server.
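
If you prefer to extract these fields programmatically, jsonpath queries against the same flattened view work; the index 0 assumes the minified config contains a single cluster, as it does here:

# Pull the CA data and API server address directly
kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
kubectl config view --flatten --minify -o jsonpath='{.clusters[0].cluster.server}'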

Create a kubeconfig file

From the steps above, you should have the following pieces of information:

  • token

  • certificate-authority-data

  • server

Create a file called sa-kconfig and paste the following content into it:

apiVersion: v1
kind: Config
users:
- name: svcs-acct-dply
  user:
    token: <replace this with token info>
clusters:
- cluster:
    certificate-authority-data: <replace this with certificate-authority-data info>
    server: <replace this with server info>
  name: self-hosted-cluster
contexts:
- context:
     cluster: self-hosted-cluster
     user: svcs-acct-dply
  name: svcs-acct-context
current-context: svcs-acct-context

Replace the placeholders above with the information gathered so far (or use the kubectl sketch after this list to generate the file):

  • replace the token

  • replace the certificate-authority-data

  • replace the server
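
As an alternative to hand-editing, kubectl can assemble the same file. The sketch below assumes the three values gathered above; ca.crt is a scratch file used only to hold the decoded certificate data:

# Decode the CA bundle into a file so kubectl can embed it
echo "<certificate-authority-data>" | base64 --decode > ca.crt

kubectl config --kubeconfig=sa-kconfig set-cluster self-hosted-cluster \
  --server=<server> --certificate-authority=ca.crt --embed-certs=true
kubectl config --kubeconfig=sa-kconfig set-credentials svcs-acct-dply --token=<token>
kubectl config --kubeconfig=sa-kconfig set-context svcs-acct-context \
  --cluster=self-hosted-cluster --user=svcs-acct-dply
kubectl config --kubeconfig=sa-kconfig use-context svcs-acct-context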

You can either export the created kubeconfig file or move/copy it to the $HOME/.kube/ location.
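
Either approach looks like the following:

# Point the current shell at the new kubeconfig...
export KUBECONFIG=$PWD/sa-kconfig
# ...or make it the default (this overwrites any existing default config)
cp sa-kconfig $HOME/.kube/config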

Create ClusterRole and ClusterRoleBinding

After you have your kubeconfig file ready and exported or moved to .kube/config, create a ClusterRoleBinding to bind the service account created in the steps above to a ClusterRole. The role should have the minimum permissions required to access T4K.

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: svcs-role
   rules:
   - apiGroups: ["triliovault.trilio.io"]
     resources: ["*"]
     verbs: ["get", "list"]
   - apiGroups: ["triliovault.trilio.io"]
     resources: ["policies"]
     verbs: ["create"]
   - apiGroups: [""]
     resources: ["secrets"]
     verbs: ["create"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: sample-clusterrolebinding
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: svcs-role
   subjects:
   - kind: ServiceAccount
     name: svcs-acct-dply
     namespace: default
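
Assuming the two manifests above are saved in a file such as rbac.yaml (the name is illustrative), apply them and verify the binding by impersonating the service account; backupplans is one of the T4K resources covered by the wildcard rule:

kubectl apply -f rbac.yaml
# Check the granted permissions from the service account's point of view
kubectl auth can-i list backupplans.triliovault.trilio.io \
  --as=system:serviceaccount:default:svcs-acct-dply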

You can now log in to the T4K UI using this kubeconfig.

GKE/EKS-supported kubeconfig files use a local credentials file to generate the authentication token used to access the Kubernetes cluster. Hence, either follow the steps above or refer to the corresponding workaround page for using the credentials file to log in to the T4K UI.

DigitalOcean supports downloading the kubeconfig file directly from the Kubernetes cluster page under "Access Cluster Config File"; this file can be used for authentication to the Trilio Management Console. If you use the doctl CLI to generate a kubeconfig file, it contains an exec section with custom commands that may not be recognized by the Trilio Management Console. Hence, either follow the steps above or refer to the corresponding workaround page for kubeconfig files generated through doctl.

[Figure: Trilio for Kubernetes Login Screen]