Kubernetes Access Without Sharing Kubeconfigs

Shared kubeconfigs create credential sprawl, break audit trails, and make offboarding a manual nightmare. Identity-backed Kubernetes access replaces static files with SSO sessions, RBAC alignment, and automatic revocation.

Kubeconfigs Are the New SSH Keys

Every platform team starts the same way: a few engineers need cluster access, so someone generates a kubeconfig, drops it in a wiki or Slack channel, and moves on. Six months later that file — containing a long-lived token or client certificate — lives on dozens of laptops, three CI pipelines, a staging server, and at least one personal GitHub repository. Nobody knows who has it. Nobody can revoke it without breaking half the team.

Kubeconfigs are the Kubernetes equivalent of shared SSH keys. They are bearer credentials: whoever holds the file can authenticate to the cluster. Unlike SSH keys, kubeconfigs often embed cluster-admin privileges because “it was easier at the time.” The result is a credential that grants god-mode access to your most sensitive workloads, distributed through the least secure channels your organization uses.

  • 74% of teams share kubeconfigs via chat or wiki
  • 63% cannot revoke a single user’s cluster access
  • 8 min median time to first cluster access with an identity-backed flow

Where Credential Sprawl Happens

The problem is not that kubeconfigs exist — it is that they propagate uncontrollably. Understanding where these credentials end up is the first step toward replacing them.

Location | Risk | Detection Difficulty
Developer laptops (~/.kube/config) | Stolen device, malware, disk not encrypted | High — no central inventory
CI/CD pipeline secrets | Overly broad permissions, leaked in logs | Medium — depends on secret management
Slack or Teams messages | Searchable by anyone in the channel | Very high — chat history is rarely audited
Internal wikis or Notion pages | Broad read access, no expiration | High — pages persist indefinitely
Git repositories (committed by mistake) | Exposed in history even after deletion | Medium — secret scanners can find known patterns
Staging or jump servers | Shared filesystem, no user attribution | High — files accumulate silently

Each copy of a kubeconfig is an unmanaged credential with no expiration, no audit trail, and no connection to the identity that created it. When an engineer leaves the company, revoking their access means rotating every credential they ever touched — a process most teams cannot complete.

The CI/CD Problem

CI/CD pipelines are the second-largest source of kubeconfig sprawl. A deployment pipeline needs to talk to the Kubernetes API, so a service account token is generated and stored as a pipeline secret. The pattern seems harmless until you count how many pipelines, environments, and clusters are involved.

A typical mid-size organization has 15–30 CI/CD pipelines deploying to 3–8 clusters across staging, production, and regional environments. Each pipeline holds a credential. Some share the same service account. Most have cluster-admin because the original author wanted “flexibility.” If any pipeline is compromised — through a malicious dependency, a leaked build log, or a misconfigured runner — the attacker inherits broad cluster access.

CI/CD Credential Anti-Patterns

Cluster-admin service accounts in pipelines: Deployments rarely need full admin rights. Scope each pipeline’s token to the namespaces and verbs it actually uses.

Shared tokens across environments: A staging pipeline token should never work in production. Use separate credentials with separate RBAC bindings per environment.

Long-lived tokens without rotation: Service account tokens should have a maximum lifetime and be rotated automatically. TokenRequest API and bound service account tokens help here.
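To make the first anti-pattern concrete, the sketch below builds a namespace-scoped RBAC Role for a hypothetical deploy pipeline instead of granting cluster-admin. The manifest structure is standard Kubernetes RBAC; the role name, app name, and exact resource/verb lists are illustrative assumptions you would tailor to what your pipeline actually does.

```python
# Sketch: a namespace-scoped Role for a deploy pipeline, instead of
# cluster-admin. Names and the resource/verb lists are illustrative.
def pipeline_role(namespace: str, app: str) -> dict:
    """Build an RBAC Role limited to what a deployment pipeline needs."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": f"{app}-deployer", "namespace": namespace},
        "rules": [
            {   # roll out and inspect workloads in this namespace only
                "apiGroups": ["apps"],
                "resources": ["deployments", "replicasets"],
                "verbs": ["get", "list", "watch", "update", "patch"],
            },
            {   # read pod status and logs to verify the rollout
                "apiGroups": [""],
                "resources": ["pods", "pods/log"],
                "verbs": ["get", "list", "watch"],
            },
        ],
    }

role = pipeline_role("staging", "payment-service")
```

Because the Role is namespaced and carries no `delete` or secret-reading verbs, a leaked pipeline token limits the blast radius to one namespace's rollouts rather than the whole cluster.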

Shared Kubeconfigs vs. Identity-Backed Access

The solution is not to add more controls around kubeconfig files — it is to remove the files entirely. Identity-backed access replaces static credentials with SSO sessions that are scoped, time-limited, and tied to a real human identity. The architectural difference is fundamental.

[Diagram: Shared Kubeconfigs vs. Identity-Backed Access. Left (shared kubeconfig distribution): a kubeconfig.yaml holding a cluster-admin token spreads to dev laptops, CI pipelines, and Slack/wiki, with no expiration, no attribution, and no revocation; the cluster API server accepts any bearer token; the audit log shows only system:serviceaccount, with no way to tell who it was; offboarding means rotating every token the user ever touched. Right (identity-backed access): SSO through the IdP (Okta, Azure AD, Google) feeds an RBAC mapper that translates IdP groups into Kubernetes roles; the API server sees a short-lived certificate; the audit log records “jane@corp.com • namespace:prod • 14:32 UTC”; offboarding is disabling the user in the IdP, which revokes all cluster access.]

Shared kubeconfigs propagate uncontrollably and defy revocation. Identity-backed access ties every session to a human, a time window, and a namespace.

RBAC Alignment with IdP Groups

The most common mistake in Kubernetes RBAC is maintaining two separate permission models: one in the IdP (Okta, Azure AD, Google Workspace) and one in Kubernetes ClusterRoleBindings. When these drift apart — and they always do — you end up with engineers who have more access than their role requires, and security teams who cannot explain why.

Identity-backed access solves this by making IdP group membership the source of truth. When an engineer authenticates via SSO, their IdP groups are mapped to Kubernetes roles automatically. The mapping is explicit and auditable:

IdP Group | Kubernetes Role | Namespaces | Capabilities
eng-backend | developer | staging, dev | get, list, logs, port-forward
eng-sre | operator | staging, production | get, list, logs, exec, scale
eng-platform | admin | all (cluster-scoped) | full RBAC, CRD management
data-analysts | readonly | analytics | get, list only

When someone moves teams, they lose the old group and gain the new one. No kubeconfig rotation. No manual ClusterRoleBinding edits. No stale access.
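The group-to-role mapping in the table above can be sketched as a small lookup that resolves a user's IdP groups into the grants they imply. The group names, role names, and dictionary shape mirror the table and are examples, not a fixed schema.

```python
# Hypothetical IdP-group -> Kubernetes-role mapping, mirroring the table
# above. Group and role names are examples, not a fixed schema.
GROUP_ROLE_MAP = {
    "eng-backend":   {"role": "developer", "namespaces": ["staging", "dev"]},
    "eng-sre":       {"role": "operator",  "namespaces": ["staging", "production"]},
    "eng-platform":  {"role": "admin",     "namespaces": ["*"]},
    "data-analysts": {"role": "readonly",  "namespaces": ["analytics"]},
}

def resolve_access(idp_groups: list[str]) -> list[dict]:
    """Translate a user's IdP groups into the RBAC grants they imply.

    Unknown groups resolve to nothing, so membership in the IdP is the
    single source of truth: drop the group, lose the access.
    """
    return [GROUP_ROLE_MAP[g] for g in idp_groups if g in GROUP_ROLE_MAP]
```

Moving teams is then a pure IdP change: removing `eng-backend` and adding `eng-sre` changes what `resolve_access` returns on the next authentication, with no Kubernetes-side edits.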

kubectl exec vs. Node SSH

Platform teams often debate whether to secure kubectl exec, node-level SSH, or both. The answer is both, but the risk profile differs. kubectl exec drops you into a container context — constrained by the pod’s security context, resource limits, and Linux capabilities. Node SSH gives you the underlying host: the kubelet, the container runtime, other pods’ volumes, and potentially the control plane components.

  • kubectl exec: Treat as a developer debugging tool. Gate behind RBAC verb restrictions per namespace. Log commands and record sessions.
  • Node SSH: Treat as an escalation path. Require JIT approval, time limits, and full session recording. Restrict to SRE and platform teams only.
  • kubectl debug: Ephemeral containers avoid modifying running pods. Prefer this over exec when investigating workload issues.
  • kubectl port-forward: Safer than exposing services externally for debugging. Log the forwarding target and duration.
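In RBAC terms, gating kubectl exec per namespace means controlling `create` on the `pods/exec` subresource. The sketch below shows a Role that permits exec only in staging; the role name is illustrative.

```python
# Sketch: an RBAC Role that gates kubectl exec per namespace.
# kubectl exec requires the "create" verb on the pods/exec subresource;
# the role name "staging-exec" is illustrative.
EXEC_ROLE = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "staging-exec", "namespace": "staging"},
    "rules": [
        {   # needed to address a pod before exec'ing into it
            "apiGroups": [""],
            "resources": ["pods"],
            "verbs": ["get", "list"],
        },
        {   # the exec gate itself
            "apiGroups": [""],
            "resources": ["pods/exec"],
            "verbs": ["create"],
        },
    ],
}
```

Binding this Role only in staging means the same engineer gets "connection refused by RBAC" when trying to exec in production, which is exactly the asymmetry the list above calls for.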

Namespace-Level Scoping

Cluster-admin access should be an exception, not a default. Most engineers need access to specific namespaces for specific tasks. Namespace-level scoping means that a backend developer can view logs in the staging namespace and port-forward to debug a service, but cannot list secrets in production or scale deployments in kube-system.

This is not just a security improvement — it is an operational one. When an engineer can only see their team’s namespaces, the output of kubectl get pods is 20 items instead of 2,000. The cognitive load drops. The blast radius of a mistake drops. And the audit evidence is far more meaningful when every action is scoped to a known context.

Namespace Scoping Best Practice

Start by scoping non-production namespaces broadly (read access for all engineers) and production namespaces narrowly (read access for SRE, exec access via JIT approval). Measure friction. Adjust. The goal is the narrowest scope that does not impede legitimate work.

JIT Elevation for Production Namespaces

Standing production access is the largest source of unnecessary risk in Kubernetes. Just-in-Time (JIT) access replaces standing privileges with on-demand elevation that requires a reason, a time limit, and optionally an approval. The engineer requests access, explains why, and gets a short-lived credential that expires automatically.

JIT works particularly well for production namespaces because most production interactions are exceptional: debugging an incident, running a migration, verifying a deploy. The workflow is:

  1. Engineer clicks “Request Access” in the portal or runs a CLI command
  2. Selects the cluster, namespace, and role (e.g., exec in production)
  3. Provides a reason (“investigating OOM in payment-service”) and a time window (30 minutes)
  4. Manager or peer approves (or auto-approval for Severity-1 incidents)
  5. Short-lived credentials are issued; RBAC bindings are created automatically
  6. Access expires; bindings are removed; session is logged
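One way to model steps 5 and 6 is a RoleBinding stamped with a reason and an expiry, which a cleanup loop removes once the window lapses. This is a sketch under assumptions: the `jit/` annotation keys and naming scheme are hypothetical, not a standard.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a JIT grant: a RoleBinding annotated with a reason and an
# expiry timestamp. The "jit/" annotation keys are hypothetical.
def jit_binding(user: str, namespace: str, role: str,
                minutes: int, reason: str) -> dict:
    """Build a short-lived RoleBinding for an approved JIT request."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {
            "name": f"jit-{user.split('@')[0]}-{role}",
            "namespace": namespace,
            "annotations": {
                "jit/reason": reason,
                "jit/expires-at": expiry.isoformat(),
            },
        },
        "subjects": [{"kind": "User", "name": user}],
        "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                    "kind": "Role", "name": role},
    }

def is_expired(binding: dict, now: datetime) -> bool:
    """A reaper loop deletes bindings whose expiry has passed."""
    expires = datetime.fromisoformat(
        binding["metadata"]["annotations"]["jit/expires-at"])
    return now >= expires
```

The annotations double as audit evidence: the binding itself records who, where, why, and until when, even before session logs are consulted.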

This produces a complete audit record: who requested access, why, who approved it, what they did during the session, and when access was revoked. Every SOC 2 auditor and ISO 27001 assessor wants exactly this evidence.

Session and Command Logging

Kubernetes audit logs capture API requests, but they do not capture what happens inside a kubectl exec session. If an engineer execs into a production pod and runs rm -rf /data/*, the audit log shows that exec was called — but not the command that caused the outage.

Session recording closes this gap. Every interactive session — exec, debug, SSH to a node — is recorded with full command input, output, and timing. The recording can be replayed during incident review, exported for auditors, and searched by keyword or time range.

  • API-level audit logs: Who called which API, when, from where. Essential but insufficient for interactive sessions.
  • Session recordings: Full visual playback of exec and SSH sessions. Shows what the human actually did.
  • Command extraction: Searchable transcript of commands typed during a session. Useful for post-incident analysis.
  • Immutable storage: Recordings stored in tamper-proof storage with retention policies aligned to compliance requirements.
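Command extraction from a recorded session can be as simple as filtering transcript lines for prompt input. The timestamped `HH:MM:SS $ command` transcript format below is an assumption for illustration; real recorders vary.

```python
# Sketch: pull typed commands out of a session transcript. The
# "HH:MM:SS $ command" line format is an assumed recording layout.
def extract_commands(transcript: list[str]) -> list[str]:
    """Return only the commands typed at the prompt, not their output."""
    commands = []
    for line in transcript:
        # prompt lines contain " $ "; everything after it is the command
        if " $ " in line:
            commands.append(line.split(" $ ", 1)[1])
    return commands
```

Against the incident in the text above, this is what turns "exec was called at 14:32" into "`rm -rf /data/*` was run at 14:32", which is the difference between an audit event and a root cause.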

Offboarding Through IdP Revocation

The ultimate test of an access management system is how quickly and completely it revokes access when someone leaves the organization. With shared kubeconfigs, offboarding means hunting down every file copy, every CI secret, every cached token — and hoping you found them all. With identity-backed access, offboarding is a single action: disable the user in your IdP.

When the IdP account is disabled, every active session is terminated, every pending JIT request is cancelled, and every future authentication attempt fails. There are no stale credentials to rotate, no files to hunt, no pipelines to update. The blast radius of a departing employee shrinks from “unknown” to “zero” in seconds.

Offboarding Task | Shared Kubeconfigs | Identity-Backed Access
Revoke cluster access | Rotate all shared tokens (hours to days) | Disable IdP account (seconds)
Revoke CI/CD access | Update secrets in every pipeline manually | Pipelines use scoped service accounts (unaffected)
Audit what they accessed | Grep logs for service account (no human attribution) | Query sessions by user identity
Verify revocation completeness | Cannot confirm — file copies are invisible | Zero active sessions, zero valid tokens

Rollout Plan: Start with Non-Critical Clusters

Migrating from shared kubeconfigs to identity-backed access does not have to be a big-bang cutover. The safest approach is a phased rollout that starts with the lowest-risk clusters and progressively tightens controls as confidence grows.

  • Week 1–2: Inventory. Enumerate all kubeconfigs, service account tokens, and RBAC bindings. Use OnePAM’s RBAC generator to map current permissions.
  • Week 3: Pilot on dev/sandbox clusters. Deploy the OnePAM agent via Helm chart. Enable SSO-based access for one team.
  • Week 4–5: Staging clusters. Expand to staging environments. Map IdP groups to RBAC roles. Enable session recording for exec sessions.
  • Week 6–7: Production read-only. Enable identity-backed read access to production namespaces. Keep existing kubeconfigs as fallback.
  • Week 8: Production full access. Enable JIT elevation for production write access. Begin removing standing kubeconfigs.
  • Week 9+: Cleanup. Rotate remaining service account tokens. Remove old kubeconfig files. Update CI pipelines to use scoped credentials.

Resources for Your Migration

Review the resource configuration guide for cluster onboarding, and the access policy documentation for namespace scoping and JIT policy setup. The RBAC generator can produce role definitions from your existing ClusterRoleBindings.

What Changes for Engineers

The most common objection to removing kubeconfigs is developer friction. Engineers worry that they will lose the ability to run kubectl from their terminal. In practice, identity-backed access preserves the same workflow: the engineer runs kubectl get pods, the access layer intercepts the request, prompts for SSO if the session is expired, and proxies the request to the cluster. After the first authentication, subsequent commands in the same session work without interruption.

For browser-based access, engineers can open a terminal in the OnePAM portal and interact with the cluster directly — no local kubeconfig needed. This is particularly useful for on-call engineers who may be working from a personal device and should not have persistent credentials.

The net effect is faster onboarding (new engineers get cluster access in minutes instead of waiting for someone to share a file), safer offboarding (access disappears when the IdP account is disabled), and better audit trails (every action maps to a human identity, a reason, and a time window).

Replace Kubeconfig Sprawl with Identity-Backed Access

OnePAM gives your platform team SSO-backed Kubernetes access, IdP-aligned RBAC, JIT elevation for production, session recording for every exec, and instant offboarding through your IdP. Deploy via Helm in under an hour.

OnePAM Team
Security & Infrastructure Team