Isolation Modes

Teabar provides three levels of environment isolation to match different security requirements, use cases, and budgets. You configure isolation mode in your blueprint, and Teabar handles all the complexity.

Overview

Mode               Isolation Level  Best For                            Cost    Provisioning Time
Namespace          Logical          Basic training, multi-tenant demos  Low     Seconds
Virtual Cluster    Strong           Admin training, compliance needs    Medium  1-2 minutes
Dedicated Cluster  Complete         Full isolation, production-like     High    5-15 minutes
┌─────────────────────────────────────────────────────────────────────────────┐
│                        ISOLATION SPECTRUM                                    │
│                                                                             │
│   Shared ◄─────────────────────────────────────────────────────► Isolated   │
│                                                                             │
│   ┌─────────────┐      ┌─────────────────┐      ┌──────────────────┐       │
│   │  NAMESPACE  │      │ VIRTUAL CLUSTER │      │ DEDICATED CLUSTER│       │
│   │             │      │                 │      │                  │       │
│   │ Fast, cheap │      │ Strong isolation│      │ Full isolation   │       │
│   │ Shared APIs │      │ Own API server  │      │ Own infrastructure│      │
│   └─────────────┘      └─────────────────┘      └──────────────────┘       │
│                                                                             │
│   Cost: $           Cost: $$              Cost: $$$                        │
└─────────────────────────────────────────────────────────────────────────────┘

Namespace Isolation

In namespace mode, all participants share the same Kubernetes cluster but are isolated into separate namespaces with RBAC and network policies.

How It Works

┌─────────────────────────────────────────────────────────────────────────┐
│                      Shared Kubernetes Cluster                           │
│                                                                         │
│  ┌───────────────┐  ┌───────────────┐  ┌───────────────┐              │
│  │  ns-alice     │  │  ns-bob       │  │  ns-charlie   │              │
│  │  ───────────  │  │  ───────────  │  │  ───────────  │              │
│  │  • pods       │  │  • pods       │  │  • pods       │              │
│  │  • services   │  │  • services   │  │  • services   │              │
│  │  • configmaps │  │  • configmaps │  │  • configmaps │              │
│  │  • secrets    │  │  • secrets    │  │  • secrets    │              │
│  └───────────────┘  └───────────────┘  └───────────────┘              │
│                                                                         │
│  Shared Resources:                                                      │
│  • Kubernetes API server                                                │
│  • etcd                                                                 │
│  • Cluster-scoped resources (nodes, PVs, CRDs)                         │
│  • Ingress controller                                                   │
└─────────────────────────────────────────────────────────────────────────┘

Configuration

apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: k8s-basics
spec:
  environment:
    isolation: namespace  # Default isolation mode
    
  # Optional: customize namespace settings
  namespaceConfig:
    # Resource quotas per participant
    resourceQuota:
      cpu: "4"
      memory: "8Gi"
      pods: "20"
      services: "10"
      persistentvolumeclaims: "5"
    
    # Limit ranges for containers
    limitRange:
      defaultCpu: "500m"
      defaultMemory: "512Mi"
      maxCpu: "2"
      maxMemory: "4Gi"
    
    # Network isolation
    networkPolicy:
      isolate: true           # Block cross-namespace traffic
      allowEgress: true       # Allow outbound internet
      allowIngress: false     # Block inbound (except ingress controller)

What Participants Can Do

Action                              Allowed  Notes
Create pods, deployments, services  Yes      Within their namespace
Create namespaces                   No       Cannot escape their namespace
View other namespaces               No       RBAC prevents access
Access cluster-scoped resources     No       Nodes, PVs, CRDs hidden
Use persistent volumes              Yes      Via PVCs with storage class
Create ingress                      Yes      With namespace-scoped ingress

Security Measures

Teabar applies these security controls automatically:

  1. RBAC - Participants get edit role only in their namespace
  2. Network Policies - Cross-namespace traffic blocked by default
  3. Resource Quotas - Prevent resource hogging
  4. Limit Ranges - Enforce container resource limits
  5. Pod Security Standards - Restricted or baseline profiles
  6. Admission Webhooks - Block privileged operations
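These controls are applied as ordinary Kubernetes objects in each participant's namespace. As a rough sketch, the namespaceConfig shown above might translate into manifests like the following for a participant named alice (object names are illustrative, not Teabar's exact output):

```yaml
# Illustrative only: the actual names Teabar generates may differ.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: participant-quota
  namespace: ns-alice
spec:
  hard:
    cpu: "4"
    memory: 8Gi
    pods: "20"
    services: "10"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: ns-alice
spec:
  podSelector: {}           # Applies to every pod in the namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector: {}   # Only traffic from within the same namespace
  egress:
    - {}                    # Allow all outbound traffic (allowEgress: true)
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: participant-edit
  namespace: ns-alice
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                # Built-in edit role, scoped to ns-alice by the RoleBinding
  apiGroup: rbac.authorization.k8s.io
```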

When to Use Namespace Mode

Good For

  • Basic Kubernetes training
  • Application deployment exercises
  • Cost-sensitive workshops
  • Large participant counts (50+)
  • Quick provisioning needs

Not Ideal For

  • Cluster administration training
  • Security/compliance courses
  • CRD/operator development
  • Node-level operations
  • Multi-cluster scenarios

Limitations

  • Participants cannot create cluster-scoped resources
  • Participants cannot install CRDs or operators
  • Participants cannot access nodes or modify cluster settings
  • Shared cluster means shared failure domain
  • “Noisy neighbor” potential (mitigated by quotas)

Virtual Cluster (vCluster) Isolation

Virtual clusters provide each participant with their own Kubernetes API server while sharing the underlying infrastructure. This gives the appearance and behavior of a dedicated cluster without the cost.

How It Works

┌─────────────────────────────────────────────────────────────────────────┐
│                        Host Kubernetes Cluster                           │
│                                                                         │
│  ┌───────────────────────────────────────────────────────────────────┐ │
│  │                    vCluster: alice                                 │ │
│  │  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐               │ │
│  │  │ API Server  │  │   etcd      │  │ Controller  │               │ │
│  │  │ (alice's)   │  │ (alice's)   │  │  Manager    │               │ │
│  │  └─────────────┘  └─────────────┘  └─────────────┘               │ │
│  │                                                                   │ │
│  │  Alice sees: nodes, namespaces, CRDs - her own "cluster"         │ │
│  └───────────────────────────────────────────────────────────────────┘ │
│                                                                         │
│  ┌───────────────────────────────────────────────────────────────────┐ │
│  │                    vCluster: bob                                   │ │
│  │  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐               │ │
│  │  │ API Server  │  │   etcd      │  │ Controller  │               │ │
│  │  │ (bob's)     │  │ (bob's)     │  │  Manager    │               │ │
│  │  └─────────────┘  └─────────────┘  └─────────────┘               │ │
│  │                                                                   │ │
│  │  Bob sees: nodes, namespaces, CRDs - his own "cluster"           │ │
│  └───────────────────────────────────────────────────────────────────┘ │
│                                                                         │
│  Shared: Physical nodes, CNI, storage classes                          │
└─────────────────────────────────────────────────────────────────────────┘

Configuration

apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: k8s-admin-training
spec:
  environment:
    isolation: vcluster
    
  # Optional: customize vCluster settings
  vclusterConfig:
    # Kubernetes version for virtual clusters
    k8sVersion: "1.28"
    
    # Resource allocation for vCluster control plane
    resources:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "4Gi"
    
    # Sync specific resources from host
    sync:
      nodes:
        enabled: true
        syncAllNodes: false      # Only sync labeled nodes
        nodeSelector:
          teabar.dev/pool: training
      
      persistentVolumes: true
      storageClasses: true
      ingressClasses: true
    
    # Expose host services in vCluster
    mapServices:
      - from: kube-system/metrics-server
        to: kube-system/metrics-server

What Participants Can Do

Action                Allowed  Notes
Create namespaces     Yes      Full namespace management
Install CRDs          Yes      Within their vCluster
Deploy operators      Yes      Full cluster-admin capabilities
View “nodes”          Yes      Synced from host (virtual view)
Create cluster roles  Yes      Within their vCluster
Use kubectl as admin  Yes      Full cluster-admin access
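Because each participant is cluster-admin inside their own vCluster, cluster-scoped operations that namespace mode blocks become possible. For instance, a participant could install a minimal CRD like this one (an illustrative example, not part of any Teabar workflow):

```yaml
# A minimal CRD a participant could apply inside their own vCluster.
# It is visible only within that vCluster, not on the host cluster.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.training.example.com
spec:
  group: training.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```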

Security Measures

  1. Isolation - Each vCluster has its own API server and etcd
  2. Resource sync - Only explicitly synced resources are visible
  3. Pod scheduling - Pods run on host but in isolated context
  4. Network - Optional network policies between vClusters
  5. Resource limits - vCluster control plane is resource-bounded

When to Use vCluster Mode

Good For

  • Kubernetes administration training
  • CRD and operator development
  • Multi-namespace exercises
  • RBAC training
  • Helm chart testing
  • GitOps workflows

Not Ideal For

  • Node-level operations (kubelet, containerd)
  • CNI plugin development
  • True multi-cluster scenarios
  • Storage driver testing
  • Kernel-level security training

Limitations

  • Nodes are virtual representations (synced from host)
  • Cannot modify actual node configurations
  • CNI and storage are inherited from host cluster
  • Some edge cases with certain operators
  • Slightly higher latency than namespace mode

Dedicated Cluster Isolation

Dedicated cluster mode provisions a completely separate Kubernetes cluster for each participant. This is the highest level of isolation and most closely mirrors production environments.

How It Works

┌─────────────────────────────────────────────────────────────────────────┐
│                        Cloud Provider (Hetzner/AWS/Azure)                │
│                                                                         │
│  ┌─────────────────────────────┐  ┌─────────────────────────────┐     │
│  │       Cluster: alice        │  │       Cluster: bob          │     │
│  │  ┌───────────────────────┐  │  │  ┌───────────────────────┐  │     │
│  │  │ Control Plane         │  │  │  │ Control Plane         │  │     │
│  │  │ • API Server          │  │  │  │ • API Server          │  │     │
│  │  │ • etcd                │  │  │  │ • etcd                │  │     │
│  │  │ • Controller Manager  │  │  │  │ • Controller Manager  │  │     │
│  │  │ • Scheduler           │  │  │  │ • Scheduler           │  │     │
│  │  └───────────────────────┘  │  │  └───────────────────────┘  │     │
│  │                             │  │                             │     │
│  │  ┌─────────┐ ┌─────────┐   │  │  ┌─────────┐ ┌─────────┐   │     │
│  │  │ Worker  │ │ Worker  │   │  │  │ Worker  │ │ Worker  │   │     │
│  │  │  Node 1 │ │  Node 2 │   │  │  │  Node 1 │ │  Node 2 │   │     │
│  │  └─────────┘ └─────────┘   │  │  └─────────┘ └─────────┘   │     │
│  │                             │  │                             │     │
│  │  Own: VPC, subnets, nodes   │  │  Own: VPC, subnets, nodes   │     │
│  └─────────────────────────────┘  └─────────────────────────────┘     │
│                                                                         │
│  Complete isolation - no shared resources                               │
└─────────────────────────────────────────────────────────────────────────┘

Configuration

apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: security-workshop
spec:
  environment:
    isolation: dedicated-cluster
    
  resources:
    clusters:
      - name: main
        provider: hetzner
        type: talos              # or kubeadm, eks, aks
        
        # Cluster specification
        version: "1.28"
        
        nodes:
          controlPlane: 1       # Or 3 for HA
          workers: 2
        
        # Node sizing
        controlPlaneSize: cx21  # Hetzner instance type
        workerSize: cx31
        
        # Networking
        networking:
          podCidr: "10.244.0.0/16"
          serviceCidr: "10.96.0.0/12"
          cni: calico           # or cilium, flannel
        
        # Addons to install
        addons:
          metricsServer: true
          ingressNginx: true
          certManager: true

What Participants Can Do

Action                     Allowed  Notes
Full cluster-admin access  Yes      Complete control
SSH to nodes               Yes      If enabled in blueprint
Modify kubelet settings    Yes      Full node access
Install any CNI            Yes      Cluster is theirs
Break things completely    Yes      That’s the point!
Multi-cluster exercises    Yes      Each gets their own cluster

Cluster Types

Teabar supports multiple Kubernetes distributions:

Type      Description                  Best For
Talos     Immutable, API-driven Linux  Security, modern infrastructure
Kubeadm   Standard installation        General training, flexibility
EKS       AWS managed Kubernetes       AWS-focused training
AKS       Azure managed Kubernetes     Azure-focused training
Kind/K3d  Local Docker-based           Development, CI/CD

When to Use Dedicated Cluster Mode

Good For

  • Security and compliance training
  • Cluster administration deep-dives
  • CNI and networking training
  • Node troubleshooting exercises
  • Production simulation
  • Multi-cluster/federation
  • Disaster recovery training

Trade-offs

  • Higher cost per participant
  • Longer provisioning time (5-15 min)
  • More cloud resources consumed
  • Requires cleanup discipline

Limitations

  • Cost - Full cluster per participant adds up quickly
  • Time - Provisioning takes 5-15 minutes
  • Resources - Significant cloud resources required
  • Cleanup - Must be destroyed to stop billing

Choosing the Right Mode

Decision Matrix

Requirement               Namespace  vCluster  Dedicated
Basic K8s training        Best       Good      Overkill
Cluster admin training    No         Best      Good
CRD/operator development  No         Best      Good
Node-level access         No         No        Yes
CNI customization         No         No        Yes
Multi-cluster scenarios   No         Limited   Yes
Security isolation        Basic      Strong    Complete
Fast provisioning         Best       Good      Slow
Cost efficiency           Best       Good      Expensive
Large groups (50+)        Best       Good      Expensive

Recommendations by Use Case

Kubernetes Basics (Application Developer)

environment:
  isolation: namespace

Covers:
  • Deploy pods, services, deployments
  • Learn kubectl basics
  • Understand K8s concepts

Kubernetes Administration

environment:
  isolation: vcluster

Covers:
  • RBAC configuration
  • Namespace management
  • CRD installation
  • Helm/operator deployment

Security & Compliance Training

environment:
  isolation: dedicated-cluster

Covers:
  • Pod security policies
  • Network policies deep-dive
  • Node hardening
  • Audit logging

CI/CD Workshop

environment:
  isolation: vcluster  # or namespace

Covers:
  • GitLab + runners
  • ArgoCD deployment
  • Pipeline exercises
Mixing Isolation Modes

A single blueprint can combine different isolation modes for different components:

apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: advanced-workshop
spec:
  # Main resources use vCluster isolation
  environment:
    isolation: vcluster
  
  resources:
    # Shared GitLab instance (deployed to host cluster)
    helm:
      - name: gitlab
        scope: shared           # Shared across all participants
        namespace: gitlab
        chart: gitlab/gitlab
    
    # Participant vClusters
    vcluster:
      - name: "participant-{{ .Index }}"
        count: "{{ .Variables.participant_count }}"
        # Each participant gets their own vCluster
    
    # Optional: dedicated VMs for certain exercises
    vms:
      - name: "bastion-{{ .Index }}"
        count: "{{ .Variables.participant_count }}"
        provider: hetzner
        image: ubuntu-22.04

This creates:

  • One shared GitLab instance
  • Individual vClusters per participant
  • Individual VMs per participant
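As a sketch of how the templating expands, with participant_count set to 2 and assuming {{ .Index }} produces 1-based values (an assumption for illustration, not Teabar's documented behavior), the vcluster and vms entries would render roughly as:

```yaml
# Illustrative expansion for participant_count: 2
vcluster:
  - name: participant-1
  - name: participant-2
vms:
  - name: bastion-1
    provider: hetzner
    image: ubuntu-22.04
  - name: bastion-2
    provider: hetzner
    image: ubuntu-22.04
```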

Best Practices

1. Start with Namespace Mode

Unless you have a specific reason for stronger isolation, namespace mode is:

  • Fastest to provision
  • Most cost-effective
  • Sufficient for most training scenarios

2. Use vCluster for Admin Training

When participants need to:

  • Create namespaces
  • Install CRDs
  • Configure RBAC
  • Deploy operators

3. Reserve Dedicated Clusters for Special Cases

  • Security deep-dives
  • Node troubleshooting
  • Production simulation
  • Compliance requirements

4. Set Appropriate TTLs

Always configure TTL based on isolation mode:

environment:
  isolation: namespace
  ttl: 24h              # Cheap, can be longer

environment:
  isolation: dedicated-cluster
  ttl: 8h               # Expensive, keep short

5. Use Sleep Mode for Cost Control

For dedicated clusters, enable sleep mode:

environment:
  isolation: dedicated-cluster
  sleepMode:
    enabled: true
    idleTimeout: 30m    # Sleep after 30 min of inactivity
    mode: scale-to-zero # Or hibernate for longer breaks

Next Steps
