Isolation Modes
Teabar provides three levels of environment isolation to match different security requirements, use cases, and budgets. You configure isolation mode in your blueprint, and Teabar handles all the complexity.
Overview
| Mode | Isolation Level | Best For | Cost | Provisioning Time |
|---|---|---|---|---|
| Namespace | Logical | Basic training, multi-tenant demos | Low | Seconds |
| Virtual Cluster | Strong | Admin training, compliance needs | Medium | 1-2 minutes |
| Dedicated Cluster | Complete | Full isolation, production-like | High | 5-15 minutes |
┌─────────────────────────────────────────────────────────────────────────────┐
│ ISOLATION SPECTRUM │
│ │
│ Shared ◄─────────────────────────────────────────────────────► Isolated │
│ │
│ ┌─────────────┐ ┌─────────────────┐ ┌──────────────────┐ │
│ │ NAMESPACE │ │ VIRTUAL CLUSTER │ │ DEDICATED CLUSTER│ │
│ │ │ │ │ │ │ │
│ │ Fast, cheap │ │ Strong isolation│ │ Full isolation │ │
│ │ Shared APIs │ │ Own API server │ │ Own infrastructure│ │
│ └─────────────┘ └─────────────────┘ └──────────────────┘ │
│ │
│ Cost: $ Cost: $$ Cost: $$$ │
└─────────────────────────────────────────────────────────────────────────────┘
Namespace Isolation
In namespace mode, all participants share the same Kubernetes cluster but are isolated into separate namespaces with RBAC and network policies.
How It Works
┌─────────────────────────────────────────────────────────────────────────┐
│ Shared Kubernetes Cluster │
│ │
│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ │
│ │ ns-alice │ │ ns-bob │ │ ns-charlie │ │
│ │ ─────────── │ │ ─────────── │ │ ─────────── │ │
│ │ • pods │ │ • pods │ │ • pods │ │
│ │ • services │ │ • services │ │ • services │ │
│ │ • configmaps │ │ • configmaps │ │ • configmaps │ │
│ │ • secrets │ │ • secrets │ │ • secrets │ │
│ └───────────────┘ └───────────────┘ └───────────────┘ │
│ │
│ Shared Resources: │
│ • Kubernetes API server │
│ • etcd │
│ • Cluster-scoped resources (nodes, PVs, CRDs) │
│ • Ingress controller │
└─────────────────────────────────────────────────────────────────────────┘
Configuration
```yaml
apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: k8s-basics
spec:
  environment:
    isolation: namespace # Default isolation mode
    # Optional: customize namespace settings
    namespaceConfig:
      # Resource quotas per participant
      resourceQuota:
        cpu: "4"
        memory: "8Gi"
        pods: "20"
        services: "10"
        persistentvolumeclaims: "5"
      # Limit ranges for containers
      limitRange:
        defaultCpu: "500m"
        defaultMemory: "512Mi"
        maxCpu: "2"
        maxMemory: "4Gi"
      # Network isolation
      networkPolicy:
        isolate: true       # Block cross-namespace traffic
        allowEgress: true   # Allow outbound internet
        allowIngress: false # Block inbound (except ingress controller)
```
What Participants Can Do
| Action | Allowed | Notes |
|---|---|---|
| Create pods, deployments, services | Yes | Within their namespace |
| Create namespaces | No | Cannot escape their namespace |
| View other namespaces | No | RBAC prevents access |
| Access cluster-scoped resources | No | Nodes, PVs, CRDs hidden |
| Use persistent volumes | Yes | Via PVCs with storage class |
| Create ingress | Yes | With namespace-scoped ingress |
Security Measures
Teabar applies these security controls automatically:
- RBAC - Participants get the `edit` role only in their namespace
- Network Policies - Cross-namespace traffic blocked by default
- Resource Quotas - Prevent resource hogging
- Limit Ranges - Enforce container resource limits
- Pod Security Standards - Restricted or baseline profiles
- Admission Webhooks - Block privileged operations
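The controls above map onto standard Kubernetes objects. A sketch of what Teabar might generate for one participant, assuming a namespace like `ns-alice` (all object names here are hypothetical; the quota values mirror the `resourceQuota` block in the blueprint above):

```yaml
# Hypothetical per-participant objects for "alice" in ns-alice.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: participant-quota
  namespace: ns-alice
spec:
  hard:
    cpu: "4"
    memory: 8Gi
    pods: "20"
    services: "10"
    persistentvolumeclaims: "5"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: participant-edit
  namespace: ns-alice
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit # built-in role, scoped to one namespace via RoleBinding
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-namespace
  namespace: ns-alice
spec:
  podSelector: {} # applies to all pods in the namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - podSelector: {} # allow traffic from the same namespace only
  egress:
    - {} # allow all egress (allowEgress: true)
```

Binding the built-in `edit` ClusterRole through a RoleBinding is what keeps the participant's permissions confined to their own namespace.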
When to Use Namespace Mode
Good For
- Basic Kubernetes training (kubectl, pods, services, deployments)
- Multi-tenant demos and large groups (50+ participants)
- Scenarios where fast, low-cost provisioning matters most
Not Ideal For
- Cluster administration training (namespaces, RBAC, cluster-scoped resources)
- CRD or operator development
- Anything requiring node-level access or CNI customization
Limitations
- Participants cannot create cluster-scoped resources
- Participants cannot install CRDs or operators
- Participants cannot access nodes or modify cluster settings
- Shared cluster means shared failure domain
- “Noisy neighbor” potential (mitigated by quotas)
Virtual Cluster (vCluster) Isolation
Virtual clusters provide each participant with their own Kubernetes API server while sharing the underlying infrastructure. This gives the appearance and behavior of a dedicated cluster without the cost.
How It Works
┌─────────────────────────────────────────────────────────────────────────┐
│ Host Kubernetes Cluster │
│ │
│ ┌───────────────────────────────────────────────────────────────────┐ │
│ │ vCluster: alice │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ API Server │ │ etcd │ │ Controller │ │ │
│ │ │ (alice's) │ │ (alice's) │ │ Manager │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ │ │ │
│ │ Alice sees: nodes, namespaces, CRDs - her own "cluster" │ │
│ └───────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────────────┐ │
│ │ vCluster: bob │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ API Server │ │ etcd │ │ Controller │ │ │
│ │ │ (bob's) │ │ (bob's) │ │ Manager │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ │ │ │
│ │ Bob sees: nodes, namespaces, CRDs - his own "cluster" │ │
│ └───────────────────────────────────────────────────────────────────┘ │
│ │
│ Shared: Physical nodes, CNI, storage classes │
└─────────────────────────────────────────────────────────────────────────┘
Configuration
```yaml
apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: k8s-admin-training
spec:
  environment:
    isolation: vcluster
    # Optional: customize vCluster settings
    vclusterConfig:
      # Kubernetes version for virtual clusters
      k8sVersion: "1.28"
      # Resource allocation for vCluster control plane
      resources:
        requests:
          cpu: "500m"
          memory: "1Gi"
        limits:
          cpu: "2"
          memory: "4Gi"
      # Sync specific resources from host
      sync:
        nodes:
          enabled: true
          syncAllNodes: false # Only sync labeled nodes
          nodeSelector:
            teabar.dev/pool: training
        persistentVolumes: true
        storageClasses: true
        ingressClasses: true
      # Expose host services in vCluster
      mapServices:
        - from: kube-system/metrics-server
          to: kube-system/metrics-server
```
What Participants Can Do
| Action | Allowed | Notes |
|---|---|---|
| Create namespaces | Yes | Full namespace management |
| Install CRDs | Yes | Within their vCluster |
| Deploy operators | Yes | Full cluster-admin capabilities |
| View “nodes” | Yes | Synced from host (virtual view) |
| Create cluster roles | Yes | Within their vCluster |
| Use kubectl as admin | Yes | Full cluster-admin access |
Security Measures
- Isolation - Each vCluster has its own API server and etcd
- Resource sync - Only explicitly synced resources are visible
- Pod scheduling - Pods run on host but in isolated context
- Network - Optional network policies between vClusters
- Resource limits - vCluster control plane is resource-bounded
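The optional network isolation between vClusters comes down to a NetworkPolicy in each vCluster's host namespace, since every vCluster's pods are scheduled there. A minimal sketch, assuming each vCluster lives in a host namespace like `vcluster-alice` (names hypothetical):

```yaml
# Hypothetical policy on the host cluster: pods belonging to the
# "alice" vCluster may only receive traffic from the same host namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-vcluster
  namespace: vcluster-alice
spec:
  podSelector: {} # every pod this vCluster schedules here
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {} # same-namespace (same-vCluster) traffic only
```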
When to Use vCluster Mode
Good For
- Cluster administration training (namespace management, RBAC, CRDs, operators)
- Compliance scenarios that need strong isolation without dedicated hardware
- Giving every participant cluster-admin access at medium cost
Not Ideal For
- Exercises that need real node-level access or CNI customization
- Multi-cluster scenarios beyond basic cases
Note
vCluster pods still run on the host cluster's nodes; the isolation is at the API server and etcd level, not at the infrastructure level.
Limitations
- Nodes are virtual representations (synced from host)
- Cannot modify actual node configurations
- CNI and storage are inherited from host cluster
- Operators that interact with nodes or other host-level resources may hit edge cases
- Slightly higher latency than namespace mode
Dedicated Cluster Isolation
Dedicated cluster mode provisions a completely separate Kubernetes cluster for each participant. This is the highest level of isolation and most closely mirrors production environments.
How It Works
┌─────────────────────────────────────────────────────────────────────────┐
│ Cloud Provider (Hetzner/AWS/Azure) │
│ │
│ ┌─────────────────────────────┐ ┌─────────────────────────────┐ │
│ │ Cluster: alice │ │ Cluster: bob │ │
│ │ ┌───────────────────────┐ │ │ ┌───────────────────────┐ │ │
│ │ │ Control Plane │ │ │ │ Control Plane │ │ │
│ │ │ • API Server │ │ │ │ • API Server │ │ │
│ │ │ • etcd │ │ │ │ • etcd │ │ │
│ │ │ • Controller Manager │ │ │ │ • Controller Manager │ │ │
│ │ │ • Scheduler │ │ │ │ • Scheduler │ │ │
│ │ └───────────────────────┘ │ │ └───────────────────────┘ │ │
│ │ │ │ │ │
│ │ ┌─────────┐ ┌─────────┐ │ │ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Worker │ │ Worker │ │ │ │ Worker │ │ Worker │ │ │
│ │ │ Node 1 │ │ Node 2 │ │ │ │ Node 1 │ │ Node 2 │ │ │
│ │ └─────────┘ └─────────┘ │ │ └─────────┘ └─────────┘ │ │
│ │ │ │ │ │
│ │ Own: VPC, subnets, nodes │ │ Own: VPC, subnets, nodes │ │
│ └─────────────────────────────┘ └─────────────────────────────┘ │
│ │
│ Complete isolation - no shared resources │
└─────────────────────────────────────────────────────────────────────────┘
Configuration
```yaml
apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: security-workshop
spec:
  environment:
    isolation: dedicated-cluster
  resources:
    clusters:
      - name: main
        provider: hetzner
        type: talos # or kubeadm, eks, aks
        # Cluster specification
        version: "1.28"
        nodes:
          controlPlane: 1 # Or 3 for HA
          workers: 2
        # Node sizing
        controlPlaneSize: cx21 # Hetzner instance type
        workerSize: cx31
        # Networking
        networking:
          podCidr: "10.244.0.0/16"
          serviceCidr: "10.96.0.0/12"
          cni: calico # or cilium, flannel
        # Addons to install
        addons:
          metricsServer: true
          ingressNginx: true
          certManager: true
```
What Participants Can Do
| Action | Allowed | Notes |
|---|---|---|
| Full cluster-admin access | Yes | Complete control |
| SSH to nodes | Yes | If enabled in blueprint |
| Modify kubelet settings | Yes | Full node access |
| Install any CNI | Yes | Cluster is theirs |
| Break things completely | Yes | That’s the point! |
| Multi-cluster exercises | Yes | Each gets their own cluster |
Cluster Types
Teabar supports multiple Kubernetes distributions:
| Type | Description | Best For |
|---|---|---|
| Talos | Immutable, API-driven Linux | Security, modern infrastructure |
| Kubeadm | Standard installation | General training, flexibility |
| EKS | AWS managed Kubernetes | AWS-focused training |
| AKS | Azure managed Kubernetes | Azure-focused training |
| Kind/K3d | Local Docker-based | Development, CI/CD |
When to Use Dedicated Cluster Mode
Good For
- Security deep-dives and node hardening
- Node-level troubleshooting and CNI customization
- Production simulation and strict compliance requirements
- Multi-cluster exercises
Trade-offs
- Highest cost and slowest provisioning of the three modes (5-15 minutes per cluster)
Limitations
- Cost - Full cluster per participant adds up quickly
- Time - Provisioning takes 5-15 minutes
- Resources - Significant cloud resources required
- Cleanup - Must be destroyed to stop billing
Warning
Dedicated clusters keep billing until they are destroyed. Always set a short TTL and consider sleep mode so idle clusters do not accumulate costs.
Choosing the Right Mode
Decision Matrix
| Requirement | Namespace | vCluster | Dedicated |
|---|---|---|---|
| Basic K8s training | Best | Good | Overkill |
| Cluster admin training | No | Best | Good |
| CRD/operator development | No | Best | Good |
| Node-level access | No | No | Yes |
| CNI customization | No | No | Yes |
| Multi-cluster scenarios | No | Limited | Yes |
| Security isolation | Basic | Strong | Complete |
| Fast provisioning | Best | Good | Slow |
| Cost efficiency | Best | Good | Expensive |
| Large groups (50+) | Best | Good | Expensive |
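The matrix above can be read as a simple precedence rule: node-level or multi-cluster needs force a dedicated cluster, cluster-scoped needs (CRDs, RBAC, operators) call for a vCluster, and everything else fits in a namespace. A small illustrative helper (not part of any Teabar API):

```python
def choose_isolation(node_access: bool = False,
                     cluster_scoped: bool = False,
                     multi_cluster: bool = False) -> str:
    """Pick an isolation mode per the decision matrix (illustrative only)."""
    if node_access or multi_cluster:
        return "dedicated-cluster"  # only mode with real nodes / own clusters
    if cluster_scoped:
        return "vcluster"           # own API server: CRDs, RBAC, operators
    return "namespace"              # fastest and cheapest default

print(choose_isolation())                     # namespace
print(choose_isolation(cluster_scoped=True))  # vcluster
print(choose_isolation(node_access=True))     # dedicated-cluster
```

Note the precedence: a requirement for node access wins even when cluster-scoped needs are also present, mirroring how the matrix rates dedicated clusters "Good" for CRD work but the only option for node-level exercises.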
Recommendations by Use Case
Kubernetes Basics (Application Developer)
```yaml
environment:
  isolation: namespace
```

- Deploy pods, services, deployments
- Learn kubectl basics
- Understand K8s concepts
Kubernetes Administration
```yaml
environment:
  isolation: vcluster
```

- RBAC configuration
- Namespace management
- CRD installation
- Helm/operator deployment
Security & Compliance Training
```yaml
environment:
  isolation: dedicated-cluster
```

- Pod security policies
- Network policies deep-dive
- Node hardening
- Audit logging
CI/CD Workshop
```yaml
environment:
  isolation: vcluster # or namespace
```

- GitLab + runners
- ArgoCD deployment
- Pipeline exercises
Mixing Isolation Modes
A single blueprint can combine different isolation modes for different components:
```yaml
apiVersion: teabar.dev/v1
kind: Blueprint
metadata:
  name: advanced-workshop
spec:
  # Main resources use vCluster isolation
  environment:
    isolation: vcluster
  resources:
    # Shared GitLab instance (deployed to host cluster)
    helm:
      - name: gitlab
        scope: shared # Shared across all participants
        namespace: gitlab
        chart: gitlab/gitlab
    # Participant vClusters
    vcluster:
      - name: "participant-{{ .Index }}"
        count: "{{ .Variables.participant_count }}"
        # Each participant gets their own vCluster
    # Optional: dedicated VMs for certain exercises
    vms:
      - name: "bastion-{{ .Index }}"
        count: "{{ .Variables.participant_count }}"
        provider: hetzner
        image: ubuntu-22.04
```

This creates:
- One shared GitLab instance
- Individual vClusters per participant
- Individual VMs per participant
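To sanity-check resource counts before a session, the template math is simple: `shared`-scoped resources materialize once, while per-participant resources scale with `participant_count`. An illustrative check (the value 20 is an arbitrary example):

```python
participant_count = 20  # example value for .Variables.participant_count

# Shared resources appear once; counted resources appear per participant.
resources = {
    "gitlab": 1,                     # scope: shared -> single instance
    "vclusters": participant_count,  # participant-{{ .Index }}
    "bastion_vms": participant_count,
}

total = sum(resources.values())
print(total)  # 41
```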
Best Practices
1. Start with Namespace Mode
Unless you have a specific reason for stronger isolation, namespace mode is:
- Fastest to provision
- Most cost-effective
- Sufficient for most training scenarios
2. Use vCluster for Admin Training
When participants need to:
- Create namespaces
- Install CRDs
- Configure RBAC
- Deploy operators
3. Reserve Dedicated Clusters for Special Cases
- Security deep-dives
- Node troubleshooting
- Production simulation
- Compliance requirements
4. Set Appropriate TTLs
Always configure TTL based on isolation mode:
```yaml
environment:
  isolation: namespace
  ttl: 24h # Cheap, can be longer
```

```yaml
environment:
  isolation: dedicated-cluster
  ttl: 8h # Expensive, keep short
```

5. Use Sleep Mode for Cost Control
For dedicated clusters, enable sleep mode:
```yaml
environment:
  isolation: dedicated-cluster
  sleepMode:
    enabled: true
    idleTimeout: 30m # Sleep after 30 min of inactivity
    mode: scale-to-zero # Or hibernate for longer breaks
```

Next Steps
- Environment Lifecycle - States and transitions
- Blueprint Resources - Cluster configuration
- Cost Tracking - Monitor spending
- Sleep Mode - Cost optimization