# Storage
Teabar provides flexible storage options for your environments, including persistent volumes, shared storage, and object storage integration.
## Storage Types

### Ephemeral Storage
By default, container filesystems are ephemeral—data is lost when the container stops:

```yaml
components:
  app:
    image: myapp:latest
    # No volumes defined = ephemeral storage only
```

### Persistent Volumes
Attach persistent storage that survives container restarts:

```yaml
components:
  database:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
    size: 20Gi
    type: ssd
```

### Shared Volumes
Share storage between multiple components:

```yaml
components:
  writer:
    image: myapp/writer:latest
    volumes:
      - shared-data:/data
  reader:
    image: myapp/reader:latest
    volumes:
      - shared-data:/data:ro  # Read-only mount

volumes:
  shared-data:
    size: 10Gi
    access_mode: ReadWriteMany
```

## Volume Configuration
### Basic Volume

```yaml
volumes:
  my-volume:
    size: 10Gi
```

### Advanced Volume Options
```yaml
volumes:
  database-storage:
    size: 100Gi
    type: ssd               # ssd, hdd, nvme
    iops: 3000              # Provisioned IOPS (if supported)
    throughput: 125         # MB/s throughput (if supported)
    access_mode: ReadWriteOnce
    reclaim_policy: Retain  # Retain, Delete

    # Encryption
    encrypted: true
    encryption_key: ${secrets.storage_key}

    # Backup configuration
    backup:
      enabled: true
      schedule: "0 2 * * *"
      retention: 7
```
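Backup schedules use standard five-field cron syntax: minute, hour, day of month, month, day of week. A minimal sketch of how `"0 2 * * *"` (daily at 02:00) breaks down, assuming standard cron semantics:

```python
# The five cron fields, in standard order.
CRON_FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def parse_cron(expr: str) -> dict:
    """Split a five-field cron expression into named fields."""
    parts = expr.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError(f"expected 5 fields, got {len(parts)}")
    return dict(zip(CRON_FIELDS, parts))

schedule = parse_cron("0 2 * * *")
print(schedule["minute"], schedule["hour"])  # 0 2 -> daily at 02:00
```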
## Storage Classes

Use predefined storage classes:

```yaml
volumes:
  fast-storage:
    size: 50Gi
    storage_class: premium-ssd
  archive-storage:
    size: 500Gi
    storage_class: standard-hdd
```

Available storage classes vary by provider:
| Provider | Classes |
|---|---|
| AWS | gp3, io2, st1, sc1 |
| GCP | pd-ssd, pd-balanced, pd-standard |
| Azure | premium-ssd, standard-ssd, standard-hdd |
## Object Storage

### S3-Compatible Storage
Integrate with S3 or compatible object storage:

```yaml
components:
  app:
    image: myapp:latest
    environment:
      S3_BUCKET: ${storage.uploads.bucket}
      S3_ENDPOINT: ${storage.uploads.endpoint}
      AWS_ACCESS_KEY_ID: ${secrets.aws_access_key}
      AWS_SECRET_ACCESS_KEY: ${secrets.aws_secret_key}

storage:
  uploads:
    type: s3
    bucket: my-app-uploads-${ENV_NAME}
    region: us-west-2

    # Auto-create bucket
    create: true

    # Lifecycle rules
    lifecycle:
      - prefix: temp/
        expiration_days: 7
      - prefix: archives/
        transition_to: GLACIER
        transition_days: 30
```
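Lifecycle rules match objects by key prefix. As an illustration of the rule semantics (not Teabar's actual evaluation logic), this sketch applies the rules above to a few object keys and ages; `rules` mirrors the config:

```python
# Mirror of the lifecycle rules in the config above (illustrative only).
rules = [
    {"prefix": "temp/", "expiration_days": 7},
    {"prefix": "archives/", "transition_to": "GLACIER", "transition_days": 30},
]

def lifecycle_action(key: str, age_days: int) -> str:
    """Return the action a lifecycle rule would take for an object."""
    for rule in rules:
        if not key.startswith(rule["prefix"]):
            continue
        if "expiration_days" in rule and age_days >= rule["expiration_days"]:
            return "expire"
        if "transition_days" in rule and age_days >= rule["transition_days"]:
            return f"transition:{rule['transition_to']}"
    return "keep"

print(lifecycle_action("temp/upload.tmp", 10))    # expire
print(lifecycle_action("archives/2023.tar", 45))  # transition:GLACIER
print(lifecycle_action("archives/2024.tar", 5))   # keep
```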
### MinIO (Self-Hosted S3)

Include MinIO for local S3-compatible storage:

```yaml
components:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - 9000:9000
      - 9001:9001
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: ${secrets.minio_password}
    volumes:
      - minio-data:/data

  app:
    image: myapp:latest
    environment:
      S3_ENDPOINT: http://minio:9000
      S3_BUCKET: uploads
      AWS_ACCESS_KEY_ID: minioadmin
      AWS_SECRET_ACCESS_KEY: ${secrets.minio_password}

volumes:
  minio-data:
    size: 100Gi
```

## Database Storage
### PostgreSQL

```yaml
components:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${secrets.db_password}
      PGDATA: /var/lib/postgresql/data/pgdata
    volumes:
      - pg-data:/var/lib/postgresql/data
    resources:
      storage: 50Gi

volumes:
  pg-data:
    size: 50Gi
    type: ssd
    backup:
      enabled: true
      schedule: "0 */6 * * *"  # Every 6 hours
```

### MySQL/MariaDB
```yaml
components:
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: ${secrets.db_password}
      MYSQL_DATABASE: app
    volumes:
      - mysql-data:/var/lib/mysql

volumes:
  mysql-data:
    size: 50Gi
    type: ssd
```

### MongoDB
```yaml
components:
  mongodb:
    image: mongo:6
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: ${secrets.db_password}
    volumes:
      - mongo-data:/data/db
      - mongo-config:/data/configdb

volumes:
  mongo-data:
    size: 100Gi
  mongo-config:
    size: 1Gi
```

## Storage Operations
### Resize Volumes

```bash
# Resize a volume (online resize if supported)
teabar volume resize my-env db-data --size 100Gi

# Check resize status
teabar volume status my-env db-data
```

> **Warning:** Volume shrinking is not supported. You can only increase volume size.
### View Storage Usage

```bash
# Show storage usage for environment
teabar storage usage my-env

# Detailed breakdown
teabar storage usage my-env --verbose
```

Output:

```
Storage Usage: my-env

Volume    Size    Used   Available  Use%
db-data   50Gi    23Gi   27Gi       46%
uploads   100Gi   67Gi   33Gi       67%
cache     10Gi    2Gi    8Gi        20%

Total:    160Gi   92Gi   68Gi       57%
```

### Cleanup Orphaned Storage
```bash
# Find orphaned volumes
teabar storage orphaned

# Clean up orphaned volumes
teabar storage prune --orphaned

# Dry run first
teabar storage prune --orphaned --dry-run
```

## Snapshots
### Create Snapshot

```bash
# Snapshot a specific volume
teabar volume snapshot my-env db-data --name "pre-migration"

# Snapshot all volumes
teabar volume snapshot my-env --all --name "full-backup"
```

### List Snapshots
```bash
teabar volume snapshots my-env
```

Output:

```
Snapshots for: my-env

ID           VOLUME   NAME           SIZE  CREATED
snap_abc123  db-data  pre-migration  23Gi  2024-01-15 10:00:00
snap_def456  db-data  daily-backup   22Gi  2024-01-14 02:00:00
snap_ghi789  uploads  pre-migration  67Gi  2024-01-15 10:00:00
```

### Restore from Snapshot
```bash
# Restore volume from snapshot
teabar volume restore my-env db-data --snapshot snap_abc123

# Restore to new volume
teabar volume restore my-env db-data --snapshot snap_abc123 --as db-data-restored
```

### Automatic Snapshots
```yaml
volumes:
  db-data:
    size: 50Gi
    snapshots:
      enabled: true
      schedule: "0 2 * * *"  # Daily at 2 AM
      retention: 7           # Keep 7 snapshots
```
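With `retention: 7`, older snapshots are pruned as new ones are taken. A sketch of that pruning logic (illustrative, not Teabar's implementation): keep the newest N snapshots by creation time and delete the rest:

```python
from datetime import datetime

def prune(snapshots: list[dict], retention: int) -> list[dict]:
    """Return the snapshots to delete, keeping the newest `retention`."""
    ordered = sorted(snapshots, key=lambda s: s["created"], reverse=True)
    return ordered[retention:]

# Nine daily snapshots; with retention 7, the two oldest are pruned.
snaps = [
    {"id": f"snap_{i}", "created": datetime(2024, 1, i + 1)} for i in range(9)
]
to_delete = prune(snaps, retention=7)
print([s["id"] for s in to_delete])  # ['snap_1', 'snap_0']
```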
## Data Migration

### Export Data

```bash
# Export volume to tar archive
teabar volume export my-env db-data --output db-backup.tar.gz

# Export to S3
teabar volume export my-env db-data --output s3://backups/db-backup.tar.gz
```

### Import Data
```bash
# Import from tar archive
teabar volume import my-env db-data --input db-backup.tar.gz

# Import from S3
teabar volume import my-env db-data --input s3://backups/db-backup.tar.gz
```
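The export/import round trip moves volume contents as a gzipped tar archive, a format you can also build or inspect locally with Python's standard `tarfile` module. A minimal sketch (the exact layout Teabar uses inside the archive is an assumption here):

```python
import tarfile
import tempfile
from pathlib import Path

# Build a small directory, pack it like an exported volume, then list it back.
with tempfile.TemporaryDirectory() as tmp:
    data_dir = Path(tmp) / "volume"
    data_dir.mkdir()
    (data_dir / "hello.txt").write_text("hello from the volume\n")

    archive = Path(tmp) / "db-backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=".")  # store paths relative to the volume root

    with tarfile.open(archive, "r:gz") as tar:
        names = tar.getnames()
    print(names)
```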
### Clone Environment with Data

```bash
# Clone environment including all data
teabar env clone source-env target-env --include-data

# Clone with specific volumes only
teabar env clone source-env target-env --volumes db-data,uploads
```

## Performance Tuning
### IOPS Configuration

```yaml
volumes:
  high-performance:
    size: 100Gi
    type: ssd
    iops: 10000      # Provisioned IOPS
    throughput: 500  # MB/s
```
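IOPS and throughput are linked by the I/O size your workload issues: throughput is roughly IOPS times I/O size. A quick back-of-the-envelope check that the two settings above are consistent (the 16 KiB I/O size is an assumed example, typical of database page reads):

```python
def throughput_mbps(iops: int, io_size_kib: int) -> float:
    """Approximate sustained throughput in MiB/s for a given IOPS budget."""
    return iops * io_size_kib / 1024  # KiB -> MiB

# 10,000 IOPS at 16 KiB per I/O saturates ~156 MiB/s,
# comfortably under the 500 MB/s throughput cap above.
print(throughput_mbps(10_000, 16))  # 156.25
```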
### Caching

Use in-memory caching for frequently accessed data:

```yaml
components:
  app:
    image: myapp:latest
    volumes:
      - cache:/app/cache
    tmpfs:
      - /tmp:size=1G  # RAM-backed /tmp

volumes:
  cache:
    size: 10Gi
    type: nvme  # Fastest available
```

### Read Replicas
For read-heavy workloads:

```yaml
volumes:
  db-data:
    size: 100Gi
    replicas:
      enabled: true
      count: 2
      regions:
        - us-west-2
        - us-east-1
```
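Replicas only help if the application fans reads out across them while keeping writes on the primary. A sketch of simple round-robin read routing (purely illustrative; the endpoint names are made up):

```python
import itertools

class ReadRouter:
    """Send writes to the primary and rotate reads across replicas."""

    def __init__(self, primary: str, replicas: list[str]):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def endpoint(self, write: bool = False) -> str:
        return self.primary if write else next(self._replicas)

router = ReadRouter("db-primary:5432",
                    ["db-replica-usw2:5432", "db-replica-use1:5432"])
print(router.endpoint())            # db-replica-usw2:5432
print(router.endpoint())            # db-replica-use1:5432
print(router.endpoint(write=True))  # db-primary:5432
```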
## Best Practices

### 1. Size Appropriately

Start with estimates and monitor actual usage:

```bash
# Monitor storage growth
teabar metrics show my-env --metric storage_used --since 7d
```
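When scripting around usage numbers, note that volume sizes use binary (IEC) suffixes: 1Gi is 1024³ bytes, not 10⁹. A small conversion helper (a generic sketch, not part of the Teabar CLI):

```python
# Binary (IEC) size suffixes: Ki = 2**10, Mi = 2**20, Gi = 2**30, Ti = 2**40.
UNITS = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_size(size: str) -> int:
    """Convert a size like '20Gi' to bytes; a bare number is taken as bytes."""
    for suffix, factor in UNITS.items():
        if size.endswith(suffix):
            return int(size[: -len(suffix)]) * factor
    return int(size)

print(parse_size("20Gi"))  # 21474836480
```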
### 2. Use Appropriate Storage Types

| Use Case | Recommended Type |
|---|---|
| Database | SSD / NVMe |
| Application logs | Standard HDD |
| Static assets | Object storage (S3) |
| Cache | NVMe or RAM |
### 3. Enable Backups

Always enable backups for stateful data:

```yaml
volumes:
  important-data:
    size: 50Gi
    backup:
      enabled: true
      schedule: "0 */4 * * *"
      retention: 30
```

### 4. Use Encryption
Encrypt sensitive data at rest:

```yaml
volumes:
  sensitive-data:
    size: 50Gi
    encrypted: true
```

### 5. Clean Up Regularly
```bash
# Set up automatic cleanup
teabar config set storage.cleanup.orphaned_volumes 7d
teabar config set storage.cleanup.old_snapshots 30d
```

## Troubleshooting
### Volume Not Mounting

```bash
# Check volume status
teabar volume status my-env db-data

# View volume events
teabar volume events my-env db-data

# Check component logs
teabar logs my-env --component database
```

### Disk Full
```bash
# Check usage
teabar exec my-env -- df -h

# Find large files
teabar exec my-env -- du -sh /* | sort -hr | head -20

# Resize if needed
teabar volume resize my-env db-data --size 100Gi
```
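If `du` is unavailable in a slim container image, the same large-directory hunt can be done from Python's standard library (a generic sketch, not a Teabar command; the demo walks a throwaway directory rather than `/`):

```python
import os
import tempfile

def dir_size(path: str) -> int:
    """Total size in bytes of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total

# Demo on a throwaway tree; point `base` at "/" inside the container
# to mimic `du -sh /* | sort -hr`.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "logs"))
with open(os.path.join(base, "logs", "app.log"), "w") as f:
    f.write("x" * 4096)

sizes = {d: dir_size(os.path.join(base, d)) for d in os.listdir(base)}
print(sizes)  # {'logs': 4096}
```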
### Slow I/O Performance

```bash
# Check IOPS usage
teabar metrics show my-env --metric volume_iops

# Consider upgrading storage type
teabar volume migrate my-env db-data --type nvme
```