Metrics

Teabar automatically collects performance metrics from all environments, providing visibility into resource utilization, application performance, and infrastructure health.

Available Metrics

Infrastructure Metrics

Metric            Description                       Unit
cpu_usage         CPU utilization percentage        %
memory_usage      Memory utilization                bytes
memory_percent    Memory utilization percentage     %
disk_read         Disk read throughput              bytes/s
disk_write        Disk write throughput             bytes/s
network_in        Network ingress                   bytes/s
network_out       Network egress                    bytes/s

Application Metrics

Metric               Description                       Unit
request_count        Total HTTP requests               count
request_latency_p50  50th percentile latency           ms
request_latency_p95  95th percentile latency           ms
request_latency_p99  99th percentile latency           ms
error_rate           Percentage of failed requests     %

Environment Metrics

Metric            Description                       Unit
env_uptime        Time since environment creation   seconds
env_cost          Accumulated cost                  USD
container_count   Number of running containers      count
restart_count     Container restart count           count

Viewing Metrics

CLI

# View current metrics summary
teabar metrics show my-feature-env

# View specific metric
teabar metrics show my-feature-env --metric cpu_usage

# View metrics over time range
teabar metrics show my-feature-env --from "2024-01-01" --to "2024-01-07"

# Output as JSON
teabar metrics show my-feature-env --format json

Example output:

Environment: my-feature-env
Time: 2024-01-15 10:30:00 UTC

Infrastructure:
  CPU Usage:      45.2%
  Memory Usage:   1.2 GB / 2.0 GB (60%)
  Disk Read:      5.4 MB/s
  Disk Write:     2.1 MB/s
  Network In:     12.3 MB/s
  Network Out:    8.7 MB/s

Application:
  Request Count:  15,432 (last hour)
  Latency P50:    45ms
  Latency P95:    120ms
  Latency P99:    350ms
  Error Rate:     0.12%

Dashboard

Access the metrics dashboard at https://app.teabar.dev/environments/{env-id}/metrics.

The dashboard provides:

  • Real-time metric visualizations
  • Historical trend analysis
  • Comparison across environments
  • Custom time range selection
  • Export capabilities

Streaming Metrics

For real-time monitoring, stream metrics directly to your terminal:

# Stream all metrics (updates every 5 seconds)
teabar metrics stream my-feature-env

# Stream specific metrics
teabar metrics stream my-feature-env --metrics cpu_usage,memory_percent

# Set custom refresh interval
teabar metrics stream my-feature-env --interval 10

Exporting Metrics

Export metrics for external analysis or archival:

# Export to JSON file
teabar metrics export my-feature-env \
  --format json \
  --from "2024-01-01" \
  --to "2024-01-31" \
  --output january-metrics.json

Output format:

{
  "environment": "my-feature-env",
  "period": {
    "from": "2024-01-01T00:00:00Z",
    "to": "2024-01-31T23:59:59Z"
  },
  "metrics": [
    {
      "timestamp": "2024-01-01T00:00:00Z",
      "cpu_usage": 45.2,
      "memory_usage": 1288490188,
      "memory_percent": 60.1,
      "request_count": 1543,
      "request_latency_p50": 45,
      "error_rate": 0.12
    }
  ]
}
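The exported JSON is easy to post-process with standard tooling. A minimal sketch in Python, operating on a small inline export that mirrors the format above (the values are illustrative):

```python
import json

# A metrics export shaped like `teabar metrics export --format json` output.
export = json.loads("""
{
  "environment": "my-feature-env",
  "metrics": [
    {"timestamp": "2024-01-01T00:00:00Z", "cpu_usage": 45.2, "error_rate": 0.12},
    {"timestamp": "2024-01-01T01:00:00Z", "cpu_usage": 52.8, "error_rate": 0.30}
  ]
}
""")

samples = export["metrics"]

# Aggregate across the export period.
avg_cpu = sum(s["cpu_usage"] for s in samples) / len(samples)
peak_errors = max(s["error_rate"] for s in samples)

print(f"{export['environment']}: avg CPU {avg_cpu:.1f}%, peak error rate {peak_errors}%")
```

For a real export, replace the inline string with `json.load(open("january-metrics.json"))`.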

Prometheus Integration

Expose metrics in Prometheus format for scraping:

# Enable Prometheus endpoint
teabar config set metrics.prometheus.enabled true
teabar config set metrics.prometheus.port 9090

# View Prometheus endpoint
curl http://localhost:9090/metrics
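The endpoint serves the standard Prometheus text exposition format. An illustrative excerpt (metric names follow the tables above; the labels shown are hypothetical):

```
# HELP cpu_usage CPU utilization percentage
# TYPE cpu_usage gauge
cpu_usage{environment="my-feature-env"} 45.2
# HELP request_count Total HTTP requests
# TYPE request_count counter
request_count{environment="my-feature-env",status="200"} 15432
```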

Configure Prometheus to scrape Teabar:

# prometheus.yml
scrape_configs:
  - job_name: 'teabar'
    static_configs:
      - targets: ['localhost:9090']
    metrics_path: '/metrics'
    scrape_interval: 30s
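Once scraping is running, the data can be queried with PromQL. Two example queries, assuming the metric names above are exposed unchanged:

```
# 5-minute rolling average CPU usage
avg_over_time(cpu_usage[5m])

# Per-second HTTP request rate over the last minute
rate(request_count[1m])
```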

Custom Metrics

Add custom application metrics using the Teabar SDK:

import { Teabar } from '@teabar/sdk';

const teabar = new Teabar();

// Record a custom metric
teabar.metrics.record('custom_metric_name', 42.5, {
  labels: {
    service: 'api',
    endpoint: '/users'
  }
});

// Increment a counter
teabar.metrics.increment('request_count', {
  labels: { status: '200' }
});

// Record a histogram value
teabar.metrics.histogram('response_time', 145, {
  labels: { endpoint: '/api/data' }
});

Alerting on Metrics

Configure alerts based on metric thresholds:

# teabar.yaml
alerts:
  - name: high-cpu
    metric: cpu_usage
    condition: "> 80"
    duration: 5m
    channels:
      - slack:#alerts
    
  - name: high-latency
    metric: request_latency_p95
    condition: "> 500"
    duration: 2m
    channels:
      - pagerduty:oncall

  - name: high-error-rate
    metric: error_rate
    condition: "> 5"
    duration: 1m
    severity: critical
    channels:
      - slack:#incidents
      - pagerduty:oncall

Best Practices

  1. Set baseline alerts - Establish normal ranges and alert on deviations
  2. Use percentiles for latency - P95 and P99 reveal tail latency issues
  3. Correlate metrics - Compare CPU, memory, and latency together
  4. Export regularly - Archive metrics before they expire
  5. Label consistently - Use standard labels across all environments

Troubleshooting

Metrics Not Appearing

# Check if metrics collection is enabled
teabar config get metrics.enabled

# Verify environment is running
teabar env status my-feature-env

# Check metrics agent logs
teabar logs my-feature-env --component metrics-agent

High Cardinality Issues

If you see warnings about high cardinality:

# Review custom metric labels
teabar metrics labels my-feature-env

# Remove high-cardinality labels
teabar metrics prune --labels user_id,request_id
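Cardinality here means the number of distinct label combinations a metric produces; per-user or per-request labels multiply the series count quickly. A rough illustration in Python (the label sets are hypothetical):

```python
# Each sample carries a set of label values; the number of distinct
# combinations is the metric's cardinality (one time series each).
samples = [
    {"service": "api", "endpoint": "/users", "user_id": "u1"},
    {"service": "api", "endpoint": "/users", "user_id": "u2"},
    {"service": "api", "endpoint": "/users", "user_id": "u3"},
]

def cardinality(samples, drop=()):
    """Count distinct label combinations, ignoring labels in `drop`."""
    combos = {
        tuple(sorted((k, v) for k, v in s.items() if k not in drop))
        for s in samples
    }
    return len(combos)

print(cardinality(samples))                    # one series per user: 3
print(cardinality(samples, drop={"user_id"}))  # dropping user_id collapses them: 1
```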