# Metrics
Teabar automatically collects performance metrics from all environments, providing visibility into resource utilization, application performance, and infrastructure health.
## Available Metrics

### Infrastructure Metrics
| Metric | Description | Unit |
|---|---|---|
| cpu_usage | CPU utilization percentage | % |
| memory_usage | Memory utilization | bytes |
| memory_percent | Memory utilization percentage | % |
| disk_read | Disk read throughput | bytes/s |
| disk_write | Disk write throughput | bytes/s |
| network_in | Network ingress | bytes/s |
| network_out | Network egress | bytes/s |
### Application Metrics
| Metric | Description | Unit |
|---|---|---|
| request_count | Total HTTP requests | count |
| request_latency_p50 | 50th percentile latency | ms |
| request_latency_p95 | 95th percentile latency | ms |
| request_latency_p99 | 99th percentile latency | ms |
| error_rate | Percentage of failed requests | % |
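The latency rows above are percentiles computed over a window of request samples. As a rough illustration of what P50/P95/P99 mean (the nearest-rank method, not necessarily Teabar's internal implementation):

```typescript
// Nearest-rank percentile: sort the samples, then take the value at
// rank ceil(p/100 * n), using 1-based ranks.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const latenciesMs = [12, 30, 45, 45, 60, 90, 120, 200, 350, 900];
console.log(percentile(latenciesMs, 50)); // 60
console.log(percentile(latenciesMs, 95)); // 900
```

Note how a single slow outlier (900 ms) dominates P95 while leaving P50 untouched — this is why the best practices below recommend watching percentiles rather than averages.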
### Environment Metrics
| Metric | Description | Unit |
|---|---|---|
| env_uptime | Time since environment creation | seconds |
| env_cost | Accumulated cost | USD |
| container_count | Number of running containers | count |
| restart_count | Container restart count | count |
## Viewing Metrics

### CLI
```bash
# View current metrics summary
teabar metrics show my-feature-env

# View a specific metric
teabar metrics show my-feature-env --metric cpu_usage

# View metrics over a time range
teabar metrics show my-feature-env --from "2024-01-01" --to "2024-01-07"

# Output as JSON
teabar metrics show my-feature-env --format json
```

Example output:
```
Environment: my-feature-env
Time: 2024-01-15 10:30:00 UTC

Infrastructure:
  CPU Usage:     45.2%
  Memory Usage:  1.2 GB / 2.0 GB (60%)
  Disk Read:     5.4 MB/s
  Disk Write:    2.1 MB/s
  Network In:    12.3 MB/s
  Network Out:   8.7 MB/s

Application:
  Request Count: 15,432 (last hour)
  Latency P50:   45ms
  Latency P95:   120ms
  Latency P99:   350ms
  Error Rate:    0.12%
```

### Dashboard
Access the metrics dashboard at https://app.teabar.dev/environments/{env-id}/metrics.
The dashboard provides:
- Real-time metric visualizations
- Historical trend analysis
- Comparison across environments
- Custom time range selection
- Export capabilities
## Streaming Metrics
For real-time monitoring, stream metrics directly to your terminal:
```bash
# Stream all metrics (updates every 5 seconds)
teabar metrics stream my-feature-env

# Stream specific metrics
teabar metrics stream my-feature-env --metrics cpu_usage,memory_percent

# Set a custom refresh interval
teabar metrics stream my-feature-env --interval 10
```

## Exporting Metrics
Export metrics for external analysis or archival:
```bash
# Export to a JSON file
teabar metrics export my-feature-env \
  --format json \
  --from "2024-01-01" \
  --to "2024-01-31" \
  --output january-metrics.json
```

Output format:
```json
{
  "environment": "my-feature-env",
  "period": {
    "from": "2024-01-01T00:00:00Z",
    "to": "2024-01-31T23:59:59Z"
  },
  "metrics": [
    {
      "timestamp": "2024-01-01T00:00:00Z",
      "cpu_usage": 45.2,
      "memory_usage": 1288490188,
      "memory_percent": 60.1,
      "request_count": 1543,
      "request_latency_p50": 45,
      "error_rate": 0.12
    }
  ]
}
```

## Prometheus Integration
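Once exported, the file is plain JSON and easy to post-process. As a minimal sketch (assuming the export schema shown above, with one sample inlined instead of reading a file):

```typescript
// Shape of one exported sample; named metric fields plus an index
// signature so any metric key can be looked up dynamically.
interface ExportSample {
  timestamp: string;
  cpu_usage: number;
  error_rate: number;
  [metric: string]: string | number;
}

interface MetricsExport {
  environment: string;
  metrics: ExportSample[];
}

// Average a numeric metric across all exported samples.
function average(exp: MetricsExport, metric: string): number {
  const values = exp.metrics.map((s) => Number(s[metric]));
  return values.reduce((a, b) => a + b, 0) / values.length;
}

const exported: MetricsExport = {
  environment: "my-feature-env",
  metrics: [
    { timestamp: "2024-01-01T00:00:00Z", cpu_usage: 45.2, error_rate: 0.12 },
    { timestamp: "2024-01-01T00:01:00Z", cpu_usage: 54.8, error_rate: 0.08 },
  ],
};

console.log(average(exported, "cpu_usage")); // 50
```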
Expose metrics in Prometheus format for scraping:
```bash
# Enable the Prometheus endpoint
teabar config set metrics.prometheus.enabled true
teabar config set metrics.prometheus.port 9090

# View the Prometheus endpoint
curl http://localhost:9090/metrics
```

Configure Prometheus to scrape Teabar:
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'teabar'
    static_configs:
      - targets: ['localhost:9090']
    metrics_path: '/metrics'
    scrape_interval: 30s
```

## Custom Metrics
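What the scraped endpoint serves is the standard Prometheus text exposition format: one line per sample, `name{label="value",...} value`. The exact metric and label names Teabar exports aren't specified here, so this sketch just illustrates the line format itself, using a metric name from the tables above:

```typescript
// Render one sample in the Prometheus text exposition format:
//   metric_name{label="value",...} <number>
function toPrometheusLine(
  name: string,
  labels: Record<string, string>,
  value: number,
): string {
  const pairs = Object.entries(labels)
    .map(([k, v]) => `${k}="${v}"`)
    .join(",");
  return pairs ? `${name}{${pairs}} ${value}` : `${name} ${value}`;
}

console.log(toPrometheusLine("cpu_usage", { env: "my-feature-env" }, 45.2));
// cpu_usage{env="my-feature-env"} 45.2
```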
Add custom application metrics using the Teabar SDK:
```typescript
import { Teabar } from '@teabar/sdk';

const teabar = new Teabar();

// Record a custom metric
teabar.metrics.record('custom_metric_name', 42.5, {
  labels: {
    service: 'api',
    endpoint: '/users'
  }
});

// Increment a counter
teabar.metrics.increment('request_count', {
  labels: { status: '200' }
});

// Record a histogram value
teabar.metrics.histogram('response_time', 145, {
  labels: { endpoint: '/api/data' }
});
```

> **Note:** Custom metrics are retained for the same period as built-in metrics (90 days default).
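A common pattern is wrapping an operation so its duration is recorded automatically. The helper below is hypothetical (not part of the Teabar SDK); the `record` callback stands in for a call like `teabar.metrics.histogram` from the snippet above:

```typescript
// Hypothetical timing helper: measures how long `fn` takes and hands the
// elapsed milliseconds to a recorder callback. In real code the recorder
// would forward to the SDK's histogram call.
type Recorder = (name: string, valueMs: number) => void;

function timed<T>(name: string, record: Recorder, fn: () => T): T {
  const start = Date.now();
  try {
    return fn();
  } finally {
    // Recorded even if fn throws, so failures still produce a sample.
    record(name, Date.now() - start);
  }
}

// Example with a recorder that just collects samples locally:
const samples: Array<[string, number]> = [];
const result = timed('response_time', (n, v) => samples.push([n, v]), () => 6 * 7);
console.log(result);        // 42
console.log(samples[0][0]); // 'response_time'
```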
## Alerting on Metrics
Configure alerts based on metric thresholds:
```yaml
# teabar.yaml
alerts:
  - name: high-cpu
    metric: cpu_usage
    condition: "> 80"
    duration: 5m
    channels:
      - slack:#alerts
  - name: high-latency
    metric: request_latency_p95
    condition: "> 500"
    duration: 2m
    channels:
      - pagerduty:oncall
  - name: high-error-rate
    metric: error_rate
    condition: "> 5"
    duration: 1m
    severity: critical
    channels:
      - slack:#incidents
      - pagerduty:oncall
```

## Best Practices
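The `duration` field means the condition must hold continuously for that long before the alert fires, which filters out momentary spikes. A conceptual sketch of those semantics (not Teabar's actual alerting engine):

```typescript
// An alert like `condition: "> 80"` with `duration: 5m` fires only when
// every sample inside the trailing window breaches the threshold.
type Point = { timestampMs: number; value: number };

function shouldFire(
  points: Point[],
  threshold: number,
  durationMs: number,
  nowMs: number,
): boolean {
  const window = points.filter((p) => p.timestampMs >= nowMs - durationMs);
  return window.length > 0 && window.every((p) => p.value > threshold);
}

const cpu: Point[] = [
  { timestampMs: 0, value: 85 },
  { timestampMs: 60_000, value: 91 },
  { timestampMs: 120_000, value: 88 },
];
console.log(shouldFire(cpu, 80, 300_000, 120_000)); // true: sustained breach
```

A single sample dipping back under the threshold resets the window, so brief spikes never page anyone.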
- **Set baseline alerts** - Establish normal ranges and alert on deviations
- **Use percentiles for latency** - P95 and P99 reveal tail latency issues
- **Correlate metrics** - Compare CPU, memory, and latency together
- **Export regularly** - Archive metrics before they expire
- **Label consistently** - Use standard labels across all environments
## Troubleshooting

### Metrics Not Appearing
```bash
# Check if metrics collection is enabled
teabar config get metrics.enabled

# Verify the environment is running
teabar env status my-feature-env

# Check metrics agent logs
teabar logs my-feature-env --component metrics-agent
```

### High Cardinality Issues
If you see warnings about high cardinality:
```bash
# Review custom metric labels
teabar metrics labels my-feature-env

# Remove high-cardinality labels
teabar metrics prune --labels user_id,request_id
```
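Cardinality is the number of distinct time series a metric produces, which is the product of the distinct values of each label. Labels like `user_id` or `request_id` are problematic because they take a near-unique value per request. A quick way to spot offenders is counting distinct values per label (a sketch over hypothetical series data, independent of any Teabar API):

```typescript
// Count distinct values per label across a set of series. Labels whose
// count approaches the number of series (like user_id here) are the
// high-cardinality candidates to prune.
function labelCardinality(
  series: Record<string, string>[],
): Record<string, number> {
  const distinct: Record<string, Set<string>> = {};
  for (const labels of series) {
    for (const [k, v] of Object.entries(labels)) {
      (distinct[k] ??= new Set()).add(v);
    }
  }
  return Object.fromEntries(
    Object.entries(distinct).map(([k, s]) => [k, s.size]),
  );
}

const series = [
  { endpoint: "/users", user_id: "u1" },
  { endpoint: "/users", user_id: "u2" },
  { endpoint: "/orders", user_id: "u3" },
];
console.log(labelCardinality(series)); // { endpoint: 2, user_id: 3 }
```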