API Rate Limits

Teabar enforces rate limits to ensure fair usage and platform stability. This guide covers rate limit policies, quotas, and best practices.

Rate Limit Overview

Rate limits are applied per API key or user:

Plan         Requests/Minute   Requests/Day   Burst
Free         60                10,000         100
Pro          300               100,000        500
Enterprise   1,000             Unlimited      2,000

Rate Limit Headers

Every API response includes rate limit information:

X-RateLimit-Limit: 300
X-RateLimit-Remaining: 250
X-RateLimit-Reset: 1710086400

Header                  Description
X-RateLimit-Limit       Maximum requests per window
X-RateLimit-Remaining   Requests remaining in current window
X-RateLimit-Reset       Unix timestamp when limit resets

Rate Limit Response

When rate limited, the API returns:

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 30

{
  "code": "resource_exhausted",
  "message": "Rate limit exceeded. Try again in 30 seconds.",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.RetryInfo",
    "retryDelay": "30s"
  }]
}
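The `Retry-After` header carries the wait in whole seconds, and the structured `RetryInfo` detail carries the same value as a duration string such as "30s". A minimal sketch of reading the structured detail with a fallback (the helper name and types are ours, not part of a Teabar SDK):

```typescript
interface RetryInfo {
  "@type": string;
  retryDelay: string; // e.g. "30s"
}

interface RateLimitError {
  code: string;
  message: string;
  details?: RetryInfo[];
}

// Prefer the structured RetryInfo detail; fall back to a default wait.
function retryDelaySeconds(body: RateLimitError, fallbackSeconds = 60): number {
  const info = body.details?.find(
    (d) => d["@type"] === "type.googleapis.com/google.rpc.RetryInfo"
  );
  if (info) {
    const match = /^(\d+(?:\.\d+)?)s$/.exec(info.retryDelay);
    if (match) return parseFloat(match[1]);
  }
  return fallbackSeconds;
}
```

The structured detail is useful when a proxy strips the `Retry-After` header; otherwise either source works.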

Per-Endpoint Limits

Some endpoints have additional specific limits:

Endpoint            Limit           Notes
CreateEnvironment   10/min          Environment creation is resource-intensive
WatchEnvironment    20 concurrent   Streaming connections
SearchCatalog       30/min          Search queries
ValidateBlueprint   30/min          Validation requests
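To stay under a strict per-endpoint limit such as CreateEnvironment's 10/min, it can help to throttle client-side before the API rejects a request. A minimal token-bucket sketch (the class and parameter names are ours, not part of a Teabar SDK):

```typescript
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,         // burst size
    private refillPerSecond: number,  // sustained rate
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if a request may proceed now, false if the caller should wait.
  tryAcquire(now: number = Date.now()): boolean {
    const elapsedSec = Math.max(0, now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

For example, `new TokenBucket(10, 10 / 60)` would approximate the CreateEnvironment limit: a burst of 10, refilling at 10 per minute.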

Resource Quotas

Beyond rate limits, resource quotas apply:

Environment Quotas

Plan         Concurrent Environments   Environments/Month
Free         2                         10
Pro          Unlimited*                Unlimited
Enterprise   Unlimited*                Unlimited

*Subject to compute quotas

Compute Quotas

Plan         vCPU-hours/month        Memory GB-hours/month
Free         100                     200
Pro          Based on subscription   Based on subscription
Enterprise   Custom                  Custom

Handling Rate Limits

Check Headers Before Hitting the Limit

function checkRateLimit(response: Response) {
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
  const resetTime = parseInt(response.headers.get('X-RateLimit-Reset') || '0');
  
  if (remaining < 10) {
    console.warn(`Rate limit nearly exhausted. ${remaining} requests remaining.`);
    console.warn(`Resets at ${new Date(resetTime * 1000).toISOString()}`);
  }
}

Implement Retry Logic

// Small helper used by the retry examples in this guide.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function apiCall(request: () => Promise<Response>): Promise<Response> {
  let attempts = 0;
  const maxAttempts = 3;

  while (attempts < maxAttempts) {
    const response = await request();

    if (response.status === 429) {
      // Honor the server-provided wait before retrying.
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
      console.log(`Rate limited. Retrying in ${retryAfter}s...`);
      await sleep(retryAfter * 1000);
      attempts++;
      continue;
    }

    return response;
  }

  throw new Error('Max retry attempts exceeded');
}

Exponential Backoff

async function exponentialBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts: number = 5
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (error: any) {
      if (error?.code === 'resource_exhausted' && attempt < maxAttempts - 1) {
        // Exponential delay (1s, 2s, 4s, ...) plus up to 1s of random jitter
        // so that many clients don't retry in lockstep.
        const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
        await sleep(delay);
      } else {
        throw error;
      }
    }
  }
  throw new Error('Max attempts reached');
}

Best Practices

Request Optimization

  1. Batch operations - Use bulk endpoints when available
  2. Cache responses - Cache read-only data locally
  3. Use pagination - Fetch only needed data
  4. Avoid polling - Use streaming endpoints for real-time data
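Alongside caching, coalescing concurrent identical reads keeps duplicate requests from counting against your limit: callers that ask for the same key while a request is in flight share one underlying call. A minimal sketch (the helper is ours):

```typescript
const inFlight = new Map<string, Promise<unknown>>();

// Concurrent callers with the same key share a single underlying request.
function coalesce<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;

  // Remove the entry once settled so later calls fetch fresh data.
  const p = fetcher().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

This complements a TTL cache: the cache avoids refetching recent data, while coalescing avoids issuing the same request twice at the same moment.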

Caching Strategies

const cache = new Map<string, { data: any; expires: number }>();

async function cachedGet(key: string, fetcher: () => Promise<any>, ttl: number = 60000) {
  const cached = cache.get(key);
  
  if (cached && cached.expires > Date.now()) {
    return cached.data;
  }
  
  const data = await fetcher();
  cache.set(key, { data, expires: Date.now() + ttl });
  return data;
}

Use Streaming for Real-time Data

Instead of polling:

// Bad: Polling every 5 seconds
setInterval(async () => {
  const env = await client.getEnvironment({ environmentId: "env_abc" });
  updateUI(env);
}, 5000);

// Good: Use streaming
const stream = client.watchEnvironment({ environmentId: "env_abc" });
for await (const event of stream) {
  updateUI(event.environment);
}
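Long-lived streams can still drop. A hedged reconnect wrapper around a watch call like the one above (the function and its backoff values are ours; real watch streams may run indefinitely rather than ending cleanly):

```typescript
async function watchWithReconnect<E>(
  watch: () => AsyncIterable<E>,
  onEvent: (event: E) => void,
  initialDelayMs: number = 1000,
  maxDelayMs: number = 30_000
): Promise<void> {
  let delay = initialDelayMs;
  for (;;) {
    try {
      for await (const event of watch()) {
        onEvent(event);
        delay = initialDelayMs; // reset backoff after a healthy event
      }
      return; // stream ended cleanly
    } catch {
      // Stream dropped: wait, then reconnect with exponential backoff.
      await new Promise((resolve) => setTimeout(resolve, delay));
      delay = Math.min(delay * 2, maxDelayMs);
    }
  }
}
```

Resetting the delay after each event means a stream that drops once an hour reconnects quickly, while a persistently failing one backs off.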

Monitoring Usage

Via API

curl -H "Authorization: Bearer $TOKEN" \
  https://api.teabar.dev/teabar.v1.UsageService/GetUsage

Response:

{
  "apiUsage": {
    "requestsToday": 4523,
    "requestsThisMonth": 45230,
    "limits": {
      "requestsPerMinute": 300,
      "requestsPerDay": 100000
    }
  },
  "resourceUsage": {
    "environments": 8,
    "vcpuHours": 234.5,
    "memoryGbHours": 567.8
  }
}
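Usage checks like this can be automated: fetch the usage report and warn when consumption nears a limit. A sketch against the response shape shown above (the 80% threshold and function name are ours):

```typescript
interface Usage {
  apiUsage: {
    requestsToday: number;
    requestsThisMonth: number;
    limits: { requestsPerMinute: number; requestsPerDay: number };
  };
}

// Returns a warning string when daily usage crosses the threshold, else null.
function dailyUsageWarning(usage: Usage, threshold = 0.8): string | null {
  const { requestsToday, limits } = usage.apiUsage;
  const ratio = requestsToday / limits.requestsPerDay;
  if (ratio >= threshold) {
    return `Daily API usage at ${(ratio * 100).toFixed(0)}% ` +
      `(${requestsToday}/${limits.requestsPerDay})`;
  }
  return null;
}
```

Running this on a schedule (or wiring it to your alerting system) catches runaway automation before requests start failing with 429s.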

Via Web Console

  1. Go to Organization Settings > Usage
  2. View API and resource usage dashboards
  3. Set up usage alerts

Increasing Limits

Pro Plan

Upgrade to Pro for higher limits:

  • 300 requests/minute
  • 100,000 requests/day
  • No environment count limits

Enterprise

For custom limits:

  • Contact sales for negotiated limits
  • SLA guarantees
  • Dedicated infrastructure options

Temporary Increases

For special events (workshops, demos):

  1. Contact support in advance
  2. Provide expected usage details
  3. Temporary limit increases may be granted

Troubleshooting

Frequent Rate Limiting

  1. Check for loops - Ensure no infinite retry loops
  2. Review request patterns - Identify unnecessary requests
  3. Implement caching - Reduce redundant fetches
  4. Use bulk operations - Batch where possible

Unexpected Quota Usage

  1. Audit API keys - Check for unauthorized usage
  2. Review scripts - Identify runaway automation
  3. Check activity logs - See who’s making requests
