API Reference

Rate Limits

Understand API rate limits and how to optimize your integration

Rate limits protect the API from abuse and ensure stable performance for all users. This guide explains limits, how to check your usage, and strategies for optimization.

Rate Limit Tiers

Professional: 1,000 requests/hour
  • Per API key
  • Sliding window
  • Burst allowance: 100/min

Enterprise: 10,000 requests/hour
  • Per API key
  • Sliding window
  • Burst allowance: 500/min

Custom: negotiated limits
  • Dedicated infrastructure
  • SLA guarantees
  • Priority support

Rate Limit Headers

Every API response includes rate limit information in headers:

  • X-RateLimit-Limit: Maximum requests allowed per hour (example: 1000)
  • X-RateLimit-Remaining: Requests remaining in the current window (example: 847)
  • X-RateLimit-Reset: Unix timestamp when the current window resets (example: 1633028400)
  • Retry-After: Seconds to wait before retrying (sent only when rate limited) (example: 1800)
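
A client can read these headers after each response to see how much quota remains and how long until the window resets. A minimal sketch; the checkRateLimit helper name is illustrative, not part of the API:

// Inspect rate limit headers on a response and report time until the window resets
function checkRateLimit(response) {
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const resetAt = parseInt(response.headers.get('X-RateLimit-Reset'), 10); // Unix timestamp in seconds

  const secondsUntilReset = Math.max(0, resetAt - Math.floor(Date.now() / 1000));

  console.log(`${remaining}/${limit} requests remaining; window resets in ${secondsUntilReset}s`);
  return { limit, remaining, secondsUntilReset };
}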

429 Response

When you exceed the rate limit, you'll receive a 429 status code:

Example Response

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1633028400
Retry-After: 1800

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "API rate limit exceeded. Please retry after 1800 seconds.",
    "retry_after": 1800
  }
}

Handling Rate Limits

Exponential Backoff

Retry with increasing delays when rate limited

// Retry a request with exponential backoff, honoring Retry-After when present
async function apiRequestWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    // Anything other than 429 (success or another error) is returned to the caller
    if (response.status !== 429) {
      return response;
    }

    // Prefer the server-provided Retry-After; otherwise back off exponentially (1s, 2s, 4s, ...)
    const retryAfter = response.headers.get('Retry-After');
    const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, i) * 1000;

    await new Promise(resolve => setTimeout(resolve, delay));
  }

  throw new Error('Max retries exceeded');
}

Monitoring Usage

Track your rate limit usage to avoid hitting limits

const response = await fetch(url, options);

// Read the rate limit headers to compute how much of the hourly quota has been used
const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const usagePercent = ((limit - remaining) / limit) * 100;

if (usagePercent > 80) {
  console.warn('API rate limit usage at', usagePercent.toFixed(1), '%');
  // Implement throttling or queue requests
}

Optimization Strategies

Best Practices:
  • Cache responses when data doesn't change frequently (see the caching sketch after this list)
  • Use pagination instead of fetching all records
  • Batch requests when possible (e.g., bulk create)
  • Use webhooks instead of polling for real-time updates
  • Implement request queuing during high-traffic periods
  • Respect rate limit headers and implement backoff
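
As a sketch of the caching point, a small in-memory cache can serve repeat reads without spending quota. The cachedFetch helper and the 5-minute TTL are assumptions for illustration, not part of the API:

// Minimal in-memory cache keyed by URL with a time-to-live (TTL)
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // assumed 5-minute freshness window

async function cachedFetch(url, options) {
  const cached = cache.get(url);
  if (cached && Date.now() - cached.fetchedAt < TTL_MS) {
    return cached.data; // served from cache; no API request spent
  }

  const response = await fetch(url, options);
  const data = await response.json(); // note: returns parsed JSON, not the Response object
  cache.set(url, { data, fetchedAt: Date.now() });
  return data;
}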

Endpoint-Specific Limits

Some endpoints have additional restrictions:

Export Endpoints

POST /courses/:id/export

  • Maximum: 10 exports per hour
  • Reason: Resource-intensive operations

Bulk Operations

POST /students/bulk, POST /enrollments/bulk

  • Maximum: 100 records per request
  • Maximum: 20 bulk requests per hour

Search Endpoints

GET /search/*

  • Maximum: 100 requests per hour
  • Cache results for 5+ minutes when possible

Increasing Limits

Need higher limits? Options include:

Upgrade to Enterprise: 10x rate limits plus dedicated support

Custom Plan: negotiated limits for high-volume needs

Multiple API Keys: distribute load across multiple keys; each key has independent limits (a rotation sketch follows)
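
If you do distribute traffic across keys, a simple round-robin selector is one way to spread requests evenly. A sketch only; the environment variable names are placeholders and the Bearer auth scheme is an assumption:

// Rotate through a pool of API keys so no single key absorbs all traffic
const apiKeys = [process.env.API_KEY_A, process.env.API_KEY_B]; // placeholder env vars
let nextKey = 0;

function withNextKey(options = {}) {
  const key = apiKeys[nextKey];
  nextKey = (nextKey + 1) % apiKeys.length;
  return {
    ...options,
    headers: { ...(options.headers || {}), Authorization: `Bearer ${key}` }, // assumed auth scheme
  };
}

// Usage: fetch(url, withNextKey({ method: 'GET' }))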

Monitoring Dashboard

Track your API usage in real-time:

  • Current rate limit usage across all keys
  • Historical usage trends
  • Breakdown by endpoint
  • 429 error frequency
  • Peak usage times
  • Set up alerts for approaching limits

Burst Protection

ℹ️ Token Bucket Algorithm: We use a token bucket algorithm that allows short bursts above the average rate. Professional: 100 requests/min burst. Enterprise: 500 requests/min burst. Sustained traffic is still limited to the hourly quota.
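
To stay under the burst allowance, a client can mirror the same idea locally: refill tokens at a steady rate and take one before each request. A minimal sketch, assuming the Professional burst of 100 requests/min as the bucket size:

// Client-side token bucket: capacity 100 tokens, refilled at 100 tokens per minute
const bucket = { tokens: 100, capacity: 100, refillPerMs: 100 / 60000, lastRefill: Date.now() };

async function takeToken() {
  // Refill based on the time elapsed since the last check
  const now = Date.now();
  bucket.tokens = Math.min(bucket.capacity, bucket.tokens + (now - bucket.lastRefill) * bucket.refillPerMs);
  bucket.lastRefill = now;

  if (bucket.tokens < 1) {
    // Not enough tokens: wait roughly long enough for one token to accumulate, then try again
    const waitMs = Math.ceil((1 - bucket.tokens) / bucket.refillPerMs);
    await new Promise(resolve => setTimeout(resolve, waitMs));
    return takeToken();
  }

  bucket.tokens -= 1;
}

// Usage: await takeToken() before each fetch to smooth out bursts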

Common Scenarios

Initial Data Sync

Challenge: Need to fetch all courses/students on first run

Solution: Use pagination with larger page sizes (up to 100), prefer cursor-based pagination, and spread requests over multiple hours if needed (see the sketch below)
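
A sketch of that approach, reusing the apiRequestWithRetry helper from above. The per_page and cursor query parameters and the data/next_cursor response fields are assumptions; the actual parameter names may differ:

// Walk all pages of a collection using cursor-based pagination
async function fetchAllPages(baseUrl, options) {
  const results = [];
  let cursor = null;

  do {
    const url = `${baseUrl}?per_page=100${cursor ? `&cursor=${encodeURIComponent(cursor)}` : ''}`; // assumed parameters
    const response = await apiRequestWithRetry(url, options); // backoff helper defined earlier
    const body = await response.json();

    results.push(...body.data);
    cursor = body.next_cursor; // assumed field; null/undefined when there are no more pages
  } while (cursor);

  return results;
}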

Real-Time Updates

Challenge: Need immediate notification of changes

Solution: Use webhooks instead of polling. Webhook deliveries aren't subject to rate limits (see the sketch below).
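
As an illustration, a minimal Express receiver might look like the sketch below. The /webhooks path and the event payload shape are assumptions; consult the webhooks documentation for the actual payload and signature verification:

const express = require('express');
const app = express();

// Accept webhook deliveries instead of polling the API for changes
app.post('/webhooks', express.json(), (req, res) => {
  const event = req.body; // payload shape is an assumption; see the webhooks documentation

  console.log('Received event:', event.type);
  // Hand off to your own processing here (queue, database update, etc.)

  res.sendStatus(200); // acknowledge quickly so the delivery isn't retried
});

app.listen(3000);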

Batch Processing

Challenge: Need to create many students/enrollments

Solution: Use bulk endpoints (e.g., /students/bulk), which each count as a single request. Process in batches of 100 records (see the sketch below).
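
A sketch of splitting a large list into 100-record chunks for the bulk endpoint, reusing the apiRequestWithRetry helper from above. The request body shape is an assumption:

// Split a large array into chunks of 100 and send each chunk to the bulk endpoint
async function bulkCreateStudents(students, options = {}) {
  for (let i = 0; i < students.length; i += 100) {
    const chunk = students.slice(i, i + 100);

    await apiRequestWithRetry('/students/bulk', {
      ...options,
      method: 'POST',
      headers: { ...(options.headers || {}), 'Content-Type': 'application/json' },
      body: JSON.stringify({ students: chunk }), // body shape is an assumption
    });
  }
}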

FAQ

Q: Does pagination count as multiple requests?

A: Yes, each page requested counts as one request. Use larger page sizes (up to 100) to reduce requests.

Q: Do webhook deliveries count against rate limits?

A: No, outgoing webhooks don't count against your rate limit.

Q: Can I have multiple API keys with independent limits?

A: Yes, each API key has its own rate limit. Use different keys for different services to isolate limits.

Q: What happens if I consistently hit rate limits?

A: Contact us to discuss upgrading to Enterprise or a custom plan with higher limits.

What's Next?