Documentation Index

Fetch the complete documentation index at: https://docs.thehog.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Hog enforces rate limits on all API endpoints to ensure reliable service for every organization. Limits are tracked per organization, not per API key — so all keys belonging to the same organization share the same quota. When you exceed a limit, the API returns HTTP 429 and you must wait before retrying.

Global rate limit

Every authenticated endpoint is covered by a global, per-organization rate limit. The limit applies across all endpoints combined, so a burst of company searches counts against the same bucket as a burst of enrichment requests.
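Because every key in the organization draws from the same bucket, it can help to throttle requests client-side before the API has to. The sketch below is a minimal sliding-window throttle; `createThrottle` is our own helper, and the limit value is a placeholder, since this page does not state the global number. Substitute the limit documented for your plan.

```javascript
// Minimal client-side sliding-window throttle (a sketch, not part of any SDK).
// limitPerWindow is a placeholder: this page does not state the global limit,
// so substitute the number documented for your plan.
function createThrottle(limitPerWindow, windowMs = 60_000) {
  const timestamps = [];
  return async function waitForSlot() {
    const now = Date.now();
    // Discard request timestamps that have fallen out of the window
    while (timestamps.length && now - timestamps[0] > windowMs) timestamps.shift();
    if (timestamps.length >= limitPerWindow) {
      // Wait until the oldest request in the window expires
      const waitMs = windowMs - (now - timestamps[0]);
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
    timestamps.push(Date.now());
  };
}
```

Call `await waitForSlot()` before each API request; requests beyond the limit simply wait for a free slot instead of triggering a 429.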

Polling rate limit

GET /api/operations/:id has its own dedicated, more restrictive rate limit — separate from the global bucket — of 30 requests per minute per organization. This limit exists because polling is the highest-frequency access pattern and must not crowd out other API traffic.
Do not poll GET /api/operations/:id in a tight loop or on every tick of a UI refresh cycle. Aggressive polling will trigger HTTP 429 responses and block other requests your organization is making simultaneously.
Poll at 2–5 second intervals for short jobs (enrichment, generation). Use 10–30 second intervals for deep research jobs, which routinely take 30 seconds to several minutes to complete. If you receive a 429 on the poll endpoint, stop polling immediately and wait at least 10 seconds before resuming.
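Put together, a polling loop that follows these guidelines might look like the sketch below. The `getOperation` callback and the terminal `status` values on the operation object are assumptions about your client code and the response shape, not confirmed API details.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Sketch of a polite polling loop. `getOperation` is a hypothetical helper
// that fetches GET /api/operations/:id and throws an error whose `status`
// property is the HTTP status code; the "completed"/"failed" values are
// also assumptions about the operation's response shape.
async function pollOperation(getOperation, operationId, intervalMs = 3000) {
  while (true) {
    try {
      const op = await getOperation(operationId);
      if (op.status === "completed" || op.status === "failed") return op;
    } catch (err) {
      if (err.status !== 429) throw err;
      await sleep(10_000); // after a 429, back off at least 10 seconds
      continue;
    }
    await sleep(intervalMs); // 2–5 s for short jobs, 10–30 s for deep research
  }
}
```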

HTTP 429 response

When you hit a rate limit, the API returns:
{
  "statusCode": 429,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Please slow down and retry.",
  "path": "/api/operations/op_01HZ9K2QW3RV4M5N6P7Q8R9S0T",
  "requestId": "3f7a1c2e-88b4-4d0e-a1f5-0c9e2b3d7f4a",
  "timestamp": "2025-06-10T09:14:33.000Z"
}

Retrying after 429

Use exponential backoff when you receive a 429. Do not retry immediately — the limit window must pass before your request will succeed.
1. Catch the 429

Detect statusCode === 429 in the error body, or check response.status === 429 on the HTTP response.
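With the Fetch API, the detection step might look like this sketch. The thrown error's shape (a `status` property plus the `requestId` from the body) is our own convention, not something the API mandates:

```javascript
// Sketch: call an endpoint and surface HTTP errors, including 429,
// as a thrown Error carrying the status code and request ID.
async function callApi(url, options = {}) {
  const response = await fetch(url, options);
  if (!response.ok) {
    const body = await response.json().catch(() => ({}));
    const err = new Error(body.message || `HTTP ${response.status}`);
    err.status = response.status; // 429 when rate limited
    err.requestId = body.requestId; // handy for support tickets and logs
    throw err;
  }
  return response.json();
}
```

An error thrown this way exposes err.status, so it can be fed straight into a backoff wrapper like the one in the next step.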
2. Wait with exponential backoff

Start with a short delay and double it on each consecutive 429, up to a maximum. Add random jitter to avoid thundering-herd retries from parallel processes.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff(fn, maxRetries = 5) {
  let attempt = 0;
  while (attempt < maxRetries) {
    try {
      return await fn();
    } catch (err) {
      // Rethrow non-rate-limit errors, or give up once retries are exhausted
      if (err.status !== 429 || attempt === maxRetries - 1) throw err;
      const baseDelay = 1000 * Math.pow(2, attempt); // 1s, 2s, 4s, 8s, 16s
      const jitter = Math.random() * 500; // random jitter spreads out parallel retries
      await sleep(baseDelay + jitter);
      attempt++;
    }
  }
}
3. Resume polling at a slower interval

After a 429 on the poll endpoint, wait at least 10 seconds before polling again — even if your normal interval is shorter.
| Job type | Recommended interval |
| --- | --- |
| Enrichment (POST /api/people/enrich) | 2–5 seconds |
| Content generation (POST /api/generate) | 2–5 seconds |
| Deep research (POST /api/deep-research) | 10–30 seconds |
These intervals keep a single job within the 30 requests/minute polling limit. Note that concurrent jobs share the same per-organization bucket: a single job polled every 2 seconds already uses the full 30 requests/minute, so when several operations are in flight, favor the longer end of each range.
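If your client handles several job types, the table above can be encoded as a small helper. The job-type labels here are our own; only the endpoint-to-interval mapping comes from the table:

```javascript
// Sketch: choose a polling interval from the job type.
// The labels are illustrative; the intervals mirror the table above.
function pollingIntervalMs(jobType) {
  switch (jobType) {
    case "enrichment": // POST /api/people/enrich
    case "generation": // POST /api/generate
      return 3000; // within the recommended 2–5 s range
    case "deep-research": // POST /api/deep-research
      return 15000; // within the recommended 10–30 s range
    default:
      return 5000; // conservative fallback for unknown job types
  }
}
```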