Send a POST request to /api/generate/estimate to get a pre-flight estimate for a generation job before you commit credits. The endpoint accepts the same parameters as POST /api/generate and returns the estimated credit cost, whether the request would run synchronously or asynchronously, and the expected latency range. No content is generated and no credits are charged.
Request
POST https://api.thehog.ai/api/generate/estimate
This endpoint accepts the same body parameters as POST /api/generate. Refer to that page for the full parameter reference. The key parameters that affect the estimate are:
"text" or "image". Affects model selection and credit cost.The generation prompt. Longer prompts increase estimated token counts.
"short" or "long". Determines whether the estimate predicts sync or async execution.Project scope. Affects context injection and billing grouping.
Social mode source type. Affects which generation path is estimated.
Social mode source row ID.
Social mode regeneration flag.
Writing style override for social mode.
Social mode execution type (
"reply" or "quote_repost").AI mode model override. Different models have different per-token credit rates.
AI mode JSON Schema for structured output.
AI mode system prompt override.
AI mode sampling temperature (0–2).
AI mode max output tokens. Setting a high value increases the upper bound of the credit estimate.
Response
Returns 200 OK synchronously. No credits are charged.
The response body contains the pre-flight estimate.
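Since the estimate reports a credit cost before any credits are spent, a client can use it as a budget gate before calling POST /api/generate. A minimal sketch, assuming hypothetical response fields "estimated_credits" and "execution" — this page does not specify the estimate's schema, so verify the actual field names first:

```python
def should_run(estimate: dict, credit_budget: float) -> bool:
    # "estimated_credits" is a hypothetical field name; an absent or
    # unknown cost is treated as infinite, so the job is not run.
    cost = estimate.get("estimated_credits", float("inf"))
    return cost <= credit_budget

# Stubbed estimate for illustration; a real one comes from the endpoint.
estimate = {"estimated_credits": 12.5, "execution": "async"}
if should_run(estimate, credit_budget=50.0):
    print("within budget; proceed to POST /api/generate")
```

Treating a missing cost as infinite fails closed: if the estimate schema changes or the call is misparsed, the client skips the paid generation rather than spending credits blindly.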