Documentation Index

Fetch the complete documentation index at: https://docs.thehog.ai/llms.txt

Use this file to discover all available pages before exploring further.

The Hog's generate endpoint lets you produce text, structured data, images, and social content using AI models grounded in your GTM context. There are four modes: prompt mode for free-form text, AI mode for multi-provider structured output, image generation, and social/context mode for generating replies and quotes tied to specific social posts or signals. Short text responses are synchronous; long-form generation and images have their own delivery paths.
Pass your X-Project-Id header (or include projectId in the request body) on every generate call. When a project is set, The Hog automatically applies your brand voice, messaging guidelines, and competitive context to every piece of generated content — no extra prompting required.

Endpoints

| Endpoint | Method | What it does |
| --- | --- | --- |
| /api/generate | POST | Text, structured output, or social content |
| /api/generate/image | POST | Image generation with binary file download |
| /api/generate/estimate | POST | Pre-flight credit and latency estimate |

Mode 1: Prompt mode (text)

Use prompt + length to generate free-form text. Short content is returned synchronously (HTTP 200); long content is queued and returned as an async operation (HTTP 202).
POST https://api.thehog.ai/api/generate
| Field | Type | Values | Description |
| --- | --- | --- | --- |
| prompt | string | | Your instruction or content brief |
| length | string | "short" / "long" | "short" → sync 200, "long" → async 202 |
| output | string | "text" (default) / "image" | Use "image" for base64 image output |
| projectId | string | | Project context for brand-aligned generation |
curl -X POST https://api.thehog.ai/api/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Project-Id: proj_abc123" \
  -d '{
    "prompt": "Write a two-sentence cold email subject line and opening for a VP of Sales at a Series B SaaS company that just raised funding.",
    "length": "short"
  }'

Sync response (short, HTTP 200)

{
  "data": {
    "content": "Subject: Congrats on the Series B — quick question\n\nHi Jordan, saw the news about your funding round — exciting milestone. I'd love to show you how teams scaling through a Series B are using [Product] to hit revenue targets faster.",
    "mode": "prompt"
  },
  "meta": {
    "requestId": "req_01hxyz",
    "cost": { "estimated": 1, "actual": 1 }
  }
}

Async accepted response (long, HTTP 202)

{
  "operationId": "op_01hxyz",
  "status": "queued",
  "pollUrl": "/api/operations/op_01hxyz",
  "meta": {
    "requestId": "req_01habc",
    "estimatedCost": 5
  }
}
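Both delivery paths can be normalized with one dispatcher on the HTTP status; a minimal Python sketch (the function name is illustrative):

```python
def handle_generate_response(status: int, body: dict) -> dict:
    """Normalize the two delivery paths into one result descriptor."""
    if status == 200:  # short content: returned inline
        return {"done": True, "content": body["data"]["content"]}
    if status == 202:  # long content: poll the operation
        return {"done": False,
                "operation_id": body["operationId"],
                "poll_url": body["pollUrl"]}
    raise RuntimeError(f"unexpected status {status}: {body}")
```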
Poll GET /api/operations/:id until status is "succeeded" and read result.content.
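The polling step can be sketched in Python. The `fetch` callable is injected so the sketch stays transport-agnostic; the `"failed"` status and error field are assumptions (only `"succeeded"` is documented above):

```python
import time

def poll_operation(fetch, poll_url: str, interval: float = 2.0, timeout: float = 120.0) -> dict:
    """Poll until the operation settles. `fetch` is any callable that GETs
    poll_url and returns the decoded JSON body."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = fetch(poll_url)
        if op["status"] == "succeeded":
            return op["result"]  # read result.content for text
        if op["status"] == "failed":  # assumed terminal state
            raise RuntimeError(op.get("error", "generation failed"))
        time.sleep(interval)
    raise TimeoutError(f"operation at {poll_url} did not settle in {timeout}s")
```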

Mode 2: AI mode (multi-provider + structured output)

AI mode gives you direct access to specific models with optional structured output via JSON Schema. Use this when you need deterministic output shapes, want to choose your model, or need to tune temperature and token limits.
| Field | Type | Description |
| --- | --- | --- |
| prompt | string | Your instruction |
| model | string | Provider and model, e.g. "openai:gpt-4.1" or "google-vertex:gemini-2.5-flash-lite" |
| schema | object | JSON Schema for structured output — response data conforms to this schema |
| systemPrompt | string | Override the default system message |
| temperature | number | Sampling temperature between 0 and 2 |
| maxTokens | number | Cap on output tokens |
curl -X POST https://api.thehog.ai/api/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Extract the three most compelling pain points for a VP of Sales from this product description: [your description here]",
    "model": "openai:gpt-4.1",
    "schema": {
      "type": "object",
      "properties": {
        "painPoints": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "title": { "type": "string" },
              "description": { "type": "string" },
              "severity": { "type": "string", "enum": ["high", "medium", "low"] }
            },
            "required": ["title", "description", "severity"]
          }
        }
      },
      "required": ["painPoints"]
    },
    "temperature": 0.3
  }'
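The server conforms output to the schema, but a defensive client-side check of the shape is still cheap. A hand-rolled sanity pass for the example schema above (not a full JSON Schema validator; the function name is illustrative):

```python
def check_pain_points(payload: dict) -> list[dict]:
    """Sanity-check the structured output shape before using it downstream."""
    points = payload["painPoints"]
    assert isinstance(points, list), "painPoints must be an array"
    for p in points:
        for key in ("title", "description", "severity"):
            assert key in p, f"missing required field: {key}"
        assert p["severity"] in ("high", "medium", "low")
    return points
```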

Mode 3: Image generation

The Hog supports two paths for image output:
1. Inline base64: include output: "image" in a standard generate request. The response includes imageBase64 (decode for display) and imageMimeType.
2. Binary download: POST to /api/generate/image to receive the image bytes as a file download.
curl -X POST https://api.thehog.ai/api/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A professional banner image for a B2B SaaS product focused on revenue intelligence. Clean, modern, dark blue and white.",
    "output": "image"
  }'
{
  "data": {
    "imageBase64": "iVBORw0KGgoAAAANSUhEUgAA...",
    "imageMimeType": "image/png",
    "mode": "prompt"
  }
}
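Decoding the inline payload to a file can be sketched as follows (the helper name and file stem are illustrative):

```python
import base64
import mimetypes
import pathlib

def save_generated_image(data: dict, stem: str = "generated") -> pathlib.Path:
    """Decode imageBase64 and write it with an extension matching imageMimeType."""
    raw = base64.b64decode(data["imageBase64"])
    ext = mimetypes.guess_extension(data["imageMimeType"]) or ".bin"
    path = pathlib.Path(stem).with_suffix(ext)
    path.write_bytes(raw)
    return path
```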

Mode 4: Social/context mode

Social mode generates context-aware replies or quote reposts tied to specific social content in your project. This is useful for automating or accelerating social engagement as part of your GTM motion.
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| projectId | string | Yes | Your project identifier |
| sourceType | string | Yes | Content type: linkedin_posts, reddit_posts, x_posts, reddit_keyword_posts, x_keyword_tweets |
| sourceId | string | Yes | The ID of the specific post to reply to or quote |
| executionType | string | No | "reply" (default) or "quote_repost" (LinkedIn only) |
| writingStyleId | string | No | ID of a saved writing style to apply |
| regenerate | boolean | No | Request a fresh generation for the same source |
curl -X POST https://api.thehog.ai/api/generate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "projectId": "proj_abc123",
    "sourceType": "linkedin_posts",
    "sourceId": "post_01hxyz",
    "executionType": "reply"
  }'
Social mode responses are synchronous and return the same data.content field as prompt mode.
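A request-body builder that enforces the field rules from the table above can be sketched in Python (the helper name is illustrative):

```python
ALLOWED_SOURCES = {"linkedin_posts", "reddit_posts", "x_posts",
                   "reddit_keyword_posts", "x_keyword_tweets"}

def social_generate_body(project_id: str, source_type: str, source_id: str,
                         execution_type: str = "reply", **optional) -> dict:
    """Build a social/context mode body; optional keys such as
    writingStyleId or regenerate pass through unchanged."""
    if source_type not in ALLOWED_SOURCES:
        raise ValueError(f"unknown sourceType: {source_type}")
    if execution_type == "quote_repost" and source_type != "linkedin_posts":
        raise ValueError("quote_repost is LinkedIn-only")
    return {"projectId": project_id, "sourceType": source_type,
            "sourceId": source_id, "executionType": execution_type, **optional}
```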

Pre-flight estimate

Before running large or expensive generate requests, call the estimate endpoint to preview credit cost and expected delivery mode.
curl -X POST https://api.thehog.ai/api/generate/estimate \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Write a detailed go-to-market playbook for a new revenue intelligence product.",
    "length": "long"
  }'
{
  "data": {
    "estimatedCredits": 8,
    "likelySyncOrAsync": "async",
    "expectedLatencyRange": "15–45s",
    "withinPlanLimits": true
  },
  "meta": {
    "requestId": "req_01hxyz"
  }
}
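The estimate response can gate an expensive call before it runs; a minimal sketch, assuming you track a credit budget client-side (the function name and budget parameter are illustrative):

```python
def should_run(estimate: dict, credit_budget: int) -> bool:
    """Decide whether to proceed based on the pre-flight estimate response."""
    data = estimate["data"]
    return data["withinPlanLimits"] and data["estimatedCredits"] <= credit_budget
```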

Response fields

| Field | Type | Description |
| --- | --- | --- |
| content | string | Generated text (all modes) |
| imageBase64 | string | Base64-encoded image bytes (prompt + output: "image" only) |
| imageMimeType | string | MIME type for the image, e.g. "image/png" |
| mode | string | "prompt" or "social" |
| metadata | object | Additional generation metadata |