Research API

The Research API sits above search and extraction: it discovers source URLs, fetches readable evidence, and returns a synthesized answer together with source metadata, so your application gets both the summary and the trail behind it.

Endpoint

POST /v1/research

Poll GET /v1/research/:jobId for completed or partial results.
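
The submit-then-poll flow can be sketched as a small client helper. This is an illustrative Python sketch, not an official SDK: the injectable `fetch` callable, the retry interval, and the `failed` terminal status are assumptions; only `completed` and `partial` appear on this page.

```python
import time

API_BASE = "https://api.orbitscraper.com"  # base URL from the curl examples below

def poll_research_job(job_id, fetch, interval_s=2.0, timeout_s=120.0):
    """Poll GET /v1/research/:jobId until the job reaches a terminal status.

    `fetch` is any callable that takes a URL path and returns the decoded
    JSON body, so the loop can be driven by requests, httpx, or a test stub.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        body = fetch(f"/v1/research/{job_id}")
        # "completed" and "partial" are documented statuses; "failed" is an
        # assumed terminal status for jobs that cannot finish.
        if body.get("status") in ("completed", "partial", "failed"):
            return body
        time.sleep(interval_s)
    raise TimeoutError(f"research job {job_id} did not finish in {timeout_s}s")
```

Injecting `fetch` keeps the polling logic independent of any particular HTTP library and makes it trivial to unit-test.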

Credits

12 credits per job

The backend reserves 12 credits when a job is queued and charges that flat amount when the job completes.

Output

Summary, detailed, or bullets

Choose output_format to shape the synthesis instruction sent to the LLM layer.

Request parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| query | string | Yes | Research prompt or question to investigate. |
| depth | integer | No | Research depth. Defaults to 5. Allowed range 1-10. |
| output_format | summary \| detailed \| bullets | No | Controls the synthesis style. Defaults to summary. |
| include_sources | boolean | No | Include the source list in the final result. Defaults to true. |
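
The parameter rules above can be enforced client-side before a job spends credits. A minimal Python sketch assuming the defaults and ranges in the table; `build_research_request` is a hypothetical helper name, not part of any SDK.

```python
ALLOWED_FORMATS = {"summary", "detailed", "bullets"}

def build_research_request(query, depth=5, output_format="summary",
                           include_sources=True):
    """Build a POST /v1/research body, enforcing the documented parameter rules."""
    if not isinstance(query, str) or not query.strip():
        raise ValueError("query is required and must be a non-empty string")
    if not 1 <= depth <= 10:
        raise ValueError("depth must be in the allowed range 1-10")
    if output_format not in ALLOWED_FORMATS:
        raise ValueError(f"output_format must be one of {sorted(ALLOWED_FORMATS)}")
    return {
        "query": query,
        "depth": depth,
        "output_format": output_format,
        "include_sources": bool(include_sources),
    }
```

Validating locally turns a would-be API error into an immediate exception with no round trip.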

Response fields

| Name | Type | Description |
| --- | --- | --- |
| query | string | Original research query. |
| summary | string | Final synthesized answer returned by the LLM layer. |
| sources | array | Source entries with url, title, snippet, position, and engine. Empty when include_sources is false. |
| metadata | object | Execution metadata including status, failed_sources, serp_engine_used, and serp_provider_used. |
| provider | string | LLM provider used to generate the answer. |
| model | string | Model identifier used for synthesis. |
| research_credits_used | integer | Credits charged for the job. |
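
The response fields above map naturally onto a small typed structure. An illustrative Python sketch assuming the completed-job payload shown later on this page; the dataclass names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    title: str
    snippet: str
    position: int
    engine: str

@dataclass
class ResearchResult:
    query: str
    summary: str
    sources: list      # list[Source]
    provider: str
    model: str
    research_credits_used: int

def parse_research_result(body: dict) -> ResearchResult:
    """Map a completed research payload onto typed fields."""
    return ResearchResult(
        query=body["query"],
        summary=body["summary"],
        # sources may be absent or empty when include_sources is false
        sources=[Source(**s) for s in body.get("sources", [])],
        provider=body["provider"],
        model=body["model"],
        research_credits_used=body["research_credits_used"],
    )
```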

Code examples

The examples below follow the current Research API contract.

Start with the raw HTTP request and poll flow.

```bash
# Start a research job
curl -X POST "https://api.orbitscraper.com/v1/research" \
  -H "x-api-key: ORS_live_1234567890" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Which AI chip vendors are gaining share in inference workloads?",
    "depth": 5,
    "output_format": "summary",
    "include_sources": true
  }'

# Poll for the result
curl -X GET "https://api.orbitscraper.com/v1/research/research_123456" \
  -H "x-api-key: ORS_live_1234567890"
```

Response examples

These are the payload shapes returned by the current public Research API contract.

Queued response

The first response confirms the job was accepted and tells you what to poll.

```json
{
  "request_id": "req_xyz",
  "trace_id": "trace_xyz",
  "job_id": "research_123456",
  "status": "queued",
  "research_credits_reserved": 12
}
```

Completed response

After polling, this is the final payload your app reads.

```json
{
  "job_id": "research_123456",
  "request_id": "req_xyz",
  "trace_id": "trace_xyz",
  "status": "completed",
  "query": "Which AI chip vendors are gaining share in inference workloads?",
  "summary": "NVIDIA remains dominant, while AMD and hyperscaler silicon are gaining share in targeted inference workloads.",
  "sources": [
    {
      "url": "https://example.com/ai-chip-landscape",
      "title": "Top AI chip hardware and chip-making companies in 2026",
      "snippet": "AMD and hyperscaler custom silicon continue to gain share...",
      "position": 1,
      "engine": "google"
    }
  ],
  "metadata": {
    "status": "completed",
    "failed_sources": [],
    "serp_engine_used": "google",
    "serp_provider_used": "live"
  },
  "provider": "openai",
  "model": "gpt-5-mini",
  "research_credits_used": 12
}
```

Operational notes

  • The backend may return status=partial when some source fetches fail but the synthesis still completes.
  • API key scope required by the backend: research:read.
  • The current deployment bills 12 credits per successful research job.
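
Since a job can finish as partial, an application should branch on status and inspect metadata.failed_sources before trusting the summary. An illustrative Python sketch; the `failed` status string is an assumption, and `summarize_outcome` is a hypothetical helper.

```python
def summarize_outcome(body: dict) -> str:
    """Return a short human-readable note about how a research job finished."""
    status = body.get("status")
    failed = body.get("metadata", {}).get("failed_sources", [])
    if status == "completed":
        return "completed: all sources fetched"
    if status == "partial":
        # Synthesis succeeded even though some source fetches failed.
        return f"partial: synthesis succeeded, {len(failed)} source(s) failed"
    return f"{status}: no usable result"
```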
