LIVE PRODUCT

Research API

Run search discovery, fetch sources, and synthesize a cited answer through one async job.

Research API sits above search and extraction. It discovers source URLs, fetches readable evidence, and returns a synthesized answer plus source metadata so your application gets both the summary and the trail behind it.

Endpoint

POST /v1/research

Poll GET /v1/research/:jobId for completed or partial results.

Credits

12 credits per job

The current backend reserves and charges a flat amount per completed research job.

Output

Summary, detailed, or bullets

Choose output_format to shape the synthesis instruction sent to the LLM layer.

What it's for

  • research copilots that need sourced answers
  • competitive analysis pipelines with traceable source lists
  • automated reporting on fast-moving market topics
  • internal research tools that need both summary and citations
  • LLM workflows where source fetch and synthesis should happen server-side

How it works

  1. Submit a research query and choose depth, output format, and whether to include sources.
  2. OrbitScraper discovers candidate URLs, fetches readable content, and synthesizes a final answer through the configured LLM provider.
  3. Poll until the job reaches completed or partial, then read the summary, source list, and metadata.
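The poll step above can be sketched as a small Python loop. The `fetch` callable is a hypothetical wrapper around GET /v1/research/:jobId (not part of the documented API); only the terminal statuses `completed` and `partial` come from the contract.

```python
import time

def poll_research(job_id, fetch, interval=2.0, timeout=120.0):
    """Poll a research job until it reaches a terminal status.

    `fetch` is any callable that takes a job id and returns the job
    payload as a dict, e.g. an HTTP GET against /v1/research/:jobId.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        payload = fetch(job_id)
        # The docs name "completed" and "partial" as readable end states.
        if payload.get("status") in ("completed", "partial"):
            return payload
        time.sleep(interval)
    raise TimeoutError(f"research job {job_id} did not finish in {timeout}s")
```

Injecting the transport as a callable keeps the loop testable and lets you reuse it with any HTTP client.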

Request parameters

These are the fields accepted by the current backend contract for POST /v1/research.

Name             Type                          Required  Description
query            string                        Yes       Research prompt or question to investigate.
depth            integer                       No        Research depth. Defaults to 5. Allowed range 1-10.
output_format    summary | detailed | bullets  No        Controls the synthesis style. Defaults to summary.
include_sources  boolean                       No        Include the source list in the final result. Defaults to true.
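A minimal sketch of building and validating a request body against the table above. The helper name is hypothetical; the field names, defaults, and the 1-10 depth range come from the documented contract.

```python
def build_research_request(query, depth=5, output_format="summary",
                           include_sources=True):
    """Build a POST /v1/research body, enforcing the documented constraints."""
    if not query or not isinstance(query, str):
        raise ValueError("query is required and must be a non-empty string")
    if not 1 <= depth <= 10:
        raise ValueError("depth must be in the range 1-10")
    if output_format not in ("summary", "detailed", "bullets"):
        raise ValueError("output_format must be summary, detailed, or bullets")
    return {
        "query": query,
        "depth": depth,
        "output_format": output_format,
        "include_sources": include_sources,
    }
```

Validating client-side avoids spending a round trip on a request the backend will reject.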

Response fields

These fields describe the completed payload you read from the current public API contract.

Name                   Type     Description
query                  string   Original research query.
summary                string   Final synthesized answer returned by the LLM layer.
sources                array    Source entries with url, title, snippet, position, and engine. Empty when include_sources is false.
metadata               object   Execution metadata including status, failed_sources, serp_engine_used, and serp_provider_used.
provider               string   LLM provider used to generate the answer.
model                  string   Model identifier used for synthesis.
research_credits_used  integer  Credits charged for the job.
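The response fields above can be modeled with typed dictionaries, a sketch assuming the documented field names; `parse_result` is a hypothetical helper, not part of any SDK.

```python
from typing import List, TypedDict

class Source(TypedDict):
    url: str
    title: str
    snippet: str
    position: int
    engine: str

class ResearchResult(TypedDict):
    query: str
    summary: str
    sources: List[Source]
    metadata: dict
    provider: str
    model: str
    research_credits_used: int

def parse_result(payload: dict) -> ResearchResult:
    """Narrow a raw JSON dict; raises KeyError if a required field is absent."""
    return ResearchResult(
        query=payload["query"],
        summary=payload["summary"],
        sources=[Source(**s) for s in payload.get("sources", [])],
        metadata=payload.get("metadata", {}),
        provider=payload["provider"],
        model=payload["model"],
        research_credits_used=payload["research_credits_used"],
    )
```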

Code examples

The examples below show the raw HTTP request and poll flow with cURL; the same sequence works from Python, JavaScript, Java, or PHP.

bash
curl -X POST "https://api.orbitscraper.com/v1/research" \
  -H "x-api-key: ORS_live_1234567890" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Which AI chip vendors are gaining share in inference workloads?",
    "depth": 5,
    "output_format": "summary",
    "include_sources": true
  }'

curl -X GET "https://api.orbitscraper.com/v1/research/research_123456" \
  -H "x-api-key: ORS_live_1234567890"
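The same two calls can be sketched in Python with the standard library. This is a minimal sketch, not an official SDK: the helper names are hypothetical, and the endpoint, header, and placeholder key are copied from the cURL example above.

```python
import json
import urllib.request

API_KEY = "ORS_live_1234567890"  # placeholder key from the examples
BASE_URL = "https://api.orbitscraper.com/v1/research"

def build_submit_request(body: dict) -> urllib.request.Request:
    """POST /v1/research with the JSON body and API key header."""
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(body).encode(),
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        method="POST",
    )

def build_poll_request(job_id: str) -> urllib.request.Request:
    """GET /v1/research/:jobId to poll for results."""
    return urllib.request.Request(
        f"{BASE_URL}/{job_id}",
        headers={"x-api-key": API_KEY},
        method="GET",
    )

# Send either request with:
#   with urllib.request.urlopen(req) as resp:
#       payload = json.loads(resp.read())
```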

Response examples

This is the shape you get back from the current public API contract for Research API.

Queued response

The first response confirms the job was accepted and tells you what to poll next.

json
{
  "request_id": "req_xyz",
  "trace_id": "trace_xyz",
  "job_id": "research_123456",
  "status": "queued",
  "research_credits_reserved": 12
}

Completed response

After polling, this is the final payload shape your app reads.

json
{
  "job_id": "research_123456",
  "request_id": "req_xyz",
  "trace_id": "trace_xyz",
  "status": "completed",
  "query": "Which AI chip vendors are gaining share in inference workloads?",
  "summary": "NVIDIA remains dominant, while AMD and hyperscaler silicon are gaining share in targeted inference workloads.",
  "sources": [
    {
      "url": "https://example.com/ai-chip-landscape",
      "title": "Top AI chip hardware and chip-making companies in 2026",
      "snippet": "AMD and hyperscaler custom silicon continue to gain share...",
      "position": 1,
      "engine": "google"
    }
  ],
  "metadata": {
    "status": "completed",
    "failed_sources": [],
    "serp_engine_used": "google",
    "serp_provider_used": "live"
  },
  "provider": "openai",
  "model": "gpt-5-mini",
  "research_credits_used": 12
}

  • The backend may return status=partial when some source fetches fail but the synthesis still completes.
  • API key scope required by the backend: research:read.
  • The current deployment bills 12 credits per successful research job.
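A small sketch of handling the partial case noted above. The helper name is hypothetical; the `status` and `metadata.failed_sources` fields come from the documented response shape.

```python
def summarize_outcome(payload: dict) -> str:
    """Describe a finished job, flagging partial results with failed sources."""
    status = payload.get("status")
    failed = payload.get("metadata", {}).get("failed_sources", [])
    if status == "partial" and failed:
        return f"partial: {len(failed)} source(s) failed to fetch"
    if status == "completed":
        return "completed"
    return status or "unknown"
```

Checking for partial explicitly lets your app keep the usable summary while logging which sources were dropped.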

Ready to build on Research API?

The current backend contract is already live. Use the docs page for request details and the pricing page for credit planning.
