LIVE PRODUCT
Research API
Run search discovery, fetch sources, and synthesize a cited answer through one async job.
Research API sits above search and extraction. It discovers source URLs, fetches readable evidence, and returns a synthesized answer plus source metadata so your application gets both the summary and the trail behind it.
Endpoint
POST /v1/research
Poll GET /v1/research/:jobId for completed or partial results.
Credits
12 credits per job
The current backend reserves 12 credits when the job is queued and charges that flat amount per completed research job.
Output
Summary, detailed, or bullets
Choose output_format to shape the synthesis instruction sent to the LLM layer.
What it's for
- research copilots that need sourced answers
- competitive analysis pipelines with traceable source lists
- automated reporting on fast-moving market topics
- internal research tools that need both summary and citations
- LLM workflows where source fetch and synthesis should happen server-side
How it works
1. Submit a research query and choose depth, output format, and whether to include sources.
2. OrbitScraper discovers candidate URLs, fetches readable content, and synthesizes a final answer through the configured LLM provider.
3. Poll until the job reaches completed or partial, then read the summary, source list, and metadata.
Request parameters
These are the fields accepted by the current backend contract for POST /v1/research.
| Name | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | Research prompt or question to investigate. |
| depth | integer | No | Research depth. Defaults to 5. Allowed range 1-10. |
| output_format | string: "summary", "detailed", or "bullets" | No | Controls the synthesis style. Defaults to summary. |
| include_sources | boolean | No | Include the source list in the final result. Defaults to true. |
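The table above can be enforced client-side before spending credits. A small sketch of that validation, assuming the defaults and ranges listed (the helper name is illustrative, not part of the API):

```python
def build_research_request(query, depth=5, output_format="summary", include_sources=True):
    """Validate fields against the POST /v1/research contract and return the JSON body."""
    if not isinstance(query, str) or not query:
        raise ValueError("query is required and must be a non-empty string")
    if not 1 <= depth <= 10:
        raise ValueError("depth must be in the range 1-10")
    if output_format not in ("summary", "detailed", "bullets"):
        raise ValueError("output_format must be summary, detailed, or bullets")
    return {
        "query": query,
        "depth": depth,
        "output_format": output_format,
        "include_sources": bool(include_sources),
    }
```

Rejecting bad input locally avoids a round trip and a failed job for requests the backend would refuse anyway.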
Response fields
These fields describe the completed payload you read from the current public API contract.
| Name | Type | Description |
|---|---|---|
| query | string | Original research query. |
| summary | string | Final synthesized answer returned by the LLM layer. |
| sources | array | Source entries with url, title, snippet, position, and engine. Empty when include_sources is false. |
| metadata | object | Execution metadata including status, failed_sources, serp_engine_used, and serp_provider_used. |
| provider | string | LLM provider used to generate the answer. |
| model | string | Model identifier used for synthesis. |
| research_credits_used | integer | Credits charged for the job. |
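A short sketch of reading these fields from a completed payload. The field names come from the table above; the helper name and the flattened shape it returns are illustrative choices, not part of the API.

```python
def summarize_result(payload):
    """Pull the fields an app typically reads from a completed research payload."""
    sources = payload.get("sources") or []  # empty when include_sources is false
    metadata = payload.get("metadata", {})
    return {
        "summary": payload["summary"],
        "source_urls": [s["url"] for s in sources],
        "failed_sources": metadata.get("failed_sources", []),
        "credits": payload.get("research_credits_used"),
    }
```

Keeping the source URLs alongside the summary preserves the citation trail the endpoint is designed to return.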
Code examples
Start with the raw cURL request and poll flow below, then adapt the same Research API calls in Python, JavaScript, Java, or PHP.
curl -X POST "https://api.orbitscraper.com/v1/research" \
-H "x-api-key: ORS_live_1234567890" \
-H "Content-Type: application/json" \
-d '{
"query": "Which AI chip vendors are gaining share in inference workloads?",
"depth": 5,
"output_format": "summary",
"include_sources": true
}'
curl -X GET "https://api.orbitscraper.com/v1/research/research_123456" \
-H "x-api-key: ORS_live_1234567890"

Response examples
This is the shape you get back from the current public API contract for Research API.
Queued response
The first response confirms the job was accepted and tells you what to poll next.
{
"request_id": "req_xyz",
"trace_id": "trace_xyz",
"job_id": "research_123456",
"status": "queued",
"research_credits_reserved": 12
}

Completed response
After polling, this is the final payload shape your app reads.
{
"job_id": "research_123456",
"request_id": "req_xyz",
"trace_id": "trace_xyz",
"status": "completed",
"query": "Which AI chip vendors are gaining share in inference workloads?",
"summary": "NVIDIA remains dominant, while AMD and hyperscaler silicon are gaining share in targeted inference workloads.",
"sources": [
{
"url": "https://example.com/ai-chip-landscape",
"title": "Top AI chip hardware and chip-making companies in 2026",
"snippet": "AMD and hyperscaler custom silicon continue to gain share...",
"position": 1,
"engine": "google"
}
],
"metadata": {
"status": "completed",
"failed_sources": [],
"serp_engine_used": "google",
"serp_provider_used": "live"
},
"provider": "openai",
"model": "gpt-5-mini",
"research_credits_used": 12
}

Ready to build on Research API?
The current backend contract is already live. Use the docs page for request details and the pricing page for credit planning.