OrbitScraper FAQ
This page collects the practical questions developers usually ask before wiring OrbitScraper into production. It covers the platform overall, the current public backend contract, and the four live product families.
Products: 4 live APIs. SERP, Extract, Research, and Crawl are all live on the site.
Auth: x-api-key. Public routes use the current x-api-key contract.
Execution: POST, then poll. The public APIs are queue-backed and return final payloads after polling.
Start here: docs + product pages. Use the docs for request details and the product pages for implementation context.
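The submit-then-poll contract above can be sketched in a few lines. This is a minimal sketch, not the documented client: the terminal status values ("completed", "failed") and the job-status shape are assumptions, since the FAQ only states that results arrive after polling.

```python
import time
from typing import Any, Callable, Dict

BASE_URL = "https://api.orbitscraper.com"  # documented base URL

def auth_headers(api_key: str) -> Dict[str, str]:
    """Public routes authenticate with the x-api-key header."""
    return {"x-api-key": api_key, "Content-Type": "application/json"}

def poll_until_done(fetch_status: Callable[[], Dict[str, Any]],
                    max_attempts: int = 30,
                    delay_s: float = 1.0) -> Dict[str, Any]:
    """Poll a queue-backed job until it reaches a terminal status.

    Assumption: the job payload carries a "status" field that ends in
    "completed" or "failed"; adjust to the real contract as needed.
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(delay_s)
    raise TimeoutError("job did not reach a terminal status in time")
```

In practice, `fetch_status` would wrap a GET to the job's status route with `auth_headers(...)`; injecting it as a callable keeps the polling loop independent of any particular HTTP client.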
General questions
Start here if you want the quickest overview of OrbitScraper.
What is OrbitScraper?
Are all four OrbitScraper products live?
What base URL and auth header do I use?
The base URL is https://api.orbitscraper.com; pass your key as x-api-key: ORS_live_1234567890. The current backend contract expects x-api-key on public routes.
Do these APIs return data immediately?
Where should I start if I only need one integration?
Credits and billing
The billing model is different per product, so the details matter.
How many credits does each product use?
When are credits reserved and when are they charged?
Does pagination use more credits on SERP API?
Do SERP features like ads or people also ask cost extra credits?
Do unused plan credits roll over?
Do top-up credits expire?
Do subscriptions renew automatically?
Are taxes included in the listed prices?
What errors should I handle in client code?
SERP API FAQ
Questions developers usually ask before wiring search requests into an app.
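A payload builder can enforce the documented pagination bounds before a job is submitted. In this sketch, num, page, and markdown are the parameters named in the answers below; the query field name and the defaults are illustrative assumptions.

```python
def build_search_payload(query: str,
                         num: int = 10,
                         page: int = 1,
                         markdown: bool = False) -> dict:
    """Build a SERP job payload.

    num (1-20) and page (1-100) follow the documented bounds; the
    "query" field name is an assumption, not the documented contract.
    """
    if not 1 <= num <= 20:
        raise ValueError("num must be between 1 and 20")
    if not 1 <= page <= 100:
        raise ValueError("page must be between 1 and 100")
    payload: dict = {"query": query, "num": num, "page": page}
    if markdown:
        payload["markdown"] = True
    return payload
```

Validating client-side keeps out-of-range jobs from being queued (and billed) at all.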
Which search engines are supported by the current SERP API contract?
Search requests go to POST /v1/search.
Does SERP API support markdown output?
Set markdown=true on the request; the completed payload can then include a prompt-ready markdown rendering alongside the structured JSON fields.
How do pagination and page size work for search jobs?
Use num for results per page and page for the page number. The current contract allows 1 to 20 results per page and page numbers from 1 to 100.
What does the completed SERP response include?
The completed payload includes search_metadata, search_parameters, organic_results, related_searches, and optional modules such as local results, knowledge graph data, people also ask, and markdown output.
Extract API FAQ
Focused on pulling readable, structured page content without writing custom parsers.
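The answers below name three request parameters: render_js, output_format, and extract_fields. A minimal payload builder, assuming a url field for the target page and client-side validation that the docs do not spell out:

```python
from typing import List, Optional

def build_extract_payload(url: str,
                          render_js: bool = False,
                          output_format: str = "markdown",
                          extract_fields: Optional[List[str]] = None) -> dict:
    """Build an Extract job payload.

    render_js, output_format, and extract_fields are the documented
    parameters; the "url" field name and validation are assumptions.
    """
    allowed = {"markdown", "json", "text"}
    if output_format not in allowed:
        raise ValueError(f"output_format must be one of {sorted(allowed)}")
    payload: dict = {"url": url, "output_format": output_format}
    if render_js:
        payload["render_js"] = True
    if extract_fields:
        payload["extract_fields"] = extract_fields
    return payload
```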
Can Extract API render JavaScript before extraction?
Set render_js to true when the page needs a browser-backed render path before parsing.
Which output formats does Extract API support?
markdown, json, and text, selected through output_format.
What can I send in extract_fields?
extract_fields is an optional array of extraction hints. The current docs examples use fields like title, body, author, and price.
What do I get back from a completed extract job?
A structured object with extracted fields, fetch metadata, and the credits charged for that extraction.
Research API FAQ
Useful when you want the API to discover sources and produce a synthesized answer.
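The depth bounds and include_sources toggle described below can be enforced before submission. A sketch under stated assumptions: the query field name is illustrative, and the 1-to-10 range with a default of 5 follows the answer on depth.

```python
def build_research_payload(query: str,
                           depth: int = 5,
                           include_sources: bool = True) -> dict:
    """Build a Research job payload.

    depth (integer 1-10, default 5) and include_sources follow the FAQ;
    the "query" field name is an illustrative assumption.
    """
    if not isinstance(depth, int) or not 1 <= depth <= 10:
        raise ValueError("depth must be an integer from 1 to 10")
    return {"query": query,
            "depth": depth,
            "include_sources": include_sources}
```

Setting include_sources=False trims the payload to the synthesized answer, which is useful when the source list would only be discarded client-side.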
What does the depth parameter control in Research API?
depth controls how much research work the backend performs. The current contract accepts an integer from 1 to 10 and defaults to 5.
What does status=partial mean on a research job?
Can I remove sources from the final research payload?
Set include_sources to false if you only want the synthesized answer and do not need the source list in the returned payload.
What does a completed research response include?
Crawl API FAQ
Built around bounded crawls, progress tracking, and page-level billing.
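The bounded-crawl and page-level billing model described below can be sketched with two small helpers. domain, max_pages, include_patterns, and exclude_patterns come from the answers in this section; the validation and the exact reserve-then-charge arithmetic are assumptions.

```python
from typing import List, Optional

def build_crawl_payload(domain: str,
                        max_pages: int,
                        include_patterns: Optional[List[str]] = None,
                        exclude_patterns: Optional[List[str]] = None) -> dict:
    """Build a crawl payload around the documented seed field, domain.

    max_pages and the pattern lists follow the FAQ; the validation
    performed here is an assumption, not the documented contract.
    """
    if max_pages < 1:
        raise ValueError("max_pages must be at least 1")
    payload: dict = {"domain": domain, "max_pages": max_pages}
    if include_patterns:
        payload["include_patterns"] = include_patterns
    if exclude_patterns:
        payload["exclude_patterns"] = exclude_patterns
    return payload

def crawl_credits_charged(completed_pages: int, max_pages: int) -> int:
    """Reserve up to max_pages, then charge 1 credit per completed page."""
    return min(completed_pages, max_pages)
```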
What field do I send to start a crawl?
Use domain as the crawl seed field, not a generic URL field. You can send a domain or a starting URL and the backend will normalize it.
How do include_patterns and exclude_patterns work?
Use include_patterns to keep the crawl inside specific path groups, and exclude_patterns to block paths you do not want processed.
Can I cancel a crawl after it starts?
The delete endpoint is DELETE /v1/crawl/:jobId. Once a crawl is already running, the current public contract does not expose general cancellation for in-flight work.
How are crawl credits charged?
The backend reserves credits up to your max_pages budget, then charges 1 credit per completed page as the crawl finishes.