OrbitScraper FAQ

This page collects the practical questions developers usually ask before wiring OrbitScraper into production. It covers the platform overall, the current public backend contract, and the four live product families.

Products: SERP, Extract, Research, and Crawl are the four live APIs on the site.

Auth: public routes use the current x-api-key contract.

Execution: the public APIs are queue-backed; you POST a job, then poll for the final payload.

Start here: use the docs for request details and the product pages for implementation context.

General questions

Start here if you want the quickest overview of OrbitScraper.

What is OrbitScraper?
OrbitScraper is a web data platform with four live API products: SERP API, Extract API, Research API, and Crawl API. It gives developers one place to run search, page extraction, research synthesis, and bounded crawl workloads without maintaining queueing and parser logic in-house.
Are all four OrbitScraper products live?
Yes. The current public product lineup is live across SERP API, Extract API, Research API, and Crawl API.
What base URL and auth header do I use?
Send requests to https://api.orbitscraper.com and pass your key in the x-api-key header, for example x-api-key: ORS_live_1234567890. The current backend contract expects x-api-key on all public routes.
Do these APIs return data immediately?
No. The current contract is queue-backed: you send a POST request to create a job, then poll the product's status endpoint until the job reaches a final state (completed, partial, failed, cancelled, or expired, depending on the API family).
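The POST-then-poll loop above can be sketched as a small helper. This is a minimal illustration, not the official client: the terminal state names come from this FAQ, while the status-fetching callable is injected so the loop stays transport-agnostic (in real code you would wrap an HTTP GET to the product's status endpoint).

```python
import time

# Final states listed in the FAQ; which ones apply depends on the API family.
TERMINAL_STATES = {"completed", "partial", "failed", "cancelled", "expired"}

def poll_until_done(fetch_status, job_id, interval=2.0, timeout=300.0):
    """Poll fetch_status(job_id) until the job reaches a terminal state.

    fetch_status is any callable returning the job's status payload as a dict,
    e.g. a wrapper around GET on the product's status endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = fetch_status(job_id)
        if job.get("status") in TERMINAL_STATES:
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

A fixed interval is the simplest choice; for long-running crawl or research jobs, widening the interval as the job ages reduces wasted polls.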
Where should I start if I only need one integration?
Start with the main docs page if you want a unified overview. If you already know the workload, jump straight to the product page and docs for that API family.

Credits and billing

The billing model is different per product, so the details matter.

How many credits does each product use?
SERP API uses 1 credit per successful request. Extract API uses 2 credits per successful request. Research API uses 12 credits per successful job. Crawl API uses 1 credit per completed page.
When are credits reserved and when are they charged?
SERP API charges 1 credit on successful completion. Extract and Research reserve credits when the job is queued and finalize them when the job completes. Crawl reserves credits from the configured page budget, then charges against completed pages as the crawl progresses.
Does pagination use more credits on SERP API?
Yes. Each additional page is a new successful search request: if you request page 2 or page 3, each completed page consumes another search credit.
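The pagination billing above reduces to simple arithmetic: every completed results page is its own billed search. A quick sanity check, using the per-product rates stated in this FAQ:

```python
# Per-page/per-request credit costs stated in the FAQ.
SERP_CREDIT_PER_PAGE = 1
EXTRACT_CREDIT_PER_REQUEST = 2

def serp_credits(pages_completed: int) -> int:
    """Each completed SERP page is billed as one successful search."""
    return pages_completed * SERP_CREDIT_PER_PAGE

# Fetching pages 1-3 of a single query = 3 completed searches = 3 credits.
```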
Do SERP features like ads or people also ask cost extra credits?
No. If those modules are included in the same completed SERP response, they do not add separate billing on top of the successful search request itself.
Do unused plan credits roll over?
Plan usage follows the current billing-cycle limits. If you need additional long-horizon capacity beyond the cycle, OrbitScraper also supports one-time top-up credits.
Do top-up credits expire?
No. One-time top-up credits do not expire.
Do subscriptions renew automatically?
Yes. Subscription plans renew automatically unless they are canceled before the next billing date. Cancellation stops future renewals at the end of the active billing period.
Are taxes included in the listed prices?
Taxes may be calculated at checkout depending on billing location and the merchant-of-record requirements for that purchase flow.
What errors should I handle in client code?
The main public errors are 400, 401, 402, 404, 409, 429, 500, and 503. The safest client pattern is to validate request fields before enqueueing, treat 401 and 402 as account-level issues, and back off on 429, 500, and 503.
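The error policy above maps cleanly onto a small dispatcher. This is an illustrative client-side sketch, not part of the API contract; the action names and the exponential backoff schedule are choices made here for the example.

```python
# Account-level problems: a retry will not help until the key or balance is fixed.
ACCOUNT_ERRORS = {401, 402}
# Transient or rate-limit conditions worth retrying with backoff.
RETRYABLE = {429, 500, 503}

def classify(status_code: int) -> str:
    """Map an HTTP status to a coarse client action."""
    if 200 <= status_code < 300:
        return "ok"
    if status_code in ACCOUNT_ERRORS:
        return "fix_account"        # bad key or insufficient credits
    if status_code in RETRYABLE:
        return "retry_with_backoff"
    return "fail"                   # 400, 404, 409, ...: fix the request itself

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff: 1s, 2s, 4s, ... capped at 60s."""
    return min(cap, base * (2 ** attempt))
```

Adding jitter to the delay (a small random offset) is a common refinement when many clients might retry in lockstep after a 503.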

SERP API FAQ

Questions developers usually ask before wiring search requests into an app.

Which search engines are supported by the current SERP API contract?
The current backend contract supports Google, Bing, Brave, and DuckDuckGo through POST /v1/search.
Does SERP API support markdown output?
Yes. Set markdown=true on the request and the completed payload can include a prompt-ready markdown rendering alongside the structured JSON fields.
How do pagination and page size work for search jobs?
Use num for results per page and page for the page number. The current contract allows 1 to 20 results per page and page numbers from 1 to 100.
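Since invalid pagination values are rejected before enqueueing, it is worth validating them client-side. A minimal sketch of that guard, using the num and page limits stated above; the query field name here is an assumption for illustration, not confirmed by the docs:

```python
def build_search_params(query: str, num: int = 10, page: int = 1) -> dict:
    """Validate SERP pagination limits (num: 1-20, page: 1-100) before sending."""
    if not 1 <= num <= 20:
        raise ValueError("num must be between 1 and 20")
    if not 1 <= page <= 100:
        raise ValueError("page must be between 1 and 100")
    return {"query": query, "num": num, "page": page}
```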
What does the completed SERP response include?
A completed search result can include search_metadata, search_parameters, organic_results, related_searches, and optional modules such as local results, knowledge graph data, people also ask, and markdown output.

Extract API FAQ

Focused on pulling readable, structured page content without writing custom parsers.

Can Extract API render JavaScript before extraction?
Yes. Set render_js to true when the page needs a browser-backed render path before parsing.
Which output formats does Extract API support?
The current contract supports markdown, json, and text through output_format.
What can I send in extract_fields?
extract_fields is an optional array of extraction hints. The current docs examples use fields like title, body, author, and price.
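Putting the three options above together, an extract request body might be assembled like this. The render_js, output_format, and extract_fields names come from this FAQ; treating the body as a flat JSON object with a url field is an assumption for illustration.

```python
ALLOWED_FORMATS = {"markdown", "json", "text"}  # output_format values per the FAQ

def build_extract_request(url: str, output_format: str = "markdown",
                          render_js: bool = False, extract_fields=None) -> dict:
    """Assemble an extract job body, validating output_format client-side."""
    if output_format not in ALLOWED_FORMATS:
        raise ValueError(f"output_format must be one of {sorted(ALLOWED_FORMATS)}")
    body = {"url": url, "output_format": output_format, "render_js": render_js}
    if extract_fields:
        body["extract_fields"] = list(extract_fields)  # e.g. ["title", "price"]
    return body
```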
What do I get back from a completed extract job?
The completed payload can include the original URL, resolved title, formatted content, a structured object with extracted fields, fetch metadata, and the credits charged for that extraction.

Research API FAQ

Useful when you want the API to discover sources and produce a synthesized answer.

What does the depth parameter control in Research API?
depth controls how much research work the backend performs. The current contract accepts an integer from 1 to 10 and defaults to 5.
What does status=partial mean on a research job?
It means the synthesis completed, but some source fetches or sub-steps did not fully resolve. You still get a usable summary and metadata, but you should read the source and failure details before treating the output as fully complete.
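Because partial is a terminal state that still carries a usable answer, client code should branch on it rather than treat it as a failure. A sketch of that handling, with field names (summary, failed_sources) assumed for illustration:

```python
def summarize_research(job: dict) -> dict:
    """Extract the usable parts of a terminal research job, flagging partials."""
    status = job.get("status")
    if status == "completed":
        return {"summary": job.get("summary"), "complete": True, "failed_sources": []}
    if status == "partial":
        # Usable summary, but surface what failed so the caller can decide
        # whether the coverage is good enough for their use case.
        return {"summary": job.get("summary"), "complete": False,
                "failed_sources": job.get("failed_sources", [])}
    raise RuntimeError(f"research job ended in state {status!r}")
```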
Can I remove sources from the final research payload?
Yes. Set include_sources to false if you only want the synthesized answer and do not need the source list in the returned payload.
What does a completed research response include?
A completed research job can include the original query, synthesized summary, source list, metadata about failed sources and the SERP provider used, plus the LLM provider, model, and credits consumed for that run.

Crawl API FAQ

Built around bounded crawls, progress tracking, and page-level billing.

What field do I send to start a crawl?
The current contract uses domain as the crawl seed field, not a generic URL field. You can send a domain or a starting URL, and the backend will normalize it.
How do include_patterns and exclude_patterns work?
Both fields are optional string arrays. Use include_patterns to keep the crawl inside specific path groups and exclude_patterns to block paths you do not want processed.
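To build intuition for how those two arrays interact, here is a local sketch of a pattern gate: excludes win, then includes restrict, and with no includes everything else passes. Glob-style matching via fnmatch is an assumption here; the backend's exact pattern semantics may differ.

```python
from fnmatch import fnmatch
from urllib.parse import urlparse

def allowed(url: str, include_patterns=None, exclude_patterns=None) -> bool:
    """Decide whether a URL's path passes include/exclude pattern filters."""
    path = urlparse(url).path or "/"
    # Excludes take priority: a matching exclude blocks the page outright.
    if exclude_patterns and any(fnmatch(path, p) for p in exclude_patterns):
        return False
    # With includes present, the path must match at least one of them.
    if include_patterns:
        return any(fnmatch(path, p) for p in include_patterns)
    return True
```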
Can I cancel a crawl after it starts?
You can cancel queued crawl jobs through DELETE /v1/crawl/:jobId. Once a crawl is already running, the current public contract does not expose general cancellation for in-flight work.
How are crawl credits charged?
Crawl reserves credits from your configured max_pages budget, then charges 1 credit per completed page as the crawl finishes.
What does the crawl status payload return?
The status payload can include crawl progress totals such as pages found, completed, and failed, along with a per-page status list containing URL, page status, title, error code, and fetched timestamp.
