async/await concurrency throughout. All results come back as structured data ready to feed into your iOS, macOS, or server-side Swift applications.
Installation
Swift Package Manager
Add Spidra to your Package.swift dependencies:
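A minimal manifest entry might look like the sketch below — the repository URL, version, and product name are assumptions; check the official docs for the canonical coordinates:

```swift
// Package.swift — URL and version are illustrative, not canonical
dependencies: [
    .package(url: "https://github.com/spidra/spidra-swift", from: "1.0.0")
],
targets: [
    .executableTarget(
        name: "MyApp",
        dependencies: [.product(name: "Spidra", package: "spidra-swift")]
    )
]
```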
Get your API key from app.spidra.io under Settings → API Keys.
Never hardcode it in source files — use an environment variable instead.
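For example, read the key from the process environment at startup:

```swift
import Foundation

// Fail fast if the key is not configured rather than shipping a hardcoded one.
guard let apiKey = ProcessInfo.processInfo.environment["SPIDRA_API_KEY"] else {
    fatalError("SPIDRA_API_KEY is not set")
}
```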
Requirements
- Swift 5.9+
- iOS 15.0+ / macOS 12.0+ / tvOS 15.0+ / watchOS 8.0+
- A Spidra API key (sign up free)
Getting started
The client exposes five namespaces: spidra.scrape, spidra.batch, spidra.crawl, spidra.logs, and spidra.usage.
Quick start
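A minimal end-to-end call might look like this sketch — the client type name (SpidraClient) and the fields on the returned job are assumptions based on the namespaces and methods documented below:

```swift
import Foundation
import Spidra

let apiKey = ProcessInfo.processInfo.environment["SPIDRA_API_KEY"]!
let spidra = SpidraClient(apiKey: apiKey)

// Submit a single-URL scrape job and wait for it to finish.
let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(url: "https://example.com")],
    prompt: "Summarize the page in one sentence"
)
print(job.status)
```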
Scraping
All scrape jobs run asynchronously using Swift’s async/await. The run() method submits a job and polls until it finishes. Up to 3 URLs can be passed per request, and they are processed in parallel.
Basic scrape
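A basic scrape, sketched against the parameters in the table that follows — this assumes a configured client named `spidra`, and the payload field on the result is a guess:

```swift
let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(url: "https://news.ycombinator.com")],
    prompt: "Extract the titles of the top 5 stories",
    output: "markdown",
    extractContentOnly: true   // strip navigation, ads, and boilerplate first
)
// `data` as the result payload field is an assumption.
print(job.data ?? "no content")
```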
| Parameter | Type | Description |
|---|---|---|
| urls | [ScrapeUrl] | Up to 3 URLs, each with optional per-URL browser actions |
| prompt | String | AI extraction instruction |
| output | String | "markdown" (default) or "json" |
| schema | AnyCodable? | JSON Schema for guaranteed output shape |
| useProxy | Bool | Route through a residential proxy |
| proxyCountry | String? | Two-letter country code, e.g. "us", "de", "jp" |
| extractContentOnly | Bool | Strip navigation, ads, and boilerplate before AI extraction |
| screenshot | Bool | Capture a screenshot of the page |
| fullPageScreenshot | Bool | Capture a full-page (scrolled) screenshot |
| cookies | String? | Raw Cookie header string for authenticated pages |
Fire-and-forget approach
Use submit() and get() when you want to manage polling yourself.
Job statuses: waiting · active · completed · failed
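A manual polling loop with submit() and get() might look like this — the job-ID and status field names are assumptions, and a configured client `spidra` is assumed:

```swift
// Submit without waiting, then poll on your own schedule.
let submitted = try await spidra.scrape.submit(
    urls: [ScrapeUrl(url: "https://example.com")],
    prompt: "Extract the main heading"
)

var job = try await spidra.scrape.get(id: submitted.id)
while job.status == "waiting" || job.status == "active" {
    try await Task.sleep(nanoseconds: 2_000_000_000) // 2 s between polls
    job = try await spidra.scrape.get(id: submitted.id)
}
```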
Structured JSON output
Pass a schema to enforce an exact output shape. Missing fields come back as null rather than hallucinated values.
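A sketch of a schema-constrained scrape — building AnyCodable from a dictionary literal is an assumption about that type's API, and a configured client `spidra` is assumed:

```swift
// JSON Schema forcing an exact output shape; missing fields come back as null.
let schema: AnyCodable = [
    "type": "object",
    "properties": [
        "title": ["type": "string"],
        "price": ["type": "number"]
    ],
    "required": ["title", "price"]
]

let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(url: "https://example.com/product")],
    prompt: "Extract the product title and price",
    output: "json",
    schema: schema
)
```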
Geo-targeted scraping
Pass useProxy: true and a proxyCountry code to route through a residential IP in that country.
Supported country codes include us, gb, de, fr, jp, au, ca, br, in, nl, and 40+ more. Use "global" or "eu" for regional routing.
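For instance, routing a scrape through a German residential IP (assuming a configured client `spidra`):

```swift
let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(url: "https://example.de")],
    prompt: "Extract the localized headline",
    useProxy: true,
    proxyCountry: "de"   // two-letter code; or "global" / "eu"
)
```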
Authenticated pages
Pass cookies as a string to scrape pages that require a login session.
Browser actions
Actions let you interact with the page before the scrape runs. They execute in order.
| Action | Description |
|---|---|
| .click(selector:value:) | Click a button, link, or any element |
| .type(selector:value:) | Type text into an input or textarea |
| .check(selector:value:) | Check a checkbox |
| .uncheck(selector:value:) | Uncheck a checkbox |
| .wait(duration:) | Pause for a set number of milliseconds |
| .scroll(to:) | Scroll to a percentage of the page height |
| .forEach(observe:mode:...) | Loop over every matched element and process each |
forEach — loop over every element
forEach finds a set of elements and processes each individually. Best used when dealing with pagination, clicking into detail pages, or looping over long lists.
- inline — Read element content directly without navigating away.
- navigate — Follow each element’s link to its destination page and capture content there.
- click — Click each element, capture the content that appears (e.g., a modal), then move on.
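A forEach action in navigate mode might be sketched as follows — the `actions:` label on ScrapeUrl, the `.navigate` enum case, and any arguments beyond observe: and mode: are assumptions:

```swift
// Follow each search-result link and extract from the destination page.
let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(
        url: "https://example.com/search?q=swift",
        actions: [
            .wait(duration: 1000),                          // let results render
            .forEach(observe: "a.result-link", mode: .navigate)
        ]
    )],
    prompt: "Extract the article title and publication date from each page"
)
```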
Poll options
Override default polling intervals via PollOptions:
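A sketch of custom polling — the field names on PollOptions and the `pollOptions:` label are assumptions:

```swift
// Field names are illustrative — adjust to the actual PollOptions API.
let options = PollOptions(
    pollInterval: 5.0,   // seconds between status checks
    timeout: 300.0       // give up after 5 minutes
)

let job = try await spidra.scrape.run(
    urls: [ScrapeUrl(url: "https://example.com")],
    prompt: "Extract the page title",
    pollOptions: options
)
```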
The same PollOptions apply to batch.run() and crawl.run().
Batch scraping
Submit up to 50 URLs in a single request. All URLs are processed in parallel. Each URL is a plain string.
Batch statuses: pending · running · completed · failed · cancelled
You can also list(), retry(), or cancel() batches using the same pattern as scrape.
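A batch sketch, assuming a configured client `spidra` (the shape of the batch result is a guess):

```swift
// Batch scrape: plain URL strings, up to 50 per request.
let batch = try await spidra.batch.run(
    urls: [
        "https://example.com/a",
        "https://example.com/b",
        "https://example.com/c"
    ],
    prompt: "Extract the page title"
)

// Management calls follow the same pattern as scrape.
let recent = try await spidra.batch.list()
```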
Crawling
Given a starting URL, Spidra discovers pages automatically according to your instruction and extracts structured data from each one.
| Parameter | Type | Description |
|---|---|---|
| baseUrl | String | Starting URL for the crawl |
| crawlInstruction | String | Which links to follow and which to skip |
| transformInstruction | String | What to extract from each page |
| maxPages | Int | Maximum number of pages to crawl |
| useProxy | Bool | Route through a residential proxy |
| proxyCountry | String? | Two-letter country code, e.g. "us" |
| cookies | String? | Raw Cookie header string for authenticated sites |
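Putting the parameters above together, a crawl might look like this (assuming a configured client `spidra`):

```swift
let crawl = try await spidra.crawl.run(
    baseUrl: "https://blog.example.com",
    crawlInstruction: "Follow links to individual blog posts; skip tag and archive pages",
    transformInstruction: "Extract the post title, author, and publication date",
    maxPages: 25
)
```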
Download crawled content
Fetch signed download URLs for HTML and Markdown for all crawled pages. Links expire after 1 hour.
Logs
Every API scrape job is logged automatically.
Usage statistics
Returns credit and request usage broken down by day or week.
| Range | Description |
|---|---|
"7d" | Last 7 days, one row per day |
"30d" | Last 30 days, one row per day |
"weekly" | Last 7 weeks, one row per week |
Error handling
Every API error throws a SpidraError. Catch the specific case you care about.
| Error case | Status | When |
|---|---|---|
| .authenticationError | 401 | API key is missing or invalid |
| .insufficientCreditsError | 403 | No credits remaining |
| .rateLimitError | 429 | Too many requests — back off |
| .serverError | 500 | Unexpected server-side error |
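Catching specific cases might look like this sketch, assuming a configured client `spidra` and an enclosing async throwing context:

```swift
do {
    let job = try await spidra.scrape.run(
        urls: [ScrapeUrl(url: "https://example.com")],
        prompt: "Extract the page title"
    )
    print(job.status)
} catch SpidraError.rateLimitError {
    // 429 — back off before retrying.
    try await Task.sleep(nanoseconds: 10_000_000_000)
} catch SpidraError.authenticationError {
    print("Check that SPIDRA_API_KEY is set and valid")
} catch {
    print("Unexpected error: \(error)")
}
```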
.NET
Official .NET SDK — fully async, typed exceptions, JSON schema support. Requires .NET 8+.
Java
Official Java SDK — CompletableFuture-based, builder pattern, no extra HTTP dependencies.

