The official .NET SDK for Spidra lets you extract structured data from any website by describing what you want in plain English. It handles JavaScript rendering, anti-bot bypass, and CAPTCHA solving behind a managed API, so your code stays focused on the data.

Installation

dotnet add package Spidra
Requires .NET 8 or later.
Get your API key from app.spidra.io under Settings → API Keys. Keep your API key out of source control — read it from an environment variable or a secrets manager.

Getting started

All requests require an API key. Pass it to the client at initialization:
var client = new SpidraClient(Environment.GetEnvironmentVariable("SPIDRA_API_KEY")!);
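
If the variable may be unset, a fail-fast guard gives a clearer error than the null-forgiving `!` (a minimal sketch; only `SpidraClient` from the SDK is assumed):

```csharp
using Spidra;

// Fail fast with a clear message if the key is missing, instead of
// letting the first request fail with a 401.
var apiKey = Environment.GetEnvironmentVariable("SPIDRA_API_KEY")
    ?? throw new InvalidOperationException("SPIDRA_API_KEY is not set.");

var client = new SpidraClient(apiKey);
```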

Quick start

using Spidra;
using Spidra.Types.Scrape;

var client = new SpidraClient(Environment.GetEnvironmentVariable("SPIDRA_API_KEY")!);

var job = await client.Scrape.RunAsync(new ScrapeParams
{
    Urls = [new ScrapeUrl("https://news.ycombinator.com")],
    Prompt = "List the top 5 stories with title, points, and comment count",
    UseProxy = true
});

Console.WriteLine(job.Result.Content);
RunAsync submits the job and polls until it completes, then returns the result.

Scraping

RunAsync — submit and wait

var job = await client.Scrape.RunAsync(new ScrapeParams
{
    Urls   = [new ScrapeUrl("https://example.com/pricing")],
    Prompt = "Extract all pricing plans with name, price, and included features",
    Output = OutputFormat.Json
});

Console.WriteLine(job.Result.Content);
Parameters

Property             Type           Description
Urls                 ScrapeUrl[]    Up to 3 URLs, each with optional per-URL browser actions
Prompt               string         AI extraction instruction
Output               OutputFormat   OutputFormat.Markdown (default) or OutputFormat.Json
Schema               JsonElement?   JSON Schema for guaranteed output shape
UseProxy             bool           Route through a residential proxy
ProxyCountry         string?        Two-letter country code, e.g. "us", "de", "jp"
ExtractContentOnly   bool           Strip navigation, ads, and boilerplate before AI extraction
Screenshot           bool           Capture a screenshot of the page
FullPageScreenshot   bool           Capture a full-page (scrolled) screenshot
Cookies              string?        Raw Cookie header string for authenticated pages
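
Several of these options compose. For example, an authenticated scrape routed through a specific proxy country with a screenshot might look like this (a sketch using only the parameters documented above; the URL and cookie value are placeholders):

```csharp
var job = await client.Scrape.RunAsync(new ScrapeParams
{
    Urls               = [new ScrapeUrl("https://example.com/account/orders")],
    Prompt             = "Extract each order with date, total, and status",
    Output             = OutputFormat.Json,
    UseProxy           = true,
    ProxyCountry       = "de",                  // route through a German residential proxy
    ExtractContentOnly = true,                  // strip nav/ads before AI extraction
    Screenshot         = true,                  // capture the page for auditing
    Cookies            = "session=PLACEHOLDER"  // raw Cookie header for the logged-in session
});
```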

SubmitAsync + GetAsync — manual control

If you need to track progress yourself, use SubmitAsync and GetAsync directly:
var job = await client.Scrape.SubmitAsync(new ScrapeParams
{
    Urls   = [new ScrapeUrl("https://example.com")],
    Prompt = "Extract the main heading"
});

Console.WriteLine($"Job submitted: {job.JobId}");

while (job.Status is not ("completed" or "failed"))
{
    await Task.Delay(TimeSpan.FromSeconds(2));
    job = await client.Scrape.GetAsync(job.JobId);
    Console.WriteLine($"Status: {job.Status}");
}
Job statuses: waiting · active · completed · failed
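
The manual loop above can be wrapped in a reusable helper with a timeout (a sketch assuming the SubmitAsync/GetAsync calls shown above; the helper name and the `ScrapeJob` return type are hypothetical, not part of the SDK):

```csharp
// Hypothetical helper: polls GetAsync until the job reaches a terminal
// status, or throws OperationCanceledException when the timeout elapses.
static async Task<ScrapeJob> PollUntilDoneAsync(
    SpidraClient client, string jobId, TimeSpan timeout)
{
    using var cts = new CancellationTokenSource(timeout);
    while (true)
    {
        var job = await client.Scrape.GetAsync(jobId);
        if (job.Status is "completed" or "failed")
            return job;
        await Task.Delay(TimeSpan.FromSeconds(2), cts.Token); // throws on timeout
    }
}
```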

Structured output

Pass a JSON schema to get a typed, deserializable result instead of raw text:
using System.Text.Json;

var job = await client.Scrape.RunAsync(new ScrapeParams
{
    Urls   = [new ScrapeUrl("https://jobs.example.com/senior-engineer")],
    Prompt = "Extract the job title, company, location, and required skills",
    Output = OutputFormat.Json,
    Schema = JsonSerializer.SerializeToElement(new
    {
        type     = "object",
        required = new[] { "title", "company" },
        properties = new
        {
            title    = new { type = "string" },
            company  = new { type = "string" },
            location = new { type = new[] { "string", "null" } },
            skills   = new { type = "array", items = new { type = "string" } }
        }
    })
});

var listing = job.Result.Content.Deserialize<JobListing>(new JsonSerializerOptions
{
    PropertyNameCaseInsensitive = true
});

Console.WriteLine($"{listing!.Title} at {listing.Company}");
Console.WriteLine($"Skills: {string.Join(", ", listing.Skills ?? [])}");

record JobListing(string Title, string Company, string? Location, List<string>? Skills);
Fields in required always appear in the response (as null if the data is not found). Optional fields are omitted when unavailable.

Batch scraping

Process up to 50 URLs in one call. All URLs are processed in parallel.
using Spidra.Types.Batch;

var batch = await client.Batch.RunAsync(new BatchScrapeParams
{
    Urls =
    [
        "https://competitor-a.com/pricing",
        "https://competitor-b.com/pricing",
        "https://competitor-c.com/pricing"
    ],
    Prompt   = "Extract all pricing plans with name and monthly price",
    Output   = OutputFormat.Json,
    UseProxy = true
});

var succeeded = batch.Items.Where(i => i.Status == "completed").ToList();
Console.WriteLine($"{succeeded.Count}/{batch.Items.Count} succeeded");

foreach (var item in succeeded)
{
    Console.WriteLine($"{item.Url}: {item.Result}");
}
Item statuses: pending · running · completed · failed
Batch statuses: pending · running · completed · failed · cancelled

Retry failed items

if (batch.Items.Any(i => i.Status == "failed"))
    await client.Batch.RetryAsync(batch.BatchId);
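
If a retry still leaves individual items failed, one option is to fall back to single-URL scrapes for just those items (a sketch reusing the Scrape.RunAsync call documented above; `batch` is the result object from the batch example):

```csharp
foreach (var item in batch.Items.Where(i => i.Status == "failed"))
{
    // Re-run the failed URL as a single scrape, e.g. with a proxy enabled.
    var job = await client.Scrape.RunAsync(new ScrapeParams
    {
        Urls     = [new ScrapeUrl(item.Url)],
        Prompt   = "Extract all pricing plans with name and monthly price",
        UseProxy = true
    });
    Console.WriteLine($"{item.Url}: {job.Result.Content}");
}
```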

Crawling

Crawl an entire site and extract structured data from each page.
using Spidra.Types.Crawl;

var job = await client.Crawl.RunAsync(new CrawlParams
{
    BaseUrl              = "https://example.com/blog",
    CrawlInstruction     = "Find all blog posts published in 2024",
    TransformInstruction = "Extract title, author, publish date, and summary",
    MaxPages             = 30,
    UseProxy             = true
});

foreach (var page in job.Result)
{
    Console.WriteLine($"{page.Url}: {page.Data}");
}
Parameters

Property               Type      Description
BaseUrl                string    Starting URL for the crawl
CrawlInstruction       string    Which links to follow and which to skip
TransformInstruction   string    What to extract from each page
MaxPages               int       Maximum number of pages to crawl
UseProxy               bool      Route through a residential proxy
ProxyCountry           string?   Two-letter country code, e.g. "us"
Cookies                string?   Raw Cookie header string for authenticated sites
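
As with scraping, these parameters compose. A geo-pinned crawl of an authenticated site might look like this (a sketch using only the parameters in the table above; the URL and cookie value are placeholders):

```csharp
var job = await client.Crawl.RunAsync(new CrawlParams
{
    BaseUrl              = "https://example.com/members/articles",
    CrawlInstruction     = "Follow article links; skip profile and settings pages",
    TransformInstruction = "Extract title and publish date",
    MaxPages             = 10,
    UseProxy             = true,
    ProxyCountry         = "us",
    Cookies              = "session=PLACEHOLDER"  // raw Cookie header for the logged-in session
});
```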

Error handling

All exceptions inherit from SpidraException.
Exception                            When
SpidraAuthenticationException        401 — invalid or missing API key
SpidraInsufficientCreditsException   402 — not enough credits
SpidraRateLimitException             429 — rate limit exceeded
SpidraServerException                5xx — server-side error
using Spidra.Exceptions;

try
{
    var job = await client.Scrape.RunAsync(scrapeParams);
    return job.Result.Content;
}
catch (SpidraAuthenticationException)
{
    logger.LogError("Invalid API key. Check your SPIDRA_API_KEY.");
    throw;
}
catch (SpidraInsufficientCreditsException)
{
    logger.LogWarning("Out of scraping credits. Upgrade at spidra.io.");
    throw;
}
catch (SpidraRateLimitException ex)
{
    await Task.Delay(ex.RetryAfter ?? TimeSpan.FromSeconds(5));
    // retry...
}
catch (SpidraServerException)
{
    logger.LogError("Spidra server error.");
    throw;
}
SpidraRateLimitException.RetryAfter contains the server-suggested wait time when available.
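
The RetryAfter hint pairs naturally with a small backoff wrapper (a sketch; `RunWithBackoffAsync` is a hypothetical helper, not part of the SDK):

```csharp
// Hypothetical helper: retries a Spidra call on 429s, honoring the
// server-suggested wait when present and falling back to exponential backoff.
static async Task<T> RunWithBackoffAsync<T>(Func<Task<T>> action, int maxAttempts = 5)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            return await action();
        }
        catch (SpidraRateLimitException ex) when (attempt < maxAttempts)
        {
            var wait = ex.RetryAfter
                ?? TimeSpan.FromSeconds(Math.Pow(2, attempt)); // 2s, 4s, 8s, ...
            await Task.Delay(wait);
        }
    }
}
```

Usage: `var job = await RunWithBackoffAsync(() => client.Scrape.RunAsync(scrapeParams));`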
