GET /crawl/history
List Crawl History
curl --request GET \
  --url https://api.spidra.io/api/crawl/history \
  --header 'x-api-key: <api-key>'
Use this endpoint to browse all the crawl jobs your account has submitted. Each record shows the base URL, how many pages were crawled, the current job status, and how many credits were consumed. This is useful for building dashboards, auditing your usage, or picking up a job ID you want to work with in a follow-up request.
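For auditing, the per-job fields can be rolled up client-side. A minimal sketch in plain Python, operating on the parsed JSON body of this endpoint (field names taken from the response schema on this page):

```python
def summarize_usage(history):
    """Roll up a /api/crawl/history response body into simple totals.

    Expects the parsed JSON: {"jobs": [...], "total": ..., ...}.
    """
    jobs = history["jobs"]
    return {
        "jobs_on_page": len(jobs),
        # sum of pages actually crawled across the jobs on this page
        "pages_crawled": sum(j["pages_crawled"] for j in jobs),
        # total credits charged for the jobs on this page
        "credits_used": sum(j["credits_used"] for j in jobs),
        # ids worth re-checking or retrying
        "failed": [j["id"] for j in jobs if j["status"] == "failed"],
    }
```

Note this summarizes one page of results at a time; combine it with the pagination loop below the curl examples if you want account-wide totals.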

Pagination

The response is paginated. Use the page and limit query parameters to navigate through your history.
# First page (default: 10 results)
curl https://api.spidra.io/api/crawl/history \
  -H "x-api-key: YOUR_API_KEY"

# Second page with 25 results per page
curl "https://api.spidra.io/api/crawl/history?page=2&limit=25" \
  -H "x-api-key: YOUR_API_KEY"
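The same paging can be driven from code by walking page up to totalPages. A minimal sketch; the fetch callable is a stand-in for the HTTP GET shown above (carrying your x-api-key header), injected so the paging logic is self-contained:

```python
def iter_history(fetch, limit=25):
    """Yield every job across all pages of /api/crawl/history.

    `fetch(page, limit)` must return the parsed JSON body for that page;
    in real use it would issue the curl request shown above with your key.
    """
    page = 1
    while True:
        body = fetch(page, limit)
        yield from body["jobs"]
        # stop once the last page has been consumed
        if page >= body["totalPages"]:
            break
        page += 1
```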

Response Fields

Field | Type | Description
jobs | array | List of crawl job records for this page
jobs[].id | string | Unique job ID. Use it to call other endpoints such as GET /crawl/{id}/pages
jobs[].base_url | string | The starting URL that was crawled
jobs[].status | string | Job status: waiting, active, completed, or failed
jobs[].max_pages | integer | The maximum number of pages requested
jobs[].pages_crawled | integer | Actual number of pages successfully crawled
jobs[].created_at | string | ISO 8601 timestamp of when the job was created
jobs[].credits_used | number | Credits charged to your account for this job
total | integer | Total number of crawl jobs in your account
page | integer | The current page number
totalPages | integer | Total number of result pages available

Example Response

{
  "jobs": [
    {
      "id": "abc-123",
      "base_url": "https://example.com/blog",
      "status": "completed",
      "max_pages": 10,
      "pages_crawled": 8,
      "created_at": "2025-12-17T15:00:00Z",
      "credits_used": 25
    },
    {
      "id": "def-456",
      "base_url": "https://store.example.com/products",
      "status": "failed",
      "max_pages": 20,
      "pages_crawled": 3,
      "created_at": "2025-12-16T09:30:00Z",
      "credits_used": 8
    }
  ],
  "total": 34,
  "page": 1,
  "totalPages": 4
}
To get the full extracted data for a completed job, call GET /crawl/{id}/pages using the id from this response.
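Chaining the two endpoints can be sketched as below. This is a hypothetical helper: the placement of the job id in the pages path is inferred from the description above, so check the pages endpoint reference for the exact route.

```python
def completed_pages_urls(history, base="https://api.spidra.io/api/crawl"):
    """Build follow-up pages URLs for each completed job in a history body.

    The {id}-between-/crawl-and-/pages route is an assumption based on
    this page's description, not a confirmed path.
    """
    return [
        f"{base}/{job['id']}/pages"
        for job in history["jobs"]
        if job["status"] == "completed"  # only completed jobs have full data
    ]
```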

Authorizations

x-api-key (string, header, required)

Query Parameters

page (integer, default: 1)
Required range: x >= 1

limit (integer, default: 10)
Required range: 1 <= x <= 100
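The documented ranges (page at least 1, limit between 1 and 100) can be enforced client-side before the request is sent. A small sketch:

```python
def history_params(page=1, limit=10):
    """Clamp query parameters to the ranges this endpoint accepts."""
    return {
        "page": max(1, int(page)),               # page must be >= 1
        "limit": min(100, max(1, int(limit))),   # limit must be 1..100
    }
```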

Response

Paginated list of crawl jobs

jobs (object[])
total (integer)
page (integer)
totalPages (integer)