GET /api/scrape-logs

List Scrape Logs
curl --request GET \
  --url https://api.spidra.io/api/scrape-logs \
  --header 'x-api-key: <api-key>'
{
  "status": "success",
  "data": {
    "logs": [
      {
        "uuid": "log-uuid-1",
        "status": "success",
        "started_at": "2025-06-07T15:00:00Z",
        "finished_at": "2025-06-07T15:00:05Z",
        "error_message": null,
        "urls": [
          {
            "url": "https://example.com"
          }
        ],
        "extraction_prompt": "Get the title",
        "tokens_used": 1500,
        "latency_ms": 5000,
        "credits_used": 2
      }
    ],
    "total": 42
  }
}
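The response envelope above can be unpacked in a few lines; a minimal Python sketch using the sample payload shown (field names are taken directly from the example, nothing else is assumed):

```python
import json

# Sample response body, copied from the example above.
sample = """
{
  "status": "success",
  "data": {
    "logs": [
      {
        "uuid": "log-uuid-1",
        "status": "success",
        "started_at": "2025-06-07T15:00:00Z",
        "finished_at": "2025-06-07T15:00:05Z",
        "error_message": null,
        "urls": [{"url": "https://example.com"}],
        "extraction_prompt": "Get the title",
        "tokens_used": 1500,
        "latency_ms": 5000,
        "credits_used": 2
      }
    ],
    "total": 42
  }
}
"""

body = json.loads(sample)
logs = body["data"]["logs"]    # the current page of log entries
total = body["data"]["total"]  # total matching logs across all pages

for log in logs:
    print(log["uuid"], log["status"], log["credits_used"])
```

Note that `total` counts all matching logs, not just those on the current page, so it is the value to use when computing how many pages remain.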

Filtering Examples

# Get only successful scrapes
/api/scrape-logs?status=success

# Search by URL
/api/scrape-logs?searchTerm=amazon.com

# Date range
/api/scrape-logs?dateStart=2025-12-01&dateEnd=2025-12-31

# Combine filters with pagination
/api/scrape-logs?status=success&searchTerm=blog&limit=20&page=2
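Filter strings like the ones above can be built programmatically instead of by hand; a minimal Python sketch assuming only the query parameters documented below (the base URL is taken from the curl example):

```python
from urllib.parse import urlencode

BASE = "https://api.spidra.io/api/scrape-logs"

def build_logs_url(**filters):
    """Build a scrape-logs URL from the documented query parameters
    (page, limit, status, searchTerm, dateStart, dateEnd),
    dropping any filter left as None."""
    params = {k: v for k, v in filters.items() if v is not None}
    return f"{BASE}?{urlencode(params)}" if params else BASE

# Reproduces the "combine filters with pagination" example:
url = build_logs_url(status="success", searchTerm="blog", limit=20, page=2)
```

`urlencode` also percent-escapes values, so search terms containing spaces or `&` stay safe.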
The list endpoint omits result_data for performance. Use the single-log endpoint to retrieve full results.

Authorizations

x-api-key
string
header
required

Query Parameters

page
integer
default:1

Page number

Required range: x >= 1
limit
integer
default:10

Results per page

Required range: 1 <= x <= 100
status
enum<string>

Filter by status

Available options:
success,
error,
in_progress
searchTerm
string

Search by URL

dateStart
string<date>

Filter from date (YYYY-MM-DD)

dateEnd
string<date>

Filter to date (YYYY-MM-DD)
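A client can validate these parameters before sending a request; a hedged sketch enforcing the documented constraints (page >= 1, 1 <= limit <= 100, the status enum, and YYYY-MM-DD dates) — the function name is illustrative, not part of the API:

```python
from datetime import date

VALID_STATUSES = {"success", "error", "in_progress"}

def validate_log_filters(page=1, limit=10, status=None,
                         dateStart=None, dateEnd=None):
    """Raise ValueError if any filter violates the documented constraints."""
    if page < 1:
        raise ValueError("page must be >= 1")
    if not 1 <= limit <= 100:
        raise ValueError("limit must be between 1 and 100")
    if status is not None and status not in VALID_STATUSES:
        raise ValueError(f"status must be one of {sorted(VALID_STATUSES)}")
    for name, value in (("dateStart", dateStart), ("dateEnd", dateEnd)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on bad YYYY-MM-DD
    return True
```

Validating client-side turns a rejected request into an immediate, descriptive error instead of a round trip to the server.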

Response

200 - application/json

Paginated list of scrape logs

status
enum<string>
Available options:
success
data
object
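Because the list endpoint is paginated via page/limit and reports total, a client can walk every page. A sketch with the HTTP call injected as a function, so the paging logic stands alone and the stub below takes the place of a real request (the envelope shape follows the response example above):

```python
import math

def iter_all_logs(fetch_page, limit=10):
    """Yield every log by paging until `total` is exhausted.
    `fetch_page(page, limit)` must return the documented envelope:
    {"status": ..., "data": {"logs": [...], "total": N}}."""
    page = 1
    while True:
        body = fetch_page(page, limit)
        data = body["data"]
        yield from data["logs"]
        pages = math.ceil(data["total"] / limit) if data["total"] else 0
        if page >= pages:
            break
        page += 1

# Stub standing in for a real HTTP call, for demonstration only:
FAKE_LOGS = [{"uuid": f"log-{i}"} for i in range(5)]

def fake_fetch(page, limit):
    start = (page - 1) * limit
    return {"status": "success",
            "data": {"logs": FAKE_LOGS[start:start + limit],
                     "total": len(FAKE_LOGS)}}

all_logs = list(iter_all_logs(fake_fetch, limit=2))
```

In a real client, `fetch_page` would issue the GET request with the x-api-key header shown in the curl example; injecting it keeps the pagination logic testable without network access.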