When You Need It
Use authenticated scraping when:
- The content is behind a login page
- You see different data when logged in vs logged out
- The website requires a subscription or account
- You want to scrape your own account data
How It Works
- Log into the target website in your browser
- Copy your session cookies
- Include them in your API request
- Spidra uses those cookies to access the page as if it were you
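Before sending copied cookies along, it can help to confirm the string parses cleanly. A quick check using Python's standard library (the cookie names and values here are made up):

```python
from http.cookies import SimpleCookie

# Parse a cookie string copied from the browser (values are made up)
# to confirm it is well formed before attaching it to a Spidra request.
cookie = SimpleCookie()
cookie.load("session=abc123; auth=xyz789")

for name, morsel in cookie.items():
    print(name, "=", morsel.value)
```

If `SimpleCookie` drops a cookie or raises an error, the copied string is malformed and the scrape will likely fail to authenticate.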
Getting Your Cookies
Step 1: Log In
Open the website in Chrome, Firefox, or Edge and log into your account.
Step 2: Open DevTools
Press F12 or right-click → Inspect → go to the Application tab (Chrome) or Storage tab (Firefox).
Step 3: Find Cookies
Click Cookies in the left sidebar, then select the website domain.
Step 4: Copy Cookies
You have two options:
Option A - Copy specific cookies: Look for authentication cookies (often named session, auth, token, or similar). Copy the name and value of each.
Option B - Copy all cookies:
Select all rows (Ctrl/Cmd+A), then copy (Ctrl/Cmd+C).
Cookie Formats
Spidra accepts cookies in two formats. The API auto-detects which format you’re using.
Standard Format
The traditional name=value format, separated by semicolons:
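For instance (names and values are made up):

```
session=abc123; auth=xyz789; user=jdoe
```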
Raw DevTools Format
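A raw paste from the DevTools cookies table is tab-separated, one cookie per row. It typically looks like this (the values are made up):

```
session	abc123	example.com	/	2025-06-01T00:00:00.000Z
auth	xyz789	example.com	/	Session
```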
You can paste rows directly from Chrome DevTools without reformatting them.
Examples
Scraping a dashboard
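The code for this example appears to be missing, so here is a minimal sketch of a scrape-job request with cookies attached. The endpoint URL and the field names (`url`, `cookies`) are assumptions for illustration, not taken from the Spidra API reference:

```python
import json
from urllib import request

# Hypothetical Spidra scrape-job endpoint -- this URL and the payload
# field names are assumptions, not the documented API.
API_URL = "https://api.spidra.example/v1/scrape"

payload = {
    "url": "https://app.example.com/dashboard",
    # Standard name=value cookie string copied from DevTools (values made up).
    "cookies": "session=abc123; auth=xyz789",
}

req = request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req) would submit the job; it is left out so this
# sketch runs without network access.
print(json.dumps(payload, indent=2))
```

Check the Submit a Scrape Job reference for the actual endpoint and parameter names.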
Crawling authenticated pages
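This example's code also appears to be missing. A sketch of a crawl-job payload, this time passing cookies in the raw DevTools format, since Spidra auto-detects the format. The field names (`start_url`, `max_pages`, `cookies`) are illustrative assumptions:

```python
import json

# Hypothetical Spidra crawl-job payload -- the field names below are
# assumptions for illustration, not the documented API.
payload = {
    "start_url": "https://app.example.com/account",
    "max_pages": 10,
    # Raw DevTools paste (tab-separated rows); Spidra auto-detects the format.
    "cookies": "session\tabc123\texample.com\t/\nauth\txyz789\texample.com\t/",
}
print(json.dumps(payload))
```

Check the Submit a Crawl Job reference for the actual endpoint and parameter names.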
Tips
- Session expiry: Cookies expire. If your scrape fails with authentication errors, get fresh cookies.
- Required cookies: You usually don’t need all cookies. Look for ones with names like session, auth, token, jwt, or user.
- Domain matching: Cookies are domain-specific. Make sure you’re copying cookies from the correct domain.
- Incognito test: Before scraping, try opening the URL in an incognito window with your cookies to verify they work.
Legal Responsibility
- Submit a Scrape Job: API reference
- Submit a Crawl Job: Crawl with authentication

