Target URL(s)
The first field allows you to add one or more URLs you want to scrape.
Operations
To use operations, toggle Dev Mode, which lets you use CSS or XPath selectors for page operations instead of natural language prompts. You need to write precise selectors for clicks, typing, and checking boxes. This is great for advanced users and for debugging tricky pages.
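For example, a click-then-type sequence in Dev Mode might look roughly like the sketch below. The structure and field names are illustrative only, not Spidra's exact syntax; the point is that each step pairs an action with a precise selector.

```python
# Hypothetical Dev Mode operations: each step pairs an action with a precise
# CSS or XPath selector. Field names here are illustrative, not Spidra's schema.
operations = [
    {"action": "click", "selector": "button#accept-cookies"},             # CSS selector
    {"action": "type", "selector": "input[name='q']", "text": "laptops"}, # CSS selector + text to type
    {"action": "check", "selector": "//input[@id='in-stock-only']"},      # XPath selector
]
```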
Extraction Prompt
This is the core LLM-powered field: describe in natural language what data you want extracted. For example:
- “Get all product titles and prices.”
- “Extract all blog post titles and their publish dates.”

The clearer your prompt, the better your results. Be specific about what fields to extract.
Output Format
You can select your preferred output format in the default output dropdown. Currently, Spidra supports only these formats:
- Markdown
- JSON
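As an illustration, a JSON result for the prompt “Get all product titles and prices” might be shaped roughly like this; the keys follow from your prompt, and the values below are made up:

```json
[
  {"title": "Wireless Mouse", "price": "$24.99"},
  {"title": "Mechanical Keyboard", "price": "$89.00"}
]
```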

SDK Integration
Above the output panel, click the code button and you’ll find a dynamically generated API code snippet. It shows you the SDK code in Python, JavaScript, and cURL.
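As a rough idea of what the generated Python snippet does, the sketch below sends the same configuration over HTTP. The endpoint, payload fields, and auth header are placeholders; copy the real snippet from the code button rather than this one.

```python
import requests

# Hypothetical request: the real endpoint, payload fields, and auth header
# come from the snippet generated in the UI, not from this sketch.
response = requests.post(
    "https://api.example.com/v1/scrape",                # placeholder endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},   # placeholder auth
    json={
        "urls": ["https://example.com/products"],
        "prompt": "Get all product titles and prices.",
        "output_format": "json",
    },
    timeout=60,
)
print(response.json())
```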

Advanced Configuration
The options available include:
- ✅ Stealth Mode (Proxy) : Enables proxy rotation and anti-bot protection.
- ✅ Scroll to Bottom : Auto-scrolls the page to load dynamic content.

These are checkboxes and can be toggled per scrape.
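If you drive scrapes through the API instead of the UI, these toggles would typically travel as boolean flags in the request body. The flag names below are guesses for illustration, not Spidra's documented parameters:

```python
# Hypothetical flags mirroring the Advanced Configuration checkboxes.
config = {
    "stealth_mode": True,       # proxy rotation + anti-bot protection
    "scroll_to_bottom": True,   # auto-scroll to load dynamic content
}
```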
Save or Scrape
- Save Preset : Stores your current configuration (URL, actions, prompt, config) into the Presets tab.
- Start Scrape : Runs the scrape job immediately and shows live output in the Output Section.

