Firecrawl CLI is a unified terminal tool for developers and AI agents to scrape, search, map, crawl, and automate browsers on any website. Output formats include clean markdown, JSON, screenshots, and more—written directly to your filesystem. You can run Firecrawl CLI instantly with npx firecrawl (no install required) or install globally, then connect to Claude Code, Cursor, or OpenCode using firecrawl init for immediate agent integration.
Firecrawl CLI is essential if you need reliable, real-time web data without fragile custom scripts or blocked requests. It unifies scraping, search, site mapping, recursive crawling, and cloud browser sessions into a single terminal-native tool. Outputs are tailored for LLMs, keeping token counts low and context precise. Agents like Claude Code, Cursor, and OpenCode use Firecrawl CLI daily to fetch fresh content from JavaScript-heavy or protected sites where traditional tools fail.
💡 Tip: Before starting with Firecrawl CLI, download Apidog for free. Use it to visually test and debug Firecrawl API endpoints, including keys, custom params, and responses. This saves significant time when setting up or troubleshooting agent integrations.
With Firecrawl CLI, you set up your environment, install and authenticate, explore core commands, and integrate with agents. It handles concurrency, rate limits, and local caching automatically. Fine-tune outputs and efficiency with flags like format selectors and wait timers.
What Firecrawl CLI Delivers vs. Traditional Tools
- Native JavaScript rendering: Uses cloud browsers to fetch content from complex or protected sites.
- LLM-optimized output: Returns markdown stripped of boilerplate, minimizing token usage.
- Filesystem-first: Writes files locally for easy bash-powered search and agent ingestion.
- Unified commands: Combines scrape, search, map, crawl, and browser automation—no need for separate libraries or proxies.
- Efficient flags: Options like `--only-main-content` yield cleaner, cheaper results.
Prepare Your Environment
- Check Node.js (required ≥18):
node --version
Update via your package manager or nvm if needed.
- Create a workspace directory:
mkdir firecrawl-cli-projects && cd firecrawl-cli-projects
Keep outputs organized and ready for git versioning.
- (Optional) Disable telemetry:
export FIRECRAWL_NO_TELEMETRY=1
Fastest Install: Init Method for Agents
Install, authenticate, and add agent skills in one command:
npx -y firecrawl-cli@latest init --all --browser
- Logs you in via browser, stores your API key, and sets up Claude Code, Cursor, and compatible agents.
- Restart your agent afterward to detect the new CLI capability.
Permanent Install: Global NPM
For ongoing use across projects:
npm install -g firecrawl-cli
firecrawl --version
- Firecrawl CLI now runs instantly from any directory.
Authenticate and Check Configuration
- Authenticate:
firecrawl login
Or set your API key manually:
export FIRECRAWL_API_KEY=fc-your-key-here
- Check CLI status:
firecrawl --status
firecrawl view-config
- Shows credits, concurrency, and auth state.
- Switch accounts: run `firecrawl logout`, then log in again.
- For self-hosted deployments: `--api-url http://localhost:3002`
Scraping Content
Extract content from any URL:
firecrawl scrape https://example.com --only-main-content
- Outputs clean markdown.
- Save to file:
firecrawl scrape https://example.com --only-main-content -o output.md
Request multiple formats:
firecrawl scrape https://example.com --format markdown,json,html,links,images --pretty
Take screenshots:
firecrawl scrape https://example.com --screenshot
or
firecrawl scrape https://example.com --full-page-screenshot
Handle slow loaders:
firecrawl scrape https://example.com --wait-for 5000
Precise filtering:
firecrawl scrape https://docs.example.com --include-tags main,article --exclude-tags nav,footer,script
Benchmark performance:
firecrawl scrape https://example.com --timing
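For batch jobs, the scrape command composes well with a short script. Below is a minimal sketch that shells out to the same `firecrawl scrape` invocation shown above and writes one markdown file per URL; the `slugify` helper and the `scrapes/` directory are our own conventions, not part of Firecrawl CLI.

```python
import re
import subprocess
from pathlib import Path

def slugify(url: str) -> str:
    """Turn a URL into a safe, descriptive filename stem."""
    stem = re.sub(r"^https?://", "", url)
    stem = re.sub(r"[^a-zA-Z0-9]+", "-", stem).strip("-")
    return stem.lower()

def scrape_all(urls: list[str], out_dir: str = "scrapes") -> None:
    """Scrape each URL to its own markdown file via the Firecrawl CLI."""
    Path(out_dir).mkdir(exist_ok=True)
    for url in urls:
        out_file = Path(out_dir) / f"{slugify(url)}.md"
        subprocess.run(
            ["firecrawl", "scrape", url, "--only-main-content", "-o", str(out_file)],
            check=True,
        )

# scrape_all(["https://example.com", "https://example.com/docs"])
```

Descriptive filenames like these make the output easy to grep and hand to an agent later.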
Performing Web Search
Search and scrape top results:
firecrawl search "latest AI agent benchmarks" --scrape --limit 8 --scrape-formats markdown
- Filter by recency: `--tbs qdr:w` (past week).
- Filter by location or source type as needed.
- Combine with browser sessions for deeper validation.
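If you drive searches from scripts, it helps to build the invocation in one place. A sketch using only the flags shown above (treat the exact flag set as an assumption to verify against your CLI version):

```python
def search_cmd(query: str, limit: int = 8, recency: str = "") -> list[str]:
    """Compose a `firecrawl search` invocation; pass recency like 'qdr:w' (past week)."""
    cmd = ["firecrawl", "search", query, "--scrape",
           "--limit", str(limit), "--scrape-formats", "markdown"]
    if recency:
        cmd += ["--tbs", recency]
    return cmd

# import subprocess
# subprocess.run(search_cmd("latest AI agent benchmarks", recency="qdr:w"), check=True)
```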
Mapping Websites
Discover URLs before deep extraction:
firecrawl map https://example.com -o sitemap.json
- Outputs structured lists. Chain filtered URLs into scrape or crawl commands.
- Honors `robots.txt` and follows polite crawling defaults.
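Once `map` has written `sitemap.json`, you can filter the discovered URLs before feeding them to scrape or crawl. A minimal sketch, assuming the output exposes the URLs as a list of strings (the `"links"` key is a guess; inspect your file and adjust):

```python
import json

def filter_urls(sitemap_path: str, substring: str) -> list[str]:
    """Load map output and keep only URLs containing `substring`."""
    with open(sitemap_path) as f:
        data = json.load(f)
    # Assumption: URLs live under a top-level "links" list; adjust the key
    # if your CLI version nests them differently.
    links = data["links"] if isinstance(data, dict) else data
    return [u for u in links if substring in u]

# docs_urls = filter_urls("sitemap.json", "/docs/")
```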
Recursive Crawling
Crawl entire sites:
firecrawl crawl https://example.com --wait --progress -o crawl-output.json
- Controls for depth, max pages, and concurrency.
- Real-time progress reporting.
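After a crawl finishes, a quick summary of the output file helps you sanity-check scope before pointing an agent at it. A sketch, assuming the output is a JSON array of page objects each carrying a `url` field (field names are an assumption; check your file first):

```python
import json
from collections import Counter

def summarize_crawl(path: str) -> Counter:
    """Count crawled pages per top-level path segment."""
    with open(path) as f:
        pages = json.load(f)
    sections = Counter()
    for page in pages:
        # Assumption: each entry has a "url" key; adjust to your output schema.
        parts = page["url"].split("/")
        sections[parts[3] if len(parts) > 3 else "/"] += 1
    return sections

# print(summarize_crawl("crawl-output.json").most_common(5))
```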
Browser Automation
Start cloud browser sessions for interactive flows:
firecrawl browser launch-session
- Returns a session ID. Then:
firecrawl browser execute "open https://news.ycombinator.com" --session <id>
firecrawl browser execute "click .titleline > a" --session <id>
firecrawl browser execute "scrape" --session <id>
- Supports navigation, clicks, typing, and extraction after JS interactions.
- Close sessions when done.
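The three `execute` calls above can be wrapped so every step shares the same session ID. A minimal sketch of that pattern; it assumes each step's result arrives on stdout (verify against your CLI's actual output):

```python
import subprocess

def build_cmd(session_id: str, instruction: str) -> list[str]:
    """Compose one `firecrawl browser execute` invocation."""
    return ["firecrawl", "browser", "execute", instruction, "--session", session_id]

def run_flow(session_id: str, steps: list[str]) -> list[str]:
    """Execute steps in order against one shared session, returning stdout per step."""
    outputs = []
    for step in steps:
        result = subprocess.run(build_cmd(session_id, step),
                                capture_output=True, text=True, check=True)
        outputs.append(result.stdout)
    return outputs

# run_flow(sid, ["open https://news.ycombinator.com", "click .titleline > a", "scrape"])
```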
Advanced Configuration
Set persistent options:
firecrawl config --api-url https://your-custom-endpoint --concurrency 5
- Adjust output format, headers, and concurrency globally.
- Export your API key in your shell profile for seamless sessions.
Integrate with AI Coding Agents
- Install the skill once:
npx -y firecrawl-cli@latest init --all
- Agents auto-discover Firecrawl CLI.
- CLI returns local file paths, not direct content—keeping agent context lean and efficient.
Troubleshooting
- Auth issues: re-run `firecrawl login`.
- Rate limits: lower concurrency or check your dashboard.
- Empty results (JS-heavy sites): increase `--wait-for` or use `--only-main-content`.
- Diagnostics: use `--timing`.
- Switch keys: `firecrawl logout`, then re-authenticate.
Best Practices
- Always use `--only-main-content` for clean markdown.
- Use descriptive filenames and dedicated folders.
- Test on small scopes before full crawls.
- Build pipelines: search → map → crawl.
- Version-control output directories.
- Monitor weekly credit usage for cost efficiency.
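The search → map → crawl pipeline from the list above can be scripted end to end using only commands shown earlier; the output filenames are our own convention.

```python
import subprocess

def run(cmd: list[str]) -> int:
    """Echo a command, then execute it, returning its exit code."""
    print("$", " ".join(cmd))
    return subprocess.run(cmd, check=True).returncode

def pipeline(query: str, site: str) -> None:
    """Find sources, enumerate a site's URLs, then crawl it fully."""
    run(["firecrawl", "search", query, "--limit", "5"])
    run(["firecrawl", "map", site, "-o", "sitemap.json"])
    run(["firecrawl", "crawl", site, "--wait", "--progress", "-o", "crawl-output.json"])

# pipeline("latest AI agent benchmarks", "https://example.com")
```

Start with a small site to confirm the flow before committing credits to a full crawl.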
Enhance Firecrawl CLI Workflows with Apidog
Download Apidog for free and import Firecrawl endpoints (scrape, search, crawl, etc.) into collections. Apidog visualizes requests, stores your Firecrawl API key as a variable, mocks responses, and runs automated tests. Debug advanced Firecrawl CLI options or payloads before running them in the terminal. Pairing Firecrawl CLI with Apidog keeps your workflows reliable and validated.
Conclusion
You now have actionable steps to install, authenticate, scrape, search, map, crawl, and automate browser sessions with Firecrawl CLI. Use the init command, test a scrape, and iterate—Firecrawl CLI rewards careful flag usage and experimentation with superior results.
Download Apidog for free to supercharge your Firecrawl CLI testing and API validation. Set up Firecrawl CLI, leverage its full capabilities, and unlock real-time web automation.
Additional resources
- Firecrawl CLI documentation → https://docs.firecrawl.dev/sdks/cli
- Firecrawl main site → https://www.firecrawl.dev
- GitHub repository → https://github.com/firecrawl/cli
- API reference → https://docs.firecrawl.dev/api-reference
- Dashboard / API key → https://app.firecrawl.dev
- Apidog free API client → https://apidog.com/?utm_source=dev.to&utm_medium=wanda&utm_content=n8n-post-automation