If you are building an AI Agent (using OpenAI, LangChain, or AutoGen), you likely face the biggest pain point: The Knowledge Cutoff.
To fix this, we need to give the LLM access to Google or Bing.
Typically, developers turn to SerpApi or Google Custom Search JSON API. They are great, but they have a massive problem: Cost.
- SerpApi costs about $0.01 per search.
- If your Agent runs a loop and searches 100 times to debug a task, you just spent $1. It adds up fast.
I recently found a new alternative on RapidAPI called SearchCans. It provides both Search (SERP) and URL-to-Markdown Scraping (like Firecrawl) but at a fraction of the cost (~90% cheaper).
Here is how to integrate it into your Python project in under 5 minutes.
Step 1: Get the Free API Key
First, go to the RapidAPI page and subscribe to the Basic (Free) plan to get your key. It gives you 50 free requests to test (Hard Limit, so no surprise bills).
Get your Free SearchCans API Key Here
Step 2: The Python Code
You don't need to install any heavy SDKs. Just use requests.
Here is a clean SearchClient class I wrote that handles both searching Google/Bing and scraping web pages into clean text for your LLM.
```python
import requests

class SearchCansClient:
    def __init__(self, rapid_api_key):
        self.base_url = "https://searchcans-google-bing-search-web-scraper.p.rapidapi.com"
        self.headers = {
            "X-RapidAPI-Key": rapid_api_key,
            "X-RapidAPI-Host": "searchcans-google-bing-search-web-scraper.p.rapidapi.com",
            "Content-Type": "application/json"
        }

    def search(self, query, engine="google"):
        """Search Google or Bing and get JSON results."""
        payload = {
            "s": query,
            "t": engine,  # 'google' or 'bing'
            "d": 10000,   # timeout in ms
            "p": 1        # page number
        }
        response = requests.post(f"{self.base_url}/search", json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def scrape(self, url):
        """Scrape a URL and convert it to clean text/markdown for LLMs."""
        payload = {
            "s": url,
            "t": "url",
            "b": True,    # return body text
            "w": 3000,    # wait time in ms
            "d": 30000,   # max timeout in ms
            "proxy": 0
        }
        response = requests.post(f"{self.base_url}/url", json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()
```
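Any SERP call can fail transiently (timeouts, rate limits on the free tier), so in an agent loop it helps to wrap requests in a small retry helper. Here is a generic sketch; the `with_retries` function and its backoff values are my own additions, not part of the SearchCans API:

```python
import time

def with_retries(fn, attempts=3, backoff=1.0):
    """Call fn(); on exception, wait with exponential backoff and retry.

    Re-raises the last exception if every attempt fails.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(backoff * (2 ** attempt))

# Usage with the client above:
# results = with_retries(lambda: client.search("latest ai news"))
```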
```python
# --- Usage Example ---

# 1. Replace with your Key from RapidAPI
MY_API_KEY = "YOUR_RAPIDAPI_KEY_HERE"

client = SearchCansClient(MY_API_KEY)

# Test 1: Search for something real-time
print("Searching...")
results = client.search("latest spacex launch news")

# Print the first result title
if 'data' in results and len(results['data']) > 0:
    print(f"Top Result: {results['data'][0]['title']}")
    print(f"Link: {results['data'][0]['url']}")

    # Test 2: Scrape the content for RAG
    print("\nScraping content...")
    # Let's scrape the first link we found
    target_url = results['data'][0]['url']
    page_data = client.scrape(target_url)

    # Show a snippet
    print(f"Content Scraped! Length: {len(str(page_data))} chars")
else:
    print("No results found.")
```
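Once you have search results and scraped page text, the usual next step in a RAG loop is packing them into one prompt context. Here is a minimal sketch; the `build_context` helper and the character limit are my own, and it assumes the result items carry `title` and `url` keys as in the example above:

```python
def build_context(results, scraped_texts, max_chars=4000):
    """Pack search results and scraped page text into one prompt context.

    results:       list of dicts with 'title' and 'url' keys
    scraped_texts: list of plain-text page bodies, same order as results
    """
    blocks = []
    for item, text in zip(results, scraped_texts):
        blocks.append(f"Source: {item['title']} ({item['url']})\n{text}")
    context = "\n\n---\n\n".join(blocks)
    # Truncate so the context fits the model's window
    return context[:max_chars]

# Example with dummy data (a real run would feed in client.search / client.scrape output)
results = [{"title": "SpaceX Launch", "url": "https://example.com/a"}]
texts = ["Starship completed its test flight..."]
prompt_context = build_context(results, texts)
```

You would then prepend `prompt_context` to your LLM prompt so the model can cite fresh sources.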
Why I switched
For my side projects, I couldn't justify the monthly subscription of the big players.
| Feature | SerpApi | SearchCans |
|---|---|---|
| Price per 1k req | ~$10.00 | ~$0.60 |
| Search Engine | Google/Bing | Google/Bing |
| Web Scraper | No (Separate tool) | Included |
| Setup | Easy | Easy |
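The per-request gap in the table compounds quickly for agent workloads. A quick back-of-envelope check, using the prices listed above:

```python
searches_per_day = 1000

# Prices from the comparison table (per 1k requests)
serpapi_cost_per_day = searches_per_day / 1000 * 10.00   # ~$10.00 per 1k
searchcans_cost_per_day = searches_per_day / 1000 * 0.60 # ~$0.60 per 1k

monthly_savings = (serpapi_cost_per_day - searchcans_cost_per_day) * 30
print(f"Monthly savings at {searches_per_day} searches/day: ${monthly_savings:.2f}")
# → Monthly savings at 1000 searches/day: $282.00
```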
If you are building an MVP or a personal AI assistant, this saves a ton of money.
You can try the Free Tier here:
SearchCans on RapidAPI
Happy coding!