AI agents are increasingly used to automate research, data collection, and reporting workflows. One of the most common needs: storing and sharing structured data. Google Sheets is the go-to collaborative spreadsheet — and now IteraTools lets any AI agent create, read, and write spreadsheets via a single API call.
In this tutorial, we'll walk through a complete agent workflow:
- Create a spreadsheet with headers
- Write research data to it
- Read it back as structured JSON
- Share the URL with a human
## The API
IteraTools is a consumption-based API for AI agents — you pay per call with no monthly fee. Three new endpoints handle Google Sheets:
| Endpoint | Price | What it does |
|---|---|---|
| POST /sheets/create | $0.005 | Create a new spreadsheet, get URL |
| POST /sheets/write | $0.003 | Write or append rows |
| GET /sheets/read | $0.002 | Read rows as JSON objects |
## Step 1: Create a spreadsheet
```bash
curl -X POST https://api.iteratools.com/sheets/create \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Market Research Q1 2025",
    "headers": ["Company", "Location", "Revenue", "Notes"]
  }'
```
Response:
```json
{
  "ok": true,
  "data": {
    "spreadsheet_id": "1abc...xyz",
    "url": "https://docs.google.com/spreadsheets/d/1abc...xyz/edit",
    "title": "Market Research Q1 2025"
  }
}
```
The spreadsheet is automatically set to public read-only, so you can share the URL immediately. Save the `spreadsheet_id` for the next steps.
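An agent should check the `ok` flag before trusting the payload, so a bad response fails fast instead of carrying an invalid ID into later steps. A small sketch (our own helper, not part of the API) that pulls out the two fields the rest of the workflow needs:

```python
def extract_sheet_info(resp: dict) -> tuple[str, str]:
    """Return (spreadsheet_id, url) from a parsed /sheets/create response.

    Raises RuntimeError on failure so the agent stops early rather than
    proceeding with a missing or invalid spreadsheet ID.
    """
    if not resp.get("ok"):
        raise RuntimeError(f"/sheets/create failed: {resp}")
    data = resp["data"]
    return data["spreadsheet_id"], data["url"]

# Using the sample response shown above:
sample = {
    "ok": True,
    "data": {
        "spreadsheet_id": "1abc...xyz",
        "url": "https://docs.google.com/spreadsheets/d/1abc...xyz/edit",
        "title": "Market Research Q1 2025",
    },
}
sheet_id, sheet_url = extract_sheet_info(sample)
```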
## Step 2: Write research data
```bash
curl -X POST https://api.iteratools.com/sheets/write \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "spreadsheet_id": "1abc...xyz",
    "values": [
      ["Acme Corp", "São Paulo", "R$ 12M", "Growing fast, contact CEO"],
      ["TechBR", "Florianópolis", "R$ 4M", "SaaS, looking for partnerships"],
      ["DataSul", "Curitiba", "R$ 8M", "Enterprise focus, good fit"]
    ],
    "mode": "append"
  }'
```
Response:
```json
{
  "ok": true,
  "data": {
    "updated_rows": 3
  }
}
```
Use `"mode": "append"` to add rows below existing data, or `"mode": "overwrite"` (the default) to replace from the starting cell.
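To make the two modes concrete, here is a rough local model of their semantics (our own illustration, under the assumption that overwrite replaces the existing data rows while append adds new rows below them):

```python
def simulate_write(existing_rows: list, values: list, mode: str = "overwrite") -> list:
    """Local model of /sheets/write semantics (illustration only).

    append:    keep what's there and add the new rows below it.
    overwrite: replace the data rows starting from the first data cell.
    """
    if mode == "append":
        return existing_rows + values
    return list(values)

sheet = [["Acme Corp", "São Paulo"]]
sheet = simulate_write(sheet, [["TechBR", "Florianópolis"]], mode="append")
# sheet now holds both rows; an "overwrite" call would have replaced the first.
```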
## Step 3: Read the data back
```bash
curl "https://api.iteratools.com/sheets/read?spreadsheet_id=1abc...xyz" \
  -H "Authorization: Bearer YOUR_KEY"
```
Response:
```json
{
  "ok": true,
  "data": {
    "headers": ["Company", "Location", "Revenue", "Notes"],
    "rows": [
      {"Company": "Acme Corp", "Location": "São Paulo", "Revenue": "R$ 12M", "Notes": "Growing fast, contact CEO"},
      {"Company": "TechBR", "Location": "Florianópolis", "Revenue": "R$ 4M", "Notes": "SaaS, looking for partnerships"},
      {"Company": "DataSul", "Location": "Curitiba", "Revenue": "R$ 8M", "Notes": "Enterprise focus, good fit"}
    ],
    "total": 3,
    "range": "Sheet1!A1:D4"
  }
}
```
The first row is automatically treated as headers, and all subsequent rows are returned as key-value objects — ready to use in your agent's logic without any parsing.
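Because each row comes back as a dict keyed by the header names, ordinary list operations are enough for downstream agent logic. For example, ranking the sample rows above by revenue (the parsing helper is our own, needed because cell values come back as plain strings like "R$ 12M"):

```python
rows = [
    {"Company": "Acme Corp", "Revenue": "R$ 12M"},
    {"Company": "TechBR", "Revenue": "R$ 4M"},
    {"Company": "DataSul", "Revenue": "R$ 8M"},
]

def revenue_millions(row: dict) -> float:
    # Crude parse of "R$ 12M" -> 12.0; our own helper, since the API
    # returns spreadsheet cells as plain strings, not numbers.
    return float(row["Revenue"].replace("R$", "").replace("M", "").strip())

top = sorted(rows, key=revenue_millions, reverse=True)
print([r["Company"] for r in top])  # ['Acme Corp', 'DataSul', 'TechBR']
```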
## Full Python agent example
Here's a complete example of an agent that does market research and stores results in a spreadsheet:
```python
import requests

API_KEY = "your_iteratools_key"
BASE = "https://api.iteratools.com"
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# 1. Create a spreadsheet
resp = requests.post(f"{BASE}/sheets/create", headers=HEADERS, json={
    "title": "Agent Research Results",
    "headers": ["Company", "Website", "Category", "Score"]
})
created = resp.json()["data"]
sheet_id = created["spreadsheet_id"]
sheet_url = created["url"]
print(f"📊 Spreadsheet created: {sheet_url}")

# 2. Simulate agent gathering data (in real use, this would call /search, /scrape, etc.)
research_results = [
    ["Startup A", "startupa.com", "SaaS", 8.5],
    ["Startup B", "startupb.io", "Marketplace", 7.2],
    ["Startup C", "startupc.com.br", "Fintech", 9.1],
]

# 3. Write to spreadsheet
resp = requests.post(f"{BASE}/sheets/write", headers=HEADERS, json={
    "spreadsheet_id": sheet_id,
    "values": research_results,
    "mode": "append"
})
print(f"✅ Wrote {resp.json()['data']['updated_rows']} rows")

# 4. Read back to verify
resp = requests.get(f"{BASE}/sheets/read", headers=HEADERS, params={"spreadsheet_id": sheet_id})
data = resp.json()["data"]
print(f"\n📋 Research results ({data['total']} companies):")
for row in data["rows"]:
    print(f"  • {row['Company']} ({row['Category']}) — Score: {row['Score']}")

print(f"\n🔗 Share link: {sheet_url}")
```
Output:
```text
📊 Spreadsheet created: https://docs.google.com/spreadsheets/d/1abc...xyz/edit
✅ Wrote 3 rows

📋 Research results (3 companies):
  • Startup A (SaaS) — Score: 8.5
  • Startup B (Marketplace) — Score: 7.2
  • Startup C (Fintech) — Score: 9.1

🔗 Share link: https://docs.google.com/spreadsheets/d/1abc...xyz/edit
```
## Why this matters for agents
The typical agent loop is: search → scrape → process → store → report. The "store and report" step used to require setting up a database, a reporting tool, or emailing CSV files. With /sheets/create, an agent can instantly produce a shareable, human-readable spreadsheet — no infrastructure needed.
Combined with IteraTools' other endpoints:
- `/search` to find companies
- `/scrape` to extract data from their websites
- `/sheets/create` + `/sheets/write` to store results
- `/tts` or `/email/send` to notify stakeholders
You have a complete research pipeline for under $0.10 total.
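The Sheets portion of that budget is easy to check from the price table above (the other endpoints' prices aren't listed in this post, so only the three Sheets calls are summed here):

```python
# Per-call prices from the endpoint table in this post
PRICES = {"/sheets/create": 0.005, "/sheets/write": 0.003, "/sheets/read": 0.002}

# One create + one write + one read
sheets_cost = sum(PRICES.values())
print(f"${sheets_cost:.3f}")  # $0.010
```

One cent for the whole store-and-report step leaves most of the $0.10 budget for search and scraping.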
## Getting started
- Get an API key at iteratools.com
- Call `POST /sheets/create` to create your first spreadsheet
- Use the returned `spreadsheet_id` for all subsequent reads and writes
Full documentation: iteratools.com/docs
IteraTools is a consumption-based API with 41 tools designed for AI agents — image generation, web scraping, TTS, WhatsApp, PDF processing, and now Google Sheets. No subscriptions, pay per call.