I've been using Strava for years to track my runs and rides. The data's all there — pace, heart rate, elevation, splits — but actually getting insight out of it always felt like work. You're clicking around dashboards, mentally comparing numbers, trying to spot trends.
Then I started wondering: what if I could just ask Claude about my workouts? In plain English. No dashboards, no clicking. Just "how has my pace improved over the last month?" and get a real answer.
Turns out you can. And building it taught me more about how AI tools actually work than anything I'd read before.
Here's how I did it.
First, What Even Is an MCP Server?
Before I get into the code, let me explain MCP — because when I first heard it I had no idea what it meant either.
MCP stands for Model Context Protocol. It's a standard that Anthropic created so that AI models like Claude can talk to the outside world in a consistent way.
Think of it like this. Claude on its own is incredibly smart, but it only knows what's in the conversation window. It can't reach out and check your Strava. It can't look at your emails. It has no hands.
An MCP server gives it hands.
Here's the simplest way I can describe it — Claude is a brilliant consultant sitting in a room. The MCP server is a waiter. The waiter has a menu of things it can go fetch (your recent activities, your heart rate zones, your personal bests). Claude reads your question, picks the right item off the menu, the waiter goes and gets it from Strava, and brings it back. Claude reads the result and answers you.
The Strava API underneath is the same one their mobile app uses. The only difference is that instead of you tapping a button to trigger a data fetch, Claude is deciding what to fetch based on your natural language question.
That clicked for me, and suddenly the whole thing made sense.
The Architecture
```
You (asking a question in Claude Desktop)
        ↓
      Claude
        ↓
MCP Server (Python running locally on your machine)
        ↓
   Strava API
        ↓
Your workout data comes back up the chain
```
Nothing fancy. A small Python program sitting on your laptop, waiting for Claude to call it.
Step 1: Getting Strava API Access
Strava has a public API that anyone can use for personal projects. You register a free app at strava.com/settings/api — give it any name, set the callback domain to localhost, and the website field can just be http://localhost (it's not validated).
You get a Client ID and Client Secret. These are your app's credentials.
But here's the thing — having credentials isn't enough. Strava also needs you, the account owner, to explicitly say "yes, this app can access my data." That's what OAuth2 is for.
The OAuth2 Dance (Yes, It's a Thing)
OAuth2 sounds intimidating but it's actually just three steps:
Step 1 — You open a special URL in your browser. Strava shows you an "Allow access?" screen. You click Allow.
Step 2 — Strava redirects your browser to http://localhost/?code=abc123xyz. The page fails to load (nothing's running there) but that's fine — you just copy the code= value from the URL bar.
Step 3 — You exchange that code for tokens by running this in your terminal:
```bash
curl -X POST "https://www.strava.com/oauth/token" \
  -d "client_id=YOUR_ID&client_secret=YOUR_SECRET&code=YOUR_CODE&grant_type=authorization_code"
```
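For reference, the "special URL" from Step 1 is just Strava's authorize endpoint with a few query parameters. A sketch of building it in Python — `CLIENT_ID` is a placeholder, and `activity:read_all` is the scope that covers private activities:

```python
from urllib.parse import urlencode

CLIENT_ID = "YOUR_ID"  # placeholder — use your app's Client ID

# Query parameters for Strava's OAuth authorize endpoint.
params = {
    "client_id": CLIENT_ID,
    "redirect_uri": "http://localhost",
    "response_type": "code",
    "scope": "activity:read_all",  # read access, including private activities
}
auth_url = "https://www.strava.com/oauth/authorize?" + urlencode(params)
print(auth_url)  # open this in a browser, then click Allow
```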
You get back two things: an access token (valid for 6 hours) and a refresh token (valid forever, as long as you use it regularly). The refresh token is what you store. Your server uses it to silently fetch fresh access tokens whenever it needs them — you never have to do this dance again.
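That silent refresh is only a few lines. Here's a standard-library sketch (my real server uses httpx, and the credential values are placeholders) — it caches the access token and only hits the token endpoint when the old one is close to expiring:

```python
import json
import time
import urllib.parse
import urllib.request

TOKEN_URL = "https://www.strava.com/oauth/token"

def build_refresh_payload(client_id: str, client_secret: str, refresh_token: str) -> str:
    """Form-encoded body for the refresh_token grant."""
    return urllib.parse.urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    })

_cache = {"access_token": None, "expires_at": 0}

def get_access_token(client_id: str, client_secret: str, refresh_token: str) -> str:
    # Reuse the cached token until shortly before it expires.
    if _cache["access_token"] and time.time() < _cache["expires_at"] - 60:
        return _cache["access_token"]
    body = build_refresh_payload(client_id, client_secret, refresh_token).encode()
    with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=body)) as resp:
        data = json.load(resp)  # response includes access_token and expires_at
    _cache["access_token"] = data["access_token"]
    _cache["expires_at"] = data["expires_at"]
    return _cache["access_token"]
```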
Step 2: Building the MCP Server
The server itself is a Python file using a library called fastmcp. The idea is simple — you write Python functions, decorate them with @mcp.tool(), and Claude can call them.
Here's a basic example:
```python
@mcp.tool()
def get_recent_activities(num_activities: int = 10) -> list:
    """Fetch the athlete's most recent Strava activities."""
    response = httpx.get(
        f"{STRAVA_API_BASE}/athlete/activities",
        headers={"Authorization": f"Bearer {get_access_token()}"},
        params={"per_page": num_activities},
    )
    return response.json()
```
That docstring isn't just a comment — Claude reads it to understand what the tool does and when to use it. Write it like you're explaining to Claude what this function is for, because that's exactly what you're doing.
I ended up building out tools for:
- Recent activities
- Full activity detail (splits, laps, best efforts)
- Heart rate zones per workout
- Weekly summary (this week vs last week)
- Personal bests for common distances
- Relative Effort / suffer score trends
- All-time athlete stats
Step 3: The Data Problem (And How I Solved It)
Here's where it got interesting.
Once I had the basic tools working, I wanted to do real analysis — year over year comparisons, monthly trends, spotting patterns across hundreds of workouts. So I added a tool called get_all_activities that just paginated through the Strava API until it had everything.
The problem? Strava's API returns a maximum of 200 activities per request. If you have 500 workouts, that's 3 separate API calls. Then you're dumping all of that raw data back to Claude at once — and Claude has a context window limit. There's only so much data it can hold in memory at once.
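The pagination half is mechanical. A sketch with a hypothetical `fetch_page` callable standing in for the real Strava request — it keeps asking for pages until one comes back short:

```python
def fetch_all(fetch_page, per_page=200):
    """Page through an API until a short (or empty) page signals the end."""
    activities, page = [], 1
    while True:
        batch = fetch_page(page=page, per_page=per_page)
        activities.extend(batch)
        if len(batch) < per_page:
            return activities
        page += 1

# Fake 500-activity "API" to illustrate: exactly 3 calls (200 + 200 + 100).
data = list(range(500))
calls = []

def fake_page(page, per_page):
    calls.append(page)
    start = (page - 1) * per_page
    return data[start:start + per_page]

result = fetch_all(fake_page)
print(len(result), calls)  # 500 [1, 2, 3]
```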
With hundreds of activities, I was hitting that wall.
The solution: a local SQLite database.
Instead of fetching everything from Strava on every question, I built a sync_activities tool that pulls all my data once and stores it in a local SQLite file (strava.db) right on my machine. Then all the analysis tools query the local database instead of the API.
First time: Strava API → sync → strava.db (local file)
Every question after: Claude queries strava.db directly (fast, no API limits)
After new workouts: incremental sync (only fetches what's new)
This solved two problems at once:
No more context window issues — instead of dumping raw JSON into Claude, I write SQL queries that aggregate the data first. "Total distance per year" becomes a single compact number, not 500 individual activity records.
Speed — local database queries are instant. No waiting for API responses.
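The aggregate-first idea in miniature, using plain sqlite3 with made-up numbers and a trimmed-down table (the real one has far more columns): three raw activity rows collapse into one compact row per year.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for the local strava.db file
conn.execute(
    "CREATE TABLE activities (id INTEGER PRIMARY KEY, start_date TEXT, distance REAL)"
)
rows = [
    (1, "2023-04-01", 10000.0),   # distances in metres
    (2, "2023-09-15", 5000.0),
    (3, "2024-01-10", 21097.5),
]
conn.executemany("INSERT INTO activities VALUES (?, ?, ?)", rows)

# Aggregate first: one row per year instead of raw JSON per activity.
per_year = conn.execute(
    "SELECT substr(start_date, 1, 4) AS year, ROUND(SUM(distance) / 1000, 1) AS km "
    "FROM activities GROUP BY year ORDER BY year"
).fetchall()
print(per_year)  # [('2023', 15.0), ('2024', 21.1)]
```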
The database has columns for everything useful: distance, duration, elevation, heart rate, suffer score, calories, PRs. And because I'm using INSERT OR REPLACE, re-syncing never creates duplicates.
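The no-duplicates behaviour falls out of keying INSERT OR REPLACE on the activity's primary key. A minimal sketch — `sync_activities` here is an invented helper with a cut-down schema, not the real tool:

```python
import sqlite3

def sync_activities(conn, fetched):
    """Upsert a batch of activities; re-syncing the same rows never duplicates."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS activities ("
        "id INTEGER PRIMARY KEY, name TEXT, distance REAL, suffer_score REAL)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO activities (id, name, distance, suffer_score) "
        "VALUES (:id, :name, :distance, :suffer_score)",
        fetched,
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
batch = [{"id": 1, "name": "Morning Run", "distance": 8000.0, "suffer_score": 42}]
sync_activities(conn, batch)
sync_activities(conn, batch)  # second sync: same row replaced, not duplicated
count = conn.execute("SELECT COUNT(*) FROM activities").fetchone()[0]
print(count)  # 1
```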
Step 4: Connecting to Claude Desktop
Claude Desktop has a config file where you tell it about MCP servers:
```json
{
  "mcpServers": {
    "strava": {
      "command": "/path/to/uv",
      "args": ["--directory", "/path/to/strava-mcp", "run", "server.py"]
    }
  }
}
```
One gotcha I hit: Claude Desktop doesn't inherit your terminal's PATH, so you can't just write "command": "uv" — it won't find it. You have to use the full absolute path, which you get by running which uv in your terminal.
Once that's saved, fully quit and reopen Claude Desktop. You'll see a hammer icon at the bottom of the chat — that's your tools connected and ready.
What I Can Ask Now
This is the fun part. After a full sync, I can ask things like:
"Give me a year by year breakdown of my total running distance"
"Which month was my biggest training month in 2024?"
"Has my pace on runs over 10km improved over the last 6 months?"
"Which workouts had the highest relative effort score — am I pushing hard enough?"
"How many PRs did I set last year vs the year before?"
"On weeks where I train more than 5 hours, how does my pace compare to lighter weeks?"
Claude figures out which tools to call, calls them, reads the results, and gives me a proper answer. Not a dashboard. An actual answer, in plain English, with context.
What I Learned
Going into this I didn't really understand MCP. A week later I'd built something I actually use.
A few things that stuck with me:
MCP is just middleware with a standard interface. The concept of a middleman sitting between two systems has existed for decades — MCP is just the version designed specifically for LLMs to use. Once I understood that, it stopped feeling magical and started feeling like plumbing.
OAuth2 feels complicated but it's three steps. Everyone makes it sound scary. It's not. Browser → get a code → exchange for tokens. Done.
Context windows are a real constraint, and databases are the answer. This was the most practically useful thing I learned. When your data is too big to dump into a prompt, you don't need a bigger prompt — you need smarter queries. Aggregate first, then pass the result to the model.
Docstrings are part of the interface. When you write @mcp.tool(), your docstring becomes Claude's instruction manual for that function. Write it clearly and Claude uses the tool correctly. Write it vaguely and it guesses wrong.
The full code is on GitHub: github.com/richyaj/strava-mcp
The whole thing — auth, tools, SQLite sync — is one Python file, about 300 lines. If you can run Python, you can run this.
This was genuinely one of the more satisfying things I've built.
The Charts Claude Generated
Here's what the output actually looks like. These charts were generated entirely by Claude using data pulled from my Strava via the MCP server — no manual work, just asking a question and getting a rendered visualisation back.
Running Pace History (Avg km/h by Year)
Eight years of running data in one view. You can clearly see the dip in 2021 and the steady climb since 2022 — a trend I never noticed just clicking around Strava's own dashboards. And that spike in monthly km in early 2025? That was a training block I'd completely forgotten about.
Activity Mix — Recent 12 Months
Breaking down my training by type per month. Turns out I swim a lot more than I thought. March 2025 was clearly a weights-heavy month — 29 out of 36 activities were gym sessions. That kind of insight would have taken me ages to work out manually.
Built with: Python, fastmcp, httpx, SQLite, Strava API v3, Claude Desktop

