Automating sports nutrition is the Holy Grail for many amateur athletes. We want the precision of a nutritionist without the mental load.
My goal was simple: synchronize my workouts (Intervals.icu / nolio.io) with my nutritional needs (Hexis.live) and generate automatic meal plans.
For those unfamiliar, Hexis is an intelligent nutrition platform that calculates your real-time energy needs (carbs, proteins, fats) based on the intensity of your past and future workouts. The problem? Hexis does not have a public API for meal creation.
Here is how I combined Reverse Engineering, Man-in-the-Middle, Advanced Agentic Design, Meta-MCP, and n8n to build an autonomous and cost-effective system.
1. The Hack: Reverse Engineering & Man-in-the-Middle
Since the front door was locked, I went through the window. To automate meal logging, I set up a Man-in-the-Middle (MITM) proxy to intercept the traffic between the mobile app and its servers.
Unexpectedly, this interception proved easier to set up on iPhone than on Android, allowing me to quickly map the structure of the private APIs.
I discovered that the API doesn't accept a simple food ID. It also requires a specific refCode (a Base64 string hidden in the search results) to validate the log. Without this key, the API fails silently.
I encapsulated this complexity (search + refCode extraction + payload formatting) in a custom MCP server. For my AI agents, the operation is transparent: they ask to "add a banana", and the server handles the refCode plumbing behind the scenes.
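To make the refCode dance concrete, here is a minimal sketch of what the MCP server does internally. The response shape, field names (`refCode`, `foodId`), and endpoint payload are assumptions reconstructed from the kind of traffic MITM interception reveals; the real private API may differ.

```python
import base64

def extract_ref_code(search_result: dict) -> str:
    """Pull the Base64 refCode out of a food search hit.

    The log endpoint fails silently without this token, so we validate
    that the string actually decodes before using it.
    """
    ref_code = search_result["refCode"]
    base64.b64decode(ref_code, validate=True)  # raises if malformed
    return ref_code

def build_log_payload(search_result: dict, grams: float) -> dict:
    """Assemble the body the private meal-log endpoint expects (assumed shape)."""
    return {
        "foodId": search_result["id"],
        "refCode": extract_ref_code(search_result),
        "quantityGrams": grams,
    }

# Example with a mocked search hit (shape is an assumption):
hit = {"id": "banana-123", "refCode": base64.b64encode(b"banana-ref").decode()}
payload = build_log_payload(hit, 120)
```

The MCP tool simply chains a search call, this extraction, and the log call, so the agent never sees the refCode at all.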
2. Crew Architecture: 2 Pipelines and 10 Agents
Once data access was solved, I built the system's "brain" with CrewAI. The architecture is split into two distinct pipelines to ensure precision and reliability.
🧠 Pipeline 1: Data Analysis
This first group defines the nutritional strategy:
- HEXIS_DATA_SUPERVISOR: The Strategist. Plans which data to retrieve (training load, recovery status). Pure "Thinking" model, no tool access.
- HEXIS_ANALYSIS_REVIEWER: The Nutritionist. Analyzes raw data to define precise macro targets (e.g., "Tomorrow is a big session, we need 400g of carbs").
🍳 Pipeline 2: Meal Generation
This is where the magic happens: turning numbers into recipes:
- MEAL_PLANNING_SUPERVISOR (The Chef): Designs the structure of the 4 daily meals respecting energy codes (Low/Medium/High Carb) and ensuring culinary variety.
- INGREDIENT_VALIDATION_EXECUTOR (The Commis): The only agent allowed to talk to the database. Uses parallel execution to quickly validate ingredient existence in the Passio/Hexis database.
- MEAL_RECIPE_REVIEWER (The Controller): Performs the final mathematical checks. If the Chef planned too much chicken, the Reviewer adjusts the portion to the gram to land within ±5% of the targets.
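The Commis's parallel validation step can be sketched in a few lines. The lookup function here is a stub; in the real system it would be an MCP tool call against the Passio/Hexis food database, and the function and variable names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the Passio/Hexis database lookup (hypothetical).
KNOWN_FOODS = {"chicken breast", "basmati rice", "banana"}

def exists_in_db(ingredient: str) -> bool:
    """Check a single ingredient; in production this is a network call."""
    return ingredient.lower() in KNOWN_FOODS

def validate_ingredients(ingredients: list[str]) -> dict[str, bool]:
    """Check all ingredients concurrently instead of one by one."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(zip(ingredients, pool.map(exists_in_db, ingredients)))

report = validate_ingredients(["Banana", "tofu", "basmati rice"])
```

Because each lookup is network-bound, fanning them out in parallel turns a slow sequential loop into a single round-trip-sized wait.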
This structure relies on a key principle: Reasoning/Action Separation. "Intelligent" models (Supervisors) never touch tools to avoid hallucinations. They delegate technical execution to simpler, faster models (Executors).
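The Reasoning/Action separation can be illustrated with a toy sketch (plain Python, not the actual CrewAI code): the supervisor only produces a plan and holds no tools by construction, while the executor is the only object with tool access.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SupervisorAgent:
    """'Thinking' agent: produces a plan, has no tool access by design."""
    name: str

    def plan(self, goal: str) -> list[str]:
        # In production this is an LLM call; here, a stub returning steps.
        return [f"validate:{item}" for item in goal.split(",")]

@dataclass
class ExecutorAgent:
    """'Doing' agent: the only one allowed to call tools."""
    name: str
    tools: dict[str, Callable[[str], bool]] = field(default_factory=dict)

    def execute(self, step: str) -> bool:
        action, arg = step.split(":", 1)
        return self.tools[action](arg)

known_foods = {"banana", "oats", "chicken"}
executor = ExecutorAgent("INGREDIENT_VALIDATION_EXECUTOR",
                         tools={"validate": lambda f: f in known_foods})
supervisor = SupervisorAgent("MEAL_PLANNING_SUPERVISOR")

plan = supervisor.plan("banana,oats,tofu")
results = {step: executor.execute(step) for step in plan}
```

The supervisor literally cannot hallucinate a tool call; the worst it can do is emit an invalid plan, which the executor then rejects.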
Support Agents
Other agents (Weekly Structure, Nutritional Validation, Integration) ensure overall weekly consistency and final synchronization to the app.
3. MCP Infrastructure: Why Meta-MCP?
I use Meta-MCP, an essential abstraction layer. Why? Mainly for security and modularity.
It lets me group tools by domain (Sports, Nutrition, Weather) and expose to each agent only what it strictly needs. My "Hexis Executor" therefore can't accidentally delete a Strava activity. It's the principle of least privilege applied to AI agents.
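A minimal sketch of that routing table, assuming a simple domain-to-tools mapping (the tool names and grant format here are illustrative, not the actual Meta-MCP configuration):

```python
# Tools grouped by domain (names are hypothetical examples).
TOOL_DOMAINS = {
    "sports":    ["get_training_load", "delete_strava_activity"],
    "nutrition": ["search_food", "log_meal"],
    "weather":   ["get_forecast"],
}

# Per-agent grants: each agent sees only its domains.
AGENT_GRANTS = {
    "HEXIS_EXECUTOR":  ["nutrition"],          # cannot touch Strava
    "DATA_SUPERVISOR": [],                     # thinking model: zero tools
    "SPORTS_EXECUTOR": ["sports", "weather"],
}

def tools_for(agent: str) -> set[str]:
    """Resolve the exact tool set an agent is allowed to see."""
    return {tool
            for domain in AGENT_GRANTS.get(agent, [])
            for tool in TOOL_DOMAINS[domain]}
```

The key property: an agent's blast radius is bounded at configuration time, not by prompt instructions.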
4. Cost Optimization: Smart AI Pricing
Running autonomous agents 24/7 has a cost. To avoid an astronomical bill, I implemented an aggressive strategy:
Reverse Proxies & Alternative Models
I use OpenAI-compatible reverse proxies (like CLIProxyAPI) to access high-performance alternative models. Special mention for DeepSeek and GLM-4.6 (via z.ai), which now offer performance close to Claude 4.5 Sonnet at a fraction of the price.
Using specialized "Coder" models for structured tasks (JSON) also drastically reduces costs without quality loss.
Automatic Rotation
My system manages a model cascade. If the "Premium" model (Claude 4.5 Sonnet) hits its quota or a rate limit, the system automatically switches to an "Eco" model (GPT-4o-mini or GLM-4) to finish the task. It's transparent to the user and keeps the pipeline running.
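The cascade itself fits in a few lines. The model names come from the article; the error type and retry logic below are an illustrative assumption (in practice you would catch your provider SDK's rate-limit exception):

```python
# Ordered from premium to eco; first model that answers wins.
MODEL_CASCADE = ["claude-4.5-sonnet", "glm-4.6", "gpt-4o-mini"]

class RateLimitError(Exception):
    """Stand-in for the provider's 429 / quota-exceeded exception."""

def complete_with_fallback(prompt: str, call_model) -> tuple[str, str]:
    """Try each model in order; fall through on quota/rate-limit errors."""
    last_error = None
    for model in MODEL_CASCADE:
        try:
            return model, call_model(model, prompt)
        except RateLimitError as err:
            last_error = err  # premium model exhausted: degrade gracefully
    raise last_error

# Simulated backend: the premium model is over quota today.
def fake_backend(model, prompt):
    if model == "claude-4.5-sonnet":
        raise RateLimitError("429 Too Many Requests")
    return f"{model} answered"

used, answer = complete_with_fallback("plan my meals", fake_backend)
```

The caller never knows which tier answered, which is exactly the "transparent to the user" property.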
5. Orchestration with n8n
Having intelligent agents isn't enough; they need to run on a schedule. That's where n8n comes in.
I set up a workflow (meal-planning-weekly) that runs every Sunday evening:
- Retrieve training load: The workflow queries Intervals.icu (or nolio.io) to know my upcoming sessions for the week.
- Schedule optimization: A JS script reorganizes my slots (e.g., "If big bike ride on Saturday -> High carb meal on Friday night").
- Asynchronous Call to AI: n8n triggers my CrewAI Python script via an HTTP webhook.
- Callback Pattern: Since meal generation takes time (several minutes of thinking for the agents), n8n doesn't block. It waits for a return "ping" (callback) once the agents have finished their work.
- Distribution: The final result (shopping list + menu) is sent directly to Telegram.
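The callback step (point 4 above) is the interesting part, so here is a minimal sketch of it on the Python side. The names are illustrative, and the HTTP POST back to n8n is replaced by an in-process queue so the pattern is self-contained; in production `notify_n8n` would POST to the n8n callback URL.

```python
import queue
import threading
import time

results = queue.Queue()

def run_crew(week_plan: dict) -> dict:
    """Stand-in for the CrewAI pipeline (minutes of agent reasoning)."""
    time.sleep(0.1)
    return {"menu": "4 meals/day", "shopping_list": ["oats", "banana"]}

def notify_n8n(payload: dict) -> None:
    # Production: requests.post(N8N_CALLBACK_URL, json=payload)
    results.put(payload)

def handle_webhook(week_plan: dict) -> dict:
    """Called by n8n's HTTP node; returns before the crew finishes."""
    worker = threading.Thread(target=lambda: notify_n8n(run_crew(week_plan)))
    worker.start()
    return {"status": "accepted"}  # n8n resumes later on its callback node

ack = handle_webhook({"week": 42})           # immediate 202-style answer
final = results.get(timeout=5)               # the eventual "ping" payload
```

This keeps n8n's workflow execution short-lived: the heavy thinking happens out of band, and the workflow only wakes up again when the ping arrives.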
Conclusion
This project demonstrates that, with a thoughtful architecture, you can work around the limits of closed APIs and build AI systems that are genuinely useful in daily life.
By combining the reasoning power of "Thinking" models, the execution precision of MCP servers, and the robust orchestration of n8n, we are no longer in gadget territory, but dealing with a real personal assistant.
The code is available on my GitHub. Next step? Automating grocery ordering via a drive-thru API... the next reverse engineering challenge!


