Juan David Gómez

I Ditched MyFitnessPal and Built an AI Agent to Track My Food

I wanted to track my calories and protein for my training goals, but I got tired of existing apps. They lock you into their pretty dashboards, make it hard to export your own data, and you can't cross-reference that nutrition data with your training logs easily. I just wanted to own my raw data and build custom reports for myself.

So I built NutriAgent. It's an AI nutrition tracker that understands text and photos of my meals, logs everything into a database and Google Sheets that I control, and I can chat with it on Telegram or the web. This post is about my journey of turning a simple "call GPT" prototype into a real tool-using agent with memory—for myself, but built with proper product decisions.

My First Agent Wasn't Code (And That's Why I Rewrote It)

I didn't start with Python. My first version was actually a quick PoC in n8n (the self-hosted workflow tool). I set up a simple flow with an agent node, a few tools, and Telegram integration. It worked surprisingly well; I used it for several days, and it logged my meals fine.

The problem hit when I shared it with a friend. He wanted to try it, but I realized nothing was reusable. All my credentials for third-party services were hardcoded to my accounts. The whole flow was built around a single user: me. It couldn't support multiple people, and turning that n8n setup into a real product would have been a hack on top of a hack.
That was the real push. I decided to rebuild it properly in Python—not just for me, but as a real multi-user system. It was more work, but it gave me the excuse to spend more time bringing a proper product to life, which is what I actually enjoy doing.

Building a Proper Agent in Python

The n8n prototype proved the concept worked, but now I had to rebuild it from scratch, this time with proper architecture for multiple users. As I started writing the Python version, I realized I needed to be more intentional about the agent's design than I was in my quick n8n flow.

In n8n, I had basic tools duct-taped together. For a real system, I needed:

  • A clean agent setup that could handle many users' conversations and data
  • Well-designed tools that actually corresponded to product features
  • Robust memory that wouldn't break when I scaled beyond just my own use

I used LangChain's create_agent because it handles a lot of the heavy lifting. The core setup looks like this:

from datetime import datetime
from pathlib import Path

from langchain_openai import ChatOpenAI

from app.config import settings  # wherever your settings live; module path assumed

PROMPT_FILE = Path(__file__).parent.parent / "prompts" / "food_analysis_prompt.txt"

class FoodAnalysisAgent:
    def __init__(self) -> None:
        self.llm = ChatOpenAI(
            model="gpt-4o-mini",
            api_key=settings.OPENAI_API_KEY,
            temperature=0.3,
        )
        self.system_prompt = self._create_system_prompt()

    def _create_system_prompt(self) -> str:
        template = PROMPT_FILE.read_text(encoding="utf-8")
        current_datetime = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        return template.format(current_datetime=current_datetime)

I keep the prompt in a separate file because I edit it a lot; it's easier to tweak the instructions without touching code. I also inject the current datetime so the agent knows what day it is, which matters for queries like "today" or "this week" in my conversations.
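The prompt file itself isn't in the post, but the idea is just a text template with a `{current_datetime}` placeholder. A minimal sketch (the wording here is invented, not the actual prompt):

```python
from datetime import datetime

# Hypothetical stand-in for prompts/food_analysis_prompt.txt;
# the real prompt isn't shown in the post.
TEMPLATE = (
    "You are a nutrition assistant. Estimate calories and macros from the\n"
    "user's text and photos, then log them with your tools.\n"
    "The current datetime is {current_datetime}. Use it to resolve relative\n"
    "dates like 'today' or 'this week'.\n"
)

def render_prompt(template: str) -> str:
    """Fill the {current_datetime} placeholder, mirroring _create_system_prompt."""
    now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return template.format(current_datetime=now)

print(render_prompt(TEMPLATE))
```

Because the template is plain text, changing the agent's behavior is a file edit and a restart, not a code change.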

Making It Understand Photos and My Chat History

The agent needs to handle my messy real-world inputs: sometimes text, sometimes a photo, sometimes both. Plus, it needs to remember what we were just talking about.

Here's how I normalize everything before sending it to the agent:

@traceable(name="FoodAnalysisAgent.analyze", run_type="chain")
async def analyze(
    self,
    text: str | None,
    images: list[bytes] | None,
    conversation_history: list[dict[str, Any]] | None,
    user_id: int,
    redirect_uri: str | None = None,
) -> str:
    messages: list[Any] = []

    # Pull my past conversation from DB and convert to LangChain format
    for msg in conversation_history or []:
        if msg["role"] == "user":
            messages.append(HumanMessage(content=msg["text"]))
        elif msg["role"] == "bot":
            messages.append(AIMessage(content=msg["text"]))

    # Add my current message (text + optional images)
    if images:
        content: list[Any] = []
        if text:
            content.append({"type": "text", "text": text})
        for img in images:
            content.append({
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{base64.b64encode(img).decode()}"},
            })
        messages.append(HumanMessage(content=content))
    else:
        messages.append(HumanMessage(content=text or ""))

    agent = self._get_agent(user_id=user_id, redirect_uri=redirect_uri)
    result = await agent.ainvoke({"messages": messages})
    # ainvoke returns the agent state; the reply is the last message's content
    return str(result["messages"][-1].content)

This lets me send a photo of fries and add context like "these were air-fried" to get a better estimate. The agent sees the image and text together, plus our conversation history, so it feels like a natural chat about my meals.

Designing Tools for My Own Use Cases

Each tool maps to something I actually want to do. I didn't want abstract functions; I wanted "register this meal" or "show me my data."

Saving My Meals to DB and Google Sheets

def create_register_nutritional_info_tool(user_id: int):
    @tool
    async def register_nutritional_info(
        calories: float,
        proteins: float,
        carbs: float,
        fats: float,
        meal_type: str,
        extra_details: str | None = None,
    ) -> str:
        record = await save_nutritional_info(
            user_id=user_id,  # This is me
            calories=calories,
            proteins=proteins,
            carbs=carbs,
            fats=fats,
            meal_type=meal_type,
            extra_details=extra_details,
        )

        spreadsheet_id: str | None = None
        config = await get_spreadsheet_config(user_id)
        if config:
            try:
                spreadsheet_id = await append_nutritional_data(
                    user_id=user_id,
                    calories=calories,
                    proteins=proteins,
                    carbs=carbs,
                    fats=fats,
                    meal_type=meal_type,
                    extra_details=extra_details,
                    record_id=record["id"],
                )
            except Exception:
                # DB is my source of truth; Sheets is best-effort
                logger.warning("Failed to append to my spreadsheet", exc_info=True)

        # Build a friendly summary for me
        ...
        return response

    return register_nutritional_info

My database is the source of truth. Google Sheets is a nice-to-have mirror. If Sheets fails, I don't lose my data; the meal is already saved in Supabase. This gives me peace of mind because I know my data is always safe.

Querying My Past Meals

def create_query_nutritional_info_tool(user_id: int):
    @tool
    async def query_nutritional_info(
        start_date: str | None = None,
        end_date: str | None = None,
    ) -> str:
        records = await get_nutritional_info(
            user_id=user_id,  # Querying my own history
            start_date=start_date,
            end_date=end_date,
        )
        if not records:
            return "No nutritional records found."

        lines = []
        for r in records:
            date = r["created_at"].split("T")[0]
            lines.append(
                f"Date: {date} | Meal: {r['meal_type']} | "
                f"Calories: {r['calories']} | Proteins: {r['proteins']}g | "
                f"Carbs: {r['carbs']}g | Fats: {r['fats']}g"
            )
        return "\n".join(lines)

    return query_nutritional_info

I pre-format my records into simple text lines instead of dumping raw JSON. The model understands this better and can answer my questions like "what was my protein intake on Monday?" more reliably.
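To make the difference concrete, here's that formatting applied to a made-up record (the data is invented for illustration):

```python
# Invented sample record in the same shape the tool receives from the DB
records = [
    {"created_at": "2024-05-06T08:30:00", "meal_type": "breakfast",
     "calories": 420, "proteins": 30, "carbs": 45, "fats": 12},
]

def format_records(records: list[dict]) -> str:
    """Flatten DB rows into one readable line each, as the query tool does."""
    lines = []
    for r in records:
        date = r["created_at"].split("T")[0]
        lines.append(
            f"Date: {date} | Meal: {r['meal_type']} | "
            f"Calories: {r['calories']} | Proteins: {r['proteins']}g | "
            f"Carbs: {r['carbs']}g | Fats: {r['fats']}g"
        )
    return "\n".join(lines)

print(format_records(records))
# Date: 2024-05-06 | Meal: breakfast | Calories: 420 | Proteins: 30g | Carbs: 45g | Fats: 12g
```

One line per meal with labeled fields gives the model far less to misparse than nested JSON.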

Connecting My Google Sheets via OAuth

def create_register_google_account_tool(user_id: int, redirect_uri: str | None):
    @tool
    async def register_google_account() -> str:
        config = await get_spreadsheet_config(user_id)
        if config:
            return "Your Google account is already connected. I'll keep saving meals there."

        if not redirect_uri:
            return (
                "I need a valid redirect URL to start the Google authorization flow. "
                "The server configuration seems incomplete."
            )

        authorization_url = get_authorization_url(user_id, redirect_uri)
        return (
            "To enable Google Sheets integration, please authorize access using this link:\n\n"
            f"{authorization_url}"
        )

    return register_google_account

This keeps all the OAuth complexity inside a tool. The agent just decides when I need to connect my account and triggers the flow naturally in our conversation.

My Memory System: Two Stores for Different Jobs

Supabase is my core memory: my chats, messages, and nutritional records all live there. It's fast and reliable.

Google Sheets is for me: I can see my data, build custom charts, and truly own it. But it's slower and sometimes fails, so it's a mirror, not the primary store.

Here's how I ensure my spreadsheet exists before writing:

async def ensure_spreadsheet_exists(user_id: int) -> tuple[str, Credentials]:
    config = await get_spreadsheet_config(user_id)
    if not config:
        raise ValueError(f"No spreadsheet config for user_id={user_id}")

    credentials = await ensure_valid_credentials(user_id, config)
    spreadsheet_id = config.get("spreadsheet_id")

    if not spreadsheet_id:
        spreadsheet_id = await create_spreadsheet(user_id, credentials)
    else:
        try:
            await verify_spreadsheet_has_headers(credentials, spreadsheet_id)
        except HttpError as e:
            if e.resp.status == 404:
                spreadsheet_id = await create_spreadsheet(user_id, credentials)
            else:
                raise

    return spreadsheet_id, credentials

This dual-store approach balances reliability with my need for ownership. I get a spreadsheet I control, but the app doesn't break if Google has issues.

Same Brain, Different Ways to Chat

The agent is just a class. I can talk to it however I want:

  • Telegram: I message my bot, it normalizes my messages (text, photos, documents), downloads media, and calls the agent. I use webhooks to keep it responsive.
  • Web UI: I built a simple web interface that hits the same agent API. It creates chats with chat_type="external" so the agent doesn't care if I'm using Telegram or the web.

The agent interface is stable. I could add WhatsApp, SMS, or anything else without changing the core AI logic.
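The channel adapters can be sketched like this. The handler names, payload shapes, and `FakeAgent` stub are all hypothetical; the point is that every channel funnels into the same `analyze` call:

```python
import asyncio

class FakeAgent:
    """Stub standing in for FoodAnalysisAgent; the real analyze() calls the LLM."""
    async def analyze(self, text, images, conversation_history, user_id,
                      redirect_uri=None):
        return f"logged for user {user_id}: {text}"

async def handle_telegram_update(agent, update: dict) -> str:
    # Telegram adapter: pull text/photos out of the webhook payload.
    return await agent.analyze(
        text=update.get("text"),
        images=update.get("photos"),
        conversation_history=[],
        user_id=update["from_id"],
    )

async def handle_web_request(agent, payload: dict) -> str:
    # Web adapter: same method; the core never learns which channel called it.
    return await agent.analyze(
        text=payload["message"],
        images=None,
        conversation_history=payload.get("history", []),
        user_id=payload["user_id"],
    )

reply = asyncio.run(handle_web_request(FakeAgent(), {"message": "2 eggs", "user_id": 1}))
print(reply)
```

Adding a new channel means writing one more thin adapter, never touching the agent.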

Tracing and Logging Saved My Sanity

I added @traceable from LangSmith around the main analyze method. Suddenly I could see:

  • Exactly what the model received from me
  • Every tool call and its arguments
  • Where errors happened and how long things took

I also log my user ID, spreadsheet IDs, and macros to debug production issues.
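A minimal sketch of that kind of contextual logging with the standard library (not the post's actual setup): the `extra` dict attaches the IDs to each log record so production lines are filterable by user.

```python
import io
import logging

# Capture output in a buffer just to demonstrate; production would use a real handler.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter(
    "%(levelname)s %(message)s [user=%(user_id)s sheet=%(spreadsheet_id)s]"
))

logger = logging.getLogger("nutriagent.demo")
logger.addHandler(handler)
logger.propagate = False

# `extra` fields become attributes on the LogRecord, available to the formatter.
logger.warning("Failed to append to spreadsheet",
               extra={"user_id": 42, "spreadsheet_id": "sheet-abc"})
print(buffer.getvalue().strip())
```

With the IDs baked into every line, "whose spreadsheet broke?" is a grep, not an investigation.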

Real example: when I built the Web UI, the traces showed the model wasn't receiving my meal photos at all. The message format was wrong, and I fixed it in five minutes because the trace made that obvious.

What I Learned Building This for Myself

Where agents are worth it: When they orchestrate real tools and stateful systems (like a database, Sheets, and OAuth), not just when they chat. Each tool should map to a clear, real-world action I want to take.

What surprised me:

  • You don't need the most intelligent LLM to build a useful agent. A well-written prompt and simple tools that capture the main features are often enough to create a reliable, pleasant user experience.
  • Context engineering is key. Understanding the tools and what information or context each tool provides is more important than loading the prompt with ultra-detailed instructions.
  • Handling OAuth tokens, refresh flows, and "self-healing" spreadsheets (like recreating one if I accidentally delete it) was critical for making a reliable tool that depends on a third-party service.

The main takeaway: I've always loved building digital products that solve real problems; it's been my main career motivation. But this project was different. I had a personal problem, and I wasn't just building a "good enough" solution; I was able to build the perfect solution for my own needs. That gets me excited to build more and keep growing my skills with these new technologies.

Starting with a no-code tool like n8n was great for testing ideas quickly. But for a product you might want to share or scale, investing in proper code architecture from the start saves you from rebuilding everything later.

I can't say it was easy; I definitely leaned on my existing experience in software development. But it's a total game-changer. The way we can build products today is so different from even just a few years ago.

The project is live at https://nutriagent.juandago.dev if you want to see what I built. The code is available on GitHub for the Agent and for the Web UI.

Heads up: Since this is a personal project, my Google Cloud account isn't verified. If you try connecting your Google account, you'll get a scary warning screen (Google's way of handling unverified apps). I don't store your credentials; it's just for writing to your own Sheets, but the warning looks dramatic.

This was my journey, but I'd love to hear your thoughts. I'm excited to start sharing more updates on this project and other things I'm building. Let's continue the conversation on X or connect on LinkedIn.
