DEV Community

Patt S

I Made My Family Dinner AI's Problem

"Hey guys, what do you want for dinner?"

"Whatever."

You know what comes next. You make the "whatever." Then it's "I don't want that."

I cook for three people. I have food allergies. There's a kid whose favorite meal changes daily. And then there's the one who claims he's not "picky." Everyone has opinions. Nobody wants to decide. But somehow it's still my job to figure it out — standing in the kitchen at 5 pm trying to remember what we had yesterday, while mentally filtering out tree nuts, no-go vegetables, and whatever my kid has decided she hates this week.

I was spending more time planning meals than cooking them. So I did what any reasonable, slightly fed-up person would do — I made it AI's problem.

Starting With the Boring Part (That Turned Out to Be Everything)

The first thing I did with Claude wasn't ask it to generate a meal plan. It was to build a preference system.

I know — going through "what do you eat, what can't you eat, what do you hate" for three people feels like paperwork. But this is the part that actually matters. I sat down and worked through a structured document that captured all the rules: who eats what, who's allergic to what, who hates what, and who prefers what. Beyond the individual profiles, I also captured my own preferences as the cook — rules so I don't end up making three different dishes every night:

One protein per meal for everyone. One protein, slightly different sides, different carb bases.

Cook once, eat twice. Every dinner is cooked at 1.5x. Leftovers become the next day's lunch — no cooking, just reheat and assemble. This alone saves me probably 5 hours a week.

Sunday batch cooking. I order groceries once a week to cover most of what we need. Sous vide proteins on Sunday — that covers Sunday through Thursday. Light prep on some vegetables to cut down weekday cook time. Friday is seafood, Saturday is grill night with fresh beef. Two trips to the meat counter per week, at most.

All of this lives in a single meal_preference.md file. It includes dietary profiles, a sauce library, base recipe templates (grain bowls, taco night, stir-fry night, chicken parm night), pantry staples, and a "previous weeks" section so we never repeat a menu.
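For concreteness, here's a trimmed, illustrative sketch of how a file like that can be laid out — the section names are mine for this post, not the literal contents of the real file:

```markdown
# Meal Preferences

## Dietary Profiles
- Me: allergic to tree nuts; no asparagus
- Kid: mild spice only; favorites rotate weekly
- Husband: "not picky" (see: dislikes list)

## Cooking Rules
- One protein per meal; vary the sides and carb bases
- Cook every dinner at 1.5x; leftovers become the next day's lunch
- Sous vide proteins on Sunday; use within their 4-day window
- Friday: seafood. Saturday: grill night, fresh beef.

## Sauce Library
...

## Base Recipe Templates
- Grain bowls, taco night, stir-fry night, chicken parm night

## Previous Weeks
- Week of ...: menu summary (never repeat a recent menu)
```

One flat file, scannable by a human and trivially pasteable as context.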

The technical decision here

I kept everything in markdown — no database, no app. Plain, human-readable text is about the best context format you can hand an LLM. The preference file is the prompt, essentially.

From Conversations to Skills

With the preference system in place, I gave Claude the file and said "give me a meal plan for this week." What came back respected every rule — no tree nuts, no asparagus, spice split between the adults and the kid, sous vide proteins used within their 4-day storage window. Not a generic recipe dump. A real plan.

But here's where it got interesting. I run everything through Claude Cowork, and the first few meal plans weren't perfect. I'd say "swap out Wednesday, my kid won't eat that" or "we need more variety in the sides" or "the leftover flow doesn't work if Thursday is a stir-fry." Each of those conversations taught me what the instructions needed to say more clearly.

So I turned those conversations into skills — reusable instruction sets that Claude loads every time. There's a meal planner skill, a weekly prep schedule skill, a shopping list skill. Each one is basically the distilled version of every correction and refinement from past conversations.

This is why skills matter: without them, every conversation starts from zero. I'd have to re-explain the rules, the preferences, the cooking strategies — every single time. Skills are how the system remembers. They're the difference between "AI that helps once" and "AI that gets better every week." When I improve a skill based on a conversation, every future session benefits. The knowledge compounds.
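A skill is just more markdown — an instruction file Claude loads at the start of a session. Sketching the meal planner skill from the rules above (the frontmatter structure follows Anthropic's Agent Skills convention; the contents here are illustrative, not my literal file):

```markdown
---
name: meal-planner
description: Generate a weekly meal plan from meal_preference.md
---

When asked for a meal plan:
1. Read meal_preference.md for profiles, rules, and history.
2. Respect every dietary rule — allergies first, dislikes second.
3. One protein per dinner; plan leftovers as the next day's lunch.
4. Sous vide proteins Sunday through Thursday; seafood Friday; grill Saturday.
5. Check "Previous Weeks" and never repeat a recent menu.
6. Output: seven dinners, prep notes, and an aggregated shopping list.
```

Every correction from a past conversation eventually becomes a numbered line in a file like this.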

I could have built a web app with a database. But natural language constraints are more flexible than coded business logic. "My husband is doing a keto week" is a sentence, not a schema migration.

Now, for grocery ordering — my first instinct was Claude in Chrome, just let it drive Kroger.com. It worked, technically. But 50+ items through browser automation chewed through tokens fast. Every click, every page load, every search result to process. That's what pushed us toward building a Kroger API integration instead — direct API calls are cheaper, faster, and don't break when Kroger redesigns a button.

From Meal Plan to Shopping Cart

Like I mentioned, Claude in Chrome could technically do this — but it was slow and token-heavy for 50+ items. I needed something leaner.

So we built a Kroger API integration. Kroger has a public API (developer.kroger.com) that supports OAuth2 authentication, product search, and cart operations. I had Claude build a set of Python scripts:

  • kroger_auth.py — OAuth2 flow to connect my Kroger account
  • kroger_stores.py — finds my nearest store by zip code
  • kroger_search.py — searches for products at my store
  • kroger_add_to_cart.py — adds items to my cart
  • shopping_list_to_cart.py — the big one: reads the shopping list, searches for each item, picks the best match, and adds everything to cart

The shopping list file was formatted specifically for Kroger's search — "boneless skinless chicken breast 4 lb" instead of "chicken breasts (8, ~4 lbs)" because Kroger's search works better with product-catalog-style descriptions.

I also had Claude generate a setup guide (KROGER_SETUP.md) in plain English so I could recreate the whole thing if I needed to.

The technical decision here

I went with direct API integration instead of browser automation. I considered using something like Playwright to just drive Kroger.com, and honestly it would have been faster to build initially. But browser automation is fragile — one UI change and it breaks. The API is versioned and stable. The tradeoff is that the Kroger Cart API only supports adding items, not modifying quantities or removing things. Good enough for my use case, but worth knowing.

I also chose Python over Node for the scripts because the Kroger API's OAuth2 flow maps cleanly to Python's requests library, and I didn't need any async capabilities. Simple tools for a simple job.
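To illustrate that point, a hedged sketch of the client-credentials half of the OAuth2 flow. The token URL and `product.compact` scope are as I remember them from Kroger's docs — verify against developer.kroger.com before relying on them:

```python
import base64

# Token endpoint per developer.kroger.com (verify before use)
TOKEN_URL = "https://api.kroger.com/v1/connect/oauth2/token"

def basic_auth_header(client_id: str, client_secret: str) -> str:
    """OAuth2 client-credentials uses HTTP Basic auth: base64(id:secret)."""
    raw = f"{client_id}:{client_secret}".encode()
    return "Basic " + base64.b64encode(raw).decode()

def get_token(client_id: str, client_secret: str) -> str:
    """Exchange app credentials for a bearer token (product search scope)."""
    import requests  # lazy import: only needed when actually calling the API
    resp = requests.post(
        TOKEN_URL,
        headers={"Authorization": basic_auth_header(client_id, client_secret)},
        data={"grant_type": "client_credentials", "scope": "product.compact"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```

Two small synchronous functions — exactly the kind of job where `requests` beats setting up an async stack.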

The MCP Server (Where It Actually Got Good — After Some Wrong Turns)

The scripts worked. But the workflow still had friction. I'd generate a meal plan in a conversation with Claude, get a shopping list, and then have to open a terminal, navigate to the right folder, and run a bash command. It's not a lot of steps, but it breaks the flow. You go from conversational to command-line and back.

So I built a local MCP server. And the build itself was way easier than I expected — but everything around the build was where we got tripped up.

The actual code was almost anticlimactic

MCP — Model Context Protocol — is a way to give AI tools direct access to external services. The Python SDK has a decorator-based API called FastMCP that makes tool definition trivial:

```python
import json

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Kroger")

@mcp.tool()
def kroger_find_stores(zip_code: str, limit: int = 5) -> str:
    """Find Kroger stores near a ZIP code."""
    # ... your implementation
    return json.dumps({"stores": [...]})

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

The docstring becomes the tool description Claude sees. The function signature becomes the input schema. Return a string; Claude parses it. The whole server.py is under 250 lines and covers all four tools plus OAuth token management.

If you've heard "MCP" thrown around and it sounds intimidating — it shouldn't be. The SDK does almost everything. (Full repo on GitHub)

The detour that taught me the most about working with AI

Here's where I need to be honest about a mistake — not a code mistake, a prompting mistake.

When I asked Claude to help me get the MCP server working in Cowork (Anthropic's cloud-based conversational interface), I gave it a general prompt: "I want this to work in Claude Cowork." Claude confidently laid out two elaborate paths — one involving publishing to the MCP Registry with PyPI tooling, the other involving HTTP transport with multi-tenant OAuth deployed to the public internet.

Both sounded plausible. Both were detailed and well-reasoned. Both would have been days of work.

Both were completely unnecessary.

The problem wasn't that Claude was wrong about how MCP used to work. The problem was that MCP is evolving fast, and Claude was building on training knowledge — its understanding of the ecosystem from whenever its data was last updated. Things had changed. The answer had gotten simpler, but Claude didn't know that because I didn't tell it to go check.

What I should have said was: "Before we plan anything, research the most up-to-date MCP documentation and tell me how Cowork actually connects to local servers."

The actual answer? Claude Cowork bridges to your local Claude Desktop. Register your stdio server in Desktop's config, restart, and Cowork can use it. One JSON file edit:

```json
// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
{
  "mcpServers": {
    "kroger": {
      "command": "python3",
      "args": ["/absolute/path/to/server.py"],
      "env": {
        "KROGER_CLIENT_ID": "...",
        "KROGER_CLIENT_SECRET": "..."
      }
    }
  }
}
```

Restart Desktop. Done.

This is the lesson I keep coming back to: AI will confidently build on stale knowledge if you don't tell it to check. It doesn't know what it doesn't know. Especially with fast-moving tools, the answer from six months ago might be completely wrong today. "Research the current docs first" is a better prompt than "here's what I want to build." A vague prompt plus an outdated mental model equals a very convincing plan that wastes your time.

What I'd Tell Someone Starting a Project Like This

Give the AI structure, not just questions. The preference file does most of the heavy lifting. Without it, I'd get generic meal suggestions. With it, I get plans that respect three people's allergies, preferences, and cooking realities. The quality of AI output is directly proportional to the quality of context you provide.

Close the loop to the real action. A meal plan is nice. A meal plan that turns into a shopping list that goes directly into my Kroger cart — that's the difference between a tool I use once and a system I use every week.

Build the simple thing first. Scripts → MCP server was a natural progression. I didn't design it up front — I built the simple thing, used it, felt the friction, and upgraded. Each step was useful on its own. If I'd tried to architect the final version from scratch, I'd still be planning instead of cooking.

Tell the AI to look things up before it plans. I cannot stress this enough. Especially with fast-moving tools. "Research the current docs first" saves you from confidently building the wrong thing.

The Full Pipeline Today

  1. Preferences — meal_preference.md holds all dietary profiles, rules, recipes, and history
  2. Meal Plan — Claude generates a personalized weekly plan from the preferences
  3. Shopping List — Aggregated, categorized, and formatted for Kroger's catalog
  4. Kroger Cart — Pushed directly via MCP server, no terminal required

Thursday, I generate the meal plan. Friday the groceries get delivered. Sunday is meal prep day. The whole thing takes maybe 10 minutes of my time for a week of meals for three people with completely different dietary needs.

Not bad for a markdown file and a bit of Python.

So — what's the thing eating your brain every week? How are you using AI to get through it?
