
Amanda

How I added experimental MCP Apps support to Apollo MCP Server with Goose

The challenge: contribute to my company's codebase, in a language I do not write, using an agentic dev workflow.

As a developer advocate, I'm usually creating demo and educational code rather than contributing to our product codebases. Over the holiday break I wanted to challenge myself to fully lean into agentic coding by adding experimental support for the draft MCP Apps spec to the Apollo MCP Server (my experimental repo here).

It was time to test everything I teach in a real-world scenario, in a language I do not write... Rust.

The Apollo MCP Server is open source and exposes GraphQL operations as MCP tools without requiring any additional code. We recently added support for the OpenAI Apps SDK, so evolving it toward MCP UI apps was a great next step to challenge myself with.

Constraints

The agent had to add experimental support for MCP Apps without breaking existing functionality or interfering with the OpenAI Apps implementation. To validate my changes, I also needed a separate repository with a prototype MCP UI App for my Luma Community Analytics tool.

The Tools

These are the tools and techniques I leveraged to make this a successful build.

  1. goose CLI and Desktop
  2. Markdown files to provide instructions and important information to the agent, including MCP spec details and general project goals and rules
  3. Apollo MCP Server
  4. MCP Jam for testing

The Build

I worked with the agent until we had a flow that followed a research/plan → code in chunks → test → report deviations → repeat process. This was the core idea behind making this a successful test.

My research started with a simple prompt:

"research all files in the apollo-mcp-server local repo to understand the OpenAI apps implementation and the details of the draft MCP UI Apps spec. Then create a plan to add expirimental support for MCP apps that preserves the open ai SDK version"

This wasn't a magical plan that was perfect the first time. Iterating on the plan was the most "human in the loop" work I did on this project. In my experience, spending time on a planning phase is very important when working in an existing codebase, especially when adding a new/experimental feature that should not impact the rest of the project.

One feature of my plan that increased the agent's success rate was having each chunk of the plan followed by full testing of the code. This way, errors were caught and fixed along the way rather than piling up into a mess at the end. I also requested that the final report call out any places where the agent had to deviate from the plan based on errors or other information.
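
Concretely, each chunk of the plan could be structured like this (the chunk name and specifics below are a hypothetical example, not taken from my actual plan):

```markdown
## Chunk 3: register UI resources (hypothetical example)
- Add a `ui://` resource handler alongside the existing OpenAI Apps handler.
- Gate it behind an experimental config flag so default behavior is unchanged.

### Test before moving on
- Build and run the full test suite; fix any failures now, not later.
- Verify the existing OpenAI Apps tool output is unchanged.

### Report
- Note any deviation from this chunk (and why) for the final report.
```

The "test before moving on" section at the end of every chunk is what keeps errors local to the chunk that introduced them.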

Once the implementation was complete, I built the Rust binary and added it to an MCP Apps project for testing.

Did it work the first time? No.

However, I was able to quickly debug with goose and find the few loose ends the tests had not caught, such as a query parameter used in our OpenAI SDK version that was not actually MCP Apps compliant but never flagged in a test.

the offending query param code

With the app working, I was able to do a preliminary test in MCP Jam, an alternative to the MCP Inspector. It's a great experience, and the regular Inspector does not currently support apps.

initial load test in mcp jam

At this point... I had to wait. Remember, MCP Apps is a draft spec, and the agent ecosystem hadn't begun to support it yet. The Goose team was almost there, but it was the holiday break, so I actually shut my laptop. (Go touch grass?)

The final test!

The Goose team released draft spec support in early January in v1.19.0! I immediately rushed to test in the Goose desktop agent but ran into a few issues.

Back to goose again to debug!

Goose helped me discover two small bugs in the early release of MCP Apps support in goose, and I was able to report them to the team right away so they could quickly release a patch.

You read that right. I made goose debug goose!

Spider-Man pointing at himself meme

Ultimately, I ended up with an early prototype of my Luma events dashboard, which will be a great tool for folks internally here at Apollo to understand community metrics for the events I host.

Screenshot of goose desktop with a running mcp app

Learnings

  1. You don't have to be an expert in a language to contribute a meaningful feature to a codebase
  2. Plan, Iterate, Plan
  3. Agents working in small chunks and always testing means they can catch errors early, fix them, and move on while you grab a coffee
  4. Is perfection the goal? In my experience, not yet, but I was able to move so much faster as a developer. The server updates and the new app were completed in half a day, starting from zero knowledge of the draft spec or the codebase.

Note: After I completed this, I saw a post by Angie Jones about a Research → Plan → Implement flow with recipes that she also released over the holiday break. If I were starting this flow again today, this is where I would begin. Check the docs here.

Can you do this? Yes! You can test out my Luma Analytics MCP App Prototype or build your own. I'd love to hear more about how you leverage agentic coding in the comments.
