I've edited this post, but AI helped. These are meant to be quick posts for the Advent of AI. If I'm doing one of these each day, I don't have time to spend a couple hours on each post.
The Advent of AI series leverages Goose, an open source AI agent. If you've never heard of it, check it out!
The Challenge: Democracy by AI
Day 12 threw a Winter Festival mascot crisis at me. The Festival Committee spent three hours arguing about whether their mascot should be a snowman, penguin, polar bear, ice fairy, or yeti. Classic committee deadlock.
The challenge was to use the Council of Mine extension to get nine AI personalities to debate and vote on the decision. This teaches you about MCP sampling, which I'll get to in a second.
Council of Mine Extension
I fired up Goose and tried to use the Council of Mine extension. It didn't work on the first go: even though the extension was installed and enabled, Goose couldn't see it. I had to disable and re-enable the extension before it finally got picked up. Once it was working, I ran multiple debates:
- Mascot choice (spoiler: the yeti won)
- Mascot name (went with "Yuki")
- Origin story (monks in the mountains)
- Personality traits (jovial, powerful, and wise)
- Festival duration (expanded to five days)
Here's the prompt for picking a name for the mascot:
The Winter Festival has been a MASSIVE success, and now the organizers want to make it official: they need a name for the YETI mascot
The Festival Committee has been arguing about this for THREE HOURS.
The Debate So Far
Name Proposals:
- Marketing Director: "Frost" - Simple, memorable, SEO-friendly
- Entertainment Coordinator: "Yuki" - Japanese for "snow," cute and international appeal
- Volunteer Manager: "Everest" - Strong, majestic, ties to mountains
- Artistic Director: "Crystalline" (nickname: "Crystal") - Whimsical and magical
- Local Historian: "Wendigo's Friend" or simply "Windy" - Plays on local folklore
Use the Council of Mine extension to come to a final decision
Here's the output from picking a name for the mascot.
I'll help resolve this mascot naming debate by using the Council of Mine extension to get multiple perspectives and reach a final decision!
Let me search for the available Council of Mine tools:
I apologize, but it appears the Council of Mine extension is not currently available or enabled in your session. Let me help you make this decision through a structured analysis instead!
YETI Mascot Name Analysis & Recommendation
Let me break down each proposal across key criteria:
Evaluation Matrix

| Criteria | Frost | Yuki | Everest | Crystal | Windy |
| --- | --- | --- | --- | --- | --- |
| Memorability | ★★★★★ | ★★★★ | ★★★★★ | ★★★ | ★★★★ |
| Kid-Friendly | ★★★★ | ★★★★★ | ★★★ | ★★★★★ | ★★★★ |
| Merchandise Appeal | ★★★★★ | ★★★★ | ★★★★ | ★★★ | ★★★ |
| Local Connection | ★★★ | ★★ | ★★ | ★★★ | ★★★★★ |
| SEO/Marketing | ★★★★★ | ★★★ | ★★ | ★★★ | ★★★★ |
| Phonetic Appeal | ★★★★ | ★★★★★ | ★★★ | ★★★★ | ★★★★★ |
Strengths & Weaknesses

FROST
- ✅ Clean, simple, perfect SEO
- ✅ Easy to spell and remember
- ❌ Perhaps too generic
- ❌ Weak local connection

YUKI
- ✅ Adorable and unique
- ✅ International appeal
- ❌ May require explanation
- ❌ Potential cultural appropriation concerns

EVEREST
- ✅ Strong, iconic imagery
- ✅ Implies grandeur
- ❌ Too generic for a yeti
- ❌ Associated with existing brands

CRYSTAL
- ✅ Magical and whimsical
- ✅ Great for kids
- ❌ Sounds feminine (if that matters)
- ❌ Longer full name ("Crystalline")

WINDY
- ✅ STRONG local folklore tie
- ✅ Friendly and approachable
- ✅ Story potential for engagement
- ❌ Could be confused with "windy" (weather)
FINAL RECOMMENDATION: YUKI

Why Yuki Wins:
- Festival Appeal: Perfect balance of cute and memorable - families will love it
- Merchandise Gold: "Yuki the Yeti" has amazing alliteration and branding potential
- Inclusive: International flair shows the festival welcomes everyone
- Social Media: #YukiTheYeti will trend beautifully
- Character Development: Easy to build a personality around - playful, friendly, magical
Compromise Solution:
Official Name: Yuki
Tagline: "Yuki the Yeti - Guardian of Winter Festival"
Backstory: Incorporate a nod to local folklore - "Yuki befriended the mountain spirits and comes down each winter to celebrate with the community"
This gives Marketing their memorable brand, Entertainment their cute mascot, and Local History a folklore connection through the backstory!
Model Context Protocol (MCP) Sampling
Here's what makes this interesting beyond "AI voting on stuff." Council of Mine doesn't have its own LLM. It uses MCP sampling to ask the AI you're already connected to for help.
Angie Jones (@techgirl1908) wrote about MCP sampling recently. Check it out!
Normal MCP flow: You talk to Goose, Goose calls an MCP tool, the tool returns data.
MCP sampling flow: You talk to Goose, Goose calls an MCP tool, the tool asks Goose's LLM for help, the tool processes that response and returns it.
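The two flows can be sketched as function shapes. Everything below is illustrative, not the actual MCP SDK API; `SampleFn` is a stand-in for the sampling channel the client exposes back to the server:

```typescript
// Illustrative sketch of normal vs sampling-based MCP tools.
// `SampleFn` is a hypothetical stand-in for the client's LLM access.

type ToolResult = { content: string };
type SampleFn = (prompt: string) => Promise<string>;

// Normal MCP tool: computes and returns data entirely on its own.
async function normalTool(query: string): Promise<ToolResult> {
  return { content: `results for ${query}` };
}

// Sampling-based tool: pauses mid-request to ask the client's LLM for help,
// then post-processes the model's answer before returning it.
async function samplingTool(query: string, sample: SampleFn): Promise<ToolResult> {
  const llmAnswer = await sample(`Help me interpret: ${query}`);
  return { content: llmAnswer.trim().toUpperCase() };
}
```

The key difference is the extra round trip in `samplingTool`: the server never talks to an LLM provider directly, it only hands a prompt back to the client.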
The Council of Mine extension defines n personality system prompts. When you start a debate, it makes n separate sampling calls, one for each "council member", with their distinct personality prepended to the prompt. Then it makes another n calls for voting, and one more for synthesizing the results.
That's 2n + 1 LLM calls per debate (19 calls for the nine-member council), all going through whatever AI model you have configured in Goose.
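That call pattern is easy to sketch. This is my own illustration of the orchestration described above, not Council of Mine's actual code; the `sample` parameter stands in for the MCP sampling channel:

```typescript
// Sketch of the debate/vote/synthesize pattern: n debate calls, n vote calls,
// then one synthesis call, for 2n + 1 sampling requests total.

type SampleFn = (prompt: string) => Promise<string>;

async function runCouncil(
  personalities: string[], // one system prompt per council member
  question: string,
  sample: SampleFn,        // routed to whatever LLM the user configured
): Promise<{ decision: string; calls: number }> {
  let calls = 0;
  const counted: SampleFn = (p) => ((calls += 1), sample(p));

  // Round 1: each member argues in character.
  const statements = await Promise.all(
    personalities.map((persona) => counted(`${persona}\n\nDebate: ${question}`)),
  );

  // Round 2: each member votes after reading the debate.
  const votes = await Promise.all(
    personalities.map((persona) =>
      counted(`${persona}\n\nVote on: ${question}\n${statements.join("\n")}`),
    ),
  );

  // Final call: synthesize the votes into one decision.
  const decision = await counted(`Pick the winner from these votes:\n${votes.join("\n")}`);
  return { decision, calls }; // calls === 2 * personalities.length + 1
}
```

With nine personalities that works out to 9 + 9 + 1 = 19 calls, which is worth keeping in mind if you're on a metered API.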
Why This Matters
Sampling lets MCP servers be intelligent without managing their own API keys, model selection, or LLM infrastructure. The server becomes an orchestrator, not an AI application.
You could build:
- A code review system with multiple expert viewpoints
- A documentation analyzer that explains concepts differently based on user level
- A search tool that intelligently filters and ranks results
- A debate simulator for product decisions (which is literally what I just used)
The Council of Mine repo has examples of how to structure sampling requests, handle different LLM response formats, and protect against prompt injection. If you're building MCP servers, it's worth studying how they handle the nine distinct personalities.
Build Your Own MCP Server with Sampling?
Building your own MCP server with sampling was part of the bonus for the challenge. I didn't get around to it today due to time constraints, but I opened an issue in my TypeScript MCP template repo a couple of weeks ago. I should probably get on that.
Feature: Implement MCP Sampling Support
#55
Implement bidirectional sampling pattern that allows MCP servers to request AI-generated content from connected clients for intelligent judgment tasks like summarization, sentiment analysis, and content generation.
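For what it's worth, the core of that feature is small: the server sends a `sampling/createMessage` request back over the existing connection. Here's a hedged sketch of the payload shape based on my reading of the MCP spec; verify the field names against the SDK version you're using:

```typescript
// Build a sampling/createMessage request payload for summarization.
// Field names follow the MCP spec's sampling request shape as I read it;
// treat them as assumptions, not verified SDK output.

function buildSummarizeRequest(text: string, maxTokens = 200) {
  return {
    method: "sampling/createMessage",
    params: {
      messages: [
        {
          role: "user",
          content: { type: "text", text: `Summarize this:\n${text}` },
        },
      ],
      maxTokens, // the client enforces the real limit and picks the model
    },
  };
}
```

The interesting part is what's missing: no API key, no model name, no provider config. The client owns all of that, which is the whole point of sampling.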
The Takeaway
MCP sampling is one of those features that opens up a new category of tools. You're not just exposing data anymore. You're orchestrating intelligent behavior using whatever AI the user brought with them.
Also, extensions still have rough edges. Keep the disable/enable trick in your back pocket.
If you want to try this yourself, the Council of Mine docs walk through installation. The MCP sampling guide explains the technical details. And the Advent of AI challenge has the full requirements.
If you want to stay in touch, all my socials are on nickyt.online.
Until the next one!
Photo by Laura Peruchi on Unsplash
