A common sentiment I hear about MCP servers is that you have to have them all enabled before your session starts in order to use them, and that's wasteful.
I realized while talking to some engineers I work with that the options for dynamically using MCP servers might not be known to everyone.
I often use the goose agent internally, and a feature that is very useful for anyone with multiple MCP servers is the Extensions Manager.
This tool lets goose search for and enable relevant MCP servers depending on your needs. It works even for servers that require auth, as long as you have configured them.
In this example I have the GitHub MCP server configured with my required personal access token (PAT) and left disabled.
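For readers who haven't set this up before, here's a rough sketch of what "configured but disabled" can look like in goose's config file (commonly `~/.config/goose/config.yaml`). The field names follow goose's stdio extension format but may differ between versions, and the env var name is the one the GitHub MCP server documents, so treat this as an illustration rather than copy-paste config:

```yaml
# Sketch of a goose extension entry; verify field names against your
# own goose version's config before relying on this.
extensions:
  github:
    enabled: false        # configured but dormant until goose needs it
    type: stdio
    cmd: npx
    args:
      - "-y"
      - "@modelcontextprotocol/server-github"
    envs:
      # The GitHub MCP server reads the PAT from this variable
      GITHUB_PERSONAL_ACCESS_TOKEN: "<your PAT here>"
    timeout: 300
```

With `enabled: false`, the server isn't running or consuming context, but the auth details are already in place for the moment goose decides it needs GitHub.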
I then ask goose to complete a task that requires GitHub. The first step goose takes is to check whether there is an MCP tool available to help. Because GitHub is an authed server, I have to confirm a second time, and then goose can run with my task.
While I used goose Desktop for visuals here, you can use this in the terminal if that's where you like to work.
Check it out and let me know other ways you are using MCP!


Top comments (3)
This was a great read! Super approachable and easy to follow. Thanks for including a concrete goose example. I'll try it out soon :)
This is a really good clarification, because a lot of the early MCP skepticism comes from people assuming “enabled = always running,” which just isn’t how the better agent setups actually work.
The dynamic discovery angle is the part that deserves more attention. Treating MCP servers as capabilities instead of static dependencies is a much healthier model. You don’t preload everything “just in case” — you let the agent reason about what it needs, then opt-in at the moment it becomes relevant. That’s closer to how humans work, and way closer to how secure systems should work.
I also like that you called out auth-gated servers. The explicit confirmation step matters. It creates a clean trust boundary: the agent can propose, but the human still authorizes. That’s a big deal for things like GitHub, cloud providers, or anything with write access.
What this pattern really unlocks is composability without chaos. You can keep a large catalog of MCP servers configured but dormant, and let intent drive activation. That avoids the “everything on, hope nothing goes wrong” feeling that scares people off MCP in the first place.
Curious to see more examples like this — especially around non-obvious tools (infra, CI, data warehouses) where dynamic enablement really shines. This is the kind of practical usage that makes MCP feel less theoretical and more operational.
Big agree! I've seen multiple angles on where this kind of MCP management should take place too (is it a client concern or a gateway concern, for example).
I think we will see this space evolve a lot over the course of 2026. Thanks for reading and sharing your thoughts!