There was a time when the terminal felt like the last private corner of software development.
The browser got enterprise controls. The IDE got plugins, telemetry, policy, and procurement drama. The CI pipeline was always a tiny bureaucracy with YAML. But the terminal? The terminal was where developers went to be weird in peace.
Aliases. Half-remembered shell scripts. `curl | jq` rituals. SSH sessions with the emotional stability of a raccoon in a server room.
Now GitHub has announced enterprise-managed plugins for GitHub Copilot CLI, and I think the interesting part is not “Copilot can do more things in the terminal.”
The interesting part is this:
the terminal is becoming an AI action surface, and AI action surfaces eventually become governed runtimes.
Not because vendors are evil. Not because platform teams are control freaks. Because once an assistant can touch tools, repositories, cloud accounts, secrets, and deployment paths, “just let developers use it” stops being serious.
the terminal used to be personal space
The terminal has always been powerful, but its power was mostly mediated through the person typing.
If I ran a destructive command, that was on me. If I installed a sketchy CLI, that was on me. If I glued five tools together with a shell pipeline and vibes, at least the blast radius moved at human typing speed.
AI changes that shape.
A CLI assistant is not just another autocomplete. It can interpret intent, discover commands, call tools, chain steps, edit files, summarize errors, propose fixes, and sometimes take actions faster than the developer fully reviews each intermediate decision.
That does not make it bad. It makes it operational.
The moment the assistant can say “I will create the branch, update the config, run the migration, open the PR, and fix CI,” the terminal has stopped being only a personal workspace. It has become a runtime for delegated work.
And delegated work needs rules.
plugins are where the governance starts
Enterprise-managed plugins sound like a boring admin feature. That is why they matter.
A plugin system answers very practical questions:
- which tools can the assistant call?
- who approved those tools?
- which teams can use them?
- how are they updated?
- what permissions do they imply?
- where does auditability start and end?
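Those questions map naturally onto a catalog of approved tools. Here is a minimal sketch of what such a record could look like — the field names and the `usable_by` check are invented for illustration, not GitHub's actual plugin schema:

```python
from dataclasses import dataclass

# Hypothetical shape of an enterprise-managed plugin record.
# Each field answers one of the governance questions above.
@dataclass(frozen=True)
class ManagedPlugin:
    name: str                      # which tool can the assistant call?
    version: str                   # how is it updated and pinned?
    approved_by: str               # who approved it?
    allowed_teams: frozenset[str]  # which teams can use it?
    permissions: frozenset[str]    # what permissions does it imply?
    audit_sink: str                # where does auditability start?

def usable_by(plugin: ManagedPlugin, team: str) -> bool:
    """The assistant may only call tools from the approved catalog,
    and only on behalf of teams the catalog grants."""
    return team in plugin.allowed_teams

catalog = [
    ManagedPlugin(
        name="internal-docs-search",
        version="1.4.2",
        approved_by="platform-security",
        allowed_teams=frozenset({"payments", "platform"}),
        permissions=frozenset({"docs:read"}),
        audit_sink="siem://plugin-calls",
    ),
]

print(usable_by(catalog[0], "payments"))   # True
print(usable_by(catalog[0], "marketing"))  # False
```

The point is not this exact schema. The point is that every one of those boring questions becomes a field someone owns.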
This is the same movie we watched with browser extensions, IDE extensions, Kubernetes admission controllers, CI marketplace actions, and Terraform modules. At first, the ecosystem is fun and chaotic. Then a few incidents happen. Then someone asks why a random package had access to production-adjacent credentials. Then the company discovers governance.
The AI version will be faster because the assistant is not only installing plugins. It is using them on behalf of a human.
That distinction matters.
A normal CLI plugin waits for me to make mistakes. An AI-enabled CLI plugin can help me make mistakes at scale.
the real product is not the chat. it is the permission boundary.
Every AI coding demo wants to show the assistant doing useful work. Fair enough. Demos need movement.
But in production, the valuable questions are much less cinematic:
- can the assistant read this repository?
- can it modify infrastructure code?
- can it call cloud APIs?
- can it open pull requests?
- can it inspect secrets?
- can it trigger deployments?
- can it run commands against customer data?
- can it install a new plugin because the task seems to require it?
That is the real interface.
The chat window is just how the human expresses intent. The permission boundary is where architecture happens.
This is why GitHub’s adjacent MCP security announcements also matter: secret scanning with GitHub MCP Server is generally available, and dependency scanning with GitHub MCP Server is in public preview. The direction is obvious: agents and assistants are being connected to tool ecosystems, and the security model is trying to catch up.
Good.
Because a world where agents can use tools but organizations cannot reason about tool permissions is not developer empowerment. It is unattended automation with nicer copywriting.
we are rebuilding internal platforms inside developer machines
The funny part is that this looks new, but the organizational pattern is old.
Platform teams spent years building internal developer platforms so teams would not have to remember every scary detail of infrastructure. Golden paths, templates, policy checks, paved roads, deployment workflows, observability defaults. All the boring stuff that makes delivery repeatable.
Now AI assistants are moving some of that action back into the developer’s local loop.
The terminal becomes a place where the assistant can:
- query internal docs
- call approved service APIs
- generate infrastructure changes
- run validation commands
- open tickets or pull requests
- inspect CI failures
- apply team-specific workflows
That is convenient. But it also means the local developer environment is becoming a thin edge of the internal platform.
If that edge is unmanaged, every laptop becomes a snowflake platform.
If that edge is overmanaged, developers will route around it with their own tools.
The hard part is the middle: enough governance to make AI actions safe, not so much governance that the assistant becomes a slower way to file a ticket.
the boring design rule: approve capabilities, not vibes
If I were designing this inside a company, I would avoid starting with a giant “AI policy” document that nobody reads.
I would start with capabilities.
For example:
- the assistant may summarize logs, but not access raw customer PII
- the assistant may draft Terraform, but not apply it
- the assistant may open a pull request, but not merge it
- the assistant may run tests, but not deploy to production
- the assistant may query dependency risk, but not auto-upgrade critical packages without review
- the assistant may use approved internal plugins, but not install arbitrary external ones during a task
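The rules above are concrete enough to write down as data. A minimal sketch, with invented capability names — the useful property is that every decision is per-capability, with unknown actions denied by default:

```python
# Three outcomes: allowed outright, allowed with a human in the loop,
# or forbidden. Capability names here are illustrative, not a standard.
ALLOW, REVIEW, DENY = "allow", "require_human_review", "deny"

POLICY = {
    "logs:summarize":          ALLOW,
    "pii:read":                DENY,
    "terraform:draft":         ALLOW,
    "terraform:apply":         DENY,
    "pr:open":                 ALLOW,
    "pr:merge":                DENY,
    "tests:run":               ALLOW,
    "deploy:production":       DENY,
    "deps:query_risk":         ALLOW,
    "deps:upgrade_critical":   REVIEW,
    "plugin:install_external": DENY,
}

def decide(capability: str) -> str:
    # Unknown capabilities default to deny: a forbidden action
    # should be impossible, not merely undocumented.
    return POLICY.get(capability, DENY)

print(decide("terraform:draft"))        # allow
print(decide("terraform:apply"))        # deny
print(decide("deps:upgrade_critical"))  # require_human_review
```

Notice the middle tier. "Require human review" is where most of the interesting delegation lives: the assistant can prepare the action, but a person owns the commit.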
This is much clearer than “use AI responsibly.”
Responsible according to whom? Under what permissions? With what audit trail? In which repositories? Against what data?
Vibes do not scale. Capability boundaries do.
And once you define capabilities, enterprise-managed plugins start to make sense. They are not just a catalog feature. They are a way to package what the assistant is allowed to do.
developers still need escape hatches
There is a trap here, though.
If companies turn AI-in-the-terminal into another locked-down enterprise sadness machine, developers will hate it, and they will be right.
The terminal is powerful because it supports exploration. Sometimes you need to run a weird command, inspect a strange failure, test a new tool, or build a tiny script that would never survive a platform review meeting.
So the goal is not to make the terminal sterile. The goal is to separate exploration from delegated authority.
A human experimenting locally is one risk shape. An assistant calling tools with organization-approved permissions is another. Good platforms understand that difference. They allow local weirdness, but put review, audit, and ownership around actions that touch shared systems.
Not one giant allow button.
this is senior engineering work now
This is where I think the career conversation gets more interesting than “will AI replace developers?”
Somebody has to define the boundaries.
Somebody has to decide which commands are safe for an assistant to run. Somebody has to package internal workflows as plugins. Somebody has to make sure generated changes leave durable artifacts. Somebody has to connect audit logs to reality. Somebody has to notice when the assistant is technically allowed to do a thing but organizationally should not.
That is engineering work.
Not glamorous, maybe. But very real.
The future of developer productivity is not only better models. It is better delegation contracts.
The assistant can be brilliant, but if every useful action ends in “please ask an admin,” nobody will use it. If every useful action is silently allowed, eventually it will do something expensive, unsafe, or deeply annoying.
The valuable layer is the one that makes the right action easy, the risky action explicit, and the forbidden action impossible.
That is platform engineering with an AI accent.
the punchline
Enterprise-managed Copilot CLI plugins are not just a GitHub feature checkbox. They are a signal that the terminal is being pulled into the same governance story as the rest of the engineering system.
That was inevitable.
Once AI assistants can operate tools, the question is no longer “which assistant gives the best answer?”
The question is:
What is this assistant allowed to do when the answer becomes an action?
That is the line between a neat demo and a production system.
The terminal is still going to be weird. I hope it stays weird. Software would be worse if every shell session had the personality of an expense report.
But the parts of the terminal that act on behalf of the company are going to become governed, packaged, permissioned, and audited.
Not because the terminal lost its soul.
Because AI gave it hands.
