This is a submission for the Google Cloud NEXT Writing Challenge
Every developer building AI agents right now is quietly maintaining the same ugly file.
You know the one. It's the adapter file. The one where you wrote a custom wrapper to give your agent access to BigQuery. Then another wrapper for Slack. Then a third for your internal PostgreSQL instance. Then a fourth because the BigQuery wrapper broke when the library updated. The file no one wants to touch. The file with a comment at the top that says `# TODO: clean this up` that's been there for eight months.
I have three of those files across two different projects. My colleague has five. A friend building an agentic document processor told me last week that she spent more time last month maintaining tool adapters than actually building the agent logic. "The agent works great," she said. "Getting it to talk to things is the nightmare."
At Google Cloud NEXT '26, Google announced something called the Skills Repository — an official, curated, open library of agent-ready tools for every major Google Cloud service. And buried in that announcement, almost as a footnote, is something that I think is the most practically important thing Google announced this week for working developers.
Not Gemini 2.5 Ultra. Not 8th-generation TPUs. Not the $750 million partner fund.
The Skills Repository.
Let me tell you why — and also why it only works if Google gets one critical thing right.
## The Problem Nobody Talks About at Conferences
Here's how agentic development actually works in 2026, as opposed to how it looks in keynote demos:
You have a model. You have an idea for what you want the agent to do. You want the agent to query your database, send a Slack message, write a file to Cloud Storage, and maybe check a calendar.
So you:
- Install the relevant Python SDKs
- Write wrapper functions that your agent can call as tools
- Handle authentication for each service separately
- Figure out how to describe each tool to the model in a way it actually understands
- Test whether the model actually calls the right tool with the right parameters
- Debug the cases where it calls the right tool with the wrong parameters
- Repeat for every service you add
This is the real development loop for building agents. And step 4 — writing the tool descriptions — is way more painful than anyone admits publicly, because a bad description means the model calls the wrong thing, and diagnosing that is genuinely hard.
The state of the art right now is that every developer is doing this from scratch. There's no shared vocabulary. There's no standard. Your BigQuery tool description and my BigQuery tool description are completely different, trained from different intuitions about what works. Which means our agents behave differently even when they're doing the same thing.
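To make that concrete, here's a sketch of what "no shared vocabulary" looks like in practice. Both declarations below describe the same BigQuery query capability in the JSON-schema style most function-calling APIs expect; the tool names, parameter names, and wording are all hypothetical, invented for illustration.

```python
# Two hypothetical declarations for the *same* BigQuery query tool.
# Neither is "wrong" -- they just encode different developers' intuitions,
# so agents built on them behave differently.

dev_a_tool = {
    "name": "run_query",
    "description": "Execute SQL against BigQuery.",
    "parameters": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
}

dev_b_tool = {
    "name": "bigquery_select",
    "description": (
        "Use this when the user asks a question that requires structured "
        "data from the warehouse. Accepts standard SQL only; do not use "
        "it for writes or DDL."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "A standard-SQL SELECT statement."},
            "max_rows": {"type": "integer", "description": "Row cap, default 1000."},
        },
        "required": ["query"],
    },
}

# Same capability, yet the two schemas share no parameter names at all.
shared_params = set(dev_a_tool["parameters"]["properties"]) & set(
    dev_b_tool["parameters"]["properties"]
)
print(shared_params)  # -> set()
```

Two agents wired to these tools will diverge: one model sees a terse one-liner, the other sees usage guidance and explicit non-goals.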
## What the Skills Repository Actually Is
At NEXT '26, Google announced an official Skills Repository: a curated library of pre-built, agent-ready tools for Google Cloud services. Install it with `npx skills install github.com/google/skills`. Each skill is:
- Tested against actual models — not just "here's a function," but optimized descriptions that Google has verified work reliably with Gemini
- MCP-compatible — plugs into the Model Context Protocol standard, so it works across platforms, not just Vertex AI
- Versioned and maintained — when the underlying Cloud API changes, Google updates the skill, not you
- Pre-authenticated — uses your existing GCP credentials, no custom auth code required
For example, the BigQuery skill looks like this in practice:
```python
from google.adk.agents import LlmAgent
from google.skills.gcp import BigQuerySkill, CloudStorageSkill, PubSubSkill

agent = LlmAgent(
    name="data_pipeline_agent",
    model="gemini-2.5-pro",
    tools=[
        BigQuerySkill(project_id="my-project", dataset_id="analytics"),
        CloudStorageSkill(bucket="my-output-bucket"),
        PubSubSkill(topic="pipeline-results"),
    ],
)
```
Compare that to what you'd write today:
```python
# Today's reality
from google.cloud import bigquery
from google.cloud import storage
from google.cloud import pubsub_v1

client = bigquery.Client()
storage_client = storage.Client()
publisher = pubsub_v1.PublisherClient()


def run_bigquery_query(query: str, max_rows: int = 1000) -> dict:
    """
    Runs a SQL query against BigQuery and returns results.

    Use this tool when you need to query structured data from the data warehouse.
    The query should be standard SQL. Results are limited to max_rows rows.

    Returns a dict with 'rows' (list of dicts) and 'schema' (list of field names).
    If the query fails, returns a dict with an 'error' key explaining what went wrong.
    """
    try:
        query_job = client.query(query)
        results = query_job.result()
        rows = [dict(row) for row in results][:max_rows]
        schema = [field.name for field in results.schema]
        return {"rows": rows, "schema": schema, "total_rows": len(rows)}
    except Exception as e:
        return {"error": str(e)}


def upload_to_storage(filename: str, content: str, content_type: str = "text/plain") -> dict:
    """
    Uploads a file to Cloud Storage.

    Use this when you need to save output data or results to persistent storage.
    filename is the destination path within the bucket. content is the string content.
    Returns a dict with 'url' on success or 'error' on failure.
    """
    # ... 20 more lines of boilerplate
```
That's one tool. You'll write four to ten of them per project. The Skills Repository eliminates every line of that.
## Why the Description Problem Is the Real Problem
I want to linger for a moment on something I don't see discussed enough.
When you write a tool description today, you're essentially writing a tiny prompt. The quality of that description determines how reliably your agent invokes the tool correctly. A bad description — too vague, too long, structured in a way the model doesn't parse well — means your agent calls the tool wrong, or calls the wrong tool entirely, or fails to call it when it should.
The people who build agents professionally develop intuitions about this over time. You learn to lead with the use case ("Use this when you need to...") before the implementation details. You learn to be explicit about what the tool doesn't do. You learn the right level of specificity for parameter descriptions.
But there's no documentation for this. There's no standard. You learn it by watching your agent fail and then rewriting the description.
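Those hard-won heuristics can be distilled into a docstring shape. The tool below is entirely hypothetical, a sketch of the pattern rather than anything from the Skills Repository: lead with the use case, state what the tool does *not* do, and be concrete about parameters and return shape.

```python
# A hypothetical read-only ticket-search tool, written the way the
# heuristics above suggest. The shape of the docstring is the point.

def search_tickets(query: str, status: str = "open") -> list[dict]:
    """Use this when the user asks about existing support tickets.

    Does NOT create or modify tickets; it is strictly read-only.

    Args:
        query: Free-text keywords to match against ticket titles.
        status: One of "open", "closed", or "all". Defaults to "open".

    Returns:
        A list of dicts with keys "id", "title", and "status".
    """
    # Stubbed for illustration: a real version would call a ticketing API.
    return [{"id": 1, "title": f"match for {query!r}", "status": status}]

print(search_tickets("billing")[0]["status"])  # -> open
```

Compare that against the terse alternative ("Searches tickets."): the model reading the verbose version knows when to reach for the tool, when not to, and what shape to expect back.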
The Skills Repository's descriptions are written and tested by the people who built the underlying services, tuned specifically for Gemini. That knowledge — which took individual developers months to accumulate — is encoded directly into the tools. Every developer who installs BigQuerySkill gets the benefit of that tuning without having to discover it the hard way.
This is underrated. This is huge.
## The MCP Angle: Why This Isn't Just a Google Thing
The Skills Repository is built on MCP — the Model Context Protocol that originated at Anthropic and has since become the closest thing to an industry standard for agent tool integration.
That means two things that matter enormously.
**First:** The skills work with any MCP-compatible agent framework, not just Google's ADK. If you're using LangChain, LlamaIndex, or building something custom, you can still use the Google skills. You're not locked into the Vertex AI ecosystem just because you want a reliable BigQuery tool.

**Second:** The precedent this sets. If Google is building its official tooling on MCP, that's a strong signal to every other cloud provider. AWS, Azure, and every SaaS company with an API now has a reason to invest in MCP-compatible skills. The Skills Repository could be the catalyst for an ecosystem shift where tool adapters stop being something every developer writes from scratch and become something you install.
That's the npm-for-agent-tools moment, and Google might have just fired the starting gun.
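The interoperability claim is concrete at the wire level: every MCP server answers a `tools/list` request with the same JSON shape, so any MCP client can consume any server's tools. Here's a minimal sketch of that envelope; the `bigquery_query` skill and its fields are invented for illustration, but `name`, `description`, and `inputSchema` are the fields the MCP spec defines for tools.

```python
# A sketch of an MCP "tools/list" result. Whoever wrote the server --
# Google, AWS, or a solo developer -- a client reads the same shape.

tools_list_result = {
    "tools": [
        {
            "name": "bigquery_query",  # hypothetical skill name
            "description": (
                "Use this to run a standard-SQL SELECT against BigQuery "
                "and return the resulting rows."
            ),
            "inputSchema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "max_rows": {"type": "integer"},
                },
                "required": ["query"],
            },
        }
    ]
}

# A generic client needs no Google-specific code to discover the tool:
for tool in tools_list_result["tools"]:
    print(tool["name"], "->", list(tool["inputSchema"]["properties"]))
```

That uniform envelope is why a skill published once can be consumed from ADK, LangChain, or a homegrown client alike.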
## The One Thing That Has to Go Right
Here's where I'll be honest about my skepticism, because I think it's the most important thing to understand about whether the Skills Repository actually matters long-term.
The skills have to stay current.
Google Cloud APIs change. IAM permissions change. Query syntax evolves. New features get added that agents should know about. The skill for BigQuery that's accurate today will be slightly wrong in six months if nobody maintains it.
The historical precedent here is not encouraging. Google has launched developer tooling, SDKs, and sample libraries before that started strong and quietly decayed when the team moved on or the priority shifted. If you've ever tried to run a Google Cloud tutorial from 2022, you've felt this. Deprecated APIs, changed authentication flows, package versions that no longer match.
The Skills Repository only works if Google treats it like a first-class product with dedicated maintenance, not like a GitHub repo that gets commits on launch day and then drifts.
There are signals this time is different:
- It's built on MCP, an open standard, which means the community can contribute fixes
- Google explicitly tied it to the Agent Platform rebrand, which is a major organizational bet
- The developer relations team demonstrated it in the keynote, which suggests actual investment
But I'd be doing you a disservice if I didn't say: watch this space. The Skills Repository is either the foundation of a durable ecosystem or another well-intentioned launch that gets quietly deprecated in 18 months. The difference is maintenance, not announcement.
## How to Use It Today
The Skills Repository is live now. Here's the fastest path from zero to a working agent with real Google Cloud tool access:
### Step 1: Install

```shell
pip install "google-cloud-aiplatform[adk]" --upgrade
npx skills install github.com/google/skills
```

(The quotes around the pip target matter in zsh, which otherwise treats the square brackets as a glob pattern.)
### Step 2: Build a minimal agent with a real tool

```python
from google.adk.agents import LlmAgent
from google.adk.runners import InProcessRunner
from google.skills.gcp import BigQuerySkill

# Give your agent BigQuery access in three lines
agent = LlmAgent(
    name="data_agent",
    model="gemini-2.5-flash",  # Flash is fine for tool-calling tasks, much cheaper
    instruction="You are a data analyst. Answer questions by querying BigQuery.",
    tools=[
        BigQuerySkill(
            project_id="YOUR_PROJECT_ID",
            dataset_id="YOUR_DATASET_ID",
        )
    ],
)

runner = InProcessRunner(agent=agent)
response = runner.run("How many users signed up last week?")
print(response.text)
```
### Step 3: Add more skills as needed

```python
from google.skills.gcp import (
    BigQuerySkill,
    CloudStorageSkill,
    CloudSQLSkill,
    PubSubSkill,
    FirestoreSkill,
    SecretManagerSkill,  # For credentials your agent needs
    CloudRunSkill,       # For triggering jobs
)
```
The full skill list covers: BigQuery, Cloud Storage, Cloud SQL, Spanner, Firestore, Pub/Sub, Cloud Run, GKE, Firebase, Secret Manager, Cloud Scheduler, and more.
**Authentication:** Uses your existing `gcloud auth application-default login` credentials. No additional setup.
**Cost note:** Each skill invocation is a tool call, and tool calls add tokens to your context. Watch your token usage if you're calling tools in a tight loop. The `gemini-2.5-flash` model is the right choice for most tool-heavy agentic tasks: lower cost, and the tool-calling reliability is excellent.
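A back-of-envelope way to see why this matters: tool declarations ride along with every model call, so their cost scales with call count, not just declaration count. The sketch below uses the common rough heuristic of ~4 characters per token; real tokenizers differ, so treat the numbers as directional only.

```python
# Rough estimate of prompt overhead from tool declarations.
# The ~4 chars/token rule is a crude approximation, not a tokenizer.

def rough_tokens(text: str) -> int:
    """Approximate token count via the ~4 characters-per-token heuristic."""
    return max(1, len(text) // 4)

def tool_overhead(descriptions: list[str], model_calls: int) -> int:
    """Estimate extra prompt tokens contributed by tool declarations.

    Declarations are re-sent on every model call, so the overhead is
    (total declaration size) x (number of calls), not a one-time cost.
    """
    per_call = sum(rough_tokens(d) for d in descriptions)
    return per_call * model_calls

# Two hypothetical skill descriptions, padded to a realistic length:
descriptions = [
    "Use this to run a standard-SQL SELECT against BigQuery. " * 3,
    "Use this to upload a text file to a Cloud Storage bucket. " * 3,
]

# An agent loop that makes 20 model calls pays the declaration cost 20x:
print(tool_overhead(descriptions, model_calls=20))
```

The takeaway: a tight agent loop multiplies every extra sentence in your tool descriptions, which is one more reason well-tuned, concise descriptions are worth having.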
## The Bottom Line
The Skills Repository isn't the flashiest announcement from NEXT '26. It won't get the same headlines as Gemini 2.5 Ultra or the TPU benchmarks.
But it's the announcement that will save the most developer-hours in 2026. Every hour not spent writing and debugging tool adapters is an hour spent on actual agent logic — on the reasoning, the orchestration, the evaluation. The work that actually differentiates your agent.
The adapter file I described at the beginning of this post? The one everyone has? The Skills Repository is Google's answer to that file. A curated, tested, maintained answer.
If Google maintains it — and there are real reasons to believe they will this time — the Skills Repository could be the infrastructure that finally makes agent development feel less like plumbing and more like building.
That's worth paying attention to.

