Why AI agents need a standard profile format

Boubacar Diallo

And why nobody has built it yet.


You can describe a REST API in OpenAPI. Any developer on the planet can read that file and know exactly what your API does, what endpoints exist, what parameters they take, and what they return.

You can describe an npm package in package.json. Any Node project can declare a dependency on it, pin a version, and resolve it automatically.

You can describe a Docker container in a Dockerfile. Any machine can pull it, run it, and know exactly what it contains.

Now try to describe an AI agent.

Not how it runs. Not how it communicates. How it is. What it can do. Whether it's reliable. What it costs. Whether you should hire it over the one from a different provider.

There is no standard format for this. None. And in 2026, with hundreds of companies shipping AI agents into production, that gap is becoming a serious problem.


The protocols we have solve different problems

Before going further, let's be precise about what already exists — because the ecosystem has made real progress.

MCP (Anthropic, 2024) — the Model Context Protocol defines how an agent accesses tools and data. It's excellent at what it does: giving an agent a structured way to call a calculator, query a database, or read a file. If you've built anything with Claude or LangChain recently, you've probably used it.

A2A (Google / Linux Foundation, 2025) — the Agent2Agent protocol defines how agents communicate with each other at runtime. Agent A needs to delegate a task to Agent B? A2A handles the message format, the task lifecycle, the streaming. It's gaining serious traction — 150+ organizations backing it, now under the Linux Foundation.

Both are genuinely useful. Both are open. Both solve real problems.

Neither of them answers this question:

You want to hire an AI Executive Assistant for your company. How do you compare the one from AcmeCorp to the one from AgentFactory to the indie agent some developer published on GitHub last month?

MCP tells you how the agent uses tools. A2A tells you how it talks to other agents. Neither tells you what the agent's track record is, what it costs, whether previous clients trusted it enough to give it autonomy over their calendar, or whether its "expert-level email triage" means the same thing as the other provider's "expert-level email triage."

That's the gap. Let's look at why it matters.


Four problems the market has right now

1. No standard vocabulary

Every AI agent platform invents its own way to describe what its agents can do. "Autonomous" means something different at every provider. Proficiency levels — beginner, intermediate, expert — are self-reported and incomparable. Integration support is listed in marketing prose, not in a machine-readable format.

If you're a developer trying to evaluate agents programmatically — say, building a tool that helps companies find the right agent for their use case — you're scraping websites and guessing.

2. No portable reputation

An agent that has completed 15,000 tasks with a 4.8 average rating at Provider A has zero reputation if its client moves to Provider B. The reviews stay behind. The track record disappears. The trust that was built over months of operation evaporates.

This is exactly the problem that plagued the freelance market before Upwork centralized reputation. A freelancer who was excellent but only worked through one platform had no way to prove it elsewhere. The same dynamic is playing out with AI agents, only faster.

3. No neutral discovery

If you want to find an AI agent for a specific job today, you visit each provider's marketing site individually. There is no neutral directory. No standard way to search across providers by skill, by integration, by minimum rating, by trust model, by budget.

Every vendor only surfaces their own agents. Buyers cannot make informed comparative decisions.

4. No standard for trust and autonomy

This one is subtle but important. The question of how much autonomy to grant an AI agent is arguably the most important question in enterprise AI adoption right now. But there is no shared vocabulary for it.

Does this agent start fully supervised? Can it send emails on its own? At what point does it earn the right to act without approval? Different providers have wildly different models for this, and there's no way to describe them in a comparable format.


What a standard profile format looks like

These four problems have a common shape: they're all description problems. They need a shared schema — a simple, open, machine-readable format that any provider can implement and any buyer or developer can parse.

Here's what I think that format needs to cover:

```yaml
identity:
  name: "Nova"
  role: "Executive Assistant"
  provider: "ExampleCorp"
  status: "available"
  tagline: "Inbox, calendar, standups: handled."

skills:
  - id: "email_triage"
    category: "communication"
    proficiency: "expert"
    tools_required: ["gmail", "outlook"]
    trust_level_required: 1

  - id: "autonomous_outreach"
    category: "communication"
    proficiency: "intermediate"
    trust_level_required: 3
    # Only available once client grants level 3 autonomy

trust_model:
  type: "progressive"
  levels: 5
  starting_level: 1
  description: "Starts supervised. Earns autonomy through performance."

pricing:
  model: "subscription"
  plans:
    - name: "Starter"
      price_monthly: 99
  trial_days: 14

reputation:
  tasks_completed: 4821
  avg_rating: 4.6
  rating_count: 18
```

That's it in its simplest form. Skills are typed, proficiency is standardized, trust requirements are explicit, pricing is structured, and reputation is aggregate and verifiable.
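To make the payoff concrete: once profiles share a schema like this, comparing agents across providers becomes a few lines of code instead of a scraping project. Here's a minimal Python sketch; the field names follow the example profile above, and both profiles are invented for illustration.

```python
# Two hypothetical APP-style profiles from different providers.
profiles = [
    {"identity": {"name": "Nova", "provider": "ExampleCorp"},
     "skills": [{"id": "email_triage", "proficiency": "expert"}],
     "reputation": {"avg_rating": 4.6, "tasks_completed": 4821}},
    {"identity": {"name": "Atlas", "provider": "AcmeCorp"},
     "skills": [{"id": "email_triage", "proficiency": "intermediate"}],
     "reputation": {"avg_rating": 4.9, "tasks_completed": 310}},
]

def find_agents(profiles, skill_id, min_rating=0.0):
    """Return agents that list a given skill and meet a minimum rating."""
    return [
        p for p in profiles
        if any(s["id"] == skill_id for s in p["skills"])
        and p["reputation"]["avg_rating"] >= min_rating
    ]

matches = find_agents(profiles, "email_triage", min_rating=4.5)
for p in matches:
    print(p["identity"]["name"], p["reputation"]["avg_rating"])
```

The point isn't the code, which is trivial; it's that the code is only trivial because the vocabulary is shared.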


The trust model field deserves special attention

Every other field in a profile like this is relatively obvious. Skills, pricing, integrations — these map to things that already exist in the software world.

The trust model is genuinely new.

When you hire a human employee, you don't give them full access to everything on day one. There's an onboarding period, a probation period, a gradual expansion of responsibilities as they prove themselves. That's not bureaucracy — it's good practice. Trust is earned through demonstrated reliability.

AI agents work the same way, but the industry has no standard vocabulary for it.

A trust_model field in a standard profile format lets a provider declare: "this agent starts supervised, here's what it can do at each level, here's how it earns promotion." A buyer can look at that field and immediately understand what they're getting into. A developer building a comparison tool can filter on it. A marketplace can surface it as a key criterion.

The trust_level_required on individual skills takes it further: skills that carry more risk (sending emails autonomously, making bookings on behalf of the user) require higher trust levels to activate. This is a machine-readable expression of something that every serious AI agent deployment already does informally.
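A runtime can enforce this gating directly. Here's a minimal Python sketch; the skill entries mirror the profile example above, but the gating function itself is my illustration, not part of the spec.

```python
# Hypothetical runtime check: which skills has the client unlocked
# at their current trust level? Skill data mirrors the example profile.
skills = {
    "email_triage": {"trust_level_required": 1},
    "autonomous_outreach": {"trust_level_required": 3},
}

def available_skills(skills, granted_level):
    """Return the skill ids unlocked at a client's current trust level."""
    return sorted(
        sid for sid, s in skills.items()
        if s["trust_level_required"] <= granted_level
    )

print(available_skills(skills, granted_level=1))  # only email_triage
print(available_skills(skills, granted_level=3))  # both skills unlocked
```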


How this fits with MCP and A2A

I want to be explicit about this because I've seen confusion in the community.

MCP, A2A, and a profile standard like APP (the Agent Profile Protocol, introduced below) are not competing. They operate at different layers of the stack:

```
MCP   → how an agent accesses tools and data
A2A   → how agents communicate with each other at runtime
APP   → how humans discover, evaluate, and decide to hire an agent
```

An A2A AgentCard (served at /.well-known/agent-card.json) tells other agents how to invoke an agent at runtime. An APP profile tells human buyers what an agent does, what its track record is, and whether to hire it.

In fact, there's a clean mapping between them. An APP profile contains a superset of A2A AgentCard information. A provider implementing APP can auto-generate an A2A-compatible AgentCard from their APP profile. The integrations.protocol field in APP maps directly to A2A transport declarations.
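That generation step can be sketched in a few lines of Python. The field layouts on both sides are simplified for illustration; the authoritative mapping is in Section 9 of the spec and the A2A AgentCard definition.

```python
# Sketch: project the runtime-relevant subset of an APP profile into a
# minimal A2A-style card. Field names are simplified, not normative.
def to_agent_card(profile: dict, endpoint_url: str) -> dict:
    """Derive a minimal agent card from an APP profile."""
    return {
        "name": profile["identity"]["name"],
        "description": profile["identity"].get("tagline", ""),
        "url": endpoint_url,
        "skills": [
            {"id": s["id"], "tags": [s["category"]]}
            for s in profile["skills"]
        ],
    }

profile = {
    "identity": {"name": "Nova", "tagline": "Inbox, calendar, standups: handled."},
    "skills": [{"id": "email_triage", "category": "communication"}],
}
card = to_agent_card(profile, "https://agents.example.com/nova")
print(card["name"], len(card["skills"]))
```

Because the profile carries strictly more information than the card, the derivation only ever goes in this direction: profile in, card out.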

The profile layer doesn't replace the communication layer. It complements it.


Why open, and why now

The honest answer to "why publish this as an open standard" is: because the alternative is worse.

Major platforms are already building proprietary agent registries. Meta's acquisition of Moltbook — described internally as "a registry where agents are verified and tethered to human owners" — is the clearest signal. If Meta, Google, Salesforce, and OpenAI build these registries closed, the industry gets locked into them.

We've seen this movie before with social networks, with app stores, with cloud platforms. The window to establish an open standard is measured in months, not years. After that, you're either implementing someone else's proprietary format or you're irrelevant.

An open standard benefits everyone:

  • Providers get neutral distribution without lock-in
  • Buyers can make informed decisions across platforms
  • Developers can build tools, marketplaces, and automation on a shared foundation
  • The ecosystem doesn't fracture into incompatible silos

What exists today

I've been working on this problem and published a first draft: the Agent Profile Protocol (APP) v0.1.

It's a YAML/JSON schema that defines:

  • Agent Profile Object — identity, skills, integrations, trust model, pricing, reputation
  • Review Object — verified client feedback, portable across platforms
  • Job Posting Object — standardized requirements for agent matching
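For a sense of the shape (and only that — the field names below are my sketch for this post, not quoted from SPEC.md), a Review Object might look like:

```yaml
# Hypothetical illustration of a Review Object; consult SPEC.md for
# the actual field names and verification requirements.
review:
  agent_id: "examplecorp/nova"
  rating: 5
  period: "2026-01"
  comment: "Handled inbox triage with zero missed escalations."
  verified_by: "provider"   # who attests this review is from a real client
```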

It's Apache 2.0. No registry to join. No permission needed to implement it.

It's not finished. It's a v0.1 draft, deliberately. The point of publishing now is to get community feedback before patterns calcify.

Some things I'm not sure about:

  • Is the trust_model field too opinionated? It works well for the "supervised → autonomous progression" pattern, but does it cover enough edge cases?
  • The reputation fields are provider-verified — is that sufficient, or does the standard need to define verification mechanisms?
  • Should the job_posting object be part of the core spec, or a separate extension?

I'd rather settle these questions with community input than get them wrong alone.


What I'm asking for

If you build AI agents, I want to know: does this schema describe your agents accurately? What's missing? What's wrong?

If you evaluate or buy AI agents professionally, I want to know: what information do you need to make a hiring decision that this format doesn't capture?

If you work on MCP, A2A, or any other agentic standard, I want to know: how can APP complement what you're building? Is the A2A compatibility mapping in Section 9 of the spec accurate?

The repo is at: https://github.com/agentwork-org/agent-profile-protocol

The full spec is in SPEC.md. The JSON schemas are in /schemas and are validatable with check-jsonschema. There are three working examples in /examples.

Open an Issue. Start a Discussion. Tell me what I got wrong.

An open agentic economy is better for everyone. Let's build the infrastructure it needs.


APP v0.1 — published March 2026
Apache 2.0 — github.com/agentwork-org/agent-profile-protocol
