I asked my coding agent to "create a table for tracking employee certifications."
Five minutes later: complete Dataverse schema. Five tables. Relationships. Sample data. The whole thing.
But here's what kept me up that night: How did the agent decide which tool to use for each step?
Why MCP Server for metadata queries, Python SDK for bulk data, and PAC CLI for solution export?
More importantly: One of those costs Copilot credits. The other two don't.
Let's tear apart Dataverse Skills to see how it actually works.
The Two Unfair Advantages
Understanding skill internals gives you:
1. Cost Control Through Tool Selection
- MCP calls: Consume Copilot credits (per 10 responses)
- Python SDK calls: Free (direct Web API)
- PAC CLI calls: Free (local tooling)
When your agent uses MCP for a bulk operation that could've used the Python SDK, you're paying unnecessarily.
2. Organizational Knowledge as Code
Write skills that encode your patterns:
- Publisher prefix conventions
- Mandatory audit columns
- Solution structure standards
Update one skill file → every developer follows the new pattern automatically. No retraining needed.
Skill File Anatomy: It's Just Markdown
Every Dataverse skill: Markdown + YAML frontmatter. That's it.
Microsoft's Official Format
```markdown
---
name: dv-metadata
description: >
  Create and modify Dataverse tables, columns, relationships, forms, and views.
  Use when: "add column", "create table", "modify form".
  Do not use when: exporting solutions (use dv-solution).
---

# Skill: Metadata — Making Changes

**Do not write solution XML by hand.**

The correct workflow:

1. Make the change in the environment via the MetadataService API
2. Pull the change into the repo via `pac solution export` + unpack
3. Commit the result

The exported XML is generated by Dataverse itself and is always valid.
```
Key insight: the `description` field's "Use when" triggers are what agents use to match your prompts to skills.
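To make that concrete, here is a minimal, purely illustrative Python sketch of how an agent might split the frontmatter from the body and match a prompt against the "Use when" triggers. The parsing and matching logic is an assumption for illustration, not Microsoft's actual implementation:

```python
import re

def parse_frontmatter(skill_text):
    """Split a skill file into (YAML frontmatter, Markdown body)."""
    m = re.match(r"^---\n(.*?)\n---\n(.*)$", skill_text, re.DOTALL)
    if not m:
        raise ValueError("skill file has no YAML frontmatter")
    return m.group(1), m.group(2)

def extract_triggers(frontmatter):
    """Pull the quoted 'Use when' phrases out of the description."""
    line = re.search(r"Use when:\s*(.+)", frontmatter)
    return re.findall(r'"([^"]+)"', line.group(1)) if line else []

def matches_prompt(prompt, triggers):
    """Naive matching: does any trigger phrase appear in the prompt?"""
    return any(t.lower() in prompt.lower() for t in triggers)

skill = '''---
name: dv-metadata
description: >
  Create and modify Dataverse tables, columns, relationships, forms, and views.
  Use when: "add column", "create table", "modify form".
---
# Skill: Metadata — Making Changes
'''

frontmatter, body = parse_frontmatter(skill)
triggers = extract_triggers(frontmatter)
print(triggers)  # ['add column', 'create table', 'modify form']
print(matches_prompt("add column expiry_date to the table", triggers))  # True
```

Real agents use semantic matching rather than literal substring checks, but the pipeline shape (parse frontmatter, extract triggers, score against the prompt) is the same idea.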
Extended Format: Security Boundaries
For organizational skills, extend the frontmatter:
```yaml
---
name: aidevme-create-table
description: >
  Creates a new custom table in Dataverse.
  Use when: "create table", "add entity", "new table".
version: 1.0.0
phase: build
allowed-tools:
  - bash
  - python
  - file
  # NO web_fetch - prevents data exfiltration
safety:
  - Verify publisher prefix matches active solution
  - NEVER add tables to Default solution
requires:
  - dataverse-connect
  - dataverse-create-solution
priority: normal
---
```
The allowed-tools Security Boundary
This field restricts which tools the agent can use.
Example: A data-loading skill may allow bash and file for CSV imports.
But allowing web_fetch opens the door to prompt injection attacks that exfiltrate data via external APIs.
The `allowed-tools` field is your defense against prompt injection.
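A minimal sketch of what that boundary amounts to in practice: a whitelist check before every tool dispatch. All names here are hypothetical, not the real agent runtime API:

```python
# Hypothetical enforcement sketch; the class and function names are
# illustrative, not the actual agent runtime.
ALLOWED_TOOLS = {"bash", "python", "file"}  # note: no web_fetch

class ToolNotAllowedError(Exception):
    pass

def invoke_tool(tool_name, allowed_tools):
    """Refuse any call outside the skill's declared tool boundary."""
    if tool_name not in allowed_tools:
        raise ToolNotAllowedError(
            f"'{tool_name}' is not in allowed-tools: {sorted(allowed_tools)}"
        )
    return f"dispatching {tool_name}"

print(invoke_tool("python", ALLOWED_TOOLS))

try:
    # An injected prompt asking to POST data to an external API dies here.
    invoke_tool("web_fetch", ALLOWED_TOOLS)
except ToolNotAllowedError as err:
    print(err)
```

The point is that the check happens at dispatch time, so even a successfully injected instruction cannot reach a tool the skill never declared.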
How Agents Chain Skills
When you type:
> Create a table for tracking employee certifications
The agent:
- Scans all skills for relevance
- Identifies `dataverse-create-table` as the primary match
- Checks the `requires` field → loads prerequisites first
- Executes steps in sequence
- Selects tools based on `allowed-tools`
For complex prompts, the chain looks like:
```
dataverse-connect
└── dataverse-mcp-register
dataverse-create-solution
dataverse-create-table (× 5)
└── dataverse-create-column (× N)
└── dataverse-create-relationship (× 2)
dataverse-load-data
dataverse-query
```
Not hardcoded. Built from requires declarations + semantic analysis.
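A sketch of how such a chain could be assembled from `requires` declarations alone. The skill names come from this article; the resolver itself is illustrative, assuming a simple depth-first load order:

```python
# Illustrative resolver: flatten a skill's 'requires' chain so that
# every prerequisite loads before the skill that needs it.
SKILLS = {
    "dataverse-connect": [],
    "dataverse-create-solution": ["dataverse-connect"],
    "dataverse-create-table": ["dataverse-connect", "dataverse-create-solution"],
}

def resolve_chain(skill, skills, loaded=None):
    """Depth-first traversal: prerequisites first, skill itself last."""
    if loaded is None:
        loaded = []
    for dep in skills[skill]:
        if dep not in loaded:
            resolve_chain(dep, skills, loaded)
    if skill not in loaded:
        loaded.append(skill)
    return loaded

print(resolve_chain("dataverse-create-table", SKILLS))
# ['dataverse-connect', 'dataverse-create-solution', 'dataverse-create-table']
```

The semantic-analysis half (deciding *which* skills belong in the chain at all) sits on top of this; the `requires` field only guarantees ordering.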
The Three-Tool Strategy (The Part That Really Matters)
Agent-driven development requires encoding tool selection logic into skills themselves.
It's about three factors:
- Cost management: MCP calls = Copilot credits
- Operational correctness: Bulk ops need transactional integrity
- Architectural compliance: ALM needs PAC CLI for audit trails
Tool 1: Dataverse MCP Server
Best for:
- Fast metadata queries (`list_tables`, `describe_table`)
- Simple record reads (`read_query`)
- Single-record operations
Limitations:
- Charged per use (Copilot credits)
- Not optimal for bulk operations
Tool 2: Dataverse Python SDK
Best for:
- Bulk data operations (100+ records)
- Pandas DataFrame transformations
- ETL pipelines
The advantage: Uses Web API directly. No MCP charges.
```python
from PowerPlatform.Dataverse.client import DataverseClient
from azure.identity import AzureCliCredential

client = DataverseClient(
    "https://yourorg.crm.dynamics.com",
    AzureCliCredential()
)

# Bulk create with CreateMultiple (100x faster)
records = [
    {
        "aidevme_firstname": f"Consultant {i}",
        "aidevme_specialization": "Power Platform"
    }
    for i in range(1, 51)
]

ids = client.records.create("aidevme_consultant", records)
print(f"Created {len(ids)} consultants")
```
Tool 3: PAC CLI
Best for:
- Solution ALM (export, import, publish)
- Environment management
- Component registration
```bash
# Solution export for ALM
pac solution export \
  --name ConsultingTracker \
  --path ./solutions/ConsultingTracker.zip \
  --managed false

# Publish all customizations
pac solution publish
```
The Decision Matrix
| Task | MCP | Python SDK | PAC CLI |
|---|---|---|---|
| List tables | ✅ | ❌ | ❌ |
| Read 10 records | ✅ | ✅ | ❌ |
| Read 10,000 records | ❌ | ✅ | ❌ |
| Create 1 record | ✅ | ✅ | ❌ |
| Create 500 records | ❌ | ✅ | ❌ |
| Export solution | ❌ | ❌ | ✅ |
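One way a skill could encode this matrix as explicit tool-selection logic. This is a hedged sketch: the task names and the 100-record bulk threshold follow the guidance above, and everything else is illustrative:

```python
# Sketch: the decision matrix as a function a skill's step
# instructions could spell out for the agent.
def choose_tool(task, record_count=0):
    if task in ("export_solution", "import_solution", "publish"):
        return "pac_cli"      # ALM operations go through PAC CLI
    if task in ("list_tables", "describe_table"):
        return "mcp"          # fast metadata queries
    if record_count >= 100:
        return "python_sdk"   # bulk ops: direct Web API, no MCP charges
    return "mcp"              # single-record and small reads

print(choose_tool("export_solution"))                    # pac_cli
print(choose_tool("read_records", record_count=10_000))  # python_sdk
print(choose_tool("create_record", record_count=1))      # mcp
```

Writing the rule down like this, rather than leaving it implicit, is exactly what lets the agent pick the free tool for bulk work.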
The cost impact:
- Typical dev session (5-table solution): 25-50 MCP calls
- 50-person team, 10 sessions per developer per month: 12,500-25,000 MCP calls/month
- Industry benchmark: $200-800/month for 20-100 developers
Real-World Production Example
From Daniel Kerridge's claude-code-power-platform-skills:
```markdown
---
name: dataverse-plugins
description: >
  Use when developing Dataverse plugins.
  Use when: "plugin", "server-side logic", "PreValidation".
---

## CRITICAL RULES

1. **Plugins run in a sandbox.** Restricted access to external resources.
2. **2-minute timeout** for synchronous plugins. Use async or
   offload to Power Automate for long operations.
3. **Throw `InvalidPluginExecutionException`** to show user-facing
   errors. All other exceptions = generic error messages.
4. **Never use static variables** for state. Plugin instances are
   cached and reused. Use `IPluginExecutionContext.SharedVariables`.
5. **Always register entity images** when you need pre/post values.
   Don't make extra Retrieve calls.
```
Why this matters: Junior developers make these mistakes once (painfully). When encoded in the skill, the agent never makes them.
That's organizational knowledge as code.
Writing Your Own Skills
Base Template
```markdown
---
name: {publisher}-{action}-{subject}
description: >
  What it does and when to use it.
  Use when: "trigger phrases".
  Do not use when: {anti-patterns}.
---

# Skill: {Title}

## Purpose
What does this skill do?

## Pre-conditions
1. What must be true before running?
2. What environment state is assumed?

## Steps

### Step 1: Validate inputs
What to check first.

### Step 2: Main operation
The actual work. Include tool calls, code examples.

### Step 3: Verify and report
How to confirm success.

## Error handling
| Error | Cause | Resolution |
|-------|-------|------------|
| ...   | ...   | ...        |
```
Extended Template (With Security)
Add these optional fields:
```yaml
---
name: {publisher}-{action}-{subject}
description: >
  Description with triggers.
version: 1.0.0
phase: build
allowed-tools:
  - bash
  - python
  - file
safety:
  - Explicit safety rules
requires:
  - dataverse-connect
priority: normal
---
```
Testing Your Skills
For Claude Code
Create `.claude-plugin/marketplace.json`:

```json
{
  "name": "My Power Platform Skills",
  "version": "1.0.0",
  "skills": [
    {
      "name": "aidevme-create-table",
      "path": "skills/aidevme-create-table.md"
    }
  ]
}
```
For GitHub Copilot
Add to `.github/plugins/yourorg/`.
Test with explicit invocation:

```
/aidevme-create-environment Create a sandbox named "Project Alpha Dev"
```
The Debugging Checklist
Agent using wrong tool?
- ✅ Check the skill's `description` field for the user's trigger phrase
- ✅ Add explicit tool guidance in step instructions
- ✅ Review `allowed-tools` restrictions
- ✅ Increase `priority` if competing with built-in skills
- ✅ Verify skill registration in marketplace.json
Key Takeaway
The most valuable IP in intent-driven development: well-crafted skills that encode your organization's patterns.
When you write a skill that says:
- "Always verify publisher prefix before creating tables"
- "Never add to Default solution"
- "Use Python SDK for 100+ records"
You're encoding decisions that would otherwise live in developer heads and tribal knowledge.
Update one skill file → every agent session follows the new pattern.
What's Next: Part 4
Coming up: Enterprise Architecture View
- MCP billing models (real benchmarks)
- Managed Environment governance
- ALM integration patterns
- Security posture (prompt injection, secrets)
- 2026 Power Platform roadmap context
The questions that determine enterprise adoption.
Resources
📖 Full Technical Guide: aidevme.com/under-the-hood-how-dataverse-skills-work
🔧 Daniel Kerridge's Skills: github.com/DanielKerridge/claude-code-power-platform-skills
🎯 Microsoft Dataverse Skills: github.com/microsoft/dataverse-skills
Discussion
Have you written custom Dataverse skills? What patterns are you encoding?
What tool selection decisions have you made (MCP vs SDK vs CLI)?
Drop a comment below 👇 — I reply to everyone.