oa-aec-mcp: Revit Audit Workflows as MCP Tools
Most Revit MCP servers expose element-level primitives.
Get walls. Create walls. Set parameters. Delete elements.
Those building blocks are useful if your goal is to let an LLM edit a model. But BIM coordinators usually do not think in terms of element CRUD. They think in terms of model health, naming compliance, warning clusters, and incomplete room data.
There is also a practical problem with element-level interfaces: they put the wrong work in the wrong place. If an LLM has to run a naming convention audit using raw element tools, it has to enumerate all elements in a category, read each name, apply a pattern, collect failures, and repeat for every category. That is thousands of round trips, significant token cost, and a lot of reasoning the model has to do in its context window instead of focusing on the actual question.
That is the gap oa-aec-mcp is trying to fill.
What It Is
oa-aec-mcp is an open-source MCP server that exposes four read-only Revit audit tools to Claude Desktop. The C# plugin handles all data collection and filtering inside Revit, sending only aggregated results across the WebSocket connection. The model gets a structured answer, not a pile of element IDs to sort through.
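As a rough sketch, this is the kind of compact payload the plugin might send back for a health summary. The field names and values here are illustrative assumptions, not the project's actual wire format:

```typescript
// Hypothetical shape of an aggregated audit response. The point is that
// one small object crosses the WebSocket, not thousands of element IDs.
interface HealthSummary {
  warningCount: number;
  unusedFamilies: number;
  unplacedRooms: number;
  viewCount: number;
  summary: string; // plain-English read on model condition
}

// Illustrative example values, not real model data.
const example: HealthSummary = {
  warningCount: 143,
  unusedFamilies: 27,
  unplacedRooms: 3,
  viewCount: 312,
  summary: "143 warnings; 3 unplaced rooms; 27 unused families loaded.",
};
```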
Two public repositories, built over two weeks:
- oa-aec-mcp — TypeScript MCP server: https://github.com/omarabdelazizeng-sketch/oa-aec-mcp
- oa-aec-mcp-plugin — C# Revit plugin: https://github.com/omarabdelazizeng-sketch/oa-aec-mcp-plugin
The Four Tools
summarize_model_health — no input required. Returns warning count, unused families, unplaced rooms, view count, and a plain-English summary of model condition. This is the first tool to call when you want a quick read on where a model stands.
list_unplaced_rooms — optional level filter. Returns unplaced rooms with room number, name, department, and level. Useful when a room program is still evolving and placeholders have accumulated.
find_warnings_by_category — accepts a specific warning type or "all". Groups warnings by type and returns affected element IDs. Instead of a raw warning count, you get structure: which warning categories are driving the number, and which elements are involved.
audit_naming_conventions — accepts a regex pattern and an optional list of categories (Views, Sheets, Rooms, Levels, Walls, Doors, Windows, Families). Returns violations grouped by category, with total counts and a truncation flag if there are more violations than the return cap.
The key feature of this tool is not the regex itself. It is that Claude can translate a natural-language naming rule into a regex before calling the tool. A coordinator does not need to know regex syntax — they describe the standard, Claude writes the pattern, and the tool runs the audit.
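The grouping-and-truncation behavior can be sketched in a few lines. This is an illustrative reimplementation, not the plugin's actual code (the real filtering runs in C# inside Revit), and `RETURN_CAP` is an assumed value:

```typescript
// Illustrative sketch of the audit's grouping and truncation logic.
const RETURN_CAP = 100; // assumed cap, not the project's real value

function auditNames(pattern: string, namesByCategory: Record<string, string[]>) {
  const re = new RegExp(pattern);
  const violations: Record<string, string[]> = {};
  let total = 0;
  let truncated = false;
  for (const [category, names] of Object.entries(namesByCategory)) {
    const bad = names.filter((n) => !re.test(n));
    if (bad.length === 0) continue;
    if (bad.length > RETURN_CAP) truncated = true;
    violations[category] = bad.slice(0, RETURN_CAP); // cap what goes back to the model
    total += bad.length;
  }
  return { violations, total, truncated };
}

const result = auditNames("^[A-Z]{2}-[A-Z0-9_]+$", {
  Views: ["AR-PLAN_L02", "plan level 2"],
  Sheets: ["ST-S101", "Sheet 1"],
});
// Two violations: "plan level 2" and "Sheet 1" fail the two-letter-prefix rule.
```

The model only ever sees the grouped result, which is the whole point of pushing the filtering to the plugin side.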
Example: Pre-Submission Model Check
A realistic prompt:
Review this model before issue. Check overall health, list any unplaced rooms on Level 02, identify the top warning categories, and check that views and sheets follow our naming standard — two uppercase letters, a hyphen, then the name.
Claude calls four tools in sequence:
summarize_model_health()
list_unplaced_rooms(level="Level 02")
find_warnings_by_category("all")
audit_naming_conventions("^[A-Z]{2}-[A-Z0-9_]+$", ["Views", "Sheets"])
The fourth call is where the natural-language → regex translation happens. Claude reads "two uppercase letters, a hyphen, then the name" and generates ^[A-Z]{2}-[A-Z0-9_]+$ before calling the tool. The user never writes a pattern.
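A generated pattern is easy to sanity-check against a few names before trusting the audit. A minimal check (the sample names are invented):

```typescript
// "Two uppercase letters, a hyphen, then the name" as a regex.
const pattern = /^[A-Z]{2}-[A-Z0-9_]+$/;

const compliant = ["AR-FLOOR_PLAN_L02", "ST-FOUNDATION"];
const violations = ["FloorPlan-L02", "A-ROOF", "ar-site"];

console.log(compliant.every((n) => pattern.test(n))); // true
console.log(violations.some((n) => pattern.test(n))); // false
```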
Example response from Claude after the four calls return:
The model is not ready for issue. Warning count is 143, concentrated in duplicate mark values (18 instances) and unenclosed rooms (9 instances). Level 02 has three unplaced rooms assigned to Operations and IT — these look like program placeholders that were never placed. View and sheet naming compliance is poor: 34 violations of the two-letter prefix standard across 156 checked elements. I would address room placement and naming before the next coordination package.
That is the intended workflow. One prompt, four targeted tool calls, one actionable summary.
Architecture
Claude Desktop
↕ stdio (MCP)
TypeScript MCP Server (oa-aec-mcp)
↕ WebSocket localhost:8765
C# Revit Plugin (oa-aec-mcp-plugin)
↕ ExternalEvent dispatcher
Revit API
The TypeScript server handles the MCP protocol and exposes tools to Claude Desktop. The C# plugin runs inside Revit, receives requests over WebSocket, and executes them through Revit's ExternalEvent mechanism so all API calls run on the Revit main thread. The two repos are intentionally separate — the protocol layer and the Revit-specific logic stay independent.
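One detail a bridge like this has to handle is matching each tool call to the reply that eventually comes back over the WebSocket. A minimal sketch of that correlation, assuming a `{ id, tool, args }` request and `{ id, result }` reply shape (not the project's actual message format):

```typescript
// Illustrative request/response correlation between the MCP (stdio) side
// and the WebSocket side. Message shapes here are assumptions.
function makeBridge(send: (msg: string) => void) {
  const pending = new Map<string, (result: unknown) => void>();
  let nextId = 0;

  // The MCP layer calls this when Claude invokes a tool.
  function callTool(tool: string, args: object, onResult: (r: unknown) => void) {
    const id = String(nextId++);
    pending.set(id, onResult);
    send(JSON.stringify({ id, tool, args }));
  }

  // The WebSocket client calls this when the Revit plugin replies.
  function onMessage(raw: string) {
    const { id, result } = JSON.parse(raw);
    pending.get(id)?.(result);
    pending.delete(id);
  }

  return { callTool, onMessage };
}
```

Keeping the correlation map on the TypeScript side means the plugin can reply out of order (useful when ExternalEvent dispatch makes timing unpredictable) and each pending call still resolves correctly.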
Limitations
- Naming audits use regular expressions only — no semantic understanding of BIM standards
- Audit only — does not prevent bad naming or block user actions in Revit
- No automatic fixes of any kind
- v0.1 supports Revit 2025 only
- Requires the companion plugin to be running inside Revit before calling any tool
What This Is Not
There are already four or five open-source Revit MCP implementations, including at least one with several hundred stars. This project is not trying to be the most complete. It is narrower: four read-only audit tools that cover a specific slice of what BIM coordinators actually do repeatedly.
If that narrow scope saves a few manual QA steps before a coordination meeting, it is doing its job.
What Is Next
v0.2 candidates:
- Multi-pattern naming audits in a single call
- extract_sheet_index — sheet numbers, names, and current revision
- get_view_filter_summary — view visibility override detection
- Workset usage summary
- View template coverage report
The same principle applies to everything on that list: one tool per coordination workflow, not one tool per API method.
Try It
Open a production model, run a health check, and see whether the results are useful.
If something breaks, or if there is an audit workflow you run regularly that should be a tool, open an issue on either repo.