This is a submission for the DEV Worldwide Show and Tell Challenge Presented by Mux.
## What I Built
Prism Context Engine is the industry's first Context Management System (CMS) for AI Agents.
We live in the era of "Vibecoding," where we treat AI as a junior developer. But the workflow is broken: developers waste hours copy-pasting documentation or explaining their specific "design vibe" to Cursor, Windsurf, or Claude over and over again. This is Context Pollution.
Prism solves this by bridging the gap between Video and Code:
- Record: I record a screen capture explaining my app's architecture ("The Vibe").
- Ingest: Prism uses Mux to process the video and Azure AI to extract rigid coding rules from my voice (the upload handoff is sketched just after this list).
- Deploy: A local MCP (Model Context Protocol) server pipes these rules directly into my IDE, effectively giving the AI "institutional memory."
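For a sense of what the Record → Ingest handoff looks like, here is a minimal sketch of a Next.js route that requests a Mux direct-upload URL. The route path, CORS origin, and settings are illustrative assumptions rather than Prism's actual code; it only assumes the `@mux/mux-node` SDK.

```typescript
// app/api/uploads/route.ts — hypothetical route, not Prism's actual implementation.
// Assumes MUX_TOKEN_ID / MUX_TOKEN_SECRET are set in the environment,
// which the Mux SDK picks up automatically.
import Mux from '@mux/mux-node';

const mux = new Mux();

export async function POST() {
  // Ask Mux for a direct-upload URL; the browser recorder PUTs the screen capture there.
  const upload = await mux.video.uploads.create({
    cors_origin: 'https://prism.jeffdev.studio', // assumed app origin
    new_asset_settings: {
      playback_policy: ['public'],
    },
  });

  // The client uploads to `upload.url`; the `video.asset.ready` webhook fires once Mux finishes processing.
  return Response.json({ uploadId: upload.id, uploadUrl: upload.url });
}
```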
## My Pitch Video
## Demo
- Live App: https://prism.jeffdev.studio
- Documentation: https://docs.jeffdev.studio
- Repository: https://github.com/J-Akiru5/jeffdev-monorepo
How to Test (Judges):
You can sign up for free, or use these pre-configured credentials to access the "Prism MVP" demo project:
- Email: `demo@jeffdev.studio`
- Password: `PrismDemo2026!`
## The Story Behind It
I run JeffDev Studio, a startup agency. As the lead architect, I often have a specific "visual constitution" for my projects (e.g., JeffDev Design System, Neon Ocean Vibe).
I realized that Implicit Knowledge—the "why" behind the code—is rarely written down. It's usually trapped in a Loom video or a call. I wanted to build a system that respects the developer's time.
The core philosophy is simple: "Don't write docs. Just talk."
I built Prism to prove that we can turn unstructured video data into structured, enforceable linting rules for AI.
## Technical Highlights
Prism operates on a Hybrid Cloud Architecture (Azure/Vercel) with a "Local Waiter" (MCP Server).
### 1. The "Brain" (Cloud)
- Video Ingestion: Mux handles the raw video stream. We use Mux Webhooks (`video.asset.ready`) to trigger the intelligence pipeline.
- Rule Extraction: Once Mux confirms the asset is ready, an Azure Function grabs the transcript and passes it to Azure OpenAI (GPT-4o mini) to extract JSON-structured rules (sketched after this list).
- Storage: Azure Cosmos DB (MongoDB API) stores the "Recipes" (Rules) and Vector Embeddings.
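To make that pipeline concrete, here is a rough sketch of the webhook-to-rules step. Everything here is an assumption for illustration: the `readTranscript` helper, the deployment name, the prompt, and the rule shape are hypothetical, and it uses the `openai` package's `AzureOpenAI` client rather than Prism's actual Azure Function.

```typescript
// Illustrative only — not Prism's real Azure Function.
import { AzureOpenAI } from 'openai';

// Hypothetical helper: fetch the generated captions/transcript for a Mux asset.
declare function readTranscript(assetId: string): Promise<string>;

const openai = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!,
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  apiVersion: '2024-06-01',   // assumed API version
  deployment: 'gpt-4o-mini',  // assumed deployment name
});

export async function handleMuxWebhook(event: { type: string; data: { id: string } }) {
  // Only react once Mux says the asset (and its transcript) is ready.
  if (event.type !== 'video.asset.ready') return [];

  const transcript = await readTranscript(event.data.id);

  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    response_format: { type: 'json_object' },
    messages: [
      {
        role: 'system',
        content:
          'Extract enforceable coding rules from this architecture walkthrough. ' +
          'Respond with JSON: {"rules":[{"id":string,"rule":string,"severity":"must"|"should"}]}',
      },
      { role: 'user', content: transcript },
    ],
  });

  const { rules } = JSON.parse(completion.choices[0].message.content ?? '{"rules":[]}');
  // ...persist `rules` (plus their vector embeddings) to Cosmos DB here.
  return rules;
}
```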
### 2. The "Waiter" (Local MCP)
This is the core innovation. We built a custom MCP Server (`@prism/mcp-server`) that runs locally on the developer's machine.
- It connects to the Cloud Brain via API.
- It exposes tools like `get_architectural_rules` and `search_video_transcript` to the IDE (a minimal registration sketch follows this list).
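Here is what registering one of those tools could look like with the official MCP TypeScript SDK. The server name, cloud-brain endpoint, `PRISM_API_KEY` variable, and response shape are assumptions for illustration, not the actual `@prism/mcp-server` code.

```typescript
// Sketch of how get_architectural_rules might be exposed — not the real @prism/mcp-server.
// Assumes @modelcontextprotocol/sdk and zod; the API URL and auth header are hypothetical.
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const server = new McpServer({ name: 'prism-mcp-server', version: '0.1.0' });

server.tool(
  'get_architectural_rules',
  'Fetch the extracted coding rules ("the vibe") for a Prism project',
  { projectId: z.string() },
  async ({ projectId }) => {
    // Hypothetical endpoint exposed by the cloud brain's Next.js API routes.
    const res = await fetch(
      `https://prism.jeffdev.studio/api/projects/${projectId}/rules`,
      { headers: { Authorization: `Bearer ${process.env.PRISM_API_KEY}` } },
    );
    const rules = await res.json();

    // MCP tools return content blocks; the IDE feeds this text into the model's context.
    return { content: [{ type: 'text' as const, text: JSON.stringify(rules, null, 2) }] };
  },
);

// stdio transport: Windsurf/Cursor spawn this server as a local child process.
await server.connect(new StdioServerTransport());
```

Pointing Windsurf or Cursor at this process in their MCP settings is what makes the tools appear inside the IDE.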
**The "Magic Moment":**
In the pitch video, you will see Windsurf explicitly citing the "JeffDev Design System" and the "#050505 void base" color. The AI never saw the codebase. It learned those specific constraints solely by "watching" the Mux video I uploaded.
## Architecture Diagram
```mermaid
graph TD
    subgraph "Local IDE (Windsurf/Cursor)"
        IDE --> MCP[Prism MCP Server]
    end
    subgraph "Cloud Brain"
        API[Next.js API Routes] --> Cosmos[(Azure Cosmos DB)]
        API --> Mux[Mux Video Engine]
        Mux -- Webhook --> AI[Azure OpenAI]
    end
    MCP --> API
```