Anthony Fox
Model + Persona + Document: A Simple Framework for Local AI Workflows

Introduction

As AI tools become more available to everyday developers, many are realizing they don’t need cloud access, API keys, or enterprise software to benefit. A private, local setup can offer fast, flexible, and secure AI capabilities — especially when structured around a simple system: Model + Persona + Document.

This article outlines a practical framework for integrating local AI into your workflow. It's not a tutorial (though one is linked at the end), but a conceptual guide for building your own assistant using local tools and minimal structure.


1. The Core Components

Model

The model is your language engine — a large language model (LLM) running entirely on your machine. Examples include open-weight models such as Llama, Mistral, and Phi.

You can run these models with tools like:

  • Ollama — Easy model management and CLI access
  • LM Studio — Local GUI frontend
  • llama.cpp — For deep integration with custom workflows

Why local?

  • ⚡ Fast response time
  • 🔐 Full data privacy
  • 💸 No API or token costs

Persona

The persona is the behavior configuration — a persistent system prompt that defines how the model should act. Think of it as the personality, role, or intent you give the assistant.

Examples:

  • "You are a calm technical editor. Speak concisely and critique structure."
  • "You are a smart AI co-author focused on helping the user express ideas clearly."
  • "You are an Emacs expert. You answer questions precisely with minimal explanation."

Implementation:

  • Store in a config file (e.g., ~/.config/ai-profile.el)
  • Load into memory before every prompt session
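The two implementation steps above can be sketched in a few lines of Python. The file path and function name here are illustrative, not prescriptive — any location and format that suits your config layout works:

```python
from pathlib import Path

# Hypothetical persona file location; adjust to your own config layout.
PERSONA_PATH = Path.home() / ".config" / "ai-persona.txt"

def load_persona(path: Path = PERSONA_PATH) -> str:
    """Read the persistent system prompt from disk, with a safe fallback."""
    if path.exists():
        return path.read_text(encoding="utf-8").strip()
    return "You are a concise, helpful assistant."
```

Swapping roles then means nothing more than pointing at a different file before the session starts.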

Why use personas?

  • 🧠 Shape tone and output style
  • 🔄 Swap roles instantly depending on task
  • 📚 Consistency across sessions

Document

The document is the content you’re working with — it provides context and acts as the primary subject of the AI’s output.

Examples:

  • A blog post draft (Markdown)
  • A code buffer (Python, JavaScript, etc.)
  • An outline in Org-mode
  • Meeting notes or a README

You inject the document into the AI's context along with the persona and a custom prompt (e.g., "Refactor this," or "Summarize the key points").
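That injection step can be as simple as concatenating the three pieces with clear delimiters. A minimal sketch — the delimiter markers are arbitrary choices, not a required format:

```python
def build_prompt(persona: str, document: str, instruction: str) -> str:
    """Combine persona, document, and instruction into one prompt string.

    The delimiters are arbitrary; any clear separation between behavior,
    source material, and instruction works.
    """
    return (
        f"{persona}\n\n"
        f"--- DOCUMENT ---\n{document}\n--- END DOCUMENT ---\n\n"
        f"{instruction}"
    )
```

Keeping this as a plain function makes it easy to call from a shell script or an editor command alike.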

Why documents matter:

  • 🧩 Give the model grounding context
  • 📝 Work on real files, not abstract chat
  • 🖇️ Combine with editor commands for tight integration

2. Operational Flow

The system is minimal:

Model ← Persona + Document + Prompt → Output

Basic Flow:

  1. Load the persona
  2. Extract the document content
  3. Append a user instruction (e.g., "Rewrite this intro")
  4. Send to the model via command-line or Emacs
  5. Insert the response back into your workspace
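The five steps above can be sketched end to end in Python, assuming Ollama as the model runner (it exposes a local HTTP API at localhost:11434 by default). The function names and the "llama3" default are illustrative assumptions:

```python
import json
import urllib.request
from pathlib import Path

# Ollama's default local endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(persona: str, document: str, instruction: str,
                  model: str = "llama3") -> dict:
    """Assemble the JSON payload: persona as the system prompt,
    document plus instruction as the user prompt."""
    return {
        "model": model,
        "system": persona,
        "prompt": f"{document}\n\n{instruction}",
        "stream": False,
    }

def run(persona_file: str, doc_file: str, instruction: str) -> str:
    """Steps 1-5: load persona, read document, build payload, query the model."""
    payload = build_request(
        Path(persona_file).read_text(encoding="utf-8"),
        Path(doc_file).read_text(encoding="utf-8"),
        instruction,
    )
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The returned text is what you would insert back into your buffer — via a shell pipeline, an Emacs command, or whatever glue fits your editor.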

3. Example Use Cases

  • Refactor or explain a block of code
  • Improve writing tone or structure
  • Convert notes to formatted documentation
  • Catch inconsistencies across large text files

4. Advantages of This System

  Feature                      Benefit
  Local models                 Fast, secure, offline-capable
  Persona config               Consistent, swappable roles
  Document grounding           Focused, relevant responses
  Emacs/Spacemacs integration  Minimal interruption, keyboard-friendly

5. Want to Try It?

Here’s a step-by-step breakdown of how I built this system into Spacemacs:
👉 I Integrated Local AI into Spacemacs – Here's How It Works


Final Thought

This isn't a magic formula. It's a simple structure that gives AI a place in your workflow without overwhelming it.

Model + Persona + Document — nothing more, nothing less.

Make it your own.
