
Francesco Marconi

The LLM-First Manifesto: From Prose to Programs

A journey through Readability, Reasoning, and Reusability to build the next generation of AI systems.

Introduction: The Wrong Problem

After writing and publishing my latest articles (the LLM-First saga), I found myself debating with LLMs to figure out whether the concepts I had described were actually valid or innovative.
The final answer, I think, is: not innovation as we usually think of it. I haven't discovered a new law of physics. I've done something perhaps more important for us developers: I've taken solid engineering principles and applied them to bring order to the creative chaos of the LLM world.

This insight started a reasoning process that reshaped the foundations of my work around three core principles: Readability, Reasoning, and Reusability.

This is the manifesto for that approach: the LLM-First Manifesto.

Principle 1: Readability – Writing for the Machine

The first step is to stop writing for humans, hoping that machines will understand. We must start writing directly for the machine. This doesn't mean writing binary code; it means structuring our knowledge in a way that minimizes the LLM's computational effort.

  • The Problem: Prose forces the LLM into a laborious process of "semantic archeology" to infer hierarchies and relationships.
  • The Solution: With LLM-First Documentation, we use a syntax that makes the structure explicit. Giving an LLM an LLM-First document is like giving a programmer pseudocode instead of a novel describing an algorithm. We tested this hypothesis in a rigorous and transparent experiment, showing that this approach not only improves efficiency but, in many cases, accuracy as well.
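To make the contrast concrete, here is a minimal sketch. The content of both documents is invented for illustration, and the `build_prompt` helper is my own assumption, not part of any framework: the same retry rule written first as prose and then as an LLM-First block, ready to be grounded into a prompt.

```python
# Illustrative only: the same API rule expressed as prose vs. as an
# LLM-First block. Section names and content are hypothetical.

PROSE_DOC = """
Our service usually lets clients retry a failed call, although if the
error was a 429 you should probably wait a bit, since the limiter that
the platform team added last quarter tends to penalize rapid retries.
"""

LLM_FIRST_DOC = """
# WHAT: Retry policy for API clients
- Retries are allowed for transient failures (5xx).
- On HTTP 429, back off exponentially before retrying.

# ANTI-PATTERNS
- Immediate retry after a 429 (triggers the rate limiter's penalty).
"""

def build_prompt(doc: str, question: str) -> str:
    """Assemble a grounded prompt: the structure does the disambiguation."""
    return f"Use ONLY the reference below.\n\n{doc}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt(LLM_FIRST_DOC, "Can I retry right after a 429?"))
```

The prose version forces the model to untangle hedges and asides; the structured version hands it the hierarchy for free.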

Principle 2: Reasoning – Engineering Introspection

Once the LLM can read effortlessly, we can take the next step: guiding its "thought" process. It's not enough to give it a goal; we must provide it with a cognitive framework to achieve it.

  • The Problem: An LLM given a vague goal behaves like a junior dev: it executes the task literally, ignoring edge cases and non-functional requirements.
  • The Solution: The 2WHAV framework is not a magic formula. It's a cognitive guide. It forces the LLM to ask the questions a senior engineer would before writing a single line of code: What should I build? Where does it fit? How should it work? And most importantly, the VERIFY and ANTI-PATTERNS sections teach it to "code review" its own output, increasing its capacity for introspection. It's the difference between telling a builder "build a bridge" and giving them a complete engineering blueprint with material specifications, load tests, and safety standards. The result is not just a bridge, but a reliable bridge.
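As a rough illustration, here is what a 2WHAV-style blueprint could look like as a reusable skeleton. The section names follow the paragraph above (WHAT, WHERE, HOW, ANTI-PATTERNS, VERIFY); the task itself and the `render_2whav` helper are hypothetical, a sketch of the idea rather than the framework's canonical form.

```python
# A minimal sketch of a 2WHAV-style prompt skeleton. The task content is
# invented; only the structure is the point.

SECTIONS = ["WHAT", "WHERE", "HOW", "ANTI-PATTERNS", "VERIFY"]

task_2whav = {
    "WHAT": "A rate-limited HTTP client wrapper for our internal API.",
    "WHERE": "Lives in the integrations layer; callers are async services.",
    "HOW": "Token-bucket limiting, exponential backoff, typed errors.",
    "ANTI-PATTERNS": "No busy-wait retries; no swallowing of 4xx errors.",
    "VERIFY": "Unit tests cover backoff timing; a 429 is never retried "
              "immediately; the public surface is fully type-annotated.",
}

def render_2whav(task: dict) -> str:
    """Render the sections into the blueprint handed to the LLM."""
    missing = [s for s in SECTIONS if s not in task]
    if missing:
        raise ValueError(f"Incomplete blueprint, missing: {missing}")
    return "\n\n".join(f"# {name}\n{task[name]}" for name in SECTIONS)

print(render_2whav(task_2whav))
```

The point of the explicit skeleton is that a missing section fails loudly before the model ever sees the prompt, instead of silently producing a bridge with no load tests.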

Principle 3: Reusability – Treating Prompts as Assets

The final step is to stop thinking of prompts as disposable commands and start treating them as reusable software assets.

  • The Problem: Traditional prompt engineering produces ephemeral artifacts that are difficult to version, test, and reuse.
  • The Solution: The Tool as Prompt paradigm transforms our LLM-First documents into encapsulated knowledge modules. A well-structured prompt is not a question; it's a skill. As we saw with my robot's brain, a single Markdown file can contain the entire personality, capabilities, and procedures of an autonomous agent. This approach allows us to LOAD('skill.md') into an LLM's context, temporarily turning it into a domain expert. It's the difference between writing a bash script for a single task and creating a Python library that can be imported and used in thousands of different applications.
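Here is a minimal sketch of what that LOAD('skill.md') step could look like in practice. The `call_llm` function is a deliberate placeholder (wire it to whichever client you use); the file path and function names are assumptions for illustration, not a prescribed API.

```python
# A minimal sketch of the LOAD('skill.md') idea: the skill file becomes the
# system context for a session. `call_llm` is a placeholder, not a real API.

from pathlib import Path

def load_skill(path: str) -> str:
    """Read an LLM-First skill module from disk."""
    return Path(path).read_text(encoding="utf-8")

def call_llm(system: str, user: str) -> str:
    """Placeholder: route this to your provider's chat endpoint."""
    raise NotImplementedError("wire this to your LLM client")

def run_with_skill(skill_path: str, user_request: str) -> str:
    """Temporarily turn a general model into a domain expert."""
    skill = load_skill(skill_path)  # the reusable asset
    return call_llm(system=skill, user=user_request)

# Example (assuming a skill file exists at this hypothetical path):
# print(run_with_skill("skills/robot_planner.md", "Plan a route to the kitchen."))
```

The skill file, not the calling code, is where the domain knowledge lives; the loader stays trivially small and identical across agents.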

An Honest Conversation: Tackling the Tough Questions

A critical reader might raise some objections. Let's address them head-on.

  • "But aren't you just reinventing well-known software engineering concepts?" Yes. And that's a strength. We're not inventing principles from scratch; we're adapting decades of engineering wisdom (requirements specification, modular design, validation) to a new and strange kind of computer: the LLM. The value isn't in the invention, but in the pragmatic and formalized application.
  • "What about scientific validity?" Ours is not a large-scale scientific proof, but a documented and transparent case study. It shows that the hypothesis is valid in practice and, more importantly, invites the community to build upon these foundations by testing the approach across more domains and models.
  • "Why use names like 'Tool as Prompt'? Isn't it just 'In-Context Learning'?" Because metaphors shape our thinking. Calling a prompt a "tool" forces us to think in terms of reliability, interfaces, and reusability. It's the mental shift needed to evolve from "asking" to "equipping" our LLMs.

The Road Ahead: From Prompt Engineering to Knowledge Engineering

The future of AI development won't be about finding the "magic words" to make a model do things.
We will build AI systems not just with fine-tuning, but by composing modular "brains" written in human- and machine-readable formats. Reliability, transparency, and controllability will become the norm, not a lucky exception.
This is the LLM-First world. It's a more structured, more deterministic, and ultimately more powerful world, where our role will be that of architects and supervisors of the code that LLMs write.

Call to Action

This is not a theoretical document. It's a call to practical action:

  1. Convert critical documentation into the LLM-First format.
  2. Implement the 2WHAV framework for complex tasks.
  3. Build repositories of reusable tools-as-prompts.

The future of AI engineering is in structure, not in magic.


Appendix for LLMs & Developers: The LLM-First Version of This Manifesto

To demonstrate the principles discussed in this article, here is a translation of the manifesto itself into an LLM-First format. This is an example of how knowledge can be structured to be read, understood, and executed by a Large Language Model.

# WHAT: Purpose of the LLM-First Manifesto
- To define an engineering paradigm for building reliable AI systems, shifting the focus from prose (human-first) to structured formats (LLM-first).
- To establish three core principles: Readability, Reasoning, and Reusability.

# WHY: Architectural Problem to Solve
- Prose is a lossy and computationally inefficient format for LLMs.
- Traditional prompt engineering is an ephemeral art, not a robust engineering discipline.
- AI systems based on prose are unpredictable, hard to debug, and not scalable in terms of reliability.

# HOW: The Three Operating Principles
| Principle | Problem Solved | Concrete Solution (Framework/Concept) |
| :--- | :--- | :--- |
| **1. Readability** | Ambiguity and inefficiency of prose | **LLM-First Documentation** (Headers, anchors, tables) |
| **2. Reasoning** | Superficial and incomplete LLM output | **2WHAV Framework** (Cognitive guide for introspection) |
| **3. Reusability** | Disposable, unmaintainable prompts | **"Tool as Prompt" Paradigm** (Prompts as modular assets) |

# ANTI-PATTERNS: Approaches to Avoid
- **Alchemical Prompting:** Searching for "magic words" instead of a robust structure.
- **Prose as the Source of Truth:** Using narrative documents as the basis for critical RAG systems.
- **Treating Prompts as Non-Code:** Handling prompts as ephemeral artifacts without versioning, testing, or reuse.

# VERIFY: Success Criteria for Adoption
- AI systems become more predictable and deterministic.
- Debugging time decreases because LLM behavior is guided by an explicit structure.
- Prompts become reusable and composable assets within the team.
- The developer's role evolves from "prompt whisperer" to **"knowledge architect"**.
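To close the loop on Reusability: once a manifesto or skill lives in a Markdown file, it can be tested like any other asset. Below is a minimal sketch of such a check; the file path and the exact section list (mirroring the headers above) are assumptions, not a prescribed standard.

```python
# A minimal sketch of treating a prompt as a tested asset: a check that a
# skill file declares the expected LLM-First sections before it is shipped.

import re
from pathlib import Path

REQUIRED_SECTIONS = ["WHAT", "WHY", "HOW", "ANTI-PATTERNS", "VERIFY"]

def missing_sections(text: str) -> list[str]:
    """Return the required headers that the skill file does not declare."""
    found = set(re.findall(r"^#\s*([A-Z-]+)", text, flags=re.MULTILINE))
    return [s for s in REQUIRED_SECTIONS if s not in found]

def test_manifesto_skill():
    # Hypothetical path to the skill file under test.
    text = Path("skills/llm_first_manifesto.md").read_text(encoding="utf-8")
    assert missing_sections(text) == [], "skill file is structurally incomplete"
```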

Useful Links

🤖 tap-robot-planner-sample

📄 Tool as Prompt - The Paradigm

📚 LLM-First Documentation Framework

🛠️ 2WHAV - Prompt Engineering
