Francesco Marconi

Tool as Prompt: From LLM-First Docs to Teaching LLMs Domain Knowledge On The Fly

The integration of Large Language Models (LLMs) into software systems presents a fundamental challenge: reconciling their non-deterministic nature with the engineering requirement for predictable and consistent outputs.
The "Tool as Prompt" paradigm addresses this challenge by treating structured documents not as mere text to be interpreted, but as specialized knowledge modules that an LLM can load and execute.

Putting the Paradigm to the Test

Before diving into the theory, let's see this approach in action with a practical experiment using a real-world tool.
Open your preferred LLM (ChatGPT, Claude, Gemini, etc.) and follow these two steps:

1. Load the Tool:
Give the LLM this instruction:

```
Load and analyze the tool specifications from the following URL: https://github.com/fra00/2WHAV
```

Note: If your LLM cannot access external links or encounters an error, don't worry. Open the URL, copy the entire content of the README.md file, and paste it directly into the chat.

2. Execute the Task:
After the LLM confirms it has loaded the content, give it this command (both steps are also sketched in code just after these instructions):

```
Now, apply the framework you have just learned to this task: "Create an email validator in TypeScript."
```
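
For readers who prefer to script the experiment, here is a minimal TypeScript sketch of both steps. Everything in it is an assumption for illustration: the raw-file URL guesses the repository's default branch, and `chat` is a hypothetical stand-in for whichever LLM client you actually use.

```typescript
// Minimal sketch of the two-step experiment; assumptions are marked inline.
type Chat = (system: string, user: string) => Promise<string>;

// Assumption: the README is reachable on the repository's "main" branch.
const TOOL_URL =
  "https://raw.githubusercontent.com/fra00/2WHAV/main/README.md";

// Step 1: load the tool document (the LLM-First README).
async function loadTool(): Promise<string> {
  const res = await fetch(TOOL_URL);
  if (!res.ok) throw new Error(`Failed to fetch tool: ${res.status}`);
  return res.text();
}

// Step 2: ask the model to apply the freshly loaded framework to a task.
async function applyFramework(chat: Chat, task: string): Promise<string> {
  const toolDoc = await loadTool(); // the tool document becomes the context
  return chat(
    toolDoc,
    `Now, apply the framework you have just learned to this task: "${task}"`
  );
}
```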

Analysis of Expected Results

What should happen?
The LLM will not immediately write the TypeScript code. Instead, it will execute the engineering process it has just "learned" from the README.md file. It will produce a far more robust and detailed intermediate prompt, following the structure of the 2WHAV framework. You should see an output similar to this:

```markdown
## WHAT: Objective
- **Task:** Create an asynchronous TypeScript function `validateEmail(email)`.
- **Output:** Must return a `Promise<{isValid: boolean}>`.
---
## HOW: API Contract and Rules
- **Available API:** `api.externalCheck(email)` which returns a `Promise<boolean>`.
- **Rules:** The function must handle API call exceptions.
---
## VERIFY: Validation Checklist
- [ ] Does the function have the correct signature?
- [ ] Is only the API defined in the contract being used?
- [ ] Is error handling implemented as required?
```

You've just forced the LLM to think like an engineer: it produced the complete specification before writing a single line of code. For the final step, copy this new, detailed prompt and feed it back to the LLM, or chain the two calls programmatically as sketched below. The code you get will be significantly more robust and better aligned with the requirements than code generated from your initial one-line request. That is the entire "Tool as Prompt" paradigm in action.
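
Continuing the sketch above, the feed-back step is simply a second call in which the generated specification becomes the new prompt. As before, `chat` is a hypothetical stand-in for your LLM client, and the system message in the second call is an illustrative choice, not part of the framework.

```typescript
// Sketch of the full chain: tool document -> specification -> code.
type Chat = (system: string, user: string) => Promise<string>;

async function toolAsPrompt(
  chat: Chat,
  toolDoc: string,
  task: string
): Promise<string> {
  // First call: the LLM, specialized by the tool document, writes the spec.
  const spec = await chat(
    toolDoc,
    `Now, apply the framework you have just learned to this task: "${task}"`
  );

  // Second call: the spec itself becomes the prompt for code generation.
  return chat("You are a careful TypeScript engineer.", spec);
}
```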

Fundamental Principle: From Structured Documentation to Execution

Now that you've seen what it does, let's analyze how it works. To understand "Tool as Prompt," we must distinguish between two concepts:

  1. LLM-First Documentation: This is a writing methodology that structures information into a formal schema (using tables, hierarchies, explicit rules). Its purpose is to mitigate (not eliminate) ambiguity and make knowledge easily parsable.
  2. Tool as Prompt: This is the operational paradigm that puts an LLM-First document to work, treating it as a temporary extension of the LLM's "working memory": a module that teaches the model a process it would not otherwise know.

In short: LLM-First is how you write; Tool as Prompt is how you use what you've written.

Conceptual Operating Model: Load / Compile / Execute

The "Tool as Prompt" paradigm can be described through an operating model that highlights this "temporary learning" process.

  1. LOAD (Knowledge Loading): The LLM parses the LLM-First document. This is not a simple reading, but a process of loading it into a virtual working memory.
  2. COMPILE (Model Specialization): The LLM's general-purpose model temporarily "specializes" based on the loaded knowledge, building an execution plan.
  3. EXECUTE (Expert Execution): The LLM, now operating as an "expert" on the newly learned process, applies this knowledge to perform the task.
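
As a rough illustration, the three phases can be mapped onto code, again assuming the same hypothetical `chat` client as in the earlier sketches. COMPILE is modeled here as a closure that pins the loaded document as the system context: a deliberate simplification of whatever specialization actually happens inside the model.

```typescript
// LOAD / COMPILE / EXECUTE, modeled with a hypothetical LLM client.
type Chat = (system: string, user: string) => Promise<string>;
type Expert = (task: string) => Promise<string>;

// LOAD: pull the LLM-First document into "working memory" (here, a string).
async function load(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`LOAD failed: ${res.status}`);
  return res.text();
}

// COMPILE: specialize the general-purpose model on the loaded knowledge,
// modeled as a closure that fixes the document as the system context.
function compile(chat: Chat, toolDoc: string): Expert {
  return (task) => chat(toolDoc, task);
}

// EXECUTE: the now-specialized "expert" applies the process to the task.
async function execute(chat: Chat, url: string, task: string): Promise<string> {
  const expert = compile(chat, await load(url));
  return expert(task);
}
```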

Validation Through Case Studies

The principle just demonstrated can be applied to much more complex systems.

  • Case Study 1: The Principle of Structuring (LLM-First Documentation). This repository illustrates the LLM-First writing principles, which are the prerequisite for creating an effective "Tool as Prompt." It demonstrates how structuring the text makes it "loadable."
  • Case Study 2: The Execution of Specialized Knowledge (2WHAV Framework). The framework you used in the initial experiment is a full-fledged application of the paradigm. Its README.md is a tool that an LLM can load to manage the creation of complex prompts in an engineered way.

Conclusion and Engineering Implications

The "Tool as Prompt" paradigm allows us to overcome the limitations of the LLM's pre-trained knowledge by providing it with the necessary domain expertise "on the fly" to perform specific tasks reliably.

This transforms documentation from a passive artifact for humans into a dynamic knowledge module for machines, paving the way for more robust and intelligent automation workflows.
