Leo Pessoa
From Pydantic Model to AI Agent in 10 Lines of Python

You've been doing this for a while now:

import json

from openai import OpenAI  # or whichever provider SDK you're on

client = OpenAI()  # `fields` and `text` come from your app

response = client.chat.completions.create(
    model="gemini-2.0-flash",
    messages=[{"role": "user", "content": f"Extract the following fields from this text: {fields}. Text: {text}"}],
)
data = json.loads(response.choices[0].message.content)  # fingers crossed it's valid JSON
proposal = Proposal(**data)

Parse the response. Hope the JSON is valid. Add a retry. Add a fallback. Add validation. Repeat for every model in your app.
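In practice, that "retry, fallback, validate" loop ends up hand-written for every model. A minimal sketch of what it usually looks like — `call_llm` is a stand-in stub here, not a real provider call:

```python
import json


def call_llm(prompt: str) -> str:
    """Stand-in for a real provider call; returns a canned response here."""
    return '{"client": "Tesla", "budget": 45000.0}'


def extract(prompt: str, retries: int = 3) -> dict:
    """The retry/parse/validate loop you end up hand-writing per model."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # hope the next attempt produces valid JSON
        # manual validation, repeated for every field of every model
        if isinstance(data.get("budget"), (int, float)):
            return data
    raise ValueError("LLM never returned a valid payload")


print(extract("Extract client and budget from: ..."))
```

Multiply this by every model in your app and it's a lot of plumbing that has nothing to do with your domain.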

There's a better way.


Meet exomodel

exomodel is an open-source Python framework that turns your Pydantic models into autonomous agents. Instead of writing prompts that produce objects, you define the object — and it fills itself.

The paradigm shift:

| Old way | exomodel way |
| --- | --- |
| Write prompt → parse response → validate | Define schema → call `.create()` |
| Manual JSON extraction | Native Pydantic validation |
| One prompt per model | Provider-agnostic, reusable |
| Fragile string parsing | Structured output, always |

Let's build something.


Prerequisites

  • Python 3.9+
  • An API key from any supported provider (Google, Anthropic, OpenAI, Cohere)

Install

pip install "exomodel[google]"
# or: exomodel[anthropic] | exomodel[openai] | exomodel[cohere] | exomodel[all]

Create a .env file:

MY_LLM_MODEL=google:gemini-2.0-flash
GOOGLE_API_KEY=your-key-here

The 10 lines

from exomodel import ExoModel

class Proposal(ExoModel):
    client: str = ""
    project_title: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0
    summary: str = ""

p = Proposal.create("Draft a proposal for Tesla — AI dashboard integration, 6 weeks, $45,000 budget")

print(p.to_ui(format="markdown"))

That's it. Run it.

## Proposal

**Client:** Tesla
**Project Title:** AI Dashboard Integration
**Budget:** 45000.0
**Timeline (weeks):** 6
**Summary:** A 6-week engagement to design and integrate an AI-powered...

exomodel sent your natural language input to the LLM, mapped the response to your schema, validated it with Pydantic, and returned a typed Python object. No prompt engineering. No JSON parsing.


Add business rules with RAG

What if your proposals need to follow company rules — minimum budget, forbidden industries, mandatory margins?

Create a proposal_rules.md file:

# Proposal Rules

- Minimum project budget is $10,000.
- Every proposal must include a 10% safety margin in pricing.
- We do not work with companies in the tobacco industry.

Now attach it to your model:

class Proposal(ExoModel):
    client: str = ""
    project_title: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0
    summary: str = ""

    @classmethod
    def get_rag_sources(cls):
        return ["proposal_rules.md"]

The model now has context. You can validate against your own rules:

p = Proposal.create("Draft a 5k proposal for Philip Morris")

print(p.run_analysis())
# → This proposal violates company policy: budget below $10,000 minimum
#   and client operates in the tobacco industry.

The LLM grounded its reasoning in your document, not its training data.
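A minimal picture of what "grounding" means here — this illustrates the general retrieve-then-prompt pattern, not exomodel's internals, and uses naive keyword overlap where a real implementation would use embeddings:

```python
def retrieve(rules_text: str, query: str, top_k: int = 2) -> list[str]:
    """Naive retrieval: one chunk per rule line, scored by word overlap with the query."""
    chunks = [line.strip("- ").strip()
              for line in rules_text.splitlines()
              if line.strip().startswith("-")]
    query_words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(query_words & set(c.lower().split())))
    return scored[:top_k]


rules = """# Proposal Rules
- Minimum project budget is $10,000.
- Every proposal must include a 10% safety margin in pricing.
- We do not work with companies in the tobacco industry.
"""

# the winning chunks get prepended to the prompt as context
context = retrieve(rules, "Draft a 5k proposal for Philip Morris in the tobacco industry")
prompt = "Company rules:\n" + "\n".join(context) + "\n\nRequest: draft the proposal."
print(context)
```

The LLM then answers against those retrieved rules instead of whatever its training data says about proposals.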


Update fields with natural language

Already created a proposal but the client changed the scope?

p.update_object("Increase the budget by 20% and extend the timeline to 8 weeks")

print(p.budget)          # 54000.0
print(p.timeline_weeks)  # 8

Or update a single field:

p.update_field("summary", "Make it more formal and concise")

Bulk creation with ExoModelList

Need to generate multiple structured objects at once?

from exomodel import ExoModel, ExoModelList

class LineItem(ExoModel):
    name: str = ""
    quantity: int = 0
    unit_price: float = 0.0

class Invoice(ExoModelList[LineItem]):
    pass

invoice = Invoice()
invoice.create_list("10 MacBook Pros at 2499, 5 Dell monitors at 599, 3 mechanical keyboards at 189")

print(invoice.to_csv())
name,quantity,unit_price
MacBook Pro,10,2499.0
Dell Monitor,5,599.0
Mechanical Keyboard,3,189.0

How it works under the hood

When you call .create(), exomodel:

  1. Introspects your Pydantic schema (field names, types, defaults)
  2. If get_rag_sources() is defined, chunks and indexes those documents into an in-memory vector store
  3. Builds a structured prompt with your schema and any RAG context
  4. Sends it to your configured LLM provider
  5. Validates the response against your Pydantic model
  6. Returns a typed instance — with usage tracking built in

Provider calls are routed through LangChain, so switching providers is a one-line .env change.
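Step 1 can be pictured like this — a sketch using a plain dataclass, standing in for the equivalent introspection exomodel performs on your Pydantic schema:

```python
from dataclasses import dataclass, fields


@dataclass
class Proposal:
    client: str = ""
    budget: float = 0.0
    timeline_weeks: int = 0


def schema_prompt(cls) -> str:
    """Turn field names, types, and defaults into instructions for the LLM."""
    lines = [f"- {f.name} ({f.type.__name__}, default={f.default!r})" for f in fields(cls)]
    return "Return a JSON object with exactly these fields:\n" + "\n".join(lines)


print(schema_prompt(Proposal))
```

Because the prompt is generated from the schema, adding a field to your model automatically updates what the LLM is asked to produce — there is no separate prompt to keep in sync.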


Expose methods as agent tools

Need the LLM to call methods on your object, not just fill fields? Use @llm_function:

from exomodel import ExoModel, llm_function

class Proposal(ExoModel):
    client: str = ""
    budget: float = 0.0
    discount: float = 0.0

    @llm_function
    def apply_discount(self, percentage: float):
        """Apply a percentage discount to the budget."""
        self.discount = percentage
        self.budget = self.budget * (1 - percentage / 100)

p = Proposal.create("Draft a 50k proposal for Tesla")
p.master_prompt("Apply a 15% discount for a long-term partnership")

print(p.budget)    # 42500.0
print(p.discount)  # 15.0

master_prompt lets the LLM autonomously decide which tool to call — no routing logic needed.
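The mechanism behind this is ordinary tool calling: collect the decorated methods, hand their signatures and docstrings to the LLM, and invoke whichever one it picks. A stdlib sketch of that registry-and-dispatch pattern (an illustration, not exomodel's actual code):

```python
import inspect


def llm_function(fn):
    """Mark a method as callable by the LLM."""
    fn._llm_tool = True
    return fn


class Proposal:
    def __init__(self, budget: float):
        self.budget = budget
        self.discount = 0.0

    @llm_function
    def apply_discount(self, percentage: float):
        """Apply a percentage discount to the budget."""
        self.discount = percentage
        self.budget *= 1 - percentage / 100

    def tools(self) -> dict:
        """Registry the LLM sees: tool name -> (signature, docstring)."""
        return {
            name: (str(inspect.signature(m)), m.__doc__)
            for name, m in inspect.getmembers(self, inspect.ismethod)
            if getattr(m.__func__, "_llm_tool", False)
        }

    def dispatch(self, name: str, **kwargs):
        """What happens after the LLM picks a tool and its arguments."""
        return getattr(self, name)(**kwargs)


p = Proposal(budget=50_000)
p.dispatch("apply_discount", percentage=15.0)  # the LLM would choose this call
print(p.budget, p.discount)                    # 42500.0 15.0
```

Only methods carrying the marker show up in the registry, so the LLM can never call arbitrary attributes on your object.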


Token usage

print(p.get_usage())
# {'prompt_tokens': 312, 'completion_tokens': 87, 'total_tokens': 399}

What's next

If this saved you from writing another prompt parser, give the repo a star — it helps more developers find it.


Have a use case you'd like to see covered? Drop it in the comments.
