DSPy is a framework for programming — not prompting — language models. Instead of writing brittle prompts, you define modules that learn their own prompts through optimization.
## Why DSPy Replaces Prompt Engineering
Imagine a team with 50 hand-crafted prompts that break every time they switch models. DSPy avoids this by optimizing prompts automatically for whichever model you configure, so there is no manual prompt engineering to redo.
**Key Features:**
- Programmatic LM Calls — Define what you want, not how to prompt
- Automatic Optimization — Prompts are learned, not written
- Composable Modules — Chain, branch, and loop LM calls
- Evaluation-Driven — Optimize against your metrics
- Model-Agnostic — Works with any LLM provider
## Quick Start
```bash
pip install dspy
```
```python
import dspy

# Configure a default language model for all modules
lm = dspy.LM("openai/gpt-4")
dspy.configure(lm=lm)

# Define a simple QA module from a signature string
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What is the capital of France?")
print(result.answer)  # "Paris"
```
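The `"question -> answer"` string is DSPy's signature shorthand: comma-separated input field names on the left of the arrow, output field names on the right. A toy parser (not DSPy's actual implementation) just to illustrate how the notation reads:

```python
def parse_signature(sig):
    # Split the "inputs -> outputs" shorthand into two lists of field names.
    inputs, outputs = sig.split("->")
    return ([field.strip() for field in inputs.split(",")],
            [field.strip() for field in outputs.split(",")])

print(parse_signature("question -> answer"))  # (['question'], ['answer'])
```

Multi-field signatures like `"context, question -> answer"` work the same way, which is why the modules below can declare their inputs and outputs in one line.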
## Custom Module
```python
class Summarizer(dspy.Module):
    def __init__(self):
        super().__init__()
        self.summarize = dspy.ChainOfThought("document -> summary")
        self.extract_keywords = dspy.Predict("document -> keywords")

    def forward(self, document):
        # Run both sub-modules and combine their outputs into one prediction
        summary = self.summarize(document=document)
        keywords = self.extract_keywords(document=document)
        return dspy.Prediction(summary=summary.summary, keywords=keywords.keywords)
```
## Optimization
```python
from dspy.teleprompt import BootstrapFewShot

optimizer = BootstrapFewShot(metric=my_metric)
optimized_module = optimizer.compile(my_module, trainset=train_data)
```
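Here `my_metric` can be any callable that scores a prediction against a gold example; DSPy calls it as `metric(example, prediction, trace=None)` and expects a boolean or numeric score. A minimal sketch (the case-insensitive exact match on an `answer` field is an assumption about your data, not a DSPy requirement):

```python
def my_metric(example, prediction, trace=None):
    # Score a prediction against the gold example: here, a simple
    # case-insensitive exact match on a hypothetical `answer` field.
    return example.answer.strip().lower() == prediction.answer.strip().lower()
```

BootstrapFewShot keeps the demonstrations whose traced predictions pass the metric, so a stricter metric yields fewer but higher-quality few-shot examples.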
## Why Choose DSPy
- No prompt engineering — prompts are optimized automatically
- Composable — build complex pipelines from simple modules
- Model-portable — switch models without rewriting prompts
Check out the DSPy docs to get started.
Building AI systems? Check out my Apify actors or email spinov001@gmail.com for data extraction.