
Using LLMs in 3 lines of Python

When working with LLMs, the first thing people generally install is the openai or anthropic package, or, if you’re a little more adventurous with your LLM choice, litellm or ollama. The issue is that all of these require a fair bit of code just to get started. For example, assuming you have an API key in your environment like I do, you’ll need at least the following code to make an LLM call with OpenAI (also assuming you’re using the older Chat Completions endpoint).

import os
from openai import OpenAI

# retrieve API key from environment
api_key = os.getenv("OPENAI_API_KEY")

# initialize client
client = OpenAI(api_key=api_key)

# send a chat request
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say something concise."}
    ]
)

# print assistant's answer
print(response.choices[0].message.content.strip())

And if you want to wrap your API call with a function so you can call it repeatedly, that’s even more lines!

import os
from openai import OpenAI

def chat_with_openai(prompt: str) -> str:
    api_key = os.getenv("OPENAI_API_KEY")

    client = OpenAI(api_key=api_key)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt}
        ]
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(chat_with_openai("Say something concise."))

And that is simply unacceptable!

Do you really care?

No, I’m being facetious. For most LLM projects, consistency of output trumps everything else. Sometimes, though, it’s nice to have a super simple way to add LLMs to my one-off Python scripts and tools without all the boilerplate.

Magentic

Magentic is a Python package that lets you create functions that call LLMs in 3 lines of code. No, really! Here’s an example ripped straight from their docs.

from magentic import prompt

@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

Thanks to some decorator dark magic that I don’t feel like learning about (in short, @prompt builds the LLM call from the template string and the function’s signature, then parses the response into the return type), this is a completely valid Python function that’s callable anywhere in the script, assuming you have an OpenAI API key in your environment variables.

print(dudeify("Hello, how are you?"))
# "Hey, dude! What's up? How's it going, my man?"
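The only setup required is that environment variable (the key below is a placeholder):

export OPENAI_API_KEY="sk-your-key-here"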

A Note On Package Management

I’m going to be using the PEP 723 inline script metadata standard at the top of all my scripts for the rest of this post. This allows you to use uv, the best package manager for Python, to run a script without having to make a virtual environment, then install packages, then run the script; uv automates all three of those tasks into a single command.

Here’s the above script with the added metadata and some slight modifications. This assumes you have uv installed and the OPENAI_API_KEY env var set.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic"
# ]
# ///

import fire
from magentic import prompt

@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

if __name__=="__main__":
    fire.Fire(dudeify)

This script can now be downloaded and run like an executable. I’ve uploaded it to a gist for easy download.

wget -O dudeify https://gist.githubusercontent.com/chand1012/218372f3e1101dfa7f915dc35c0e66d8/raw/363f720d21fa8ebe2e6a484f6b389496c3452064/dudeify.py
chmod +x dudeify
./dudeify "Hello how are you"
# Installed 23 packages in 45ms
# Yo dude, how's it hangin'?

The first time you run the script, uv builds a cached virtual environment that gets reused on subsequent runs! For more information on how this works, you can check out the uv docs, and the blog post that inspired my constant use of this feature.
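If you’d rather not rely on the shebang, you can also invoke uv directly; it reads the same inline metadata and reuses the cached environment:

uv run dudeify.py "Hello how are you"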

Structured Outputs

If you want structured outputs, for example for an API response, or just to make the data easier to parse and use in your scripts, you can have the function return a Pydantic model.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic",
#     "pydantic",
# ]
# ///

from fire import Fire
from magentic import prompt
from pydantic import BaseModel

class Animal(BaseModel):
    species: str
    legs: int
    latin_species: str
    predators: list[str]
    prey: list[str]

@prompt("Give me information on the animal {animal_name}.")
def animal_info(animal_name: str) -> Animal: ...

if __name__=="__main__":
    Fire(animal_info)

Here’s an example of that method being run.

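Calling it from Python and printing the result gives something like the following (the values here are illustrative, not captured from a real run):

print(animal_info("red fox"))
# species='Red fox' legs=4 latin_species='Vulpes vulpes' predators=['golden eagles', 'coyotes'] prey=['rodents', 'rabbits']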

Prompting and Function Calls

There are two ways you can prompt the LLM with Magentic. The first is the @prompt decorator, as I’ve been using, which is the simplest and fastest way to create LLM methods. There’s also @chatprompt, which allows you to pass a list of chat messages to the LLM. This is especially useful for few-shot prompting, where you give the LLM some examples of the output you want. After all, LLMs are just fancy pattern matching black boxes.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic",
#     "pydantic",
# ]
# ///
from fire import Fire
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from pydantic import BaseModel

# this is a modified version of magentic's example chatprompt code
# https://magentic.dev/#chatprompt
class Quote(BaseModel):
    quote: str
    character: str

@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote: ...

if __name__=="__main__":
    Fire(get_movie_quote)

You can also pass functions to the LLM and have it return a Python callable that you can invoke later, as shown below. Related to this is the decorator @prompt_chain, which allows the LLM to call a function and use the returned results to generate its response.
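For the first pattern, you annotate the return type as FunctionCall: the LLM picks one of the functions you provide and fills in its arguments, and you decide when to actually invoke it. Here’s a sketch lightly adapted from Magentic’s docs:

from magentic import prompt, FunctionCall

def activate_oven(temperature: int, mode: str) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"

@prompt(
    "Prepare the oven so I can make {food}",
    functions=[activate_oven],
)
def configure_oven(food: str) -> FunctionCall[str]: ...

call = configure_oven("cookies!")
# call is a FunctionCall built by the LLM, e.g. FunctionCall(activate_oven, temperature=350, mode='bake')
print(call())  # actually runs activate_oven with the LLM's chosen arguments

And here’s @prompt_chain in action, giving the LLM a web search tool it can call before answering: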

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic",
#     "duckduckgo_search",
# ]
# ///
from fire import Fire
from magentic import prompt_chain
from duckduckgo_search import DDGS

def web_search(query: str) -> list[dict]:
    """Searches the web for a given query"""
    with DDGS() as ddgs:
        results = ddgs.text(query, max_results=5)
        print(results)
        return results

@prompt_chain(
    "You are a helpful assistant that can search the web for information. Use your tools to answer the user's question: {query}",
    functions=[web_search],
)
def search(query: str) -> str: ...

if __name__ == "__main__":
    Fire(search)


Using Other LLMs

If you’re a data-conscious person, or just want to keep your options open, Magentic can be configured to work with nearly any other LLM, as long as it’s supported by LiteLLM or offers an OpenAI-compatible API. Here’s an example of a script that runs entirely locally using Ollama and Google’s Gemma 3.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic"
# ]
# ///

import fire
from magentic import prompt, OpenaiChatModel

model = OpenaiChatModel("gemma3:27b-it-qat", base_url="http://localhost:11434/v1/")

@prompt('Add more "dude"ness to: {phrase}', model=model)
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

if __name__=="__main__":
    fire.Fire(dudeify)

If your chosen LLM is one of the many supported by LiteLLM, you can use Magentic’s litellm extra.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic[litellm]"
# ]
# ///

import fire
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

# this specific example requires GEMINI_API_KEY env var to be set
model = LitellmChatModel("gemini/gemini-2.0-flash")

@prompt('Add more "dude"ness to: {phrase}', model=model)
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

if __name__=="__main__":
    fire.Fire(dudeify)

You can use the LiteLLM route for Anthropic’s Claude series of models, or you can use Magentic’s official Anthropic extension.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic[anthropic]"
# ]
# ///

import fire
from magentic import prompt
from magentic.chat_model.anthropic_chat_model import AnthropicChatModel

# this specific example requires the ANTHROPIC_API_KEY env var to be set
model = AnthropicChatModel("claude-sonnet-4-0")

@prompt('Add more "dude"ness to: {phrase}', model=model)
def dudeify(phrase: str) -> str: ...  # No function body as this is never executed

if __name__=="__main__":
    fire.Fire(dudeify)

No LLM left behind!

Advanced Usage

Need an async function? Just define it with async def instead of def!

import asyncio

from magentic import prompt

@prompt("Tell me more about {topic}")
async def tell_me_more_about(topic: str) -> str: ...

# async functions must be awaited, e.g. with asyncio.run
print(asyncio.run(tell_me_more_about("quantum computing")))

You can use Python’s AsyncIterable to make multiple simultaneous calls to the LLM.

import asyncio
from typing import AsyncIterable

from magentic import prompt

@prompt("List ten presidents of the United States")
async def iter_presidents() -> AsyncIterable[str]: ...

@prompt("Tell me more about {topic}")
async def tell_me_more_about(topic: str) -> str: ...

async def main():
    tasks = []
    async for president in await iter_presidents():
        # Use asyncio.create_task to schedule the coroutine for execution before awaiting it
        # This way descriptions will start being generated while the list of presidents is still being generated
        tasks.append(asyncio.create_task(tell_me_more_about(president)))
    descriptions = await asyncio.gather(*tasks)
    print(descriptions)

asyncio.run(main())

Need to stream the response back to the user? Use Magentic’s StreamedStr to loop through the response chunks.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic"
# ]
# ///

import fire
from magentic import prompt, StreamedStr

@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr: ...

def describe(country: str):
    for chunk in describe_country(country):
        print(chunk, end="")
    print()

if __name__=="__main__":
    fire.Fire(describe)

This also works for streaming multiple objects: simply wrap your return type annotation in Iterable.

#!/usr/bin/env -S uv run --script
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "fire",
#     "magentic",
#     "pydantic",
# ]
# ///
from collections.abc import Iterable

from fire import Fire
from magentic import prompt
from pydantic import BaseModel

class Animal(BaseModel):
    species: str
    legs: int
    latin_species: str
    predators: list[str]
    prey: list[str]

@prompt("Give me information on the animals in the family {family}.")
def animal_family_info(family: str) -> Iterable[Animal]: ...

def info(family: str):
    for animal in animal_family_info(family):
        print(animal)

if __name__=="__main__":
    Fire(info)

Conclusion

Working with LLMs is now easier than ever, and Magentic makes it even simpler than the standard methods to quickly add LLMs to any Python script, regardless of scale or complexity. Using it in tandem with uv and the new inline script metadata lets you build command line tools that use AI quickly and effectively. I won’t reach for Magentic on every project that needs an LLM, but I’ll definitely use it all the time in my small one-offs and utilities.
