Andrew Kew

Your text file is the prompt now: LLM's shebang trick

Simon Willison's llm CLI tool already does a lot — run prompts, manage models, call tools, store logs in SQLite. But a Hacker News comment last week sent him down a rabbit hole, and the result is one of those tricks that makes you stop and stare.

You can now put llm in a shebang line and make a plain text file directly executable.

"But seriously, you can put a shebang on an english text file now (if you're sufficiently brave) [...]"
Kim_Bruning, Hacker News

What this actually looks like

The simplest form uses llm's fragments mechanism (-f), which reads the file's contents as a prompt:

#!/usr/bin/env -S llm -f
Generate an SVG of a pelican riding a bicycle

Save it, chmod +x, run it. That's it. The file IS the prompt.
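Spelled out, the whole lifecycle is three steps. A minimal sketch (pelican.txt is a hypothetical filename; actually running the file requires llm installed and a default model configured):

```shell
# Create the prompt file (hypothetical name: pelican.txt)
cat > pelican.txt <<'EOF'
#!/usr/bin/env -S llm -f
Generate an SVG of a pelican riding a bicycle
EOF

# Mark it executable
chmod +x pelican.txt

# Run it -- needs `llm` and a configured model, so commented out here:
# ./pelican.txt > pelican.svg
```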

The -S flag on env is the key detail — without it, env treats llm -f as a single command name and errors out. With -S, it splits the arguments correctly before handing off to llm.
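You can watch -S do its splitting with env alone, substituting echo for llm (this assumes GNU or BSD env, both of which support -S):

```shell
# With -S, env splits "echo hello" into words, then execs echo
/usr/bin/env -S echo hello
# prints: hello

# Without -S, env looks for a single program literally named "echo hello"
# and fails -- which is exactly what happens to a shebang without -S
/usr/bin/env "echo hello" 2>/dev/null || echo "error: no such command"
```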

Want to suppress model commentary and return only the code block? Add -x:

#!/usr/bin/env -S llm -x -f
Generate an SVG of a pelican riding a bicycle

Adding tools

Scripts get a lot more interesting when the model can take actions. LLM ships with default tools you can opt into via -T:

#!/usr/bin/env -S llm -T llm_time -f
Write a haiku that mentions the exact current time

Run it and the model calls llm_time, grabs the current time, and weaves it into the haiku. No scaffolding code. No wrapper script.

YAML templates with embedded Python

The most powerful form uses LLM's template format. End the shebang with -t and the file body becomes a YAML template — including inline Python function definitions that become callable tools:

#!/usr/bin/env -S llm -t
model: gpt-5.4-mini
system: |
  Use tools to run calculations
functions: |
  def add(a: int, b: int) -> int:
      return a + b
  def multiply(a: int, b: int) -> int:
      return a * b

Run it with --td (tools debug) to see the model's reasoning:

./calc.sh 'what is 2344 * 5252 + 134' --td
Tool call: multiply({'a': 2344, 'b': 5252})
  12310688

Tool call: add({'a': 12310688, 'b': 134})
  12310822

2344 × 5252 + 134 = **12,310,822**

Willison also published a more complex example: a shebang script that defines a search_blog() tool using httpx and a Datasette SQL API, allowing the model to answer questions by querying his actual blog content — all in a single self-contained file.
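The shape of that script looks roughly like this. This is a paraphrased sketch, not Willison's actual code: the endpoint URL, the SQL, and the function body are placeholders, and the real query is in his TIL:

```yaml
#!/usr/bin/env -S llm -t
system: |
  Answer questions about the blog by calling search_blog
functions: |
  import httpx

  def search_blog(q: str) -> str:
      """Search blog posts via a Datasette JSON API (placeholder URL and SQL)."""
      url = "https://example.com/blog.json"  # placeholder endpoint
      sql = "select title, body from posts where body like :q"  # placeholder SQL
      return httpx.get(url, params={"sql": sql, "q": f"%{q}%"}).text
```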

Why this matters

This pattern collapses the distance between "I have an idea for an LLM script" and "I have a running LLM script." No Python wrapper, no argparse boilerplate, no orchestration layer. The prompt is the program.

It's also a sign of how fast LLM's capability surface is growing. Fragments, tools, templates, and inline function definitions are all relatively recent additions — and they compose in ways that weren't obviously planned. Willison found this use case because someone mentioned it in passing on HN.

The flip side: there's no input sanitisation, no error handling, no retry logic. These are exploration scripts, not production primitives. But for automating your own workflows or prototyping agent behaviour cheaply, they're excellent.

What to do

  • Already using llm? Try the shebang pattern today — even the simple -f form is immediately useful for one-shot tasks you currently write wrapper scripts for.
  • Not using llm yet? pip install llm (or brew install llm). The full docs are at llm.datasette.io.
  • Want to go deeper? Willison's full TIL has the blog-search example with the complete Datasette SQL query — worth reading end-to-end.

Sources: Simon Willison's blog · Full TIL · LLM docs

✏️ Drafted with KewBot (AI), edited and approved by Drew.
