Zi Lao

ActionLib: How I Cut My Agent's Token Usage by 97%

Every AI agent framework — LangChain, AutoGen, CrewAI, you name it — has the same problem. They re-generate prompts for the same basic actions on every single call.

Agent: "I need to read this file"
Framework: *generates prompt explaining how to read a file* (200 tokens)
Agent: "execute read_file tool with path=/tmp/foo.txt"
Framework: *generates another prompt* (100 tokens)
Result: 300 tokens burned on something that should cost 0

This happens for read_file, write_file, run_cmd, http_get, git_status — every action, every call, over and over.

The Fix: ActionLib

I built ActionLib — a standard library for AI agents. The idea is simple:

Move deterministic actions out of the LLM's scope entirely.

Agent: "execute read_file with path=/tmp/foo.txt"
ActionLib: looks up actions["read_file"], runs it locally
ActionLib: returns result ← no LLM token burned

The agent only decides what to call and with what params. The execution is pure Python — local, deterministic, zero-token.
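Under the hood, this pattern is little more than a dictionary lookup. Here's a minimal sketch of the idea — illustrative, not ActionLib's actual internals:

```python
# Sketch of registry-based dispatch: actions live in a plain dict, and
# execute() is a lookup plus a local function call -- no LLM involved.
actions = {}

def register(name, fn, description=""):
    """Map a short action name to a plain Python callable."""
    actions[name] = {"fn": fn, "description": description}

def execute(name, params):
    """Dispatch an action request locally, at zero token cost."""
    if name not in actions:
        raise KeyError(f"unknown action: {name}")
    return actions[name]["fn"](**params)

def read_file(path):
    """Toy version of the read_file action."""
    with open(path) as f:
        return f.read()

register("read_file", read_file, "Read a file from disk")
```

The agent's output only needs to name an entry in the dict and supply its params; everything after that is ordinary Python.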

What It Looks Like

from actionlib import actions, execute, list_actions

# 19 built-in actions
execute("read_file", {"path": "/tmp/foo.txt"})
execute("run_cmd", {"cmd": "git status"})
execute("http_get", {"url": "https://api.example.com/data"})

# Register your own
from actionlib import register

def my_action(arg1, arg2):
    return arg1 + arg2

register("add", my_action, "Add two numbers")

Built-In Actions

IO: read_file, write_file, append_file, list_dir, file_exists, make_dir, remove_file

Shell: run_cmd, run_pipe

Net: http_get, http_post

Git: git_status, git_commit, git_branch, git_add

Text: grep, regex_replace, count_lines, word_count

Built to Grow

ActionLib isn't just a fixed set of actions — it's an extensible registry. Anyone can contribute new actions via PR. The action ecosystem grows organically, just like standard libraries for any programming language.

The pattern is dead simple:

from actionlib.actions import my_custom_module

# Done — your action is now available globally

No configuration, no registration magic, no framework lock-in. Drop in a module, it registers itself, your agent can use it immediately.

Token Comparison

Approach              Per-Action Token Cost
Without ActionLib     ~600 tokens (tool definitions + prompts)
With ActionLib        ~20 tokens (action name + params only)

97% token savings per action.
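The headline number checks out: going from ~600 to ~20 tokens is a ~96.7% reduction, which rounds to 97%.

```python
# Sanity-checking the savings claim: (600 - 20) / 600 ~= 0.967.
without_lib = 600  # tool definitions + generated prompts
with_lib = 20      # action name + params only
savings = (without_lib - with_lib) / without_lib
print(f"{savings:.1%}")  # prints 96.7%
```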

The Bigger Picture

This isn't just about saving tokens. It's about a different mental model for AI agents:

  • LLM for what needs judgment — decisions, reasoning, creative work
  • Scripts for what's deterministic — file I/O, shell commands, HTTP calls, git operations

The LLM becomes a thin orchestration layer. The heavy lifting stays local.
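To make that split concrete, here's a toy orchestration loop. The `decide()` stub stands in for a real LLM call, and the inline registry stands in for ActionLib — both are illustrative:

```python
import json

# Local, deterministic actions -- the "scripts" side of the split.
actions = {
    "word_count": lambda text: len(text.split()),
}

def decide(task):
    """Stand-in for the LLM: a real agent would call a model here and
    get back a small structured action request (~20 tokens)."""
    return json.dumps({"action": "word_count", "params": {"text": task}})

def run_step(task):
    request = json.loads(decide(task))  # the only LLM-shaped output
    fn = actions[request["action"]]     # local lookup, zero tokens
    return fn(**request["params"])      # local execution, zero tokens
```

The model's entire job is emitting the small JSON request; parsing, lookup, and execution never touch the LLM.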

Get Started

pip install actionlib

Everything you need for a working agent without burning tokens on the boring parts.

Repo: github.com/landzagent-netizen/actionlib
PyPI: pypi.org/project/actionlib
