You've seen how agents can search the web, but what if you need your AI to interact with your specific company database, calculate a proprietary risk score, or even control a smart lightbulb in your house?
For that, you need Custom Tools. Today, we'll see how a simple Python decorator can bridge the gap between your local code and an LLM.
🔨 The Magic of the @tool Decorator
The easiest way to create a tool in 2026 is using the @tool decorator. When you wrap a function with this, LangChain automatically analyzes your code to tell the AI:
- What the tool is called.
- What it does (based on your docstring).
- What arguments it needs (based on your type hints).
What are Decorators in Python?
In Python, decorators are a powerful design pattern that lets you modify or enhance the behavior of a function, method, or class without permanently changing its source code.
Think of a decorator as a "wrapper" that can execute code before and after the original function runs. The following code will help you understand:
```python
def my_decorator(func):
    def wrapper():
        print("Before the function.")
        func()
        print("After the function.")
    return wrapper

@my_decorator
def say_hello():
    print("Hello!")

say_hello()
```
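Under the hood, the `@` syntax is just shorthand for reassigning the function name by hand. A minimal sketch of that equivalence (repeating the definitions so it stands alone):

```python
def my_decorator(func):
    def wrapper():
        print("Before the function.")
        func()
        print("After the function.")
    return wrapper

def say_hello():
    print("Hello!")

# The @my_decorator line above is exactly equivalent to this reassignment:
say_hello = my_decorator(say_hello)

say_hello()
# Before the function.
# Hello!
# After the function.
```

This is why LangChain's `@tool` can hand you back a different object than the function you wrote: the decorator replaces the name with a wrapped version.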
🛠️ Step-by-Step: Creating a "Secret Multiplier" Tool
Let's build a tool that performs a calculation the AI couldn't possibly know, using a "secret constant" from our local environment.
```python
from langchain_core.tools import tool

@tool
def calculate_secret_score(base_value: int) -> str:
    """Calculates a business score by multiplying input by a secret company constant."""
    secret_constant = 42  # This could come from a database or API
    result = base_value * secret_constant
    return f"The secret business score is {result}."

# Let's see what the AI sees!
print(calculate_secret_score.name)
print(calculate_secret_score.description)
print(calculate_secret_score.args)
```
🧠 Why Docstrings and Type Hints Matter
In regular coding, docstrings are for other humans. In LangChain, docstrings are for the AI.
If your docstring is vague (e.g., "Does math"), the AI won't know when to use the tool. If it's specific (e.g., "Use this tool when the user asks for a 'business score' or 'proprietary calculation'"), your agent becomes incredibly reliable.
🏗️ Plugging It into the Agent
Once your tool is defined, you just add it to your tool list, exactly like we did yesterday.
```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# 1. Define your custom skills
tools = [calculate_secret_score]

# 2. Set up the brain
model = ChatOpenAI(model="gpt-4o")

# 3. Create the agent
agent = create_react_agent(model, tools)

# 4. Test it!
response = agent.invoke({"messages": [("human", "What is the secret score for a value of 10?")]})
print(response["messages"][-1].content)
```
🎯 Day 8 Summary
Today, you moved from "User" to "Creator." You learned:
- The `@tool` decorator: converting functions to AI skills.
- Prompt engineering via docstrings: writing descriptions so the LLM knows when to call your tool.
- Integration: adding your custom logic into a ReAct agent loop.
Your Homework: Write a custom tool called `get_user_status` that takes a username and returns a fake status (like "Active" or "Away"). Try to get your agent to tell you the status of "Alex."
See you tomorrow! 👋