Most AI demos stop at text. In production, your agent should do things—like starring a repo, creating issues, or labeling PRs—safely, with audit trails and least-privilege access.
This guide shows an end-to-end setup where a Gaia-hosted LLM proposes a tool call and ACI.dev executes it against GitHub on the user’s behalf.
What you’ll build
- An agent request like: “Star tobySolutions/stream2peer.”
- The model returns a structured tool call (function name + JSON args).
- Your server executes it via the ACI SDK using the user’s linked GitHub account.
- You get a clean, auditable result.
Architecture
At a high level: your prompt goes to the Gaia-hosted LLM, which proposes a structured tool call; your server validates the call and hands it to ACI.dev, which executes it against GitHub using the linked account’s credentials and returns the result.
Prerequisites
- Python 3.10+
- Gaia API key and domain (OpenAI-compatible)
- Gaia toolkit: https://www.npmjs.com/package/gaia-toolkit
- ACI.dev account with:
  - A project and default agent
  - GITHUB app configured
  - Linked GitHub account (via OAuth)
  - Agent allowed apps including GITHUB
  - Agent API key (for the ACI SDK)
Make sure to use the Gaia toolkit to start and run your Gaia nodes locally.
One-time ACI platform setup
- Create/enter project & agent → platform.aci.dev
- Configure GITHUB app → App Store → GITHUB → Configure App
- Link GitHub account → App Configurations → GITHUB → Add Account → Start OAuth2 Flow. Choose a **linked account owner id** (e.g., tobySolutions); you’ll use this in code.
- Allow agent to use GITHUB → Project Settings → Manage Project → Agent → Edit → ALLOWED APPS: GITHUB
- Copy the agent API key
Environment
Create `.env` in your project root:

```bash
# Gaia
GAIA_BASE_URL=https://YOUR-GAIA-DOMAIN/v1
GAIA_API_KEY=your_gaia_api_key

# ACI
ACI_API_KEY=your_aci_agent_api_key
ACI_OWNER_ID= # your linked account owner id chosen during OAuth link
```
Install dependencies

```bash
# pip
pip install --upgrade pip
pip install openai python-dotenv aci

# or with uv
uv pip install openai python-dotenv aci
```

Some environments publish the package as `aci` (not `aci-sdk`). Use Python 3.10+.
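With dependencies installed, an optional fail-fast check for the `.env` values from the previous section can save a confusing debugging session later. A minimal sketch (the file name and check are additions, not part of the original setup):

```python
# check_env.py — optional fail-fast check for the variables defined above.
import os

from dotenv import load_dotenv

load_dotenv()

REQUIRED = ("GAIA_BASE_URL", "GAIA_API_KEY", "ACI_API_KEY", "ACI_OWNER_ID")
missing = [name for name in REQUIRED if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing required env vars: {', '.join(missing)}")
print("All required env vars are set.")
```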
Minimal working example
Goal: Tell the model the GITHUB “star repo” function exists, let it propose a tool call, then execute it via ACI.
```python
# main.py
import os
import json

from dotenv import load_dotenv
from openai import OpenAI
from aci import ACI

load_dotenv()

# --- Clients ---
llm = OpenAI(
    base_url=os.getenv("GAIA_BASE_URL"),  # Gaia’s OpenAI-compatible endpoint
    api_key=os.getenv("GAIA_API_KEY"),
)
aci = ACI()  # uses ACI_API_KEY from .env
OWNER_ID = os.getenv("ACI_OWNER_ID")  # linked account owner id


def main() -> None:
    # 1) Discover the function schema from ACI
    github_star_repo_fn = aci.functions.get_definition("GITHUB__STAR_REPOSITORY")

    # 2) Ask the model to do the task, including the tool schema
    resp = llm.chat.completions.create(
        model="Llama-3-Groq-8B-Tool",  # pick a tool-capable model available on your Gaia node
        messages=[
            {"role": "system", "content": "You are a helpful assistant with access to tools."},
            {"role": "user", "content": "Star the tobySolutions/stream2peer GitHub repository for me."},
        ],
        tools=[github_star_repo_fn],
        tool_choice="required",  # force a demo tool call; drop in prod
    )

    msg = resp.choices[0].message
    tool_call = msg.tool_calls[0] if msg.tool_calls else None
    if not tool_call:
        print("Model replied with text only:", msg.content)
        return

    # 3) Execute the function via ACI (server-side, with your credentials)
    args = json.loads(tool_call.function.arguments or "{}")

    # Optional guardrail: only allow your org
    if args.get("owner") != "tobySolutions":
        raise ValueError("Blocked: only allowed to act for owner=tobySolutions")

    result = aci.handle_function_call(
        tool_call.function.name,  # "GITHUB__STAR_REPOSITORY"
        args,                     # {"owner": "...", "repo": "..."}
        linked_account_owner_id=OWNER_ID,
    )
    print("ACI result:", result)


if __name__ == "__main__":
    main()
```
Run it:

```bash
python main.py
```
Expected flow:
- The LLM returns a tool call like:
  `{"name": "GITHUB__STAR_REPOSITORY", "arguments": {"owner": "tobySolutions", "repo": "stream2peer"}}`
- Your server calls `aci.handle_function_call(...)` with your linked account owner id.
- GitHub stars the repository; you see a success payload.
Why this pattern works
- Discover: Fetch the exact function schema from ACI; pass it to the model.
- Constrain: The model can only call what you expose.
- Execute: Your backend holds credentials; the model never sees tokens.
- Audit: You can log every call and enforce org/repo allowlists.
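To make those guardrails concrete, here is a minimal sketch of an allowlist-plus-logging wrapper. The helper name and allowlist contents are illustrative; `handle_function_call` is the same ACI call used in main.py:

```python
import json
import logging

# Illustrative allowlists — expand these as you expose more functions.
ALLOWED_FUNCTIONS = {"GITHUB__STAR_REPOSITORY"}
ALLOWED_OWNERS = {"tobySolutions"}


def execute_tool_call(aci_client, tool_call, owner_id: str):
    """Validate a model-proposed tool call against allowlists, log it, then execute."""
    name = tool_call.function.name
    args = json.loads(tool_call.function.arguments or "{}")
    if name not in ALLOWED_FUNCTIONS:
        raise PermissionError(f"Function not allowlisted: {name}")
    if "owner" in args and args["owner"] not in ALLOWED_OWNERS:
        raise PermissionError(f"Owner not allowlisted: {args['owner']}")
    logging.info("tool call: name=%s args=%s linked_account=%s", name, args, owner_id)
    return aci_client.handle_function_call(name, args, linked_account_owner_id=owner_id)
```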
Extending beyond “star repo”
Discover more ACI functions and expose them to the model:
```python
issue_create = aci.functions.get_definition("GITHUB__CREATE_ISSUE")
comment_pr = aci.functions.get_definition("GITHUB__COMMENT_ON_PR")

tools = [github_star_repo_fn, issue_create, comment_pr]

resp = llm.chat.completions.create(
    model="Llama-3-Groq-8B-Tool",
    messages=[...],
    tools=tools,
    tool_choice="auto",
)
```
Your server logic stays the same: route the selected tool call to `aci.handle_function_call(...)`, as in the sketch below.
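A minimal sketch of that routing, reusing `resp`, `aci`, and `OWNER_ID` from the examples above:

```python
import json

# Route every tool call the model proposed to ACI.
msg = resp.choices[0].message
for tool_call in msg.tool_calls or []:
    args = json.loads(tool_call.function.arguments or "{}")
    result = aci.handle_function_call(
        tool_call.function.name,  # e.g. "GITHUB__CREATE_ISSUE"
        args,
        linked_account_owner_id=OWNER_ID,
    )
    print(tool_call.function.name, "->", result)
```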
Troubleshooting
- No tool call generated: make sure the function schema is in `tools=[...]`. For demos, keep `tool_choice="required"`.
- Permission errors: confirm the GITHUB app is configured, the GitHub account is linked, and the agent is allowed to use GITHUB.
- Wrong owner: the `linked_account_owner_id` must match the id you chose during account linking (e.g., `tobySolutions`).
- Package install issues: use `aci` (not `aci-sdk`) and Python 3.10+. Upgrade pip if needed.
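If the first item bites, dumping the raw completion shows whether the model attempted a tool call at all. A quick sketch reusing `resp` from main.py (the OpenAI SDK’s response objects are pydantic models, so they serialize directly):

```python
# Debug aid: inspect the full completion when no tool call comes back.
print(resp.model_dump_json(indent=2))
```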
MCP note
If you’re building an MCP stack, expose ACI tools via an MCP server so any MCP-aware client (including ones running against Gaia) can discover and call them. The execution layer—ACI—stays the same.
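A rough sketch of that bridge using the official `mcp` Python SDK; the server name and tool wrapper are hypothetical additions, while the ACI call is the same one used throughout:

```python
# Hypothetical MCP server exposing the ACI-backed GitHub action as an MCP tool.
from aci import ACI
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("aci-github")  # server name is illustrative
aci_client = ACI()  # uses ACI_API_KEY from the environment


@mcp.tool()
def star_repository(owner: str, repo: str) -> str:
    """Star a GitHub repository via ACI.dev."""
    result = aci_client.handle_function_call(
        "GITHUB__STAR_REPOSITORY",
        {"owner": owner, "repo": repo},
        linked_account_owner_id="tobySolutions",  # your linked account owner id
    )
    return str(result)


if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```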
Wrap-up
You now have a clean pattern for turning LLM intent into real actions: Gaia for inference, ACI.dev for tool execution.
Swap the function, keep the shape, and your agent can open issues, label PRs, or post to Slack, safely and auditably.