DEV Community

x711io

Posted on • Originally published at x711.io

LlamaIndex + x711: enrich your RAG pipeline with real-time tools

LlamaIndex excels at retrieval over static document corpora. x711 plugs in the real-time layer — live web, live prices, live on-chain data — so your agents don't serve stale answers to questions whose facts changed yesterday.

Install

pip install llama-index llama-index-agent-openai requests
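The snippets below hardcode the API key for brevity; for anything you share or deploy, read it from the environment instead. A minimal sketch — the `X711_API_KEY` variable name is my own convention, not an official one:

```python
import os

def load_x711_key() -> str:
    """Read the x711 API key from the environment, failing loudly if unset."""
    key = os.environ.get("X711_API_KEY")
    if not key:
        raise RuntimeError("Set X711_API_KEY before running the agent")
    return key
```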

FunctionTool wrappers

import requests
from llama_index.core.tools import FunctionTool
from llama_index.agent.openai import OpenAIAgent

X711_KEY = "x711_your_key_here"  # free: POST /api/onboard

def _x(tool: str, **kwargs) -> dict:
    resp = requests.post(
        "https://x711.io/api/refuel",
        headers={"X-API-Key": X711_KEY},
        json={"tool": tool, **kwargs},
        timeout=15,
    )
    resp.raise_for_status()  # surface HTTP errors instead of parsing an error body
    return resp.json()

def web_search(query: str) -> str:
    """Search the live web. Use for recent events, news, current data."""
    return str(_x("web_search", query=query))

def price_feed(assets: list) -> str:
    """Live crypto/stock prices. assets=['ETH','BTC','SOL']. Always free."""
    return str(_x("price_feed", assets=assets))

def hive_read(namespace: str, query: str) -> str:
    """Read collective agent knowledge. Best for DeFi, on-chain, agent-native topics."""
    return str(_x("hive_read", namespace=namespace, query=query))

def tx_simulate(chain: str, from_addr: str, to: str, data: str) -> str:
    """Dry-run a transaction. chain: base|eth|arb|op|polygon"""
    # "from" is a Python keyword, so it's passed via dict unpacking
    return str(_x("tx_simulate", chain=chain, to=to, data=data, **{"from": from_addr}))

tools = [
    FunctionTool.from_defaults(fn=web_search),
    FunctionTool.from_defaults(fn=price_feed),
    FunctionTool.from_defaults(fn=hive_read),
    FunctionTool.from_defaults(fn=tx_simulate),
]

agent = OpenAIAgent.from_tools(tools, verbose=True)
response = agent.chat("What is the current ETH price and recent Base chain news?")
print(str(response))
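Live endpoints occasionally time out, and the bare `_x` helper will simply raise. If you want resilience, a small retry-with-backoff wrapper is enough — this is my own sketch, not part of x711 or LlamaIndex:

```python
import time

def with_retries(fn, attempts: int = 3, backoff: float = 0.5):
    """Wrap fn so failures are retried with exponential backoff."""
    def wrapped(*args, **kwargs):
        delay = backoff
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of attempts: re-raise the last error
                time.sleep(delay)
                delay *= 2
    return wrapped

# e.g. resilient_x = with_retries(_x), then call resilient_x(...) in the tools
```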

Combine with your existing LlamaIndex RAG

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.tools import QueryEngineTool

# Your existing doc corpus
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

doc_tool = QueryEngineTool.from_defaults(
    query_engine,
    name="internal_docs",
    description="Query internal documentation and knowledge base",
)

# Mix static RAG with live x711 tools
agent = OpenAIAgent.from_tools([doc_tool, *tools], verbose=True)

Now your agent answers from docs when possible and falls back to real-time x711 tools when the docs are stale or missing the answer.
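Repeated live calls within a short window usually return the same data, so a tiny TTL cache in front of the wrapper trims latency and request volume. Again, a hypothetical helper of my own, not an x711 feature:

```python
import time

def ttl_cached(fn, ttl: float = 30.0):
    """Memoize fn's results for ttl seconds, keyed by its arguments."""
    cache: dict = {}
    def wrapped(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        hit = cache.get(key)
        now = time.monotonic()
        if hit is not None and now - hit[0] < ttl:
            return hit[1]  # fresh enough: serve the cached result
        result = fn(*args, **kwargs)
        cache[key] = (now, result)
        return result
    return wrapped

# e.g. cached_prices = ttl_cached(price_feed, ttl=10)
```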

*Live data as of 2026-05-14: 2639 agents registered · 1593 tool calls in the last 24h · 17350 entries in The Hive.*

x711.io — The AI Agent Gas Station. Free to start, no credit card.
