Modern “agentic AI” needs more than prompts—it needs architecture.
This guide shows when to stay inside an OpenAI-style tool ecosystem and when to move to a workflow runtime for observability, safety, and control.
## 💡 TL;DR
- Use OpenAI + function calling or MCP when your AI just needs to answer a question, maybe calling one or two tools, all in a single turn.
- Use a workflow runtime when your AI must run multiple steps, trigger hooks, or perform actions that need to be observable, auditable, and reliable.
These are complementary, not competing, approaches.
## 🔍 Two Ways to Build Agentic AI
### 1. The “Chat + Tools” Approach (OpenAI, Anthropic, MCP)
The LLM drives everything.
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=[weather_tool],  # a function-calling schema, shown below
)
```
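Here, `weather_tool` is a standard function-calling schema. A minimal version looks like this:

```python
# Minimal tool schema: tells the model what the function does
# and which arguments it accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}
```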
- The model decides whether to call a tool.
- Your code runs it and returns the result.
- The model gives a final answer.
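That round trip is on you. A minimal sketch, assuming a `get_weather` function that actually fetches the forecast:

```python
import json

message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    # Run the tool ourselves and hand the result back to the model.
    result = get_weather(**json.loads(call.function.arguments))
    followup = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user", "content": "What's the weather in Berlin?"},
            message,  # the assistant turn containing the tool call
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
        ],
    )
    print(followup.choices[0].message.content)
```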
✅ Great for
- Quick Q&A
- Simple assistants
- Early prototypes
❌ Falls short when you need
- Multi-step logic
- Retries or human approval
- Audit trails or state
- Compliance or safety guardrails
Here, the LLM is both brain and driver. You hand it tools and hope for the best. (We’ve all seen what happens when an unguarded LLM calls a destructive tool.)
### 2. The “Workflow Runtime” Approach (contenox, Temporal+LLMs, custom orchestrators)
I’ll use contenox to show how this works differently.
You define the workflow as a clear sequence of tasks. Each has a handler, optional LLM use, and transitions.
Here's what that looks like in realistic contenox syntax:
```yaml
id: weather-advisor
description: Suggests actions based on the weather forecast
tasks:
  - id: get_weather
    description: Fetch weather data via external hook
    handler: hook
    hook:
      name: weather-api
      tool_name: get_forecast
      args:
        location: "Berlin"
    output_template: "{{.temperature}}"
    transition:
      branches:
        - operator: ">"
          when: "25"
          goto: "suggest_icecream"
        - operator: "default"
          goto: "suggest_walk"
  - id: suggest_icecream
    handler: model_execution
    system_instruction: "If it's hot, suggest a fun outdoor activity involving ice cream."
    execute_config:
      model: phi3:3.8b
      provider: ollama
    transition:
      branches:
        - operator: "default"
          goto: "end"
  - id: suggest_walk
    handler: model_execution
    system_instruction: "If it's cool, suggest something relaxing like a walk or a coffee indoors."
    execute_config:
      model: phi3:3.8b
      provider: ollama
    transition:
      branches:
        - operator: "default"
          goto: "end"
```
✅ Great for
- Reliable, stateful workflows
- Real actions (APIs, notifications, DB writes)
- Replay, audit, and debugging
- Controlled, compliant agents
❌ Overkill for
- Simple chatbots
- One-off prompts
Here, you control the flow. The LLM is just one worker in the chain.
## 🧠 Why Both Exist
They solve different problems.
| Context | Goal | Best Tool |
|---|---|---|
| Assistive AI | “Help me get an answer fast.” | OpenAI + Tools / MCP |
| Autonomous AI | “Run a safe, reliable process.” | Workflow runtime (contenox, Flyte, Temporal) |
Think of it this way:
- OpenAI + Tools is your clever intern—fast but unpredictable.
- contenox is your project manager—structured, logged, and accountable.
## 🛠️ How to Choose
| Use Case | Best Approach |
|---|---|
| “Ask HR about PTO policy.” | ✅ OpenAI + RAG |
| “Detect outage → Slack alert → Jira ticket → confirm fix.” | ✅ contenox |
| “Generate a report and email it.” | ⚠️ Start with OpenAI. Switch to contenox if reliability matters. |
| “Run AI in an air-gapped system.” | ✅ contenox |
| “Weekend agent hack.” | ✅ OpenAI + function calling |
## 🔮 The Future: They’ll Meet in the Middle
MCP will add light state.
Workflow runtimes will simplify small jobs.
But the core question stays the same:
Is your AI assisting—or acting?
If it’s assisting, use tools.
If it’s acting, use orchestration.
## 🚀 Try contenox Yourself
It’s open source and self-hostable.
```bash
git clone https://github.com/contenox/runtime.git
cd runtime
docker compose up -d
./scripts/bootstrap.sh nomic-embed-text:latest phi3:3.8b phi3:3.8b
```
Define a workflow like the YAML above. Register your hooks. Watch your AI take real, safe actions.