I shipped an OpenAI o3-mini planner for my Drupal CMS 2 AI agent and kept a local rule-based fallback, because Drupal's 2026 AI direction is clear: intelligent automation is coming fast, but production teams still need predictable behavior when external AI calls fail.
## TL;DR (30-second version)

- Built a hybrid planner: OpenAI `o3-mini` when available, local rule-based fallback when not
- Drupal is explicitly moving toward intelligent-agent workflows in 2026
- Cloud AI calls can fail, models can drift, and API keys can be missing in lower environments
- Fallback behavior must be explicit and tested, or you will ship hidden outage paths
## Why I Built It
Drupal is explicitly moving toward intelligent-agent workflows, and that changes how I design integrations: agentic features are now a near-term product concern, not a lab experiment.
The problem is operational, not conceptual. Cloud AI calls can fail, models can drift, and API keys can be missing in lower environments. If planning logic depends only on remote inference, content operations become fragile.
There is already a maintained Drupal AI module ecosystem, and I recommend starting there for most teams because it gives faster integration and community support. I chose a custom connector in this project because I needed strict control over tool-step generation and deterministic fallback behavior for testing and demos.
## The Architecture

I added an OpenAI planner in `src/openAiPlanner.js`, wired it into `src/index.js`, and validated both paths with tests in `tests/openAiPlanner.test.js`.
```mermaid
flowchart TD
    A[User Intent] --> B{OPENAI_API_KEY set?}
    B -->|No| C[Use local rule-based planner]
    B -->|Yes| D[Call OpenAI Chat Completions o3-mini]
    D --> E{Valid Drupal tool steps returned?}
    E -->|Yes| F[Execute Drupal tools]
    E -->|No| C
    C --> G[Deterministic fallback execution]
    G --> F
```
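The decision flow above can be sketched as a small dispatcher. This is a minimal illustration, not the project's actual `src/index.js`: the `planIntent` name and the planner-object shape are assumptions, but the key-check, try/catch, and fall-through logic mirror the flowchart.

```javascript
// Hypothetical sketch of the hybrid dispatch in src/index.js.
// Both planners expose plan(userIntent, availableTools) -> array of tool steps.
const hasOpenAI = () => Boolean(process.env.OPENAI_API_KEY);

async function planIntent(userIntent, availableTools, openAiPlanner, localPlanner) {
  if (!hasOpenAI()) {
    // No key in this environment: take the deterministic path immediately.
    return localPlanner.plan(userIntent, availableTools);
  }
  try {
    const steps = await openAiPlanner.plan(userIntent, availableTools);
    // Only accept a non-empty array of steps; anything else is unusable output.
    if (Array.isArray(steps) && steps.length > 0) return steps;
  } catch (err) {
    // Network failure, rate limit, or invalid model output: fall through.
  }
  return localPlanner.plan(userIntent, availableTools);
}
```

The point of the structure is that every failure mode (missing key, thrown error, empty or malformed response) converges on the same local path, so there is exactly one fallback to test.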
## Planner Configuration
```javascript title="src/openAiPlanner.js"
const OPENAI_MODEL = process.env.OPENAI_MODEL || 'o3-mini';

async function plan(userIntent, availableTools) {
  // Call OpenAI with a strict tool-step schema,
  // validate the response against allowed tool names,
  // and fall back to the local planner on an invalid response.
}
```

```bash title="Terminal — environment setup"
export OPENAI_API_KEY="sk-..."
export OPENAI_MODEL="o3-mini"
npm test  # 7 passing, lint clean
```
### ⚠️ Warning: Trust Boundaries

Do not treat model output as trusted commands. Constrain allowed tool names and validate argument shape before execution.
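A minimal validation sketch of that boundary is below. The tool names and the `{ tool, args }` step shape are illustrative assumptions, not the project's actual schema; the technique is an allowlist plus shape check that rejects the whole plan on the first bad step.

```javascript
// Illustrative allowlist — the real project's tool names will differ.
const ALLOWED_TOOLS = new Set(['content.create', 'content.update', 'taxonomy.tag']);

// Returns the steps if every one is safe to execute, otherwise null
// so the caller can route to the deterministic fallback planner.
function validateSteps(steps) {
  if (!Array.isArray(steps)) return null;
  for (const step of steps) {
    if (typeof step !== 'object' || step === null) return null;
    if (!ALLOWED_TOOLS.has(step.tool)) return null;                       // unknown tool: reject
    if (typeof step.args !== 'object' || step.args === null) return null; // require an args object
  }
  return steps;
}
```

Rejecting the entire plan (rather than filtering out bad steps) is deliberate: a partially executed plan from a confused model is harder to reason about than a clean fallback.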
### 💡 Tip: Top Takeaway

Hybrid planning is worth it when you need AI speed but cannot accept AI-only runtime fragility. Keep fallback behavior explicit and tested, or you will eventually ship hidden outage paths.
The biggest gotcha was reliability, not syntax. The planner must degrade gracefully when the model returns irrelevant output. The fallback path is what keeps agent behavior stable under partial failure.
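For context, a rule-based fallback planner can be as simple as keyword rules mapping intent to pre-approved tool steps. This is a sketch of the idea, not the project's actual local planner; the rules and tool names are assumptions.

```javascript
// Hypothetical rule table: each rule maps an intent pattern to fixed tool steps.
const RULES = [
  { match: /create|write|draft/i, steps: [{ tool: 'content.create', args: {} }] },
  { match: /tag|categor/i, steps: [{ tool: 'taxonomy.tag', args: {} }] },
];

// Deterministic: same intent in, same steps out — easy to unit test.
function localPlan(userIntent) {
  for (const rule of RULES) {
    if (rule.match.test(userIntent)) return rule.steps;
  }
  return []; // no matching rule: report "no plan" rather than guessing
}
```

Because this path has no external dependency, its behavior under test is exactly its behavior in production, which is what makes the hybrid design stable under partial failure.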
## Related Implementation Context

### The Code
Shipped scope in this run:
- OpenAI planning path using `OPENAI_API_KEY` and configurable `OPENAI_MODEL` (default `o3-mini`)
- Local planner fallback when the key is missing or the model response is unusable
- Tests covering fallback behavior and OpenAI request/model expectations
- Lint and test validation before push (7 passing, lint clean)
## What I Learned
- Hybrid planning is worth trying when you need AI speed but cannot accept AI-only runtime fragility.
- Use maintained Drupal AI modules first when your use case is standard integration and you want lower maintenance overhead.
- Build custom planner layers when you need strict tool contracts, deterministic tests, or provider-switching control.
- Avoid executing raw model intent in production; enforce an allowlist of tools and schema validation for each step.
- Keep fallback behavior explicit and tested, or you will eventually ship hidden outage paths.
## Signal Summary
| Topic | Signal | Action | Priority |
|---|---|---|---|
| Drupal AI Roadmap 2026 | Agentic features are near-term | Design for intelligent automation | High |
| OpenAI Planner | Cloud AI calls can fail | Build local fallback path | Critical |
| Tool Validation | Model output is untrusted | Allowlist tools + validate schema | High |
| Drupal AI Modules | Maintained ecosystem exists | Start there for standard use cases | Medium |
## Why this matters for Drupal and WordPress

Drupal's 2026 AI roadmap explicitly targets intelligent-agent workflows in core, so Drupal module developers should start building AI integrations with local fallbacks now rather than waiting for core APIs to stabilize. WordPress plugin developers building AI-powered features (content generation, smart search, editorial assistants) face the same reliability challenge: OpenAI API calls fail, rate limits hit, and API keys go missing in staging environments, so the hybrid planner pattern with a deterministic fallback applies directly to WordPress plugin architecture. For agencies offering AI-enhanced CMS services on both platforms, this pattern ensures that AI features degrade gracefully rather than breaking editorial workflows when cloud inference is unavailable.
## References
- Drupal's AI Roadmap for 2026: Accelerating Innovation
- Drupal's Vision 2026: Transformation into an Intelligent Agent
Looking for an Architect who doesn't just write code, but builds the AI systems that multiply your team's output? View my enterprise CMS case studies at victorjimenezdev.github.io or connect with me on LinkedIn.
Originally published at VictorStack AI — Drupal & WordPress Reference