For years, building sophisticated multi-agent systems meant switching to Python. The release of Neuron V2 makes me feel one step closer to answering whether PHP is ready for serious agentic application development, and the Deep Research Agent project provides a definitive answer.
This research system demonstrates, through a practical implementation, that Neuron is an agentic framework for building full-featured multi-agent architectures in PHP, with coordinated AI agents working together to generate detailed research reports. The system orchestrates multiple specialized agents, each handling a distinct phase of the work: research planning, data gathering, content generation, and formatting.
The Multi-Agent Architecture
The Deep Research Agent breaks complex research tasks into manageable components through event-driven workflow architecture. A workflow is an event-driven, node-based way to control the execution flow of an application. Your application is divided into sections called Nodes which are triggered by Events, and themselves return Events which trigger further nodes.
Each agent in the system specializes in a specific aspect of research:
- Planning Agent: Creates structured report outlines and identifies research areas
- Query Generation Agent: Transforms topics into targeted search queries
- Web Search Agent: Executes searches and processes results
- Content Generation Agent: Synthesizes findings into coherent sections
- Formatting Agent: Compiles the final report structure
Neuron Workflow's modular approach enables sophisticated coordination while maintaining code clarity and debugging capabilities that were previously out of reach in PHP.
Implementation Overview
The system's entry point demonstrates the simplicity of launching complex multi-agent workflows:
class DeepResearchAgent extends Workflow
{
    /**
     * @throws WorkflowException
     */
    public function __construct(string $query, protected int $maxSections = 3)
    {
        parent::__construct(new WorkflowState(['topic' => $query]));
    }

    protected function nodes(): array
    {
        return [
            new Planning($this->maxSections),
            new GenerateSectionContent(), // Loop until all sections are generated
            new Format(),
        ];
    }
}
Behind this interface, the workflow coordinates multiple agents through event-driven nodes.
- Planning: Creates the structure of the report
- GenerateSectionContent: Generates content for each section using search results
- Format: Compiles the final report
Additional nodes handle query generation and web searches between these phases. Running the workflow from the entry script then looks roughly like the sketch below.
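This is only a minimal usage sketch: the handler methods (start(), getResult()) are assumptions based on the NeuronAI Workflow API, so check the repository's research.php for the exact invocation.
// Build the workflow with the research topic and a cap on the number of sections.
$workflow = new DeepResearchAgent('The impact of AI agents on software development', maxSections: 3);

// Run the workflow to completion and collect the result produced by the Format node.
// (start() and getResult() are assumed handler methods; verify against the docs.)
$result = $workflow->start()->getResult();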
Event-Driven Coordination
What makes NeuronAI Workflows special is their streaming and interruption capabilities. This means your multi-agent system can stream updates directly to clients, pause mid-process, ask for human input, wait for feedback, and then continue exactly where it left off – even if that’s hours or days later.
The workflow architecture enables agents to communicate through events rather than direct function calls:
class Planning extends Node
{
    public function __construct(protected int $maxSections)
    {
    }

    public function __invoke(StartEvent $event, WorkflowState $state): GenerationEvent
    {
        // Fill the report plan prompt with the research topic and the section limit.
        // (The Prompts constant and placeholder names are assumed here; use the
        // template defined in the project's Prompts class.)
        $prompt = \str_replace(
            ['{topic}', '{max_sections}'],
            [$state->get('topic'), $this->maxSections],
            Prompts::REPORT_PLAN_INSTRUCTIONS
        );

        /** @var ReportPlanOutput $plan */
        $plan = ResearchAgent::make()
            ->withInstructions(
                "You are an expert in research. You are given a user query and you need to generate a report plan for the user."
            )
            ->structured(
                new UserMessage($prompt),
                ReportPlanOutput::class
            );

        return new GenerationEvent($plan);
    }
}
Each event triggers subsequent nodes automatically, creating a self-coordinating system where agents respond to completed work from other specialists.
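The events themselves are thin data carriers that hand one specialist's output to the next. As a sketch (the Event contract and the constructor shape are assumptions; the actual class in the repository may differ), GenerationEvent can be as small as:
// A custom workflow event that carries the structured plan from Planning to the next node.
class GenerationEvent implements Event
{
    public function __construct(
        public readonly ReportPlanOutput $plan
    ) {
    }
}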
Real-Time Streaming Capabilities
The system provides visibility into the research process as it happens: users see planning complete, watch search queries execute, and observe content generation step by step. For applications where users need to understand what the AI system is doing, this visibility builds confidence in the process.
class GenerateQueries extends Node
{
    /**
     * @throws \Throwable
     */
    public function __invoke(StartEvent $event, WorkflowState $state): \Generator|SearchEvent
    {
        $prompt = \str_replace('{query}', $state->get('query'), Prompts::SEARCH_QUERY_INSTRUCTIONS);

        yield new ProgressEvent("\n\n========== Generating search queries ==========\n\n");

        /** @var SearchQueriesOutput $response */
        $response = ResearchAgent::make()
            ->structured(
                new UserMessage($prompt),
                SearchQueriesOutput::class
            );

        yield new ProgressEvent(\implode("- ", $response->queries) . "\n");

        return new SearchEvent($response->queries);
    }
}
This streaming capability addresses one of the fundamental challenges in AI applications: providing transparency in automated processes that traditionally operate as black boxes.
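On the consumer side, the ProgressEvent instances yielded by the nodes can be forwarded to the client as they arrive. A rough sketch, assuming the handler exposes a streamEvents() iterator and that ProgressEvent carries its message in a public property (both are assumptions to verify against the NeuronAI docs):
$workflow = new DeepResearchAgent('How do LLM agents plan long-horizon research?');

foreach ($workflow->start()->streamEvents() as $event) {
    if ($event instanceof ProgressEvent) {
        // Print (or push over SSE/WebSockets) each progress message as it arrives.
        echo $event->data; // property name assumed; adjust to the actual event class
    }
}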
Human-in-the-Loop
One of the most practical features V2 reinforces is seamless human intervention. Workflows can pause execution, request human input, and resume exactly where they stopped. This capability makes AI systems viable for sensitive business processes where human oversight is required.
class InterruptionNode extends Node
{
    public function __invoke(InputEvent $event, WorkflowState $state): OutputEvent|InputEvent
    {
        // Interrupt the workflow and wait for the feedback.
        $feedback = $this->interrupt([
            'question' => 'Should we continue?',
            'current_value' => $state->get('accuracy')
        ]);

        if ($feedback['approved']) {
            $state->set('is_sufficient', true);
            $state->set('user_response', $feedback['response']);
            return new OutputEvent();
        }

        $state->set('is_sufficient', false);
        return new InputEvent();
    }
}
This pattern enables deployment of AI systems in production environments where complete automation isn’t appropriate or trusted.
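On the calling side, the interruption surfaces to the code that started the workflow, which can collect the feedback and resume. The sketch below assumes the framework signals the pause with a WorkflowInterrupt exception and resumes via a wakeup() method; verify both names against the NeuronAI documentation before relying on them.
// $workflow is any workflow containing a node that calls $this->interrupt().
try {
    $result = $workflow->start()->getResult();
} catch (WorkflowInterrupt $interrupt) {
    // The array passed to $this->interrupt() inside the node travels with the
    // interrupt, so the question can be shown to a human reviewer here.

    // Once the feedback has been collected, resume from the interrupted node.
    $result = $workflow->wakeup([
        'approved' => true,
        'response' => 'Looks good, continue.',
    ])->getResult();
}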
Monitoring & Debugging
Many of the Agents you build with Neuron will contain multiple steps: several LLM invocations, tool usage, access to external memory, and so on. As these applications grow more complex, it becomes crucial to inspect exactly what your agent is doing and why.
The Deep Research Agent integrates with Inspector for comprehensive observability:
INSPECTOR_INGESTION_KEY=fwe45gtxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This monitoring reveals the complete execution timeline, showing which agents made decisions, how long each phase took, and where any issues occurred. For multi-agent systems, this visibility proves essential for optimization and debugging.
Getting Started
The complete implementation is available at the Deep Research Agent repository. Setup requires minimal configuration:
composer install
php research.php
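Beyond these two commands, you will typically need a .env file with the API key of whichever LLM provider the agent is configured for, alongside the Inspector key shown earlier. The provider variable name below is only a placeholder; check the repository for the expected names.
# .env (provider key name is a placeholder assumption)
ANTHROPIC_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx
INSPECTOR_INGESTION_KEY=fwe45gtxxxxxxxxxxxxxxxxxxxxxxxxxxxx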
The result is a complete implementation that creates comprehensive research reports by orchestrating multiple AI services. Planning, research, content generation, formatting – all streaming in real time, all built on the new event-driven workflow architecture.
The Broader Implications
This implementation demonstrates that sophisticated AI architectures are no longer exclusive to Python. For teams with substantial PHP expertise, v2 removes the primary arguments for switching to Python for AI development. Complex agentic workflows are now feasible within the PHP ecosystem, using familiar patterns and deployment infrastructure.
The Deep Research Agent represents more than a technical achievement. It proves that PHP developers can build production-ready multi-agent systems without abandoning their existing skills, deployment infrastructure, or codebase. Neuron definitively fills the gap in AI agent development between PHP and other ecosystems like Python or JavaScript, enabling teams to innovate within their current technology stack.
For developers curious about agentic applications, the Deep Research Agent provides both a learning resource and a foundation for building specialized multi-agent systems.
The question isn’t whether PHP can handle multi-agent systems anymore. The question is what you’ll build with these capabilities.
If you want to learn more about how to start your AI journey in PHP, check out the learning section in the documentation.