Muhammad Arslan
AI-Native CLI Template: Build Production-Ready AI Agents in Minutes in NodeJS

Building AI-powered applications just got dramatically easier. We're excited to introduce the AI-Native CLI Template for HazelJS: a complete, production-ready starter that brings together our most powerful AI and agent capabilities into a single, easy-to-use package.

Whether you're building a customer support bot, an intelligent data analyst, or a complex multi-agent system, this template gives you everything you need to go from idea to production in minutes, not weeks.

Why an AI-Native Template?

The AI landscape is evolving rapidly, and developers need tools that keep pace. Traditional frameworks force you to wire together disparate libraries, manage complex state, and build infrastructure from scratch. The HazelJS AI-Native template eliminates this friction by providing:

  • Pre-configured AI Integration: OpenAI, Anthropic, and custom LLM support out of the box
  • Production-Ready Agent System: Multi-agent orchestration with streaming, tracing, and delegation
  • Complete RAG Pipeline: Vector search, document ingestion, and semantic retrieval
  • Docker & Deployment Ready: Full containerization with health checks and monitoring
  • Type-Safe Development: Full TypeScript support with intelligent autocomplete

What's Included?

🤖 Advanced Agent Capabilities

The template showcases HazelJS's powerful @hazeljs/agent package with real-world examples:

1. Execution Tracing (POST /agent/trace)

Get complete visibility into your agent's reasoning process with detailed execution traces:

@Agent({ name: 'WeatherAgent', description: 'Weather information agent' })
export class WeatherAgent {
  @Tool({
    description: 'Get current weather for a city',
    parameters: [
      { name: 'city', type: 'string', required: true, description: 'City name' }
    ]
  })
  async getCurrentWeather(input: { city: string }) {
    return {
      city: input.city,
      temperature: 72,
      conditions: 'Sunny',
      humidity: 45
    };
  }
}

The trace endpoint returns a complete breakdown of every step:

{
  "result": "The weather in San Francisco is currently sunny...",
  "steps": [
    {
      "stepNumber": 1,
      "action": { "type": "tool_call", "toolName": "getCurrentWeather" },
      "result": { "city": "San Francisco", "temperature": 72 }
    }
  ]
}

2. Server-Sent Events Streaming (POST /agent/stream)

Real-time streaming responses for better UX:

async executeStream(
  @Body() body: { input: string },
  @Res() res: HazelResponse
) {
  res.setHeader('Content-Type', 'text/event-stream');

  for await (const chunk of this.agentService.executeStream(
    'WeatherAgent',
    body.input
  )) {
    res.write(`data: ${JSON.stringify(chunk)}\n\n`);
  }

  res.end();
}

Stream types include:

  • token: Individual LLM response tokens
  • step: Complete reasoning steps
  • done: Execution completion signal
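On the client side, these frames can be consumed with a small parser. Here is a minimal sketch: the `data: ...\n\n` frame format follows the controller above, but the exact chunk shape is an assumption for illustration.

```typescript
// Minimal SSE frame parser: splits a raw text/event-stream buffer into
// the JSON chunks written by the streaming endpoint above.
interface StreamChunk {
  type: 'token' | 'step' | 'done';
  [key: string]: unknown;
}

function parseSSE(buffer: string): StreamChunk[] {
  return buffer
    .split('\n\n') // frames are separated by a blank line
    .filter((frame) => frame.startsWith('data: '))
    .map((frame) => JSON.parse(frame.slice('data: '.length)) as StreamChunk);
}
```

In a browser you would typically feed this from a `fetch` response body reader, accumulating text until each blank-line boundary.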

3. Multi-Agent Delegation (POST /travel)

Build complex workflows by delegating between specialized agents:

@Agent({ name: 'TravelAgent', description: 'Travel planning assistant' })
export class TravelAgent {
  @Delegate({
    agent: 'WeatherAgent',
    description: 'Get weather information for travel planning'
  })
  async checkWeather(input: { input: string }): Promise<string> {
    return '';  // Implementation handled by @Delegate
  }

  @Delegate({
    agent: 'FactsAgent',
    description: 'Get interesting facts about destinations'
  })
  async getCityFacts(input: { input: string }): Promise<string> {
    return '';
  }
}

The @Delegate decorator automatically:

  • Routes requests to the appropriate agent
  • Handles tool registration for LLM visibility
  • Manages context passing between agents
  • Provides transparent error handling
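Conceptually, delegation amounts to exposing another agent as a callable tool and forwarding input to it. The sketch below illustrates that routing; the names and shapes are illustrative, not the actual @hazeljs/agent internals.

```typescript
// Conceptual sketch of delegate routing: each registered agent is addressable
// by name, and a delegate call resolves the target and forwards the input.
type AgentHandler = (input: string) => Promise<string>;

class AgentRegistry {
  private agents = new Map<string, AgentHandler>();

  register(name: string, handler: AgentHandler): void {
    this.agents.set(name, handler);
  }

  // What @Delegate effectively does: look up the target agent and forward.
  async delegate(target: string, input: string): Promise<string> {
    const handler = this.agents.get(target);
    if (!handler) throw new Error(`Unknown agent: ${target}`);
    return handler(input);
  }
}
```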

4. Supervisor Pattern (POST /travel/supervisor)

Let the LLM intelligently route requests to the right agent:

async executeSupervisor(@Body() body: { input: string }) {
  return this.agentService.executeSupervisor(
    body.input,
    ['WeatherAgent', 'FactsAgent'],
    {
      systemPrompt: 'Route travel-related queries to the appropriate agent'
    }
  );
}

The supervisor analyzes the user's intent and automatically selects the best agent for the task.
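In the real supervisor the routing decision comes from the LLM; a keyword-based stand-in can illustrate the shape of that decision. This is purely illustrative, not how HazelJS implements it.

```typescript
// Illustrative stand-in for supervisor routing: the real implementation asks
// the LLM to pick an agent, while this sketch uses a keyword heuristic.
function routeToAgent(input: string, agents: string[]): string {
  const lower = input.toLowerCase();
  if (agents.includes('WeatherAgent') && /weather|temperature|forecast/.test(lower)) {
    return 'WeatherAgent';
  }
  if (agents.includes('FactsAgent') && /fact|history|tell me about/.test(lower)) {
    return 'FactsAgent';
  }
  return agents[0]; // fall back to the first registered agent
}
```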

💬 AI Chat & RAG

Beyond agents, the template includes a complete AI chat system with RAG capabilities:

@Post('chat')
async chat(@Body() body: { message: string; enableRAG?: boolean }) {
  return this.aiService.complete({
    messages: [{ role: 'user', content: body.message }],
    temperature: 0.7,
    maxTokens: 1000
  });
}

RAG Integration (POST /rag/query):

async queryWithRAG(@Body() body: { query: string }) {
  // Semantic search across your knowledge base
  const relevantDocs = await this.ragService.search(body.query, { limit: 5 });

  // Augment LLM context with retrieved documents
  return this.aiService.complete({
    messages: [
      { role: 'system', content: `Context: ${relevantDocs.join('\n')}` },
      { role: 'user', content: body.query }
    ]
  });
}
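In practice, the retrieved documents must fit the model's context window. A small helper like the following can trim the context to a character budget (a rough proxy for tokens); the function and budget are assumptions for illustration, not part of the template.

```typescript
// Assemble a RAG system prompt from retrieved documents, dropping documents
// once a rough character budget (a stand-in for a token budget) is exceeded.
function buildContext(docs: string[], maxChars = 4000): string {
  const kept: string[] = [];
  let used = 0;
  for (const doc of docs) {
    if (used + doc.length > maxChars) break; // stop before overflowing the budget
    kept.push(doc);
    used += doc.length;
  }
  return `Context: ${kept.join('\n')}`;
}
```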

๐Ÿณ Production-Ready Infrastructure

The template includes complete Docker configuration:

Dockerfile with multi-stage builds:

FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]

docker-compose.yml with PostgreSQL and Redis:

services:
  hazeljs-ai-app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    depends_on:
      - postgres
      - redis
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s

  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}

  redis:
    image: redis:7-alpine

📦 Complete Postman Collection

Test all endpoints immediately with the included Postman collection:

  • Agent execution with tracing
  • Streaming responses
  • Multi-agent delegation
  • Supervisor routing
  • RAG queries
  • Standard chat completions

Key Benefits

1. Instant Productivity

npx @hazeljs/cli new my-ai-app --template ai-native
cd my-ai-app
npm install
npm run dev

Your AI application is running in under 2 minutes with:

  • โœ… Multiple working agent examples
  • โœ… Streaming and tracing configured
  • โœ… RAG pipeline ready
  • โœ… Docker deployment files
  • โœ… Health checks and monitoring

2. Learn by Example

Every feature is demonstrated with working code:

  • WeatherAgent: Simple tool-based agent
  • FactsAgent: Knowledge retrieval agent
  • TravelAgent: Multi-agent delegation
  • Supervisor: Intelligent routing

Copy, modify, and extend these examples for your use case.

3. Production-Grade Architecture

Built on battle-tested HazelJS patterns:

  • Type Safety: Full TypeScript with strict mode
  • Error Handling: Comprehensive error boundaries
  • Logging: Structured logging with log levels
  • Health Checks: Kubernetes-ready health endpoints
  • Graceful Shutdown: Proper cleanup on termination
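Graceful shutdown typically means closing resources in order before exiting. A sketch of such a hook is shown below; the `Closeable` interface and registration helper are illustrative, not HazelJS APIs.

```typescript
// Sketch of a graceful-shutdown hook: close resources in order, then exit.
interface Closeable {
  name: string;
  close(): Promise<void>;
}

function registerShutdown(
  resources: Closeable[],
  exit: (code: number) => void = (code) => process.exit(code)
): () => Promise<void> {
  const shutdown = async () => {
    for (const resource of resources) {
      await resource.close(); // release DB pools, sockets, etc. in order
    }
    exit(0);
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
  return shutdown; // returned so it can also be invoked manually
}
```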

4. Scalable from Day One

Start simple, scale when needed:

  • Local Development: SQLite for quick iteration
  • Production: PostgreSQL with connection pooling
  • Caching: Redis for session and vector storage
  • Horizontal Scaling: Stateless design for easy replication

5. Comprehensive Testing

The agent package includes 439 tests with 74.52% branch coverage, ensuring reliability:

  • Unit tests for all agent features
  • Integration tests for multi-agent workflows
  • Streaming and tracing validation
  • Error scenario coverage

Real-World Use Cases

Customer Support Bot

@Agent({ name: 'SupportAgent', description: 'Customer support assistant' })
export class SupportAgent {
  @Tool({ description: 'Search knowledge base' })
  async searchKB(input: { query: string }) {
    return this.ragService.search(input.query);
  }

  @Delegate({ agent: 'TicketAgent', description: 'Create support ticket' })
  async createTicket(input: { issue: string }) {
    return '';
  }
}

Data Analysis Assistant

@Agent({ name: 'DataAnalyst', description: 'Analyze business data' })
export class DataAnalyst {
  @Tool({ description: 'Query database' })
  async queryData(input: { sql: string }) {
    // In production, validate or sandbox LLM-generated SQL before executing it
    return this.db.query(input.sql);
  }

  @Tool({ description: 'Generate visualization' })
  async createChart(input: { data: any[], type: string }) {
    return this.chartService.generate(input.data, input.type);
  }
}

Content Generation Pipeline

@Agent({ name: 'ContentAgent', description: 'Generate marketing content' })
export class ContentAgent {
  @Delegate({ agent: 'ResearchAgent', description: 'Research topics' })
  async research(input: { topic: string }) {
    return '';
  }

  @Delegate({ agent: 'WriterAgent', description: 'Write content' })
  async write(input: { outline: string }) {
    return '';
  }

  @Delegate({ agent: 'EditorAgent', description: 'Edit and polish' })
  async edit(input: { draft: string }) {
    return '';
  }
}

Getting Started

Prerequisites

  • Node.js 18+
  • OpenAI API key (or other LLM provider)
  • Docker (optional, for containerized deployment)

Quick Start

# Create new project
npx @hazeljs/cli new my-ai-app --template ai-native

# Navigate to project
cd my-ai-app

# Install dependencies
npm install

# Set up environment
cp .env.example .env
# Add your OPENAI_API_KEY to .env

# Start development server
npm run dev

# Test the endpoints
curl -X POST http://localhost:3000/agent \
  -H "Content-Type: application/json" \
  -d '{"input": "What is the weather in Paris?"}'

Import Postman Collection

  1. Open Postman
  2. Import HazelJS-AI-Native.postman_collection.json
  3. Test all endpoints with pre-configured requests

Deploy with Docker

# Build image
docker build -t my-ai-app .

# Run with docker-compose
docker-compose up -d

# Check health
curl http://localhost:3000/health

What's Next?

The AI-Native template is just the beginning. We're actively working on:

  • More Agent Patterns: Graph-based workflows, hierarchical agents, and autonomous loops
  • Enhanced RAG: Multi-modal retrieval, hybrid search, and reranking
  • Observability: Built-in tracing with OpenTelemetry and LangSmith integration
  • Fine-tuning Support: Easy integration with custom model training
  • Agent Marketplace: Share and discover pre-built agents

Join the Community

The HazelJS ecosystem is growing rapidly with 45+ packages and an active community of developers building AI-native applications.

Conclusion

The AI-Native CLI template represents our vision for the future of AI application development: powerful, yet simple. By combining production-ready infrastructure with intuitive APIs, we're making it possible for any developer to build sophisticated AI agents without the complexity.

Whether you're a solo developer exploring AI or a team building production systems, this template gives you a solid foundation to build upon.

Try it today and see how quickly you can go from idea to intelligent application.

npx @hazeljs/cli new my-ai-app --template ai-native

HazelJS: The modular, AI-native framework for building production-grade TypeScript applications.
