We're thrilled to announce the release of HazelJS 0.3.0, a major milestone that makes HazelJS a comprehensive AI-native backend framework for Node.js. This release brings enterprise-grade machine learning capabilities, advanced RAG systems, and a complete toolkit for building production-ready AI applications.
What is HazelJS?
HazelJS is a modern, TypeScript-first Node.js framework designed from the ground up for the AI era. Unlike traditional frameworks that bolt on AI features as an afterthought, HazelJS treats AI as a first-class citizen, providing native support for:
- AI Agents with autonomous decision-making
- Machine Learning pipelines and model management
- Retrieval-Augmented Generation (RAG) with advanced chunking strategies
- Persistent Memory for context-aware applications
- Agentic Workflows with visual flow builders
- 40+ Enterprise Packages for complete application development
What's New in 0.3.0
Complete Machine Learning Toolkit (@hazeljs/ml)
The new ML package provides everything you need to build, train, and deploy machine learning models in production:
Model Management & Training
```typescript
import { Model, Train, Predict, TrainerService } from '@hazeljs/ml';

@Model({ name: 'sentiment-classifier', version: '1.0.0', framework: 'tensorflow' })
class SentimentModel {
  @Train()
  async train(data: TrainingData) {
    // Your training logic
    return { accuracy: 0.95, loss: 0.05 };
  }

  @Predict()
  async predict(input: { text: string }) {
    return { sentiment: 'positive', confidence: 0.92 };
  }
}
```
Experiment Tracking
Track your ML experiments with built-in support for metrics, parameters, and artifacts:
```typescript
import { ExperimentService } from '@hazeljs/ml';

const experiment = experimentService.createExperiment('sentiment-analysis');
const run = experimentService.startRun(experiment.id);

experimentService.logMetric(run.id, 'accuracy', 0.95);
experimentService.logMetric(run.id, 'f1_score', 0.93);
experimentService.logArtifact(run.id, 'model', 'model', modelData);

experimentService.endRun(run.id);
```
Model Drift Detection
Monitor your models in production with comprehensive drift detection:
```typescript
import { DriftService } from '@hazeljs/ml';

const driftService = new DriftService();
driftService.setReferenceDistribution('age', trainingData);

const driftResult = driftService.detectDrift('age', productionData, {
  method: 'psi', // Population Stability Index
  threshold: 0.25
});

if (driftResult.driftDetected) {
  console.log(`Drift detected! Score: ${driftResult.score}`);
}
```
Supports multiple drift detection methods:
- PSI (Population Stability Index)
- KS (Kolmogorov-Smirnov)
- JSD (Jensen-Shannon Divergence)
- Wasserstein Distance
- Chi-Square for categorical features
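Of these, PSI is the easiest to reason about: bucket both samples into the same histogram bins, compare the proportion falling in each bucket, and sum the weighted log-ratios. Here is a minimal, framework-independent sketch of that calculation; the bucket edges and the common 0.25 rule of thumb are illustrative conventions, not @hazeljs/ml internals:

```typescript
// Build bucket proportions for a sample, given shared bin edges.
// A tiny floor avoids log(0) when a bucket is empty in one sample.
function histogram(values: number[], edges: number[]): number[] {
  const counts = new Array(edges.length - 1).fill(0);
  for (const v of values) {
    for (let i = 0; i < edges.length - 1; i++) {
      if (v >= edges[i] && (v < edges[i + 1] || i === edges.length - 2)) {
        counts[i]++;
        break;
      }
    }
  }
  return counts.map((c) => Math.max(c / values.length, 1e-6));
}

// PSI = sum over buckets of (prod% - ref%) * ln(prod% / ref%)
function psi(reference: number[], production: number[], edges: number[]): number {
  const ref = histogram(reference, edges);
  const prod = histogram(production, edges);
  return ref.reduce((sum, r, i) => sum + (prod[i] - r) * Math.log(prod[i] / r), 0);
}
```

Identical distributions score near 0; the more the production histogram diverges from the reference, the larger the PSI.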
Model Evaluation & Metrics
```typescript
import { MetricsService } from '@hazeljs/ml';

const result = await metricsService.evaluate('sentiment-classifier', testData, {
  metrics: ['accuracy', 'precision', 'recall', 'f1']
});

console.log(result.metrics);
// { accuracy: 0.95, precision: 0.94, recall: 0.96, f1Score: 0.95 }
```
Production Monitoring
```typescript
import { MonitorService } from '@hazeljs/ml';

monitorService.registerModel({
  modelName: 'sentiment-classifier',
  modelVersion: '1.0.0',
  featureDrift: { method: 'psi', threshold: 0.25 },
  accuracyMonitor: { threshold: 0.9, windowSize: 100 },
  checkIntervalMinutes: 60
});

monitorService.onAlert((alert) => {
  console.log(`Alert: ${alert.message}`);
  // Send to Slack, PagerDuty, etc.
});
```
Test Coverage Achievement: The ML package now has 97.52% test coverage with 246 comprehensive tests, ensuring production-ready reliability.
Advanced RAG Capabilities (@hazeljs/rag)
Enhanced RAG system with production-grade features:
Intelligent Document Chunking
```typescript
import { ChunkingStrategy } from '@hazeljs/rag';

// Semantic chunking based on meaning
const semanticChunker = new ChunkingStrategy({
  type: 'semantic',
  maxChunkSize: 512,
  overlapSize: 50
});

// Recursive chunking for structured documents
const recursiveChunker = new ChunkingStrategy({
  type: 'recursive',
  separators: ['\n\n', '\n', '. ', ' ']
});
```
Multi-Vector Store Support
- Pinecone - Serverless vector database
- Weaviate - Open-source vector search
- Qdrant - High-performance vector similarity
- Chroma - AI-native embedding database
- In-Memory - For development and testing
Agentic RAG
Build autonomous RAG systems that can reason and make decisions:
```typescript
import { AgenticRAG } from '@hazeljs/rag';

const agenticRAG = new AgenticRAG({
  vectorStore: pineconeStore,
  llm: openaiService,
  tools: [webSearchTool, calculatorTool],
  maxIterations: 5
});

const answer = await agenticRAG.query(
  'What were the Q4 2023 revenue figures and how do they compare to Q3?'
);
```
Enhanced AI Agent Runtime (@hazeljs/agent)
Build production-ready AI agents with advanced capabilities:
```typescript
import { Agent, Tool } from '@hazeljs/agent';

@Agent({
  name: 'customer-support-agent',
  model: 'gpt-4',
  temperature: 0.7,
  maxIterations: 10
})
class CustomerSupportAgent {
  @Tool({ description: 'Search knowledge base' })
  async searchKB(query: string) {
    return await this.ragService.search(query);
  }

  @Tool({ description: 'Create support ticket' })
  async createTicket(issue: string, priority: string) {
    return await this.ticketService.create({ issue, priority });
  }
}
```
Persistent Memory System (@hazeljs/memory)
Give your AI applications long-term memory:
```typescript
import { MemoryService } from '@hazeljs/memory';

// Store conversation context
await memoryService.store({
  userId: 'user123',
  sessionId: 'session456',
  content: 'User prefers technical explanations',
  metadata: { type: 'preference', importance: 'high' }
});

// Retrieve relevant memories
const memories = await memoryService.recall({
  userId: 'user123',
  query: 'How should I explain this?',
  limit: 5
});
```
Visual Flow Builder (@hazeljs/flow + @hazeljs/flow-runtime)
Create complex AI workflows with a visual interface:
```typescript
import { Flow, FlowNode } from '@hazeljs/flow';

const workflow = new Flow('customer-onboarding')
  .addNode('validate-email', { type: 'validation' })
  .addNode('send-welcome', { type: 'email' })
  .addNode('create-profile', { type: 'database' })
  .addNode('ai-personalization', { type: 'ai-agent' })
  .connect('validate-email', 'send-welcome')
  .connect('send-welcome', 'create-profile')
  .connect('create-profile', 'ai-personalization');

await flowRuntime.execute(workflow, { email: 'user@example.com' });
```
Data Processing & ETL (@hazeljs/data)
Enterprise-grade data processing with quality checks:
```typescript
import { DataPipeline, DataContract } from '@hazeljs/data';

@DataContract({
  owner: 'data-team',
  schema: {
    userId: { type: 'string', required: true },
    email: { type: 'string', format: 'email' },
    age: { type: 'number', min: 0, max: 120 }
  },
  sla: { freshness: '1h', completeness: 0.95 }
})
class UserData {}

const pipeline = new DataPipeline()
  .extract(source)
  .transform(cleanData)
  .validate(schema)
  .load(destination);
```
AI Guardrails (@hazeljs/guardrails)
Ensure safe and compliant AI outputs:
```typescript
import { GuardrailService } from '@hazeljs/guardrails';

const guardrails = new GuardrailService({
  toxicity: { threshold: 0.7 },
  pii: { detect: true, redact: true },
  factuality: { enabled: true },
  bias: { check: true }
});

const result = await guardrails.validate(aiResponse);
if (!result.passed) {
  console.log('Guardrail violations:', result.violations);
}
```
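To make the `pii: { detect: true, redact: true }` option concrete, here is a toy detect-and-redact pass. The regex patterns and labels are purely illustrative; a production guardrail layer would use far more robust detectors than a few regexes:

```typescript
// Illustrative PII patterns only (email, US-style phone, SSN).
const PII_PATTERNS: Record<string, RegExp> = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  PHONE: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

// Detect which PII categories appear, and replace each match with its label.
function redactPII(text: string): { redacted: string; found: string[] } {
  const found: string[] = [];
  let redacted = text;
  for (const [label, pattern] of Object.entries(PII_PATTERNS)) {
    if (pattern.test(redacted)) found.push(label);
    pattern.lastIndex = 0; // reset /g regex state before replacing
    redacted = redacted.replace(pattern, `[${label}]`);
  }
  return { redacted, found };
}
```

Detection and redaction are separate concerns in the config above for a reason: some applications only need to flag a violation, while others must rewrite the output before it leaves the system.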
40+ Enterprise Packages
HazelJS 0.3.0 includes a complete ecosystem of packages:
Core Infrastructure
- @hazeljs/core - Framework foundation
- @hazeljs/config - Configuration management
- @hazeljs/cache - Multi-tier caching (Redis, Memory)
- @hazeljs/discovery - Service discovery & registry
- @hazeljs/gateway - API gateway with rate limiting
AI & ML
- @hazeljs/ai - Multi-provider AI integration (OpenAI, Anthropic, Gemini, Cohere, Ollama)
- @hazeljs/ml - Complete ML toolkit
- @hazeljs/rag - Advanced RAG systems
- @hazeljs/agent - AI agent runtime
- @hazeljs/memory - Persistent memory
- @hazeljs/guardrails - AI safety & compliance
- @hazeljs/prompts - Prompt management & templates
Data & Integration
- @hazeljs/data - ETL & data quality
- @hazeljs/prisma - Prisma ORM integration
- @hazeljs/typeorm - TypeORM integration
- @hazeljs/graphql - GraphQL server
- @hazeljs/grpc - gRPC support
- @hazeljs/kafka - Kafka integration
- @hazeljs/queue - Job queues
- @hazeljs/messaging - Message bus
Security & Auth
- @hazeljs/auth - JWT authentication
- @hazeljs/oauth - OAuth 2.0 / OpenID Connect
- @hazeljs/casl - Authorization (CASL)
- @hazeljs/audit - Audit logging
DevOps & Monitoring
- @hazeljs/inspector - Runtime inspector dashboard
- @hazeljs/ops-agent - Operations agent
- @hazeljs/resilience - Circuit breakers & retries
- @hazeljs/serverless - Serverless deployment
Developer Experience
- @hazeljs/cli - Code generation & scaffolding
- @hazeljs/swagger - OpenAPI documentation
- @hazeljs/i18n - Internationalization
- @hazeljs/cron - Scheduled tasks
Getting Started
Installation
```bash
# Create a new HazelJS project
npx @hazeljs/cli new my-ai-app

# Or add to existing project
npm install @hazeljs/core @hazeljs/ai @hazeljs/ml @hazeljs/rag
```
Quick Example: AI-Powered API
```typescript
import { Module, Controller, Get, Post, Body } from '@hazeljs/core';
import { AIEnhancedService } from '@hazeljs/ai';
import { RAGService } from '@hazeljs/rag';
import { MetricsService } from '@hazeljs/ml';

@Controller('/api')
class AIController {
  constructor(
    private ai: AIEnhancedService,
    private rag: RAGService,
    private metrics: MetricsService
  ) {}

  @Post('/chat')
  async chat(@Body() { message }: { message: string }) {
    // Use RAG for context
    const context = await this.rag.search(message, { limit: 3 });

    // Generate AI response
    const response = await this.ai
      .chat()
      .system('You are a helpful assistant')
      .context(context)
      .user(message)
      .execute();

    // Track metrics
    await this.metrics.recordEvaluation({
      modelName: 'chat-assistant',
      version: '1.0.0',
      metrics: { responseTime: response.latency }
    });

    return response;
  }
}

@Module({
  controllers: [AIController],
  providers: [AIEnhancedService, RAGService, MetricsService]
})
class AppModule {}
```
Performance & Reliability
- 97.52% Test Coverage for ML package
- Production-tested drift detection algorithms
- Type-safe APIs with full TypeScript support
- Modular architecture - use only what you need
- Enterprise-ready with comprehensive monitoring
Resources & Links
- Website: https://hazeljs.ai
- Documentation: https://hazeljs.ai/docs
- GitHub: https://github.com/hazel-js/hazeljs
- NPM: https://www.npmjs.com/org/hazeljs
- Discord Community: https://discord.gg/rnxaDcXx
Package Documentation
- AI Package - Multi-provider AI integration
- ML Package - Machine learning toolkit
- RAG Package - Retrieval-augmented generation
- Agent Package - AI agent runtime
- Memory Package - Persistent memory
- Data Package - ETL & data quality
- Flow Package - Visual workflow builder
Community & Support
Join our growing community:
- GitHub Discussions: Ask questions and share ideas
- Discord: Real-time chat with the team and community
- Stack Overflow: Tag your questions with hazeljs
What's Next?
We're already working on exciting features for the next release:
- AutoML capabilities for automated model selection
- Federated Learning support for privacy-preserving ML
- Enhanced Graph RAG with knowledge graph integration
- Multi-modal AI support (vision, audio, video)
- Distributed Training for large-scale ML
- Real-time Model Serving with optimized inference
Why Choose HazelJS?
AI-Native Architecture
Unlike frameworks that add AI as an afterthought, HazelJS is built from the ground up for AI applications.
Production-Ready
With 97%+ test coverage, comprehensive monitoring, and enterprise features, HazelJS is ready for production workloads.
Developer Experience
TypeScript-first design, intuitive APIs, and excellent documentation make development a joy.
Complete Ecosystem
40+ packages covering everything from AI to databases, authentication to monitoring.
Open Source
Apache 2.0 licensed, community-driven, and transparent development.
Try It Today
```bash
npx @hazeljs/cli new my-intelligent-app
cd my-intelligent-app
npm run dev
```
Visit hazeljs.ai to get started and join the AI-native revolution in backend development!
Found this helpful? Give us a ⭐ on GitHub
Questions? Drop a comment below or join our Discord community.