Less than a month ago we hit publish on our first beta. Yesterday we shipped our first alpha. In between: 38 packages built, tested, and released. Here’s the story so far.
February 12: The First Beta
February 12, 2025 — We released the first public beta of HazelJS.
Not a “coming soon” or a private preview. A real, installable set of packages on npm. The idea was simple: a Node.js framework that gets out of your way. NestJS-style structure without the weight. Express-level simplicity with decorators, DI, and AI built in from day one.
We didn’t know how it would land. We put out the core — routing, controllers, dependency injection, middleware — and a handful of packages around AI, RAG, and agents. The goal was to see if anyone would actually use it.
They did. Issues, discussions, and early adopters gave us something we couldn’t get from internal dogfooding alone: real feedback under real conditions.
The Sprint: 38 Packages
Between that first beta and yesterday’s alpha, we didn’t slow down.
We went from a small set of modules to 38 packages — each with a clear job, each installable on its own from the HazelJS npm org. Core and config. AI, agent, RAG, and memory. Flow and flow-runtime. Auth, cache, prisma. Gateway, resilience, discovery. Kafka, gRPC, GraphQL. CLI, Swagger, WebSocket, cron, serverless. Payment, guardrails, MCP, and more.
That number isn’t bloat. It’s modularity. You don’t get a monolith. You get a stack you compose: pick what you need, leave the rest behind. Same philosophy that made the first beta possible — just a lot more of it.
March 8: Alpha
Yesterday we released our first alpha.
Alpha means: we’re ready for you to try the whole thing. Not just one or two packages — the full ecosystem. Run it. Break it. Tell us what’s missing, what’s confusing, what should change before we lock in APIs and call it stable.
Beta was “here’s something that works.” Alpha is “here’s the stack we’re betting on — help us make it better.”
Below we break down the features that make this alpha what it is — especially the AI, agent, RAG, memory, flow, and guardrails packages.
Features in depth
AI — @hazeljs/ai (npm)
One unified API for multiple LLM providers: OpenAI, Anthropic, Gemini, Cohere, and Ollama. Streaming, function calling, embeddings, and vector search. Switch providers without rewriting your code. Use decorators like @AITask and @AIFunction for declarative AI integration — wire a model to a route or method and get type-safe, streaming responses. No provider lock-in; same interface whether you’re on GPT-4, Claude, or a local Ollama instance.
Agent — @hazeljs/agent (npm)
A production-oriented agent runtime: stateful execution, tool registration and execution, memory integration, and human-in-the-loop approval workflows. The runtime handles the think → act → persist loop, state recovery, tool validation, and a full event bus for observability. Build support bots, research agents, and multi-step automations that can pause for approval and resume safely. Tools are first-class: register them, let the runtime validate and execute them, and optionally require human approval before sensitive actions.
RAG — @hazeljs/rag (npm)
End-to-end RAG pipeline: 11 document loaders (TXT, Markdown, PDF, DOCX, web, YouTube, GitHub, and more), multiple vector stores (in-memory, Pinecone, Qdrant, Weaviate, ChromaDB), GraphRAG for knowledge-graph retrieval, and a built-in memory system (conversation, entity, fact, working memory) with buffer, vector, and hybrid storage. Semantic and hybrid search, auto-summarization, and decorators like @Embeddable and @SemanticSearch for declarative RAG. Ingest from docs, URLs, or code; query with natural language; get answers with sources. See the RAG guide for full setup.
Memory — @hazeljs/memory (npm)
New in this alpha. Pluggable, user-scoped memory with one interface and multiple backends. Without it, user context often ends up scattered — in RAG conversation buffers, ad-hoc DB tables, or hardcoded in prompts. @hazeljs/memory gives you a single model for profile, preference, behavioral, emotional, episodic, and semantic memory, with explicit vs inferred storage, optional TTL (e.g. for emotional state), and support for composite stores that combine backends. Backends: in-memory by default; optional Postgres (raw or via Prisma), Redis, vector for episodic/semantic search. Use the @hazeljs/rag/memory-hazel adapter to back RAG’s MemoryManager with @hazeljs/memory and pass the same store to RAG and every AgentRuntime — one memory layer for the whole app. Details in the Memory package docs and Memory guide.
Flow — @hazeljs/flow (npm)
A durable workflow engine: define flows with @Flow, @Node, @Edge. In-memory by default; optional Prisma storage for crash recovery and multi-process safety. First-class wait and resume (e.g. for human approval or external callbacks), retries, timeouts, idempotency keys, and a full audit trail. Model long-running processes — onboarding, approvals, multi-step integrations — without losing state when the process restarts.
Flow runtime — @hazeljs/flow-runtime (npm)
Optional HTTP service to start and resume flow runs. Drive workflows from APIs or external systems: trigger a flow via POST, get a run ID, then call back to resume after a webhook or human action. Keeps flow execution decoupled from your main app so you can scale or isolate it.
Guardrails — @hazeljs/guardrails (npm)
Content safety, PII handling, and output validation for AI applications. AI faces unique risks: prompt injection, PII leakage, toxic output. @hazeljs/guardrails plugs into HTTP, AI, and agent layers — no separate middleware. PII detection & redaction (email, phone, SSN, credit card, configurable); prompt injection detection (heuristic patterns); toxicity check (keyword blocklist); output validation (schema + PII redaction on LLM responses). Use GuardrailPipe or GuardrailInterceptor on routes, @GuardrailInput and @GuardrailOutput on @AITask methods, and when AgentModule is present, tool input and output are validated automatically. Built for public chat APIs and compliance (GDPR/CCPA).
And the rest
- Gateway, resilience, discovery — version routing, canary deployments, circuit breakers, service discovery.
- Auth, cache, prisma — JWT, multi-tier cache, Prisma + repositories.
- Payment — one API for Stripe (and more providers).
- CLI — scaffolding and generators for controllers, modules, auth, RAG.
- Swagger, WebSocket, GraphQL, cron, serverless, Kafka, gRPC, MCP, PDF-to-audio — all in the 38, all documented, all on npm.
Why This Matters
Under four weeks from first beta to first alpha. 38 packages. One consistent story: a modular, TypeScript-first framework for backend and AI applications.
We’re not just adding features. We’re proving that you can have:
- Structure without ceremony
- AI and RAG without starting from scratch
- Choice without fragmentation — one org, one namespace, one way to compose
That’s the achievement we’re celebrating. Not the number 38 for its own sake, but what it represents: a full, coherent stack that you can adopt piece by piece. Explore the full package list on npm or the documentation to see everything that's included.
What’s Next
Alpha is an invitation. We’ll iterate on your feedback, fix bugs, improve docs and examples, and move toward a stable release when the APIs and migration paths feel right.
If you haven’t tried it yet:
npm install @hazeljs/core @hazeljs/ai @hazeljs/agent @hazeljs/rag
Or start with the CLI:
npx @hazeljs/cli new my-app
- Docs: hazeljs.com/docs — Quick Start, packages, guides
- GitHub: github.com/hazel-js/hazeljs — star the repo, open issues, contribute
- npm: npmjs.com/org/hazeljs — all 38 packages
- Blog: First Alpha Release, Payment package
Thank you for being part of the journey. From beta to alpha in under a month — and the best is still ahead.
— The HazelJS Team