Shaiju Edakulangara
NodeLLM 1.15: Automated Schema Self-Correction and Middleware Lifecycle

Building reliable AI systems requires more than just high-quality models; it requires infrastructure that can handle the inherent unpredictability of LLM outputs. Even the most capable models occasionally hallucinate malformed JSON or fail to adhere to strict validation schemas.

With NodeLLM 1.15, we are introducing a powerful set of tools designed to make your AI workflows more resilient, predictable, and type-safe.

The Headline: Schema Self-Correction

One of the most common friction points in building LLM-powered applications is validation failure. You define a Zod schema for a structured output, but the model returns something slightly off—perhaps a missing required field or a string where a number was expected.

Previously, handling these errors required manual retry logic in your application code. NodeLLM 1.15 introduces the Schema Self-Correction Middleware, which automates this recovery process.

How it Works

When configured, the middleware intercepts Zod validation errors from withSchema() or tool-calling arguments. Instead of throwing an error immediately, it:

  1. Captures the specific validation error messages from Zod.
  2. Feeds those error messages back to the model as a follow-up system prompt.
  3. Instructs the model to correct its previous output.

import { createLLM, z } from "@node-llm/core";

const llm = createLLM({
  maxRetries: 2 // Configurable globally (default: 2)
});

const chat = llm.chat("gpt-4o");

const schema = z.object({
  analysis: z.string(),
  confidence: z.number().min(0).max(1),
  tags: z.array(z.string())
});

const response = await chat
  .withSchema(schema)
  .ask("Analyze the current market trends for AI infrastructure.");

// If the model originally missed the 'confidence' field, 
// the middleware ensures it corrects itself before returning
console.log(response.data);

This "Self-Correction Loop" happens transparently within the ask() or askStream() call, ensuring your application logic stays clean and focused on the happy path.

Middleware Lifecycle Directives

As NodeLLM's middleware ecosystem grows, so does the need for fine-grained control over the execution flow. Version 1.15 introduces lifecycle directives—a set of instructions a middleware can return to influence the core orchestrator.

The new directives include:

  • RETRY: Re-run the current request (used by self-correction).
  • REPLACE: Replace the current response with a new one (e.g., for PII masking).
  • STOP: Halt the middleware chain and return immediately.
  • CONTINUE: Move to the next middleware (default).

This architectural shift allows developers to build sophisticated interceptors for safety, caching, or rate-limiting that can intelligently decide whether to let a request proceed or trigger a retry loop.
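To make the control flow concrete, here is a minimal orchestrator sketch showing how these directives could drive a middleware chain. The directive names come from the release; the runner, `Middleware` signature, and the PII-masking demo are illustrative assumptions, not NodeLLM's actual API:

```typescript
// A middleware inspects the response and returns a directive.
type Directive =
  | { type: "CONTINUE" }
  | { type: "STOP" }
  | { type: "REPLACE"; response: string }
  | { type: "RETRY" };

type Middleware = (response: string) => Directive;

function runChain(produce: () => string, middlewares: Middleware[], maxRetries = 2): string {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    let response = produce();
    let retry = false;
    for (const mw of middlewares) {
      const d = mw(response);
      if (d.type === "REPLACE") { response = d.response; continue; } // swap response, keep going
      if (d.type === "STOP") return response;                        // halt the chain early
      if (d.type === "RETRY") { retry = true; break; }               // re-run the request
      // CONTINUE: fall through to the next middleware
    }
    if (!retry) return response;
  }
  throw new Error("Retries exhausted");
}

// Demo: a PII-masking middleware that REPLACEs the response.
const maskEmails: Middleware = (r) => ({
  type: "REPLACE",
  response: r.replace(/\S+@\S+/g, "[email]"),
});

const out = runChain(() => "Contact alice@example.com", [maskEmails]);
console.log(out); // "Contact [email]"
```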

Declarative Agent Middlewares

In our mission to make Agents "just LLMs with tools," we've brought the middleware DSL directly into the Agent class. You can now define middlewares at the class level, ensuring every instance of that agent inherits the same safety and observation layer.

import { Agent } from "@node-llm/core";
import { MyLoggingMiddleware } from "./middlewares";

class SupportAgent extends Agent {
  static model = "gpt-4o";
  static instructions = "You are a helpful support assistant.";

  // Declarative middleware support
  static middlewares = [new MyLoggingMiddleware()];
}
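The mechanism behind this is worth noting: static class fields are inherited through the constructor chain, so subclasses automatically share (or can override) the layer declared on a parent. A standalone sketch of that pattern, using hypothetical names rather than NodeLLM's own base class:

```typescript
interface Middleware { name: string }

class BaseAgent {
  static middlewares: Middleware[] = [];
  // Each instance resolves middlewares from its own constructor,
  // so subclasses pick up their class-level declaration automatically.
  getMiddlewares(): Middleware[] {
    return (this.constructor as typeof BaseAgent).middlewares;
  }
}

class SupportAgent extends BaseAgent {
  static middlewares = [{ name: "logging" }];
}

// A further subclass inherits SupportAgent's declaration via the static chain.
class PrioritySupportAgent extends SupportAgent {}

console.log(new SupportAgent().getMiddlewares().map(m => m.name));         // ["logging"]
console.log(new PrioritySupportAgent().getMiddlewares().map(m => m.name)); // ["logging"]
```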

ORM 0.7.0: Enhanced Tool Persistence

Alongside the core release, @node-llm/orm has been updated to 0.7.0. This release ensures that when an agent persists a session to the database, it correctly captures the schema-validated tool arguments.

If a model's tool proposal was corrected by the self-correction middleware, the ORM will persist the final, valid arguments, ensuring your audit trail is accurate and reliable.

Getting Started

NodeLLM 1.15 is designed to be a drop-in update for most users. Upgrade today to start benefiting from automated self-correction and improved type safety.

npm install @node-llm/core@1.15.0 @node-llm/orm@0.7.0

For a full list of changes, check out the Changelog.
