masaya ohsugi
LLM Chains vs Single Calls in n8n: My Caption Generation Experiment

I tested two approaches for generating Instagram captions with n8n:

  • Approach A: Claude Haiku 4.5 × 3-step chain (cost optimization attempt)
  • Approach B: Claude Sonnet 4.5 × single call

Result: Sonnet single call won by a landslide. Consistency and tone matter more than I thought.

The Hypothesis

While building an n8n workflow for Instagram carousel captions, I wondered: "Can I save costs by chaining a cheaper model (Haiku) with role-specific prompts instead of using an expensive model once?"

Spoiler: Nope. Not for creative tasks.

Tech Stack

  • n8n v1.19.4+ (Advanced AI nodes)
  • Claude Haiku 4.5 & Sonnet 4.5 via the Anthropic API
  • Node.js 18+ for testing
  • Environment: ANTHROPIC_API_KEY

The Two Approaches

Approach A: 3-Step Chain (Haiku)

Break the task into micro-steps:

  1. Extract key points from data
  2. Draft structure using Step 1 output
  3. Polish and add metadata

Approach B: Single Call (Sonnet)

Throw everything at the model once and let it handle the whole flow.

The Code

Here's the test script I used to compare both approaches outside n8n:

# Setup
npm init -y
npm i @anthropic-ai/sdk dotenv
export ANTHROPIC_API_KEY="your_key_here"

index.mjs

import "dotenv/config";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

// Sample data structure
const image_details = [
  { order: 1, attributeA: "value1", attributeB: "value2" },
  { order: 2, attributeA: "value3", attributeB: "value4" },
];

async function callClaude({ model, system, prompt }) {
  const res = await client.messages.create({
    model,
    max_tokens: 1000,
    system,
    messages: [{ role: "user", content: prompt }],
  });
  // The SDK returns content as an array of blocks, not a single object
  return res.content?.[0]?.text ?? "";
}

// Approach A: Haiku 3-step chain
async function haikuChain() {
  const model = "claude-haiku-4-5";
  const system = "You are a caption editor. Be concise.";

  // Step 1: Extract data
  const step1 = await callClaude({
    model,
    system,
    prompt: `Extract key descriptions from this data:\n${JSON.stringify({ image_details }, null, 2)}`,
  });

  // Step 2: Create structure
  const step2 = await callClaude({
    model,
    system,
    prompt: `Based on this, create hook + body + CTA:\n${step1}`,
  });

  // Step 3: Add tags
  const step3 = await callClaude({
    model,
    system,
    prompt: `Add hashtags to this caption:\n${step2}`,
  });

  return { step1, step2, final: step3 };
}

// Approach B: Sonnet single call
async function sonnetSingle() {
  const model = "claude-sonnet-4-5";
  const system = "You are a caption editor. Be concise.";

  return await callClaude({
    model,
    system,
    prompt: `Create a complete caption (hook + body + bullets + CTA + tags) from:\n${JSON.stringify({ image_details }, null, 2)}`,
  });
}

// Run both
const resultA = await haikuChain();
const resultB = await sonnetSingle();

console.log("=== Haiku Chain ===\n", resultA.final);
console.log("\n=== Sonnet Single ===\n", resultB);

n8n Integration Gotcha

When preparing data in n8n's Code node, don't just pass IDs:

const transformedImages = items[0].json.transformedImages;

// ❌ Wrong: LLM can't work with IDs alone
const image_details = transformedImages.map(img => ({ id: img.id }));

// ✅ Right: Give the LLM actual content to work with
const image_details = transformedImages.map((img, index) => ({
  order: index + 1,
  ...img, // Spread ALL attributes the LLM needs
}));

return [{ json: { image_details } }];

What Broke (And How I Fixed It)

Issue: Step 1 Kept Returning "Unknown"

Root cause: I was only passing image IDs to the LLM. No descriptions, no attributes—just meaningless identifiers.

The fix: Spread the entire object (...img) so the LLM gets all the context it needs.

This was embarrassing but educational.

Results

| Metric | Haiku 3-Chain | Sonnet Single |
| --- | --- | --- |
| Context awareness | Fragments between steps | Holistic understanding |
| Tone consistency | Stitched together, uneven | Unified from start to finish |
| Output quality | Informative but stiff | Natural, engaging, flows well |
| Cost | ~3× cheaper | Higher but worth it |

The verdict: For captions (or any creative writing), context continuity beats cost savings.

When to Use Each Approach

| Pattern | Use Case | Examples |
| --- | --- | --- |
| Lower model + chain | Clear role separation | Data extraction, classification, formatting |
| Mid/upper model + single call | Context-dependent creativity | Captions, articles, copywriting |

Key Takeaways

  1. Context > Cost: For creative tasks, fragmented chains break the narrative flow
  2. Data quality matters: If you don't feed the LLM enough info, no amount of chaining will help
  3. Haiku's niche: Great for speed and structured tasks, but Sonnet wins for "feel"
