BLNCraft
6 n8n Workflow Patterns for AI Automation (Lead Gen, Enrichment, RAG, Self-Healing)


After more than a year of building n8n automations across different use cases, I kept seeing the same core patterns in every serious AI automation setup. Here are the six that show up in virtually every production workflow I build.


Pattern 1: Webhook → LLM classify → route

The classic intake pattern. An inbound webhook (from email, form, Slack, API call) triggers an LLM to classify the payload, then routes it to the right downstream workflow.

Webhook trigger
  → HTTP Request (normalize payload)
  → Claude/GPT-4o node (classify: type, urgency, intent)
  → Switch node (route by classification)
    → Branch A: create Linear ticket
    → Branch B: send Slack alert
    → Branch C: log and discard

Use case: Customer support triage, inbound lead routing, webhook fan-out
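The routing step can be sketched as a small Code-node function that sits between the LLM and the Switch node. This is a minimal sketch, assuming the LLM was prompted to return strict JSON like `{"type": "support", "urgency": "high"}` — the field names and branch labels are illustrative, not an n8n API.

```javascript
// Route an LLM classification to a downstream branch. Falls back to
// "log_and_discard" on unparseable output so bad LLM responses never crash
// the workflow mid-intake.
function routeClassification(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return "log_and_discard"; // unparseable LLM output: fail safe
  }
  if (parsed.type === "support" && parsed.urgency === "high") return "create_ticket";
  if (parsed.type === "lead") return "slack_alert";
  return "log_and_discard";
}
```

The fail-safe default matters: an LLM will occasionally wrap its JSON in prose or markdown, and the intake path should degrade to logging, not error out.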


Pattern 2: Cron → scrape → summarize → Slack

The daily briefing pattern. Fires every morning, scrapes target sources, runs an LLM summarizer, posts to Slack.

Schedule trigger (every day 07:00)
  → HTTP Request nodes (scrape sources)
  → Merge node
  → Claude node (summarize + extract key signals)
  → Slack node (post to #briefings)

Use case: Competitor monitoring, keyword alerts, market intelligence, news briefings
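The hand-off from the Merge node to the summarizer is where this pattern usually goes wrong: too much scraped text blows the context window. A sketch of that step, assuming each merged item carries a `{source, text}` shape (an assumption, not an n8n contract):

```javascript
// Collapse merged scrape results into a single summarizer prompt,
// with a crude character cap as a context-window guard.
function buildBriefingPrompt(items, maxChars = 4000) {
  const body = items
    .map((it) => `## ${it.source}\n${it.text.trim()}`)
    .join("\n\n")
    .slice(0, maxChars); // truncate rather than fail on oversized scrapes
  return `Summarize the following sources into 5 bullet points with key signals:\n\n${body}`;
}
```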


Pattern 3: CRM event → AI enrich → update

Fires when a new lead enters HubSpot/Pipedrive. Enriches with public data, generates an AI-written lead summary, writes it back to the CRM.

HubSpot trigger (new contact)
  → HTTP Request (fetch enrichment data: LinkedIn, Clearbit, Apollo)
  → Claude node (synthesize: company context, fit score, outreach angle)
  → HubSpot node (update contact with AI summary)
  → Slack node (notify sales rep)

Use case: Sales automation, lead enrichment, SDR assist
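Because any one enrichment source (LinkedIn, Clearbit, Apollo) can fail or return partial data, the step before the LLM should merge defensively. A sketch, with illustrative property names:

```javascript
// Combine enrichment lookups into one context object for the LLM.
// A failed HTTP Request is represented as null and skipped; for each
// field, the first source that has a value wins.
function mergeEnrichment(contact, sources) {
  const merged = { email: contact.email, name: contact.name };
  for (const src of sources) {
    if (!src) continue; // failed lookup: skip, don't abort enrichment
    for (const [key, value] of Object.entries(src)) {
      if (value != null && merged[key] == null) merged[key] = value;
    }
  }
  return merged;
}
```

Never letting CRM fields from the original contact be overwritten by third-party data is a deliberate choice — enrichment should add context, not clobber the source of truth.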


Pattern 4: Document → chunk → embed → vector store

Local RAG without a managed service. Process documents into a searchable vector store.

File trigger (new upload to folder)
  → Code node (read + chunk into ~512 token segments)
  → Embeddings node (OpenAI/Cohere)
  → Vector store node (Qdrant, Pinecone, Supabase pgvector)
  → Response: "Document indexed, 42 chunks stored"

Use case: Internal knowledge base, contract analysis, support doc search
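The chunking Code node is the only part of this pattern you have to write yourself. n8n has no token counter built in, so this sketch approximates 512 tokens as ~2048 characters (roughly 4 characters per token for English) and prefers paragraph boundaries over hard cuts:

```javascript
// Split a document into ~512-token chunks (approximated by character count).
// Splits on blank lines where possible; a single oversized paragraph is
// hard-split so no chunk ever exceeds maxChars.
function chunkDocument(text, maxChars = 2048) {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks = [];
  let current = "";
  for (const p of paragraphs) {
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      current = "";
    }
    current = current ? current + "\n\n" + p : p;
    while (current.length > maxChars) { // oversized paragraph: hard split
      chunks.push(current.slice(0, maxChars));
      current = current.slice(maxChars);
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

For production RAG you would also add overlap between chunks and carry metadata (filename, page, chunk index) into the vector store, but the splitting logic above is the core of it.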


Pattern 5: Error → LLM diagnose → create ticket

Self-healing workflows. When a critical workflow errors, an LLM diagnoses the error and creates a Linear/GitHub ticket automatically.

Error trigger (workflow failed)
  → Code node (format error: workflow, node, message, context)
  → Claude node (diagnose: likely cause + suggested fix)
  → Linear node (create issue with LLM diagnosis)
  → Slack node (alert #on-call)

Use case: Monitoring, incident response, workflow reliability
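The quality of the LLM diagnosis depends almost entirely on the formatting step. A sketch of that Code node — the error trigger payload varies between n8n versions, so every field here is treated as optional, and the field names are the commonly useful ones rather than a fixed schema:

```javascript
// Format a workflow error into a compact, LLM-friendly diagnosis prompt.
// Truncates the failing input so a huge payload can't flood the context.
function formatErrorForDiagnosis(err) {
  return [
    `Workflow: ${err.workflowName ?? "unknown"}`,
    `Failed node: ${err.nodeName ?? "unknown"}`,
    `Error: ${err.message ?? "no message"}`,
    `Last input (truncated): ${JSON.stringify(err.lastInput ?? {}).slice(0, 500)}`,
  ].join("\n");
}
```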


Pattern 6: Trigger → AI draft → human approve → send

Best of both worlds: AI writes, human approves. The approval step can be Slack, email, or a custom web form.

Schedule trigger
  → Claude node (draft: email / social post / report)
  → Wait node (pause for human review)
    → Slack node ("Draft ready — approve or edit?")
  → If approved: send via Gmail / Postmark / Resend
  → If rejected: loop back to Claude with feedback

Use case: Outbound email sequences, social media, weekly reports
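The rejected-draft loop needs a cap, or one stubborn reviewer can send the workflow through the LLM indefinitely. A sketch of the decision step, assuming the Slack response arrives as `{action: "approve" | "reject", feedback?: string}` (an assumed shape, not a Slack or n8n contract):

```javascript
// Decide the next step after human review. Approved drafts go out;
// rejected drafts loop back to the LLM with feedback, capped at
// maxAttempts before escalating to a human.
function nextStep(response, attempt, maxAttempts = 3) {
  if (response.action === "approve") return { step: "send" };
  if (attempt >= maxAttempts) return { step: "escalate_to_human" };
  return { step: "redraft", feedback: response.feedback ?? "no feedback given" };
}
```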


What makes these work in production

A few things separate toy demos from workflows that run reliably for months:

Error handling at every HTTP node. Set "Continue on fail" and add an error branch. Without one, a failed HTTP call halts the workflow and nobody gets notified.

Declare your LLM model explicitly. Do not use defaults — specify the exact model (claude-sonnet-4-6, gpt-4o, etc.) so updates do not surprise you.

Use credential variables, not hardcoded values. n8n's built-in credentials manager handles rotation gracefully.

Separate triggering from processing. Webhook receivers should be thin — just validate and hand off. Do the heavy processing in a second workflow, invoked via the Execute Workflow node or n8n's API.
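The "thin receiver" point above can be sketched as a single validation step: accept or reject fast, then hand off. The required-field list here is an example, not a fixed schema:

```javascript
// Thin webhook validation: reject malformed payloads immediately with an
// appropriate status, and let a separate workflow do the heavy processing.
function validateWebhook(payload, requiredFields = ["event", "id"]) {
  if (payload == null || typeof payload !== "object") {
    return { ok: false, status: 400, error: "body must be a JSON object" };
  }
  const missing = requiredFields.filter((f) => !(f in payload));
  if (missing.length) {
    return { ok: false, status: 422, error: `missing fields: ${missing.join(", ")}` };
  }
  return { ok: true, status: 200 };
}
```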


The part that is tedious

Building these patterns from scratch is not hard — it is just slow. I spent most of my first six months recreating the same structural patterns across different clients and use cases.

I packaged up 350 of these into a production workflow library: 12 integration lanes, all LLMs pre-configured and swappable with one node edit.

350 n8n AI Workflow Templates on Gumroad — use BLNCRAFT20 for 20% off launch week.


What patterns are you using in your n8n automations? Always curious what combinations people find most useful.
