DEV Community

Leandro García

Replacing 30 minutes of daily email scanning with n8n + Claude Haiku

Problem

Upwork sends email alerts for new postings. If you're active on the platform, that's 20-30 emails a day. Most are noise. You scan the subject, open the ones that look relevant, read the description, decide if it's worth applying. That's 30+ minutes daily of low-value repetitive work, exactly the kind of task an LLM can do.

Architecture

The workflow has seven nodes. When a new email arrives, three things happen in parallel:

  1. Get full email: the Gmail trigger only gives you metadata, not the body. A separate HTTP Request node fetches the full message via the Gmail API, then a Code node strips tracking URLs and footer boilerplate and extracts the job posting URL.

  2. Get a document: fetches your developer profile from Google Docs. This runs in parallel with the email fetch, not after it.

  3. Mark as read: so the trigger doesn't pick up the same email twice.

  • The first two branches converge at a Merge node. This is where n8n shines over writing the same thing in code: parallel execution with a sync point is a drag-and-drop operation, not a Promise.all you have to manage.

  • After the merge, a Prompt builder Code node assembles the system prompt (with your profile injected) and the user message (the cleaned posting). This hits the Anthropic API directly via an HTTP Request node: no n8n AI node needed, just a standard REST call with your API key in a header-auth credential.

  • The response comes back as structured JSON. A Response cleanup node parses it, the score threshold filter drops anything below 3 (the branch just stops), and the survivors are formatted into a Telegram message with the score, rationale, concerns, a snippet of the posting, and the direct link.

The whole thing runs on a Gmail polling trigger (every minute by default). Total cost per evaluation: ~$0.002 with Haiku.
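The body-cleanup step from branch 1 can be sketched as a plain function that the Code node would run on the fetched message. The regexes and field names here are my assumptions, not the repo's exact code:

```javascript
// Sketch of the email-cleanup Code node: strip tracking redirects and
// footer boilerplate, then pull out the direct job posting URL.
// Both regex patterns are illustrative assumptions.
function cleanEmailBody(rawText) {
  let text = rawText
    .replace(/https?:\/\/click\.[^\s]+/g, "")   // click-tracking redirect links (assumed pattern)
    .replace(/unsubscribe[\s\S]*$/i, "");       // everything from the unsubscribe footer down

  // First direct Upwork job URL in the cleaned text, if any.
  const match = text.match(/https?:\/\/www\.upwork\.com\/jobs\/[^\s"<>]+/);
  return {
    body: text.trim(),
    jobUrl: match ? match[0] : null,
  };
}
```

In n8n this would sit inside a Code node, returning `{ json: cleanEmailBody(...) }` per item.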
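The "standard REST call" to Anthropic is the Messages API over HTTPS. A minimal sketch of the request the HTTP Request node sends; the model id and max_tokens are my assumptions, not the repo's exact values:

```javascript
// Builds the request the HTTP Request node sends to the Anthropic
// Messages API. The API key comes from an n8n header-auth credential;
// model and max_tokens here are assumed, not taken from the repo.
function buildAnthropicRequest(systemPrompt, cleanedPosting, apiKey) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    method: "POST",
    headers: {
      "x-api-key": apiKey,                 // header-auth credential in n8n
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-haiku-latest",    // assumed model id
      max_tokens: 300,
      system: systemPrompt,                // the triage prompt with the profile injected
      messages: [{ role: "user", content: cleanedPosting }],
    }),
  };
}
```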

The prompt design

This is where the non-obvious findings live:

  • Profile-as-a-document pattern. The scoring rules live in a Google Doc, not in the prompt template. This means you can update your preferences (rate thresholds, skip list, tech stack) without touching the workflow. The LLM reads your profile fresh on every execution.

  • Structured JSON output without function calling. Haiku returns {score, category, rationale, concerns}. The prompt explicitly says "respond ONLY with valid JSON, no markdown, no preamble."

```
You are a job posting triage agent for a freelance developer. Your task is to score Upwork job postings against the developer's profile and preferences.

<developer_profile>
${profile}
</developer_profile>

Score the posting from 1-5:
5 = Strong match, apply immediately
4 = Worth a look, aligns with goals
3 = Borderline, surface but flag concerns
2 = Weak fit, only if nothing better
1 = Skip entirely

Respond ONLY with valid JSON, no markdown, no preamble:
{
  "score": 3,
  "category": "strong_match | worth_a_look | skip",
  "rationale": "One sentence explaining why",
  "concerns": "One sentence on yellow/red flags, or null"
}

Rules
- Use the Rate Thresholds, Skip list, and Client Credibility Signals from the profile strictly
- If the posting matches any Skip criteria, score 1 and category "skip"
- If client credibility has red flags, score 1 and category "skip"
- Be harsh — surfacing noise wastes the developer's time
- Do NOT wrap the response in markdown code fences or backticks
```

The response cleanup node strips any markdown code fences the model wraps around the JSON, because even with an explicit "no preamble" instruction, models sometimes add them anyway.
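That defensive parse can be sketched as follows (the function name is mine, not the repo's):

```javascript
// Defensive parse of the model's reply: strip any markdown code fences
// the model added despite instructions, then parse the remainder.
function parseScoreResponse(rawReply) {
  const stripped = rawReply
    .replace(/^\s*```(?:json)?\s*/i, "")  // leading fence, with or without a "json" tag
    .replace(/\s*```\s*$/, "")            // trailing fence
    .trim();
  return JSON.parse(stripped);            // throws on non-JSON, so a bad reply fails loudly
}
```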

  • "Be harsh" as a design decision. The prompt says "surfacing noise wastes the developer's time." This is the key insight: for triage, false negatives (missing a good posting) are much cheaper than false positives (interrupting you with garbage). Most people design LLM scoring to be inclusive. For this use case, you want the opposite.

  • Score threshold as a separate concern. The LLM scores 1-5, but the filter is a separate Code node that drops anything below 3. This means you can adjust sensitivity without changing the prompt — just move the threshold.
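In a Code node that separate filter is close to a one-liner; a sketch, with the item shape assumed:

```javascript
// Threshold filter as its own step: the prompt's 1-5 rubric stays fixed,
// and sensitivity is tuned here. The default of 3 mirrors the article;
// the evaluation object shape ({ score, ... }) is an assumption.
const THRESHOLD = 3;

function filterByScore(evaluations, threshold = THRESHOLD) {
  // Keep only postings at or above the threshold; the rest just stop here.
  return evaluations.filter((e) => e.score >= threshold);
}
```

Raising the threshold to 4 makes the pipeline stricter without touching the prompt at all.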

Why Haiku and not GPT-4o

This is a classification task, not a reasoning task. Haiku is fast, cheap, and accurate enough. The scoring rubric is simple: match skills, check the budget, flag red flags. At ~$0.002 per evaluation, you can process hundreds of postings for pennies. If you see misclassifications on messy descriptions, step up to Sonnet, but start cheap.

What I'd do differently

  • Batch digest instead of per-posting alerts. Right now every posting that clears the threshold fires a separate Telegram message. On a busy day that's 8-10 notifications, which is its own kind of noise. A better pattern: accumulate matches throughout the day and send a single digest at a fixed time (say, 9 AM). n8n can do this with a Schedule trigger that reads from a simple store or spreadsheet, but I haven't built it yet.

  • Feedback loop. The system tells me what to look at, but I have no way to tell the system whether it was right. Did I actually apply to the ones scored 4-5? Did I ignore a 3 that turned out to be perfect? Without this signal, the prompt and profile stay static. A lightweight version: add two buttons to the Telegram message ("Applied" / "Skipped") that log the decision back to a Google Sheet. Over time, that data tells you whether your profile needs tuning or your score threshold is too aggressive.

  • Rate limiting awareness. If Upwork sends a burst of 15 emails at once (it happens when you first enable alerts for a new search), you hit the Anthropic API 15 times in rapid succession. Haiku handles this fine at current volume, but a production version should add a queue or at least a Wait node between evaluations to stay well within rate limits.
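A simple queueing approach for the burst case is to chunk incoming emails into small batches and put an n8n Wait node between batches. This helper is a hypothetical sketch, not part of the repo:

```javascript
// Hypothetical helper for the burst case: split a pile of emails into
// small batches so evaluations can be spaced out (e.g. a Wait node
// between batches) instead of hitting the API 15 times at once.
function batchItems(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}
```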

Run it yourself

You can clone the repo here: https://github.com/leojg/n8n-email-automation

The repo also includes a profile-template.md you can use as a starting point for your own profile doc.

Or if you want something like this built for your own email workflows, contact me at https://automation.lgcode.me/
