Gwilym Pugh
Automation Patterns That Survive Real Teams

About 40% of the automations I build during a monday.com implementation have stopped working correctly within 90 days of go-live.

Not because the automation engine failed. Because the business changed and nobody updated the automation. The status label got renamed. The person who owned the workflow moved teams. A new column was added and the old one became redundant. The automation kept firing, just on the wrong conditions, producing quietly broken output that nobody noticed until a report looked wrong two months later.

This isn't a monday.com problem. It's a pattern I've seen across every platform I've worked with. The automations that survive aren't the clever ones. They're the ones built with structural awareness that the business will change faster than the automation.

Here are the patterns that actually last, drawn from 50+ SMB implementations across construction, recruitment, insurance, and financial services.

1. Target the 80% case, not the edge cases

The biggest killer of automation reliability is trying to handle every possible scenario.

You build an automation to route new leads to the right sales rep based on territory. The happy path is simple: US leads go to the US team, EU leads go to the EU team. Then someone asks "what about Mexico?" So you add a rule. Then "what about contractors based in Spain but selling to Latin America?" Another rule. Then "what if the contact is from the US but the company is based in Germany?" Three more rules.

After six weeks your lead routing automation has 14 conditional branches, three of which contradict each other, and nobody can remember why rule #9 exists.

The better pattern: handle the 80% case cleanly, route the rest to a human. In monday.com terms:

When new lead created
  AND country = US
  Then assign to US Sales group

When new lead created
  AND country is one of UK, Germany, France
  Then assign to EU Sales group

When new lead created
  AND no previous rule matched
  Then assign to "Needs Review" group
  AND notify Sales Ops

The "Needs Review" bucket is where edge cases go to be handled by a human. A week in production tells you which edge cases are actually common enough to automate. The ones that happen twice in three months stay manual forever.
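The same fallback logic can be sketched in plain Python. The country lists and group names are illustrative, not from any real monday.com configuration:

```python
# Hypothetical sketch of 80%-case routing with a human fallback.
EU_COUNTRIES = {"UK", "Germany", "France"}

def route_lead(country: str) -> str:
    """Return the group a new lead should land in."""
    if country == "US":
        return "US Sales"
    if country in EU_COUNTRIES:
        return "EU Sales"
    # Everything else goes to a human, never to a guess.
    return "Needs Review"
```

The point of the sketch is the final branch: there is no attempt to handle Mexico, contractors in Spain, or any other edge case. Those land in "Needs Review" until production data proves they are worth automating.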

2. Build a manual override path for every automated decision

Every automation should have a documented way to override it.

If an automation assigns a deal owner based on territory, there should be a simple way for a sales manager to reassign it without breaking the automation's future behaviour. If an automation moves a project to "In Progress" when the client approves the proposal, there should be a way to move it back to "Pending" if the client changes their mind without the automation immediately fighting back.

The anti-pattern: an automation that instantly undoes any manual change because it sees the "wrong" state.

The fix in monday.com: use a status column for the automation's decision and a separate checkbox column (like "Locked by user") that the automation checks first. If locked, the automation skips. If unlocked, it runs.

When Status column changes
  AND Locked by user = false
  Then [automation action]

When Status column changes
  AND Locked by user = true
  Then do nothing

This small structural change is the difference between an automation the team trusts and one they work around by building a parallel spreadsheet.
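The lock-check pattern is the same in any system. A minimal sketch, with hypothetical field names, of an automation that respects a manual override:

```python
# Hypothetical sketch of the lock-check pattern: the automation acts
# only when the user hasn't pinned the value manually.
def apply_automation(item: dict, new_status: str) -> dict:
    if item.get("locked_by_user"):
        # Respect the manual override; never fight the user.
        return item
    item["status"] = new_status
    return item
```

The key design choice is that the lock lives in its own field, so the automation never has to guess whether the current status was set by a person or by a previous automation run.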

3. Document trigger, action, and owner in one sentence each

Every automation needs three pieces of documentation:

  • Trigger: What fires this automation? (One sentence.)
  • Action: What does it do? (One sentence.)
  • Owner: Who maintains this? (A name, not a team.)

If you can't write each of these in one sentence, the automation is too complex and will break.

Keep the documentation somewhere the team can actually find it. A dedicated "Automation Registry" board in your workspace is better than a buried Confluence page. Each row is an automation. Columns are: Name, Board, Trigger, Action, Owner, Last Reviewed, Status.

Here's the schema I use:

Board: Automation Registry
Columns:
  - Name (text)
  - Parent Board (board relation)
  - Trigger Description (long text, 1 sentence)
  - Action Description (long text, 1 sentence)
  - Owner (person)
  - Last Reviewed (date)
  - Active? (checkbox)
  - Notes (long text, optional)

When something changes (new column added, new team member joined, business process updated), the team can open the registry and see which automations need reviewing. Without this, you rediscover broken automations through downstream bugs months later.
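The "Last Reviewed" column is what makes the registry actionable: you can mechanically flag stale entries. A sketch of that check, assuming registry rows are dicts mirroring the schema above:

```python
from datetime import date, timedelta

# Hypothetical registry entry shape, mirroring the board schema above:
# {"name": str, "active": bool, "last_reviewed": date}
def needs_review(entry: dict, today: date, max_age_days: int = 90) -> bool:
    """Flag active automations whose last review is older than max_age_days."""
    age = today - entry["last_reviewed"]
    return entry["active"] and age > timedelta(days=max_age_days)
```

The 90-day default is deliberate: it matches the window in which, in my experience, automations start drifting out of sync with the business.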

4. Weekly five-minute health check

Automations need maintenance. The teams whose automations still work a year later are the teams that review them weekly.

The review doesn't need to be complicated. Five minutes, once a week:

  1. Open the Automation Registry board
  2. Check which automations fired this week (monday.com shows activity logs per automation)
  3. Check which ones failed or skipped
  4. Scan the outputs they produced (did the notifications actually land? Did the status changes actually stick? Are the reports still correct?)

This single habit catches silent breakages before they become noisy problems. A renamed status label, a removed column, a changed field type: any of these can silently disable an automation. Weekly checks surface them before your quarterly board report shows wrong numbers.
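If you export the activity logs (monday.com exposes them per automation), step 3 of the review reduces to a one-line tally. A sketch, with a hypothetical log-entry shape:

```python
from collections import Counter

# Hypothetical log-entry shape: {"automation": str, "outcome": str}
# where outcome is one of "fired", "failed", "skipped".
def summarize(log_entries: list[dict]) -> Counter:
    """Count this week's automation runs by outcome."""
    return Counter(entry["outcome"] for entry in log_entries)
```

Anything with a non-zero "failed" or "skipped" count is what the five-minute review actually looks at; everything else gets a glance at most.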

5. Train the maintainer, not just the users

Most automation training focuses on the end users. The sales reps who benefit from lead routing. The project managers whose status updates get mirrored automatically. The ops team whose weekly reports generate themselves.

This matters, but it's not the critical training.

The critical training is for the person or people who maintain the automations after go-live. They need to understand:

  • How each automation works, structurally
  • How to diagnose when one isn't firing
  • How to modify it safely when the business process changes
  • When to remove an automation rather than patch it

Without this, any business change either breaks the system or forces the team to work around it. The person who set up the automations six months ago has long since forgotten the subtle reasons for certain design decisions, and the consultant has moved on. The result is a slow erosion of trust in the platform.

A concrete example: CRM pipeline automation

Here's a full pattern I've implemented dozens of times. This handles the end-to-end deal progression in monday.com CRM without ever hitting the "too clever" failure mode.

Boards involved:

  • Deals (primary)
  • Accounts
  • Projects (delivery board)
  • Team Capacity (resource planning)

Automations:

1. When Deal Status changes to "Won"
   AND Connected Account is not empty
   Then:
     - Move deal to "Closed - Won" group
     - Set Close Date to today
     - Create item on Projects board
     - Connect the new project to the deal
     - Notify Delivery Lead with deal context

2. When Project created from Won deal
   Then:
     - Mirror Deal Value from Deals board
     - Mirror Client Contact from Accounts board
     - Set Project Status to "Kickoff Pending"
     - Create three default subitems: Kickoff, Scope, Delivery Plan

3. When Project Status changes to "Kickoff Complete"
   Then:
     - Update connected Deal status to "Delivery Active"
     - Notify Account Manager
     - Update Team Capacity board with assigned resources

4. When Deal has been in "Proposal Sent" > 14 days
   AND Deal Status has not changed
   Then:
     - Flag deal as "Stalled"
     - Notify Deal Owner with nudge
     - Do NOT auto-send follow-up email (human decides)

Notice what this pattern does and doesn't do.

Does: Handle the predictable state transitions cleanly. Push context across boards. Alert humans when judgement is needed.

Doesn't: Auto-send outreach emails. Auto-assign deals to specific reps. Auto-adjust pricing or terms. These are human decisions that automation should prompt, not make.

The automations that survive are the ones that reduce admin, not the ones that try to replace judgement.
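Rule 4's stall detection is also the kind of check worth understanding outside the platform, because it is pure date arithmetic. A minimal sketch, with illustrative field names:

```python
from datetime import date, timedelta

# Hypothetical deal shape: {"status": str, "status_changed": date}
def is_stalled(deal: dict, today: date, threshold_days: int = 14) -> bool:
    """A deal stalls when it sits in 'Proposal Sent' past the threshold."""
    return (
        deal["status"] == "Proposal Sent"
        and today - deal["status_changed"] > timedelta(days=threshold_days)
    )
```

Note that the function only flags; it sends nothing. That mirrors the pattern above: the automation prompts the deal owner, and a human decides whether a follow-up email is appropriate.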

The principle behind all of this

Every pattern above is a specific expression of one underlying principle: build automations for how the business actually works, not how it theoretically should work.

Theoretical processes don't have exceptions. Real ones do. Theoretical users don't forget to update fields. Real ones do. Theoretical data is clean. Real data has duplicates, typos, and fields nobody has touched in 18 months.

The automations that work in production are the ones designed by someone who's seen how the business really operates, built to handle the predictable cases cleanly and route everything else to a human. The clever edge-case handling goes in version 2, after a month in production reveals which edge cases are actually common.

If you want more concrete patterns, I've collected the ones I use most often as a set of monday.com automation recipes covering lead intake, assignment routing, follow-ups, deal progression, and reporting. They're all built on the patterns above.

Related reading

Previous entry in this series: How to Structure a monday.com Workspace for Multi-Department Operations. The workspace architecture you choose determines which automation patterns are even possible, so it's worth reading that piece first if you're setting up a new instance.
