Michel Faure

29 Zapier + Make automations replaced in four weeks

The invoice I no longer understood

One March morning, I'm going through L'Atelier Palissy's recurring charges, and I hit a line whose regularity surprises me: Zapier Pro, 73.50 dollars, every month for eighteen months. Next to it, a smaller, equally discreet Make Pro charge. I count: twenty-one active Zaps, nine Make scenarios, a few stopped, a few clearly broken for weeks. Nobody noticed, because each automation lived on its own dashboard, with its own history, its own logs that nobody ever looked at.

I didn't know exactly what each one did. I knew there were about thirty, that they passed data from Formidable (a French WordPress form plugin) to Mailchimp, from Meta Lead Ads to a shared Google Sheet, from Stripe to a confirmation email, that they triggered on webhooks, on polling, on schedule, and that half the silent failures in the system probably came from there. I didn't have the energy to go look, because going to look meant opening a tool where I never felt at home.

Twenty-eight days later, they were gone. Zero Zaps, zero Make scenarios, all replaced by three hundred lines of TypeScript running in my ERP Rembrandt, monitored by Sentry, covered by tests, with a single overview I look at in the morning over coffee.


If you have 30 seconds. Zapier/Make automations carry three invisible debts: they live outside your database, they log outside your monitoring, and they trigger on rules nobody re-reads. Replacing thirty automations with a single event pipeline takes four weeks, not six months, provided you hold one golden rule: never cut a Zap before validating its replacement in dual-write for three to five days. The article gives the method, the timeline, and the avoided cost.

Three invisible debts

The first debt no-code tools produce is called, in the internal literature that doesn't yet exist, the distributed debt. Your data lives in three places at once, and none of the three is canonical. My CRM thought the truth was in Google Sheets. Mailchimp thought the truth came from Zapier. Supabase thought nothing because I only wrote part of what was happening. The day a prospect arrives in three different tabs with three different spellings, nobody knows which record is the real one. The only way to settle it is to choose a place that wins by construction, and make everything else a slave.

The second debt is the absence of unified monitoring. A Zap that breaks sends an email to the address that created the Zapier account, possibly to no one. A Make scenario that fails on its third step lets the first two consume quotas, and the only trace is a small red counter on a dashboard nobody opens. Sentry, Datadog, Grafana: none of them sees these failures, because the automations run outside everything they watch. You learn your automation is dead when a customer calls to say they didn't receive their confirmation email.

The third debt is the quietest, and it's that of rules you forget you ever wrote. A Zap created eleven months ago for a summer-season edge case keeps running through the following winter, routing a Paris lead to the email address of a colleague who left the house. You don't see it, because the lead arrives anyway, apparently. Nobody rereads a Zap. It's designed precisely so that you don't have to. That's exactly what makes it a debt.

The golden rule it took me fifteen days to accept

My first attempt, early April, was naive. I'd write a Rembrandt replacement, cut the Zap, move to the next one. My third attempt ended in Slack at eleven at night, a lost Meta lead because my webhook wasn't deployed to the right Vercel environment, and the certainty that if I kept going like this, I'd break at least one critical piece by week's end.

I set the following rule, which I held to the last Zap.

Never cut a Zap before validating its replacement in dual-write for three to five days.

Concretely: I write the replacement, I deploy it, it runs alongside the Zap. Every lead arrives duplicated in my Supabase, and at the team email that receives the notification. For three to five days, I compare: same lead, same data, same timings, same emails sent? If yes, I cut the Zap. If not, I disable my code and I still have a net running while I understand what broke.
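As an illustration, the daily comparison during dual-write can be sketched as a diff of the two lead sets. This is a hypothetical helper, not code from the article's repo: the `DualLead` shape and `diffDualWrite` name are mine.

```typescript
// Hypothetical sketch of the dual-write check: the legacy Zap and the new
// pipeline both record leads; before cutting the Zap, diff the two sets.
type DualLead = { email: string; source: string; receivedAt: string }

function diffDualWrite(zapLeads: DualLead[], pipelineLeads: DualLead[]) {
  const key = (l: DualLead) => `${l.email}|${l.source}`
  const zapKeys = new Set(zapLeads.map(key))
  const pipeKeys = new Set(pipelineLeads.map(key))
  return {
    // Leads the Zap saw but the pipeline missed: this blocks the cut
    missingInPipeline: zapLeads.filter((l) => !pipeKeys.has(key(l))),
    // Leads only the pipeline saw: usually retries or duplicates, worth a look
    onlyInPipeline: pipelineLeads.filter((l) => !zapKeys.has(key(l))),
  }
}
```

The cut happens only once `missingInPipeline` stays empty for three to five consecutive days.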

The worst side effect possible during the transition is a lead received twice by the sales team. That's an annoyance, not a catastrophe. A lost lead is a silent catastrophe you discover a month later.

The architecture that replaced thirty automations

The target has a simplicity that wasn't visible while I stayed in the no-code tools.

```
Meta Ads    ──┐
Formidable  ──┤
Stripe      ──┤──► Rembrandt webhooks ──► Supabase (single source)
Manual      ──┘                              │
                                             ├── Gmail SMTP (internal notifs)
                                             ├── Slack (team alerts)
                                             ├── Meta CAPI (campaign feedback)
                                             ├── automation_logs (traceability)
                                             └── Cron → Mailchimp then Brevo
```

One entry point per source, one storage place, a parallel fan-out to notification tools. The core sits in a file called lib/lead-pipeline.ts that runs outbound integrations in parallel after every insert into the contacts table.

```typescript
export async function runLeadPipeline(lead: Lead) {
  const results = await Promise.allSettled([
    syncMailchimp(lead),
    notifySlack(lead),
    notifyGmail(lead),
    sendMetaCapi(lead),
    generateFirstContactDraft(lead),
  ])
  // one settled result per outbound tool, in the same order as above
  await logAutomation(lead, results)
}
```

Promise.allSettled rather than Promise.all because if Slack is down, I still want the email sent. Each result feeds an automation_logs table, which is the only thing I look at in the morning: how many leads arrived, which tools succeeded, which failed, over which time window.
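A minimal sketch of what that logging step could look like, mapping each settled promise onto a row for the log table. The names `ToolStatus` and `toLogRows` are illustrative, not from the article's codebase:

```typescript
// Hypothetical sketch: turn the Promise.allSettled results into one status
// entry per outbound tool, ready to insert into automation_logs.
type ToolStatus = {
  tool: string
  status: "ok" | "error"
  detail?: string
}

function toLogRows(
  tools: string[],
  results: readonly PromiseSettledResult<unknown>[],
): ToolStatus[] {
  return results.map((r, i) => ({
    tool: tools[i],
    status: r.status === "fulfilled" ? ("ok" as const) : ("error" as const),
    // keep the failure reason so the morning review needs no other dashboard
    detail: r.status === "rejected" ? String(r.reason) : undefined,
  }))
}
```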

The timeline, as it actually happened

| Week | What I did | Zaps cut |
| --- | --- | --- |
| W1 | Central pipeline + shared Slack client + `automation_logs` table | 0 |
| W2 | Formidable direct, dual-write | 0 |
| W3 | Cut the ten Formidable Zaps + Meta webhook dual-write | 10 |
| W4 | Cut the eight Meta Zaps + Stripe webhook | 18 |
| W5 | Make scenarios (PDFs, crons) and cleanup | 21 |
| W6 | Kill Zapier Pro, downgrade scheduled | 21/21 |

The day of the final cut, I didn't sleep the night before, and the following day I opened my automation_logs dashboard every hour. Thirteen days later, nothing had broken. I still keep a dead route, sync-gsheets-leads, that I never call but that serves as a reactivable net until the end of the month.

What we gained that we didn't suspect

The economic gain is obvious — around a hundred euros per month of consolidated subscriptions. But what surprised me was a comprehension gain. The first week after the cut, I realized I understood my system for the first time in eighteen months. I could open a file, reread a routing rule, modify it, test it, deploy it in twenty minutes. Before, even a trivial change — updating the destination address of a notification email — went through five Zapier tabs and a dull fear of breaking one while moving another.

Two short scenes come back to me. The first: I had to call Gaspard, our IT contractor, to retrieve the Zapier account password. He had it, he gave it to me, he didn't ask why I wanted to go in. The second, earlier in the morning: Françoise had stepped out of her office, cup in hand, and planted herself in front of mine. "Right. Your Meta duplicates, how long do you plan to keep them? Because Hélène is getting two emails for the same lead, and she's starting to get annoyed, that one." It was the third week. I replied "Two more days"; she nodded, put her cup down, and the cut was made the next evening. I've kept this: dual-write has a team-patience cost that you neither minimize nor stretch beyond necessity.

There's a particular hygiene to a system that holds in one place. It's underestimated until it's there, because no-code tools sell precisely the promise that it's everywhere, that it no longer has to be thought through. The truth is that if it isn't thought through in one place, it's just buried. It costs less to write, and far more to live with.

What you can copy into your project

Reusable patterns extracted from this migration, independent of my stack:

  1. The golden rule: dual-write for three to five days before cutting. Non-negotiable. A duplicate lead beats a lost lead, and the extra time, paid once, pays for itself on every avoided incident.
  2. A single event pipeline: one runPipeline(event) function called after each insert, running outbound integrations in parallel (Promise.allSettled) and logging each tool's status in a dedicated table.
  3. An automation_logs table: one row per event, one column per outbound tool with its status. It's the only dashboard you look at in the morning, and it replaces all the separate dashboards of the no-code tools.
  4. A reactivable net post-cut: keep the dead route for another two weeks. The day a bug surprises you, it takes fifteen seconds of vercel.json to restore the net. After that, delete it for good.
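For the fourth pattern, a minimal sketch of what "fifteen seconds of vercel.json" could mean, assuming a hypothetical `/api/lead-webhook` entry route; only `sync-gsheets-leads` comes from the article, the rest is illustrative:

```json
{
  "rewrites": [
    {
      "source": "/api/lead-webhook",
      "destination": "/api/sync-gsheets-leads"
    }
  ]
}
```

One rewrite points incoming webhook traffic back at the dormant route; deleting the entry restores the normal pipeline.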

And a broader discipline: any tool that houses your business logic outside your database charges you three debts — distributed, unmonitored, unreadable. Zapier and Make are useful for prototyping. They become dangerous as soon as serious activity depends on them.

And you: how many no-code automations are running your system right now, and when was the last time anyone reread them all? I'll be reading the comments.


Companion code: rembrandt-samples/lead-pipeline/runLeadPipeline with Promise.allSettled, automation_logs schema, hub-and-spoke architecture diagram, MIT, copy-pastable.
