So tonight I went to an Umbraco.AI hackathon, and somehow walked out with a working product.
Not a hand-wavy prototype, not a "here's roughly what it could do" demo, but an actual end-to-end thing that does what it says on the tin, ready to show to people, ready to install on a real site.
Two hours.
And I didn't write a single line of code myself.
The problem I picked
AI-generated contact form spam has quietly become one of those headaches nobody really talks about. It reads as plausible English, has none of the obvious spam markers we've all trained ourselves to spot, and walks straight past the usual defences. If you run a public-facing website, you've almost certainly been quietly deleting it out of your inbox for the last twelve months, even if you hadn't quite put a name to it.
Nobody had built anything in the Umbraco ecosystem to deal with this at the contact form level, and that felt like a gap worth filling.
The idea was simple enough. Every time someone submits a form on your website, send the contents to an AI model, ask it to score how likely the submission is to be genuine, and if it looks like spam, stop it before it ever reaches your inbox or your team. The legitimate enquiries flow through as normal. The spammy ones get held back and logged somewhere you can review them later.
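Stripped to its essentials, that decision is only a few lines. Here's a rough sketch of the shape of it, with illustrative names rather than the actual package's code:

```csharp
using System.Threading.Tasks;

// Illustrative shapes only; not the package's real API.
public record FormSubmission(string Name, string Email, string Message);

public interface ISpamScorer
{
    // Returns 0.0 (clearly genuine) through 1.0 (clearly spam).
    Task<double> ScoreAsync(FormSubmission submission);
}

public class SpamCheckHandler
{
    private readonly ISpamScorer _scorer;
    private readonly double _blockThreshold; // e.g. 0.8 holds back anything scored 80%+ spam

    public SpamCheckHandler(ISpamScorer scorer, double blockThreshold)
        => (_scorer, _blockThreshold) = (scorer, blockThreshold);

    public async Task<bool> ShouldDeliverAsync(FormSubmission submission)
    {
        double score = await _scorer.ScoreAsync(submission);

        // Genuine enquiries flow through; likely spam gets held back
        // and logged for later review.
        return score < _blockThreshold;
    }
}
```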
Easy on paper. The challenge was getting it built, working, and demonstrable in the time available.
How I actually built it
I used Claude Code.
For anyone who hasn't come across it yet, Claude Code is a tool that lets you describe what you want a piece of software to do, in plain English, and have it produce the actual code for you. You stay in the driving seat, you make the architectural decisions, you spot when it's gone off-track, you tell it what to fix. But the typing-the-actual-code part isn't your job anymore.
That matters more than it might sound, because I'm not a developer. I've spent twenty-odd years in and around digital delivery, leading teams, scoping projects, understanding the shape of what good looks like, but I've never been the person hands-on-keyboard writing C# at one in the morning. Historically, to ship something like this, I would have needed to find a developer, brief them, wait for them, and hope I'd communicated clearly enough that what came back resembled what I'd asked for.
Tonight, I just had the idea, the plan, and the tool.
The plan was what made the difference, not the typing. I went in with a proper product brief, a rough hour-by-hour breakdown, and a small set of test examples I'd prepared covering obvious spam, plausible AI outreach, and genuine enquiries. Claude Code did the actual building. I did the thinking, the steering, and the testing.
What actually happened
None of it went quite as I'd written it down.
The biggest surprise was how much the world had moved on from the version of reality I was carrying in my head. Things I'd assumed worked one way had been changed, renamed, or replaced entirely in newer versions of the platform. There's something quite humbling about discovering all of that in real time, with a clock running and people drifting over to ask how it's going.
This is where the tool genuinely earns its keep. When something didn't work the way the documentation said it should, I didn't have to dig through forum threads at midnight or guess at the right call to make. I'd describe the symptom, Claude Code would write a tiny test probe to confirm what was actually happening, and within a minute or two we'd know whether the real platform behaved the way the docs said it did. Spoiler: it often didn't.
That probably sounds obvious. It isn't. The temptation under time pressure is always to skip the small experiment and just start building the thing, and almost every time you regret it.
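For a sense of scale, a probe in that spirit can be as small as this. The endpoint below is hypothetical; the real probes poked at whatever piece of the platform we were unsure about at the time:

```csharp
// A throwaway probe: hit the one thing you're unsure about and print
// exactly what comes back, before building anything on top of it.
// The endpoint below is hypothetical.
using System;
using System.Net.Http;

using var client = new HttpClient();
var response = await client.GetAsync("https://localhost:44331/umbraco/api/ping");
Console.WriteLine($"Status: {(int)response.StatusCode}");
Console.WriteLine(await response.Content.ReadAsStringAsync());
```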
What I built in
The other thing I leaned into early was a safety net.
A spam filter that breaks your contact form is worse than the spam itself, and I wanted that property baked in from the first commit rather than bolted on later. So the whole thing is wrapped in a "default-pass" guarantee. If the AI takes too long to respond, the submission goes through. If the response comes back garbled, the submission goes through. If anything at all goes wrong, the submission goes through.
The AI scoring is a quality-of-life feature, not a hard gate. That mindset shaped a lot of the design.
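In sketch form, that guarantee is a thin wrapper around the scorer from earlier (again, my illustrative names, not the package's actual code):

```csharp
using System;
using System.Threading.Tasks;

// "Default-pass" wrapper around the illustrative ISpamScorer from before:
// if the AI call is slow, garbled, or throws, the submission goes through.
public class FailOpenScorer : ISpamScorer
{
    private readonly ISpamScorer _inner;
    private readonly TimeSpan _timeout;

    public FailOpenScorer(ISpamScorer inner, TimeSpan timeout)
        => (_inner, _timeout) = (inner, timeout);

    public async Task<double> ScoreAsync(FormSubmission submission)
    {
        try
        {
            // WaitAsync (.NET 6+) throws TimeoutException if the AI takes too long.
            double score = await _inner.ScoreAsync(submission).WaitAsync(_timeout);

            // A garbled response (a score outside 0 to 1) also defaults to pass.
            return score is >= 0.0 and <= 1.0 ? score : 0.0;
        }
        catch
        {
            // If anything at all goes wrong, treat the submission as genuine.
            return 0.0;
        }
    }
}
```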
I also added a master kill switch and a configurable threshold, so anyone installing this can decide for themselves how aggressive they want it to be, or turn it off entirely without uninstalling. Small thing, but it matters. The worst thing a package can do is take options away from the people using it.
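Both of those controls amount to very little code. As an options class, something like this (illustrative keys, bound from appsettings.json in the usual ASP.NET Core way):

```csharp
// Illustrative settings shape for the kill switch and threshold;
// the real package's configuration keys may differ.
public class SpamFilterOptions
{
    // Master kill switch: set to false to bypass scoring entirely,
    // no uninstall required.
    public bool Enabled { get; set; } = true;

    // How aggressive to be: submissions scoring at or above this
    // value (0.0 to 1.0) are held back for review.
    public double BlockThreshold { get; set; } = 0.8;
}
```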
None of those decisions were the tool's. They were mine. The tool just made them happen.
What I learned
The headline lesson, for me, isn't really about spam filtering or contact forms. It's about what the role of the person sitting in front of these AI tools is actually becoming.
You don't need to be able to write the code anymore. You need to be able to define the problem clearly, design the safety nets, spot when the output is heading in the wrong direction, and know when to stop and ask a better question. That's a different skill set to the one our industry has spent the last fifteen years hiring for, and it's one that experienced engineers, product people, and consultants are already quietly very good at, even if they've never thought of themselves as builders.
The gap between "I understand this conceptually" and "I can ship this in two hours" used to be made up of the typing. The actual writing of the code. The bit you had to either learn or pay someone else to do.
That gap has narrowed, sharply.
What's left in the gap is the harder, more interesting stuff. The judgement about what to build. The discipline to test your assumptions against reality. The instinct for when something is about to go wrong. The willingness to keep going when the first three attempts didn't work. None of that gets easier with better tools. If anything, it matters more, because the tools will happily produce something plausible and broken if you don't know what you're doing.
The reason I got over the line tonight wasn't that I had Claude Code. It was that I knew exactly what I wanted to build, why, and what good looked like along the way.
Where it lands
The package scored eight and a half out of nine on the test corpus. All the block decisions were correct. The legitimate enquiries flowed through normally, the spam got held back, and the flagged entries showed up in a dashboard a moment later for review.
There's a clear roadmap of next steps already forming in my head. Editor feedback loops so you can mark a flagged entry as a false positive and have the model learn from it. Webhooks out to Slack so a flagged entry can ping a channel for review. Per-form custom prompts for sites running very different kinds of enquiries. Stats over time. None of that was going to fit in two hours, and that's fine, because the point of an evening like tonight isn't to finish the thing. It's to prove the thing is possible.
The bit that's stayed with me, walking home, isn't the technical journey. It's that someone with no coding ability, but with twenty years of knowing what good software looks like, can now sit down in an evening and produce a real working product.
That's a genuinely new thing in our industry, and I don't think most people have fully felt the shape of it yet.
Two hours. One working product. Not a line of code typed by me.
What would you point a two-hour hackathon at, if you knew you could actually finish it?