How mid-sized companies can approach system modernisation without the budget, timelines, or risk tolerance of a large enterprise programme.
If your company has been around for more than five years and has grown faster than its technology, you probably have at least one system in a difficult position.
Not broken. Not urgent. But slowing you down in ways that are hard to quantify and even harder to justify fixing when there is always something more pressing on the backlog.
It might be the billing system that three engineers built in the early days and that now handles enough revenue that nobody wants to touch it.
The internal tool that began as a quick solution to a real problem and has since become load-bearing infrastructure with no documentation and one person who understands it. The database schema that made sense in year one and now has eight years of business logic buried in stored procedures.
These systems accumulate in every growing company. They are not failures. They are the natural consequence of building quickly and shipping often — which is the right thing to do when you are finding product-market fit.
The problem is that they accrue cost quietly. Not in crashes, but in the hours your engineers spend working around limitations instead of building new capability. In the features that cannot be built because the data model does not support them. In the new team members who take weeks longer than expected to become productive because the system has no documentation and the knowledge lives in two people's heads.
At a certain point, the accumulated cost of leaving these systems in place exceeds the cost of addressing them. Most growing companies reach this point and do not realise it until they are already past it.
This article is a practical guide to recognising that point — and approaching modernisation in a way that is realistic for a team that cannot stop to do a two-year programme.
Why the Typical Modernisation Story Does Not Apply
Most writing about legacy modernisation is aimed at large enterprises. The advice is calibrated for organisations with dedicated programme management offices, multi-year transformation budgets, and the organisational bandwidth to run a modernisation programme alongside normal delivery.
Mid-sized companies — typically in the 30 to 300 engineer range — operate under completely different constraints.
The engineering team is fully utilised. There is no spare capacity waiting to be redirected to a modernisation effort. Every sprint is already committed to product work, and the backlog is longer than the team can address in any realistic timeframe.
The budget is real but bounded. A mid-sized company can fund meaningful modernisation work, but not at the cost of product delivery. The business will not accept a six-month pause in feature development while the engineering team rebuilds the billing system.
The risk tolerance is lower than it appears. A failed modernisation at a large enterprise is painful and expensive. A failed modernisation at a mid-sized company — one that takes longer than expected, disrupts operations, and consumes the engineering team's attention — can genuinely threaten the business.
The approach that works for mid-sized companies is not a smaller version of what large enterprises do. It is a fundamentally different approach: incremental, scoped to the highest-cost problems first, and structured to run alongside product development rather than replacing it.
The First Step: Understand What You Are Actually Paying
Before deciding how to approach modernisation, it is worth establishing what the current state is actually costing.
This is not about creating a business case document for a board presentation. It is about building a clear picture — for yourself and your team — of where the real drag is coming from.
The costs that matter are not the dramatic ones. They are the quiet, recurring ones.
Engineering time spent on maintenance and workarounds. How many hours per week does your team spend on work that is purely a consequence of the current system's limitations — patching, debugging issues that stem from architectural decisions made years ago, building manual processes to compensate for integration gaps? Even a conservative estimate is usually surprising.
Deployment friction. How long does it take to ship a change to the systems in question? If the answer is measured in days rather than hours, there is a real cost in delivery velocity that compounds across every feature, every bug fix, and every customer request.
Onboarding drag. How long does it take a new engineer to become independently productive on the systems in question? For systems with high technical debt and low documentation, this is often measured in months — which is a significant cost per hire that does not appear on any balance sheet.
Feature limitations. Are there capabilities the product team has been asking for that cannot be built without changes to the foundational system? The cost of delayed or impossible product work is harder to quantify but often the most significant.
Adding these up does not require precision. An order-of-magnitude estimate is sufficient to answer the question: is the cost of the status quo larger than the cost of addressing it? For most mid-sized companies with systems that have been accumulating debt for three or more years, the answer is yes — by a margin that is not close.
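To make the order-of-magnitude exercise concrete, the categories above can be combined in a back-of-envelope model. All of the figures below are illustrative assumptions, not benchmarks; substitute your own estimates.

```python
# Back-of-envelope model of the recurring cost of the status quo.
# Every number here is an illustrative assumption -- replace with your own.

HOURLY_RATE = 75                  # fully loaded cost per engineering hour (assumption)

maintenance_hours_per_week = 60   # team-wide hours on workarounds and patching
deploy_delay_hours = 16           # extra lead time per release vs. a healthy pipeline
releases_per_week = 2
onboarding_extra_weeks = 8        # extra ramp-up per hire vs. a documented system
hires_per_year = 6

weekly_cost = (
    maintenance_hours_per_week * HOURLY_RATE
    + deploy_delay_hours * releases_per_week * HOURLY_RATE
)
annual_cost = (
    weekly_cost * 48                                            # working weeks
    + onboarding_extra_weeks * 40 * HOURLY_RATE * hires_per_year
)

print(f"Rough annual cost of the status quo: ${annual_cost:,.0f}")
# → Rough annual cost of the status quo: $475,200
```

Note that feature limitations are deliberately excluded here: the cost of delayed product work is real but resists this kind of arithmetic. Even without it, the recurring costs alone usually answer the question.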
How AI Has Changed the Assessment Problem
Historically, the hardest part of any modernisation effort was the assessment phase — understanding what the system actually does, how it does it, and where the boundaries and dependencies lie.
For a system that has been evolving for years, this is genuinely difficult. The documentation is incomplete or nonexistent. The engineers who built the original version may have left. The codebase has been modified by many hands, often under time pressure, and the current behaviour is not always what the code appears to suggest.
The traditional approach was to spend weeks or months on manual assessment — reading code, interviewing engineers, mapping dependencies by hand, and gradually building a mental model that was inevitably incomplete.
This is no longer the only option.
LLM-based code analysis tools can now process an entire codebase in hours, identifying dependency clusters, service boundaries, integration points, dead code, and architectural patterns with a coverage and consistency that manual review cannot match at the same speed. For a mid-sized company with a monolithic application or a tightly coupled service architecture, this changes the economics of the assessment phase substantially.
An assessment that previously required weeks of senior engineering time — and was still incomplete — can now be produced in days, with higher coverage and a structured output that supports the decisions that follow.
For teams that cannot afford to spend months on assessment before beginning any delivery work, this matters. The diagnostic phase, which was previously a significant cost and timeline risk in itself, is now a tractable starting point.
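Even before bringing LLM tooling to bear, a purely static pass can produce the raw dependency map that the analysis builds on. The sketch below is not an LLM analysis; it is a conventional static starting point for a Python codebase, using the standard library's `ast` module. The directory path is hypothetical.

```python
# Static dependency sketch for a Python codebase: map each module file to
# the top-level packages it imports. A starting point that complements,
# not replaces, LLM-based analysis.
import ast
from collections import defaultdict
from pathlib import Path

def module_imports(root: str) -> dict[str, set[str]]:
    """Walk every .py file under root and record its top-level imports."""
    deps: dict[str, set[str]] = defaultdict(set)
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    deps[str(path)].add(alias.name.split(".")[0])
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps[str(path)].add(node.module.split(".")[0])
    return dict(deps)
```

Feeding a map like this into an LLM alongside the code itself is what lifts the output from "what imports what" to "where the service boundaries actually are."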
A Realistic Modernisation Approach for Mid-Sized Teams
The following approach is designed for engineering teams that are building product simultaneously — not for teams that can dedicate full capacity to a modernisation programme.
Start with the highest-cost problem, not the largest system.
The instinct is often to start with the most visible system, the oldest system, or the system that generates the most complaints. This is not necessarily the right starting point.
The right starting point is the system whose current state is costing the most — in engineering time, in delivery friction, in feature limitations, or in business risk. That cost calculation, done honestly, will usually point to a specific component or subsystem rather than the entire platform.
Scoping to the highest-cost problem first keeps the programme deliverable within a realistic timeframe and produces measurable value before the effort expands to adjacent areas.
Prefer incremental over complete replacement.
For a team that cannot stop product delivery to run a modernisation programme, the Strangler Fig approach — progressively replacing components one at a time while the existing system remains operational — is almost always the right structural choice.
The logic is straightforward: the existing system, however imperfect, is running in production and serving customers. Replacing it incrementally means that at every point in the programme, there is a working system. The risk is bounded. If a phase takes longer than expected, the business continues to operate. The replacement can be paused, adjusted, or reprioritised without a crisis.
Complete replacement — rewriting the system from scratch — removes these safety properties. The old system and the new system exist in parallel, the old system cannot be retired until the new one is complete, and the programme is committed to a scope and timeline that were defined up front, against an understanding of the system that only improves as the new build progresses.
For most mid-sized companies, the risk profile of complete replacement is not compatible with the operational constraints of a team that is simultaneously running a product.
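The mechanical core of the Strangler Fig approach is a routing facade: requests for components that have been migrated go to the new system, and everything else falls through to the legacy one. A minimal sketch, with hypothetical service URLs and route names:

```python
# Strangler Fig routing facade (sketch). Migrated endpoints are served by
# the new system; everything else falls through to the legacy system.
# The URLs and path prefixes below are hypothetical.

MIGRATED_PREFIXES = ("/invoices", "/payments/refunds")  # cut over so far
LEGACY_BASE = "http://legacy.internal:8080"             # existing system
NEW_BASE = "http://billing-v2.internal:8080"            # incremental replacement

def route(path: str) -> str:
    """Return the upstream base URL that should serve this request path."""
    if path.startswith(MIGRATED_PREFIXES):
        return NEW_BASE
    return LEGACY_BASE
```

In practice the facade is usually an API gateway or reverse proxy rather than application code, but the property is the same: each cutover is a one-line routing change, and rolling back a phase means reverting that line rather than reverting a system.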
Treat data migration as a separate workstream, not a final step.
Data migration is the most common source of unexpected cost and timeline extension in any modernisation programme. It is also the workstream that is most frequently underscoped.
The problem is not moving data from one database to another. The problem is that the data in the existing system almost certainly contains inconsistencies, anomalies, and structural decisions that made sense at the time and now represent gaps between what the data says and what the business currently requires.
Running data quality assessment in parallel with the early phases of the modernisation — rather than treating it as a final migration step — surfaces these issues when there is still time to address them as design decisions rather than as blockers at go-live.
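A data-quality assessment does not need heavyweight tooling to start; a first profiling pass can be a short script that counts the anomalies that will otherwise surface at go-live. The sketch below uses illustrative column names and rules:

```python
# Minimal data-quality profile (sketch): count missing values and
# duplicate identifiers before designing the target schema.
# Column names and rules are illustrative, not prescriptive.
from collections import Counter

def profile(rows: list[dict], required: list[str]) -> dict:
    """Return a count of data-quality issues found across the given rows."""
    issues: Counter = Counter()
    seen_ids = set()
    for row in rows:
        for col in required:
            if row.get(col) in (None, ""):
                issues[f"missing:{col}"] += 1
        rid = row.get("id")
        if rid in seen_ids:
            issues["duplicate:id"] += 1
        seen_ids.add(rid)
    return dict(issues)
```

The output of a pass like this is a list of design questions ("what does a record with no email mean to the business today?") that are far cheaper to answer in phase one than during the migration window.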
Build documentation as you go, not at the end.
One of the most valuable outcomes of a modernisation programme is a system that is actually understood — with documentation, decision records, and operational runbooks that allow any engineer on the team to work on it productively.
This outcome only materialises if documentation is treated as a deliverable throughout the programme, not as a task to complete before handover. The engineers doing the work are the ones who understand what they built and why. Capturing that understanding at the time is a fraction of the cost of reconstructing it later.
What to Expect in Practice
A well-structured incremental modernisation programme for a mid-sized company typically proceeds in phases of eight to twelve weeks each, with each phase delivering a discrete, testable improvement to a specific component.
The first phase is invariably the most uncertain — not because the work is harder, but because the understanding of the current state is still incomplete. The AI-assisted assessment changes this, but it does not eliminate the learning that happens when engineers begin working in the codebase in earnest. Budget more time for the first phase, and treat its output as a revised plan for the phases that follow.
By the third or fourth phase, the team has established patterns, the codebase is better understood, and delivery velocity typically improves. The initial phases feel slow. The later phases feel fast. This is normal and expected.
The business will see measurable improvements — faster deployments, reduced incident rates, faster onboarding — before the programme is complete. These are the proof points that justify continued investment and that make the case for expanding the programme scope if the initial results warrant it.
The Decision to Start
The organisations that are in the best position to modernise are not the ones with the most technical debt. They are the ones that recognise the cost of waiting and make a deliberate decision to address it — with a realistic scope, a realistic approach, and a clear understanding of what success looks like before the first sprint begins.
For most growing companies, that decision is not a dramatic one. It does not require a board presentation or a multi-million dollar budget. It requires an honest conversation about what the current state is costing, a scoped starting point that the team can execute alongside product delivery, and a commitment to incremental progress over comprehensive transformation.
The systems that nobody wants to touch do not improve on their own. They accrue cost. The decision to address them is a decision to reclaim that cost — gradually, without disruption, and on a timeline the business can support.
WiseAccelerate works with growing engineering teams on practical modernisation — from initial assessment through incremental delivery and knowledge transfer. AI-native engineers. Full-stack capability. Scoped to what your team can actually execute.
→ What does the system in your organisation that everyone knows needs attention actually cost you per month? I'm interested in how other engineering leaders are quantifying this.