The Most Expensive Lesson in Software Development
Most startups don't fail because they wrote bad code. They fail because they wrote the wrong code, and they wrote too much of it before a single real user ever touched it.
I've been building software for over 20 years. I've watched well-funded teams spend eight months perfecting a product that the market didn't want. I've also watched scrappy two-person teams ship a rough but working product in six weeks, learn what users actually needed, and iterate into something genuinely successful.
The difference wasn't talent. It wasn't funding. It was the discipline to build a Minimum Viable Product — and the experience to know what that actually means in practice.
This isn't a theoretical guide. It's what we've learned from dozens of real MVP engagements across fintech, healthtech, SaaS, and marketplaces. I'm going to walk you through the honest version: what works, what founders get wrong, and the specific technical decisions that determine whether your MVP becomes a product or a post-mortem.
What an MVP Actually Is (and Isn't)
Let's start with the definition people get wrong.
An MVP is not a prototype. It's not a demo. It's not a "version with bugs we'll fix later." An MVP is the smallest thing you can ship to real users that tests your core assumption.
That last part is critical: tests your core assumption. Every MVP has a hypothesis at its centre. Before you write a single line of code, you should be able to complete this sentence: "We believe that [target user] will [do this thing] because [this is true about the world]."
If you can't complete that sentence clearly, you're not ready to build. You're building on a foundation of fog.
The five types of MVPs we actually build
1. Single Feature MVP
The most common and usually the most effective. Strip everything down to the one feature that proves or disproves the core hypothesis. Everything else — the dashboard, the settings, the onboarding flow — gets cut. You add those back after you've validated that people care about the core thing.
2. Clickable Prototype MVP
Not functional code — a high-fidelity mockup built in Figma or similar. Used when the question isn't "can we build it" but "will users engage with this flow." Useful for early fundraising too.
3. Fake Door MVP
You build the marketing page and the sign-up flow but not the product. You measure conversion rates, email captures, and user intent before writing a line of backend code. This is underused and massively underrated.
4. Pre-order / Crowdfunding MVP
Validates demand and generates early revenue before you've committed to full development. Appropriate for physical products and some consumer software.
5. Minimum Lovable Product (MLP)
A step up from minimum viability — you build something that early adopters will genuinely love, not just tolerate. Higher build cost, but can accelerate word-of-mouth growth in the right markets.
For most early-stage startups, the answer is option one. Start there.
The 8-Week Framework We Use
Here's the structure we follow on every MVP development engagement. It's not rigid — every product is different — but the phases are consistent.
Weeks 1–2: Discovery and Scoping
This is the most important part of the process, and it's the part clients most often want to skip. Don't skip it.
Discovery is where we:
- Define the core hypothesis in writing
- Identify the single riskiest assumption (not all assumptions — the riskiest one)
- Map the user journeys that matter for the MVP
- Make explicit decisions about what's out of scope
- Choose the technology stack
On the technology side, our default choices are deliberate:
- Frontend: Next.js with TypeScript. SSR out of the box, strong typing reduces bugs, and the ecosystem is mature.
- Backend: Python with FastAPI or Django depending on complexity. FastAPI for lightweight APIs, Django when you need the ORM, admin panel, and auth scaffolding.
- Database: PostgreSQL almost always. It's boring, reliable, and scales further than most MVPs will ever need.
- Infrastructure: AWS or Azure depending on the client's existing relationships. We use managed services aggressively — RDS, ECS, S3 — because we're not in the business of managing servers for an MVP.
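To make that concrete, here's a minimal sketch of the kind of starting point this stack implies: a FastAPI app backed by PostgreSQL through SQLAlchemy. The User model and connection string are illustrative placeholders, not part of any specific engagement.

```python
# Minimal FastAPI + PostgreSQL starting point (illustrative sketch;
# the User model and DATABASE_URL are hypothetical placeholders).
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

DATABASE_URL = "postgresql://app:secret@localhost/mvp"  # e.g. an RDS endpoint in production

engine = create_engine(DATABASE_URL)
SessionLocal = sessionmaker(bind=engine)
Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)

Base.metadata.create_all(engine)  # fine for an MVP; move to migrations (Alembic) as the schema evolves

app = FastAPI()

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

@app.get("/users/{user_id}")
def read_user(user_id: int, db: Session = Depends(get_db)):
    user = db.get(User, user_id)
    if user is None:
        raise HTTPException(status_code=404, detail="User not found")
    return {"id": user.id, "email": user.email}
```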
The technology choices at MVP stage matter less than people think. What matters is using tools your team knows deeply. Introducing unfamiliar technology to ship faster is a trap.
Weeks 3–4: Design and Architecture
Two things happen in parallel here: UI/UX design and technical architecture.
On the design side, we produce high-fidelity mockups for every screen in scope before writing any code. This sounds slow. It isn't. Every hour spent resolving design ambiguity in Figma saves three hours of code changes later.
On the architecture side, we make decisions that will either make your life easier or haunt you at scale:
Keep the service boundary simple. Unless your MVP has genuinely different scaling requirements for different parts of the system, run it as a monolith. Not a microservices architecture — a monolith. You can extract services later when you understand where the load actually is.
Build for the 10x case, not the 100x case. Your architecture should handle 10x your expected initial load without changes. Planning for 100x from day one is premature optimisation that will slow you down.
Implement auth properly from the start. JWT with refresh tokens, proper session management, and the ability to add SSO later. Auth done wrong creates technical debt that's genuinely painful to fix.
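As a sketch of what "JWT with refresh tokens" means in practice, here's the shape of it using PyJWT. The secret and token lifetimes are illustrative values, not recommendations for your threat model.

```python
# Sketch of JWT issuance: short-lived access tokens, long-lived refresh
# tokens (PyJWT; SECRET and the lifetimes are illustrative values).
import uuid
from datetime import datetime, timedelta, timezone

import jwt  # PyJWT

SECRET = "change-me"  # in production, load from a secrets manager, never source control
ACCESS_TTL = timedelta(minutes=15)
REFRESH_TTL = timedelta(days=30)

def issue_tokens(user_id: str) -> dict:
    now = datetime.now(timezone.utc)
    access = jwt.encode(
        {"sub": user_id, "type": "access", "iat": now, "exp": now + ACCESS_TTL},
        SECRET, algorithm="HS256",
    )
    # The jti claim lets you revoke individual refresh tokens server-side later
    refresh = jwt.encode(
        {"sub": user_id, "type": "refresh", "jti": str(uuid.uuid4()),
         "iat": now, "exp": now + REFRESH_TTL},
        SECRET, algorithm="HS256",
    )
    return {"access_token": access, "refresh_token": refresh}

def refresh_access_token(refresh_token: str) -> str:
    # Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad input
    claims = jwt.decode(refresh_token, SECRET, algorithms=["HS256"])
    if claims.get("type") != "refresh":
        raise jwt.InvalidTokenError("not a refresh token")
    return issue_tokens(claims["sub"])["access_token"]
```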
Make your data model extensible. The schema you build for the MVP will evolve. Build it with that assumption in mind. Use nullable columns sparingly but strategically. Version your APIs from v1.
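Versioning from v1 can be as simple as a path prefix. A minimal FastAPI sketch (the endpoint itself is a hypothetical example):

```python
# Versioning the API from day one via a path prefix (FastAPI;
# the endpoint is a hypothetical example).
from fastapi import APIRouter, FastAPI

v1 = APIRouter(prefix="/v1")

@v1.get("/reconciliations/{rec_id}")
def get_reconciliation(rec_id: int):
    return {"id": rec_id, "status": "pending"}

app = FastAPI()
app.include_router(v1)
# When a breaking change is needed, mount a v2 router alongside v1
# instead of breaking existing clients in place.
```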
Weeks 5–7: Development
Three weeks of focused building. The architecture decisions made in weeks 3–4 pay off here.
A few development practices we enforce on every MVP engagement:
Daily deployments to a staging environment. The client can see real progress every day. This also forces us to keep the codebase in a deployable state, which is good discipline.
Feature flags from day one. We use LaunchDarkly or a simple homegrown equivalent. This lets us ship code that isn't live yet, run gradual rollouts on launch day, and A/B test features with real users post-launch.
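A "simple homegrown equivalent" really can be a few dozen lines. Here's a sketch of deterministic percentage rollouts; the flag names and percentages are illustrative:

```python
# Homegrown feature flags with deterministic percentage rollouts.
# Hashing the user id means a given user stays in or out of a rollout
# consistently across requests. Flag names and percentages are illustrative.
import hashlib

FLAGS = {
    "new_onboarding": 100,   # fully on
    "bulk_export": 25,       # 25% gradual rollout
    "ai_summaries": 0,       # shipped dark, not live yet
}

def is_enabled(flag: str, user_id: str) -> bool:
    rollout = FLAGS.get(flag, 0)
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout

# Usage:
# if is_enabled("bulk_export", current_user.id):
#     ...serve the new code path...
```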
No gold-plating. This is a cultural thing more than a technical thing. Engineers naturally want to build things properly. On an MVP, "properly" means "to the standard required to validate the hypothesis" — not to the standard required to run a Fortune 500 company. The job is to learn, not to build the perfect system.
Automated tests for the critical path only. We write tests for the user flows that, if broken, would fundamentally undermine the product. We don't aim for 80% coverage on an MVP. Coverage is a vanity metric at this stage. Confidence in the things that matter is what counts.
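Here's what a critical-path test looks like under that philosophy, sketched with FastAPI's TestClient. The signup and reconciliation endpoints are hypothetical stand-ins for whatever your core flow is.

```python
# One critical-path test: if this flow breaks, the MVP is broken.
# The /v1/signup and /v1/reconciliations endpoints are hypothetical.
from fastapi.testclient import TestClient

from app.main import app  # assumed application module

client = TestClient(app)

def test_signup_and_core_flow():
    # A new user can sign up...
    resp = client.post(
        "/v1/signup",
        json={"email": "test@example.com", "password": "s3cret!"},
    )
    assert resp.status_code == 201
    token = resp.json()["access_token"]

    # ...and complete the one action the hypothesis depends on.
    resp = client.post(
        "/v1/reconciliations",
        json={"source": "bank_feed"},
        headers={"Authorization": f"Bearer {token}"},
    )
    assert resp.status_code == 201
```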
Week 8: Testing, Launch, and Hypercare
The final week is split between QA, deployment, and the first few days of live operation.
QA on an MVP is not the same as QA on a mature product. We're looking for bugs that would prevent a user from completing the core flows. We're not looking for pixel-perfect rendering on every browser at every resolution.
On deployment, we use blue-green deployments even on day one. It adds maybe two hours to the infrastructure setup and means that if something goes wrong post-launch, rollback takes 90 seconds.
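On AWS, with an Application Load Balancer in front of the two environments, that rollback is a single API call that repoints the listener. A sketch with boto3, where the ARNs are placeholders for your own setup:

```python
# Blue-green cutover on AWS: repoint the ALB listener at the other
# target group. Rollback is the same call with the colours swapped.
# The ARNs are placeholders for your environment.
import boto3

elbv2 = boto3.client("elbv2")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/mvp/..."
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/mvp-green/..."

def cut_over(target_group_arn: str) -> None:
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
    )

cut_over(GREEN_TG_ARN)  # go live on green; pass the blue ARN to roll back
```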
The first two weeks post-launch are what we call hypercare. We have an engineer available to respond to production issues within the hour. Most critical bugs surface in the first 72 hours of real user traffic. Having someone on hand during that window is not optional.
The Five Mistakes That Kill MVPs
In two decades of building software, these are the failure modes I've seen most often.
1. Scope Creep Before Launch
The most common killer. The initial scope is defined, everyone agrees, and then — gradually, through a series of individually reasonable conversations — features get added. Each addition feels justified. "Users will definitely want this." "It'll only take a day." "We can't launch without it."
Six months later the product is three times the size of the original scope, still in development, and the market has moved on.
The antidote is a written, signed-off scope document that requires formal change control to modify. Not because bureaucracy is good, but because it creates friction that makes people think twice before adding something.
2. Building Features Instead of Testing Assumptions
An MVP's job is to validate or invalidate a hypothesis. Every feature should exist in service of that goal.
If your hypothesis is "enterprise finance teams will pay for automated reconciliation," your MVP needs to answer that question. It does not need a mobile app. It does not need custom branding. It does not need a team management system. Build the thing that tests the assumption. Ship everything else to the backlog.
3. Waiting for Perfect Before Shipping
There is no perfect. There is only "good enough to learn from." Every week you spend polishing before launch is a week you're not getting data from real users. That data is worth more than any amount of internal refinement.
Ship when the core flow works. Then learn. Then improve.
4. Choosing the Wrong Technology to Impress Investors
I've seen founders insist on Rust, or a distributed event-sourced architecture, or a microservices setup with 12 services, for an MVP that serves 50 beta users. The motivation is usually "this is what serious companies use."
Serious companies use serious technology because they have serious-scale problems. You don't have those problems yet. Use the technology your team knows, ship fast, validate, and scale the architecture when you have a reason to.
5. Not Talking to Users After Launch
The MVP is not the destination. It's the mechanism for generating learning. If you ship the MVP and then spend the next month in engineering meetings, you've missed the point.
In the first month post-launch, the founder or product lead should be talking to real users every day. Not reading analytics dashboards — actually talking to people. "What did you try to do?" "Where did you get stuck?" "What would make you pay for this?"
That information is irreplaceable.
What Comes After the MVP
The MVP is a learning device. The product that comes after it should be informed by what you learned.
Assuming the core hypothesis is validated, the post-MVP phase is about:
- Expanding depth in validated areas: The features users actually used and asked to have improved
- Removing or deprioritising validated failures: The things you built that nobody used
- Addressing the technical debt that matters: Not all technical debt is equal; fix the debt that's slowing down your ability to iterate
- Building for scale where you now have evidence you need it: Don't guess at scale constraints — measure them
Most clients who come to us after a successful MVP launch stay with us for the first phase of post-MVP development. The reason is continuity — the team that built the MVP has context that a new team would spend weeks acquiring.
The Build vs. Buy Decision at MVP Stage
One of the most consequential decisions at the MVP stage is what to build yourself versus what to buy or integrate.
General rule: if a service exists that solves the problem adequately, use it. Your job is to validate your core hypothesis, not to rebuild Stripe, or Twilio, or SendGrid, or Auth0.
The exceptions: when the thing you're building around is the differentiated technology. If your core hypothesis is about a novel approach to payment processing, you might need to build that yourself. But for everything else — auth, email, payments, notifications, storage — buy it.
At Lycore, our default integration stack for MVPs includes:
- Auth: Auth0 or Supabase Auth
- Payments: Stripe
- Email: Resend or SendGrid
- File storage: AWS S3 or Cloudflare R2
- Feature flags: LaunchDarkly or Statsig
- Error tracking: Sentry
- Analytics: PostHog (self-hosted or cloud)
This stack gives you production-grade infrastructure on day one for a fraction of the cost of building it yourself.
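To show just how little code "buy it" means, here's a sketch of taking payment through Stripe Checkout; the price ID and URLs are placeholders:

```python
# "Buy, don't build" in practice: accepting payment via Stripe Checkout
# is a handful of lines. The price ID and URLs are placeholders.
import stripe

stripe.api_key = "sk_test_..."  # from your Stripe dashboard

session = stripe.checkout.Session.create(
    mode="subscription",
    line_items=[{"price": "price_123", "quantity": 1}],
    success_url="https://example.com/welcome?session_id={CHECKOUT_SESSION_ID}",
    cancel_url="https://example.com/pricing",
)
print(session.url)  # redirect the user here to pay
```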
A Note on IP and Code Ownership
This comes up in almost every conversation we have with founders, and it deserves a direct answer.
When you work with a development partner, you should own everything: the code, the designs, the infrastructure configurations, the domain models, the data. Not after the project ends — from day one.
This means no proprietary frameworks that you can't escape from. No code that only runs on the development partner's infrastructure. No licensing arrangements that tie you to a vendor after the engagement.
This is standard in every engagement we run at Lycore, not an optional upgrade. If a development partner isn't willing to commit to this in writing, that's a serious warning sign.
Final Thoughts
Building an MVP is a discipline, not a deadline. It requires the ability to say no to things that feel important but aren't essential. It requires shipping before you're comfortable. It requires staying close to users after launch even when engineering work is calling.
Done well, an MVP compresses months of learning into weeks. It tells you, with real data, whether your hypothesis is correct — before you've committed the full budget.
Done poorly, it's just a slow product launch.
The difference is almost never about technical talent. It's about process, discipline, and a clear understanding of what you're actually trying to learn.
If you're at the stage where you're thinking seriously about building an MVP and want to talk through your specific situation, our team at Lycore offers a no-commitment discovery call where we'll help you define scope, validate your approach, and give you an honest assessment of what it will take to ship.
Building something? Have questions about the approach? Drop them in the comments — happy to discuss.

