Creating a culture of trust, ownership, and data-driven continuous experimentation—now accelerated by AI
The Infinite Loop (L∞P) was introduced in 2023 as a software development methodology that unified lessons from Agile, Lean UX, Kanban, DevOps, and Product-led growth. Its core philosophy—trust, ownership, outcomes over outputs, no arbitrary deadlines—was designed to create high-performance teams that could achieve flow state and deliver genuine customer value.
Three years later, AI has fundamentally changed how software is built. Large language models, agentic workflows, and AI-assisted development have compressed the time from idea to implementation. What took days now takes hours; what took hours now takes minutes.
But the principles of L∞P are not obsolete—they are more relevant than ever.
In 2023, the core problem was that companies underinvested in discovery and used time boxes that corrupted quality. Teams rushed to build without proper validation, and artificial deadlines led to technical debt accumulation and output-over-outcome thinking.
AI doesn't change this—it amplifies it. Teams that skip discovery will now ship bad products faster. The new constraint is verification: AI generates code quickly, but proving correctness requires human judgment and automation investment. The teams that automate verification fastest will ship fastest.
This update to L∞P acknowledges this shift while preserving the core philosophy that made it effective.
How AI Accelerates the Loop
AI transforms every phase of the product development cycle—but humans remain in control of judgment and validation.
Research & Discovery: AI synthesizes market data, customer feedback, and competitive intelligence faster than manual research. Teams can explore more hypotheses with the same effort.
Prototyping: AI generates functional prototypes that enable real user validation faster and more effectively than static UX mockups. Users interact with working software, not wireframes—leading to higher-quality feedback earlier.
Discovery → Development Transition: AI assists the translation from validated discovery to technical specification. It asks refinement questions, identifies gaps in implementation plans, surfaces edge cases, and highlights integration risks before development begins.
Implementation: AI accelerates code generation, but humans own architecture decisions and review all output. The role shifts from typing to steering and verifying.
Verification: AI cannot be 100% trusted with verification, but it accelerates it significantly—automated vulnerability scanning, test generation, code review assistance, and anomaly detection. Human judgment remains the final gate.
AI compresses time but does not replace human judgment. Discovery still requires validation with real users. Architecture still requires human design. Verification still requires human oversight. AI makes the loop faster—it doesn't make corners safe to cut.
Acceleration works both ways. If a team is leveraging AI correctly—investing in discovery, maintaining verification automation, addressing technical debt—everything works well and faster. But if things are going wrong, they go wrong faster too.
Technical debt that was accumulating before AI? Now it accumulates twice as fast. Vulnerabilities that went unnoticed? They multiply faster. Poor architectural decisions? They propagate through the codebase before anyone catches them. Teams skipping discovery? They ship the wrong product to customers in record time.
AI is an amplifier, not a corrector. It accelerates whatever trajectory you're already on. Teams with good practices win bigger. Teams with bad practices lose faster.
What Actually Changed (And What Didn't)
The fundamentals haven't changed as much as the hype suggests. The core problems remain: companies still underinvest in discovery, still use arbitrary deadlines, still optimise for outputs over outcomes. What changed is speed—iterations of the loop are faster, and estimation is even more useless than before.
But there's a cultural shift that will separate winners from losers.
The developer identity problem: Developers have long lived by "talk is cheap, show me the code." Code is part of our identity. Letting go of code ownership is painful—even traumatic. We obsess over clean code, elegant abstractions, and technical excellence. This isn't inherently wrong; good code matters.
But many developers fail to recognise "good bad code"—code that is technically excellent but ultimately harmful. Premature optimizations. Over-engineered abstractions. Architectural purity that overcomplicates the system and prevents faster value delivery. The code might be beautiful, but the user perceives no value, and the product loses to a scrappier competitor.
The uncomfortable truth: The winning product has rarely been the one with the best code. It's been the one with the most value—whether that's better UX, more features, faster iteration, or simply showing up first. Technical debt accumulated by winners gets paid down later. Technical perfection pursued by losers never ships.
The AI acceleration: In the new world, this dynamic intensifies. AI makes code generation cheap. Control freaks who obsess over every line will be left behind. And while it's fun to mock "vibe coders" who prompt their way to working software, some of them will win—because they start with value, not code perfection. The vibe coders who ship without any verification will fail fast. The winners are those who start with value AND invest enough in verification to sustain velocity.
The new developer role: One consequence of AI is that developers must become more heavily involved in discovery. When implementation is fast, understanding what to build matters more than how to build it. Developers who stay isolated from customers, business context, and user research will become bottlenecks—not because they're slow at coding, but because they lack the judgment to steer AI toward value. The developers who thrive will be those who invest in understanding the business, participate in user research, and can make product decisions on the fly.
This doesn't mean quality doesn't matter. It means quality serves value, not the other way around. The best teams will use AI to iterate faster toward value while maintaining just enough quality to sustain velocity. The worst teams will use AI to generate perfect code that nobody wants.
L∞P Principles
L∞P proposes twelve equally essential principles:
Customer-Centric: Everyone should be in constant direct contact with customers, understand their needs, and be obsessed with delivering value to them.
Value-Driven: The team is asked to deliver an outcome, not an output. The effectiveness and efficiency of the team are measured by the success of the customers, not by outputs (No Burn-down charts).
Product-Led: Remove silos between marketing, sales, customer success, and the product team.
Trust & Ownership: The product team is tasked with leading the customer to success and has total freedom to devise the optimal solution.
Flow-Friendly: There must be at least 50% allocated focus time on the calendar every day. This applies to both deep-thinking architectural work and AI-orchestrated development—both require protection from interruption.
No Estimates or Time Boxes: Use a pull-based system. Focus on one work item at a time. Discovery over planning. AI velocity is unpredictable, making estimation even less meaningful than before.
Cost Tracking Over Velocity Tracking: Instead of tracking velocity or estimating story points, track actual costs: team salaries, infrastructure, AI tokens. You know what you're spending weekly. Then measure outcomes: Activation Rate, Retention Rate, LTV, NPS, Feature Engagement. If customer satisfaction is improving or sustained, does velocity matter? The question isn't "how fast are we going?"—it's "are we delivering value relative to cost?" This reframes budget conversations from "will we hit the deadline?" to "is this investment generating returns?"
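As a back-of-the-envelope illustration of this reframing, weekly spend and an outcome delta can be combined into a single "value per cost" number. The class, the helper function, and every figure below are hypothetical, invented purely for illustration; they are not part of L∞P itself:

```python
from dataclasses import dataclass

@dataclass
class WeeklyCost:
    salaries: float        # fully loaded team salaries for the week
    infrastructure: float  # cloud, CI, monitoring
    ai_tokens: float       # LLM API spend

    @property
    def total(self) -> float:
        return self.salaries + self.infrastructure + self.ai_tokens

def retention_delta_per_1k(cost: WeeklyCost,
                           retention_before: float,
                           retention_after: float) -> float:
    """Outcome gained per $1,000 spent: a cost-vs-outcome lens, not velocity."""
    return (retention_after - retention_before) / (cost.total / 1000)

# Hypothetical numbers: what did this week cost, and what did retention do?
week = WeeklyCost(salaries=25_000, infrastructure=1_200, ai_tokens=300)
print(week.total)  # 26500
print(round(retention_delta_per_1k(week, 0.62, 0.65), 6))
```

The point of the sketch is the shape of the question: spend is known weekly, so the only open variable is whether the outcome metrics moved.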
Explicit Policies: Use templates for agendas and artefacts to prevent deviation from your processes. This extends to AI governance—establish clear policies for security review and quality standards for AI-generated output.
Clear Goals: The entire organisation should understand the business mission, vision, principles, and strategy.
Data-Driven: The decisions, direction, and work items are backed by data.
Pragmatic: Making decisions based on what is best for the project rather than just optimising for individual preferences or technical ideals.
Automation-First: Invest in automation before features. The team that automates verification, testing, and deployment will experience the full productivity gains of AI. A feature without automated verification is incomplete. Technical debt is addressed organically as part of ongoing work, not accumulated in a separate backlog.
L∞P Roles
"If you want to go fast, go alone; if you want to go far, go together" - African proverb
L∞P tries to balance collaboration and working as a team, so we can attempt to achieve goals that are bigger than ourselves (go far) with focus and alone time so we can get into the zone and be super productive (go fast). When we work together, our goal should be to remove unknowns and enable autonomy; then, we can go our separate ways and get stuff done.
The L∞P team structure is designed to ensure all disciplines are aligned and work without silos. Instead of having separate teams for product development, sales, marketing, and other functions, there is one cross-functional team in charge of discovery and delivery. This team integrates with sales and marketing by aligning goals and strategies around the product.
Discovery and delivery are not separate silos. Developers can propose hypotheses, build prototypes, and participate in user research—they are not just implementers waiting for specifications. Similarly, UX and product can contribute to technical discussions. The entire team owns the full cycle from idea to validated, live product.
The Product Manager is a key role in this structure, combining the roles of product owner and scrum master. The Product Manager is responsible for leading the team, making decisions that impact the product, and ensuring the team delivers maximum customer value efficiently.
UX plays a crucial role in the product-led growth organisation, responsible for the design and usability of the product. The UX team works closely with the Product Manager and Engineering to ensure that the product is easy to use and meets the customer's needs. AI accelerates prototype generation; UX spends more time on user validation than wireframing.
Architecture creates the blueprint for the product and ensures technical coherence. This role becomes more critical in the AI era—AI generates code fast but makes poor architectural decisions. Humans must own system design.
Engineering implements architecture decisions, builds verification systems, and maintains the product. The role shifts from "writing code" to "orchestrating AI, reviewing output, and building verification automation."
Sales & Marketing represents business functions that influence product perception and customer expectations. These teams work closely with the Product Manager to align go-to-market strategy with product capabilities, ensuring promises match what the product delivers.
AI orchestration is not a separate role. Every team member incorporates AI assistance into their existing responsibilities organically.
The Product Manager (PM)
The role of the product manager is the most critical one in the product team—and AI makes it more critical, not less. The PM is often seen as the proving ground for future CEOs, as the success or failure of a product falls on their shoulders. It's therefore important that the PM role is reserved for the best talent, with a combination of technical expertise, deep customer and business knowledge, credibility among stakeholders, market and industry understanding, and a passion for the product.
A PM must be smart, reactive, and persistent, with a deep respect for the product team. They should also be comfortable with using data and analytics tools to inform their decisions and drive the success of the product. The PM's main task is to ensure that only the most valuable work items reach the backlog, guiding the product team towards building solutions that deliver the greatest impact and customer value.
How AI transforms the PM role:
AI amplifies PM leverage but also amplifies the cost of poor judgment. When the team can build anything fast, deciding what to build becomes the primary bottleneck. The PM who chooses wrong wastes more resources faster.
Research at scale: AI synthesizes market data, customer feedback, and competitive intelligence. The PM can explore more hypotheses and validate faster—but must still make the judgment calls about what matters.
Saying no becomes harder: AI can generate infinite feature ideas, prototypes, and specifications. The PM must resist the temptation to build everything that's now "easy." The discipline of focus intensifies.
Specification quality gates: AI assists the translation from validated discovery to technical specifications, asking refinement questions and identifying gaps. But the PM validates that the specification actually captures customer value—AI can generate coherent specs for useless features.
Verification strategy ownership: Before work begins, the PM ensures a verification strategy exists. How will we know this feature works? How will we know users value it? AI accelerates verification, but the PM defines what "verified" means.
Faster feedback loops: AI-generated prototypes enable real user validation in hours instead of weeks. The PM must be ready to act on feedback immediately—there's no hiding behind "we'll fix it next sprint."
The PM role becomes less about managing process and more about making decisions under uncertainty. The teams that win will have PMs who can synthesize information fast, say no confidently, and move from validated learning to shipped value without hesitation.
L∞P Artefacts
In this section, we are going to take a look at the L∞P artefacts. We will mention common artefacts from other methodologies, clarify why we will not use them, and introduce some new ones.
✅ Mission and vision: The product mission and vision should be clearly articulated and documented. The team should not only know what the product aims to be but also what it is not aiming to be.
✅ Unified Backlog: A single backlog with tags to distinguish work types. We use a pull-based system—take the top item from the backlog. No separate backlogs for discovery and development.
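The mechanics of a pull-based unified backlog are simple enough to model in a few lines. This sketch, with invented item names and a WIP limit of one, only illustrates the behaviour described above:

```python
from collections import deque

class UnifiedBacklog:
    """One ordered backlog; tags distinguish work types; pull from the top."""
    def __init__(self, wip_limit: int = 1):
        self.items = deque()   # ordered by value, highest-value item first
        self.in_progress = []
        self.wip_limit = wip_limit

    def add(self, title: str, *tags: str):
        self.items.append({"title": title, "tags": set(tags)})

    def pull(self):
        """Take the top item, but only if we are under the WIP limit."""
        if len(self.in_progress) >= self.wip_limit:
            raise RuntimeError("WIP limit reached: finish current work first")
        item = self.items.popleft()
        self.in_progress.append(item)
        return item

backlog = UnifiedBacklog(wip_limit=1)
backlog.add("Validate onboarding hypothesis", "discovery")
backlog.add("Ship invite flow", "development")
top = backlog.pull()
print(top["title"])  # Validate onboarding hypothesis
```

Note there is no per-type queue: discovery and development items sit in the same ordered list, distinguished only by tags.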
❌ Sprint Backlog: We don't use a Sprint Backlog because we don't use time boxes. We use a Work board and Work-in-progress limits to track our current focus.
❌ Definition of done: We don't allow custom definitions of done. Done means live and used by actual customers. If it's live, it was verified—verification is a prerequisite, not a separate checkbox.
❌ Product Increment: We don't use a Product Increment because we don't accept the idea of something being "potentially releasable". We release everything; if we are not going to release it, we don't build it.
❌ Sprint goal: We don't use a Sprint goal because we don't have time boxes but also because our metrics are already focused on outcomes.
❌ Separate technical debt backlog: Technical debt is addressed organically as part of ongoing work, not accumulated separately.
✅ Explicit work policies: We use Explicit work policies to ensure that nobody corrupts or deviates from our principles.
✅ User stories: We use User Stories, but we are careful to avoid including specific implementation details or technical requirements (WHAT) to keep the focus on the user's needs and goals (WHO and WHY). Stories should keep the focus on the user, enable collaboration and drive creative solutions. AI may draft stories; humans refine them.
✅ Technical specifications: When discovery outputs are validated (user-tested prototypes, research findings), they transform into technical specifications. AI assists this transformation—taking a validated prototype and generating a specification draft. Refinement sessions identify gaps: edge cases, integration points, security considerations, verification requirements. The specification is complete when the team has enough clarity to implement without constant clarification.
✅ Verification automation: Unit tests, end-to-end tests, AI-assisted security reviews, and observability are first-class deliverables, not afterthoughts. Every feature ships with its verification.
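What "every feature ships with its verification" can look like in the smallest case: a feature function committed in the same change as its automated checks. The function, its rules, and its numbers are all hypothetical, chosen only to make the pattern concrete:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical feature: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The verification ships in the same change as the feature, not later.
def test_apply_discount():
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(10.0, 100) == 0.0
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass  # invalid input is rejected, as specified
    else:
        raise AssertionError("expected ValueError for percent > 100")

test_apply_discount()
print("all checks passed")
```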
✅ Outcome metrics over output metrics: We don't use output-based metrics like Burn-down & Burn-up charts, Lead time, Cycle time and Cumulative flow diagrams, because they make people focus on outputs, not outcomes. Instead, we use outcome-based metrics such as Activation Rate, Retention Rate, Lifetime Value (LTV), Net Promoter Score (NPS), Feature Engagement, Cohort Analysis & A/B Testing, Change Failure Rate, employee satisfaction surveys, and employee turnover rate. We are careful with activation rate because retention rate is a more reliable indicator of customer value.
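The two rates singled out above are simple ratios. This sketch uses made-up cohort numbers purely to show the arithmetic:

```python
def activation_rate(signed_up: int, activated: int) -> float:
    """Of users who signed up, the share that reached the activation event."""
    return activated / signed_up

def retention_rate(active_at_start: int, still_active_at_end: int) -> float:
    """Of users active at period start, the share still active at the end."""
    return still_active_at_end / active_at_start

# Hypothetical cohort: 1,000 sign-ups, 320 activated, 272 retained.
print(round(activation_rate(1000, 320), 2))   # 0.32
print(round(retention_rate(320, 272), 2))     # 0.85
```

A high activation rate with weak retention suggests the product promises value it doesn't sustain, which is why the text treats retention as the more reliable signal.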
L∞P Ceremonies
In this section, we are going to take a look at the L∞P ceremonies. We will mention common ceremonies from other methodologies, clarify why we will not use them, and introduce some new ones.
❌ We don't use Sprints because a sprint is a time box, and we believe that time boxes lead to decreased quality and lower customer value, so we don't have any Sprint-based meetings. Including:
- ❌ Sprint planning,
- ❌ Sprint review and
- ❌ Sprint retrospective.
However, we value the principles behind the Sprint retrospective.
❌ We don't host the Delivery planning and Risk review meetings from Kanban because they strongly focus on outputs.
✅ We host as many User research/testing sessions as needed to validate hypotheses and generate product ideas. The entire team participates in the research phase, sales and development included. AI can significantly enhance these sessions—agents can help facilitate discussions, synthesize findings in real-time, or transform meeting transcriptions into structured insights, specifications, and hypothesis refinements.
✅ We block 4 hours daily in people's calendars to ensure they can get into the zone and move fast. We call this the Do Not Disturb (DnD) meeting. This protected time applies to both deep-thinking work and AI-orchestrated development.
✅ We host a daily stand-up meeting, but we use meeting agendas to ensure they don't become a checkpoint. The goal is to resolve blockers and provide the team with the information required to act with autonomy for the rest of the day.
✅ We host a monthly Flow review meeting to reinforce a continuous improvement culture. This meeting includes: What verification gaps exist? What automation was added? What escaped our automated checks? How can we prevent similar escapes? AI can assist by analysing production incidents, identifying patterns across issues, and suggesting automation opportunities.
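One concrete input to this meeting can be as simple as counting recurring root causes in the incident log to surface automation gaps. The tags and threshold below are invented for illustration; a real AI-assisted analysis would go further, but the principle is the same:

```python
from collections import Counter

# Hypothetical incident log entries: (incident id, root-cause tag)
incidents = [
    ("INC-101", "missing-input-validation"),
    ("INC-102", "flaky-e2e-test"),
    ("INC-103", "missing-input-validation"),
    ("INC-104", "unreviewed-ai-migration"),
    ("INC-105", "missing-input-validation"),
]

def automation_candidates(log, threshold: int = 2):
    """Root causes recurring at or above the threshold suggest a verification
    gap that automation should close."""
    counts = Counter(tag for _, tag in log)
    return [tag for tag, n in counts.items() if n >= threshold]

print(automation_candidates(incidents))  # ['missing-input-validation']
```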
✅ We host a monthly Show and Tell meeting to enable conversation across teams, share research insights, and celebrate our achievements. This is a meeting to share knowledge with other teams and the wider business.
✅ We host monthly hackathons to encourage the development team to generate product ideas and reinforce the involvement of the developers in the discovery phase.
✅ We host a quarterly Strategy review meeting to align the product teams with the leadership's mission, vision and strategy.
The core philosophy remains: trust leads to ownership, ownership leads to agility, agility plus protected focus time leads to flow. AI accelerates this cycle—it doesn't replace it.
