<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Celestina Dike</title>
    <description>The latest articles on DEV Community by Celestina Dike (@celz_of_moniepoint_engineers).</description>
    <link>https://dev.to/celz_of_moniepoint_engineers</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3786420%2F9554316d-04df-4c52-8bb4-50ccc10514f2.png</url>
      <title>DEV Community: Celestina Dike</title>
      <link>https://dev.to/celz_of_moniepoint_engineers</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/celz_of_moniepoint_engineers"/>
    <language>en</language>
    <item>
      <title>Turning Messy Data into User Insights: Using Thematic Analysis</title>
      <dc:creator>Celestina Dike</dc:creator>
      <pubDate>Thu, 30 Apr 2026 11:53:00 +0000</pubDate>
      <link>https://dev.to/celz_of_moniepoint_engineers/turning-messy-data-into-user-insights-using-thematic-analysis-3pkf</link>
      <guid>https://dev.to/celz_of_moniepoint_engineers/turning-messy-data-into-user-insights-using-thematic-analysis-3pkf</guid>
      <description>&lt;p&gt;&lt;em&gt;By Barakat Ajadi, Product Manager&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Most product teams are sitting on research gold they don't know how to use. They conduct interviews, gather feedback, and collect transcripts, then struggle to turn it all into something the team can actually act on. This is that story, and more importantly, this is the method that changed how I approach qualitative data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. In the Beginning&lt;/strong&gt;&lt;br&gt;
You have carried out your user interviews and research, but what do you do with the messy data you have gathered? How do you make sense of it and use it to make informed decisions?&lt;br&gt;
I recently conducted user discovery for a new feature we shipped. I grouped users into three categories to understand them at every phase of the journey and to see how each step was perceived.&lt;br&gt;
I started by gathering users across the different groups and creating a research plan, which I shared with the customer research team for vetting. A process was then set up for interview slots, and the journey began. But in reality, this was only the beginning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Making Sense of the User Groups&lt;/strong&gt;&lt;br&gt;
Dividing users into groups wasn't just for structure. It was the only way to get a complete picture of the journey.&lt;br&gt;
Each group represented a different stage. Group one had completed the process. How did you complete it, and what were the hurdles you had to cross, if any? Group two was still in the process. Are you experiencing similar hurdles or different ones entirely? Group three hadn't started the process. What prevents you from starting this flow?&lt;br&gt;
Understanding every stage helps uncover pain points and how to make the flow seamless for users throughout the cycle. A single user group would have collapsed very different experiences into one. We would have missed the fact that blockers at the start of the journey are completely different from blockers at the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Carrying Out the User Interviews&lt;/strong&gt;&lt;br&gt;
Before the first call, make sure you have recording and transcription enabled on your meeting platform. Get approval from the user upfront. This is non-negotiable.&lt;br&gt;
A plan? Check. Interviews scheduled? Check. Now this is where you start to acquire data. A bunch of user interviews are lined up for the week; the interesting part is where you ask questions and listen to your users talk. Because you have a question bank prepared, the conversations are easier to steer, but a user may say something you are not expecting, and it is important to ask follow-up questions to dig deeper.&lt;br&gt;
We have AI infused into our meeting platform, so recording and transcribing were seamless. Tools like Otter.ai, Fireflies, or even the built-in transcription on Google Meet and Zoom work well here. The transcript becomes your best friend in the next phase. Trust me on that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The Storm Before the Calm&lt;/strong&gt;&lt;br&gt;
Phew. The interviews are done, and it has been quite a week. Twenty calls across three different user groups, and now I am sitting with a folder full of transcripts and recordings staring back at me.&lt;br&gt;
Here is the thing nobody really tells you about user research. The interviews are actually the fun part. You are in a flow state, asking questions, listening, picking up insights in real time. Post-interview is when the actual work begins. You have raw data and need to analyse it and draw insights.&lt;br&gt;
I opened the first transcript. Then the second. By the fifth one, I already had contradicting information. A user from group one said the process was straightforward. A user from group two said they had no idea what they were supposed to do at the exact same step. Both were right. They were just at different points in their journey. But reading transcript by transcript, it is genuinely hard to hold it all together.&lt;br&gt;
I needed a way to find patterns from these conversations without losing the nuance. That is when I turned to thematic analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Introducing Thematic Analysis&lt;/strong&gt;&lt;br&gt;
Thematic analysis is a way to form patterns from qualitative data. You break down your data into labels called codes, and then group those codes into bigger themes.&lt;br&gt;
Here is how I approached it. I gathered all the transcripts and read through them, coding and labelling each one. A code is simply a short label that captures what a user is saying or feeling at that point. For example, a user said, "It took me 5 minutes to complete," and the code for that was 'completion time'. Another said, "One thing I like is that delivery is swift," and the code was 'merchant satisfaction'. Another said, "I don't think anyone should have issues filling the information," and the code was 'perceived difficulty'.&lt;br&gt;
Doing this across 20 transcripts, you start to pick up patterns. The same ideas keep showing up, just in different words. That is your signal.&lt;br&gt;
Next, group similar codes into themes. Codes like completion time, perceived difficulty, and document readiness all pointed at the same underlying problem. They became one theme: Activation Friction. Where you have a cluster of related but distinct codes under one theme, those become sub-themes, a way of preserving the detail without losing the bigger picture.&lt;br&gt;
By the end, I had 8 themes that cut across all three user groups. What made this powerful was seeing which themes were universal across the journey and which ones were specific to a particular stage.&lt;/p&gt;
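&lt;p&gt;To make the codes-to-themes step concrete, here is a minimal Python sketch of the tallying logic. All of the groups, codes, and theme names below are illustrative stand-ins, not the actual research data.&lt;/p&gt;

```python
from collections import Counter, defaultdict

# Hypothetical coded excerpts: (user_group, code) pairs produced during
# the manual coding pass described above. Purely illustrative data.
coded_excerpts = [
    ("completed", "completion time"),
    ("completed", "merchant satisfaction"),
    ("in_progress", "perceived difficulty"),
    ("in_progress", "document readiness"),
    ("not_started", "document readiness"),
    ("not_started", "perceived difficulty"),
]

# Related codes roll up into one broader theme, as in the article.
code_to_theme = {
    "completion time": "Activation Friction",
    "perceived difficulty": "Activation Friction",
    "document readiness": "Activation Friction",
    "merchant satisfaction": "Delivery Experience",
}

# Tally themes per user group to see which are universal across the
# journey and which are specific to a particular stage.
themes_by_group = defaultdict(Counter)
for group, code in coded_excerpts:
    themes_by_group[group][code_to_theme[code]] += 1

for group, themes in themes_by_group.items():
    print(group, dict(themes))
```

&lt;p&gt;Even on toy data, the output makes the stage-specific pattern visible: a theme that appears in every group is universal, while one that only appears for a single group belongs to that stage of the journey.&lt;/p&gt;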

&lt;p&gt;&lt;strong&gt;6. Where AI Fits In&lt;/strong&gt;&lt;br&gt;
Now let's address what everyone is thinking. Reading through twenty transcripts? Three groups? Coding responses manually? This sounds like a lot, and this is where AI can come in to make your work significantly easier.&lt;br&gt;
Once I had my transcripts ready, I fed them into an AI tool. ChatGPT works well for this, as does Claude or a dedicated research tool like Dovetail. I prompted it to code responses, identify similar phrases, and flag recurring sentiments. What would have taken me an entire day was done in a fraction of the time.&lt;br&gt;
Here is something important, though. AI gives you a strong starting point, not a final answer. The interpretation of what those patterns actually mean for your product still sits with you. You handle the thinking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. From Themes to Product Insights&lt;/strong&gt;&lt;br&gt;
Having 8 themes feels good. But themes alone don't ship features. The next step was turning what I found into something the team could actually act on.&lt;br&gt;
I broke down each theme into four sections in my research report.&lt;br&gt;
What We Heard. This section was a brief summary of what users said about the theme, with sample quotes to support it. For example, under the theme of Activation Friction, quotes clustered around not knowing a document was required, or needing time to gather specific documents before completing the flow.&lt;br&gt;
Why It Matters. How does this theme affect the user journey? Does it cause delayed activation? Drop-off? Frustration that only surfaces later? This section connects the theme to a real business or experience consequence.&lt;br&gt;
What It Means. This is where you translate the theme into a product diagnosis. Where does the issue actually come from? Is the form too long, or is it the missing visual cues that tell users what to prepare? Are users benchmarking against a competitor's experience? This section separates the symptom from the cause.&lt;br&gt;
Recommendations. Based on everything identified, what should the team do? This is where research becomes a roadmap. Before the analysis, conversations with stakeholders were centred on surface-level fixes. After that, we were talking about the right problems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8. What Thematic Analysis Changed&lt;/strong&gt;&lt;br&gt;
Honestly? It changed how I think about qualitative data altogether.&lt;br&gt;
Thematic analysis gave me a process, and having a process meant I could defend my insights. Walking into stakeholder meetings, the conversation wasn't "users seem frustrated with the journey." It was "across all three user groups, activation friction and perceived difficulty were the two most consistent themes, and here is the evidence."&lt;br&gt;
If you are a product manager sitting on a pile of interview transcripts right now and wondering where to start, start here. The themes are already in your data. Thematic analysis just helps you find them. The messy data was never the problem. It was always the signal.&lt;/p&gt;

&lt;p&gt;Barakat Ajadi is a Product Manager at Moniepoint.&lt;br&gt;
Read more articles from our technical team on &lt;a href="https://engineering.moniepoint.com/" rel="noopener noreferrer"&gt;the blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>productmanagement</category>
      <category>thematicanalysis</category>
    </item>
    <item>
      <title>When Code Becomes Cheap, Thinking Becomes Expensive</title>
      <dc:creator>Celestina Dike</dc:creator>
      <pubDate>Thu, 26 Mar 2026 12:27:53 +0000</pubDate>
      <link>https://dev.to/celz_of_moniepoint_engineers/when-code-becomes-cheap-thinking-becomes-expensive-172o</link>
      <guid>https://dev.to/celz_of_moniepoint_engineers/when-code-becomes-cheap-thinking-becomes-expensive-172o</guid>
      <description>&lt;h2&gt;
  
  
  From "Prove It in Code" to "Prove It in Judgment"
&lt;/h2&gt;

&lt;p&gt;For decades, credibility in software came from what you built with your own hands. Writing robust systems required stamina, coordination, and deep technical craft. The effort itself acted as a filter; only ideas worth the cost were implemented.&lt;br&gt;
That constraint is gone.&lt;/p&gt;

&lt;p&gt;Today, large language models can generate entire services, documentation, tests, and deployment scripts in minutes. The barrier to producing software artefacts has collapsed. The limiting factor is no longer typing speed or knowledge of syntax; it is clarity of thinking.&lt;/p&gt;

&lt;p&gt;Inside an organisation adopting AI, this shift is profound. A team can spin up a prototype payments reconciliation service over a weekend using AI assistance. But the real question becomes: Who defined the problem well? Who validated the trade-offs? Who understands the system well enough to maintain it?&lt;br&gt;
Code is abundant. Judgment must not become scarce.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Output Becomes Cheap, Signal Changes
&lt;/h2&gt;

&lt;p&gt;Well-structured codebases and thorough documentation used to be signals of maturity and care. Today, AI tools can generate pristine README files, polished APIs, and layered architectures instantly.&lt;br&gt;
This makes surface quality an unreliable proxy for depth.&lt;br&gt;
Within an AI-enabled engineering team, you might see beautiful documentation generated in seconds, clean abstractions suggested by the model, and automated test scaffolding created in bulk, but none of these guarantee that the system handles scale, edge cases, or operational realities. &lt;/p&gt;

&lt;p&gt;An AI can generate an event-driven order processing service, complete with retry logic, dead-letter queues, and idempotency keys, and the code will look immaculate. But does it handle message ordering under partition rebalancing? Does the retry backoff account for downstream rate limits? A model can scaffold a Kubernetes deployment manifest with health checks, resource quotas, and horizontal pod autoscaling, yet miss that the pod affinity rules will starve a specific availability zone under real traffic patterns.&lt;/p&gt;
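&lt;p&gt;One of the gaps named above, a retry backoff that ignores downstream rate limits, can be sketched in a few lines. This is a generic illustration, not code any model produced: exponential backoff with full jitter that defers to a server-supplied Retry-After hint when one exists.&lt;/p&gt;

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0, retry_after=None):
    """Exponential backoff with full jitter that defers to a downstream
    rate-limit hint (e.g. an HTTP Retry-After value) when one is given.
    All parameter values here are illustrative defaults."""
    if retry_after is not None:
        # The downstream service's own hint outranks our local schedule:
        # retrying sooner than it asks just burns the rate limit faster.
        return float(retry_after)
    # Full jitter spreads retries out to avoid a synchronised thundering
    # herd of clients all retrying at the same instant.
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

&lt;p&gt;The point is not the ten lines themselves; it is that knowing &lt;em&gt;why&lt;/em&gt; the Retry-After branch must win, and why the jitter is there, is exactly the judgment a generated diff does not carry with it.&lt;/p&gt;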

&lt;p&gt;The signal of quality shifts from how polished it looks to something harder to fake: Does the team understand the system deeply? Is there ownership and accountability? Can someone explain why the circuit breaker threshold is set to 60% and not 40%, without re-prompting the model?&lt;br&gt;
In AI adoption, governance and provenance become more valuable than aesthetics.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compression of Effort
&lt;/h2&gt;

&lt;p&gt;AI drastically reduces the mechanical cost of building software. Tasks that once required weeks (writing gRPC service definitions with protobuf schemas, building CDC pipelines to sync state across bounded contexts, standing up end-to-end integration test harnesses with containerised dependencies) can now be compressed into hours.&lt;br&gt;
This is not hype. It is leverage. Used correctly, AI frees engineers to spend more time on architecture and systems thinking, explore multiple design alternatives quickly, and run experiments that previously felt too expensive. In an organisation adopting AI at scale, this means faster iteration cycles and broader experimentation: scaffolding a complete OAuth2 flow with PKCE, token rotation, and role-based access control in an afternoon instead of a sprint; generating contract tests between fifteen microservices to catch breaking changes before deployment; spinning up a feature-flagged canary release pipeline to compare two ranking algorithms in production traffic. Teams can prototype an entire event-sourced domain model, evaluate its query performance against a traditional CRUD approach, and make an informed architectural decision all before the old process would have finished the design document.&lt;br&gt;
However, speed without discipline produces fragile systems. The advantage goes to teams that combine AI acceleration with experienced technical oversight.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Risk: Infinite Artefacts, Finite Understanding
&lt;/h2&gt;

&lt;p&gt;If software can be generated endlessly, its perceived value drops. A pull request that took weeks of focused effort once carried an implicit sense of respect. Now, large AI-generated diffs may appear instantly with unclear human involvement.&lt;br&gt;
This creates tension. Reviewing becomes harder than generating. Accountability becomes blurry. Junior engineers may rely on tools without building fundamentals.&lt;/p&gt;

&lt;p&gt;For organisations, the real danger is not "bad AI code." It is teams that lose the ability to reason about systems independently.&lt;br&gt;
If AI writes the data pipeline, optimises the SQL, and configures the infrastructure, but no one fully understands the flow, technical debt becomes invisible.&lt;/p&gt;

&lt;p&gt;AI adoption must therefore include a strong review culture, explicit learning paths, and clear architectural ownership. Otherwise, productivity gains today become fragility tomorrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Talk Becomes the Multiplier
&lt;/h2&gt;

&lt;p&gt;In this new landscape, the most valuable skill is not typing code; it is articulating intent.&lt;br&gt;
The engineer who can frame a problem precisely, define constraints clearly, ask the right questions, and evaluate model output critically will outperform someone who merely executes.&lt;br&gt;
AI does not eliminate engineering roles. It amplifies differences in clarity and systems thinking. Organisational structures compress too: design, coding, testing, and iteration increasingly blur into tight feedback loops.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future Is Thinking-First
&lt;/h2&gt;

&lt;p&gt;The future of software development is not code-first. It is thinking-first.&lt;br&gt;
When machines can generate implementation at scale, the competitive edge shifts to strategic problem definition, technical stewardship, responsible governance, and mentorship that builds foundational skill.&lt;br&gt;
Code may be cheap now. But reasoning, accountability, and leadership are not.&lt;/p&gt;

&lt;p&gt;Got any questions? Drop a comment.&lt;/p&gt;

&lt;p&gt;Pavan leads multiple engineering departments at Moniepoint.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Curiosity Doesn’t Kill the Engineer</title>
      <dc:creator>Celestina Dike</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:48:48 +0000</pubDate>
      <link>https://dev.to/celz_of_moniepoint_engineers/curiosity-doesnt-kill-the-engineer-22a0</link>
      <guid>https://dev.to/celz_of_moniepoint_engineers/curiosity-doesnt-kill-the-engineer-22a0</guid>
      <description>&lt;p&gt;&lt;strong&gt;It Gives Them Nine More Lives in the Age of AI&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;“Curiosity killed the cat” is a warning. &lt;br&gt;
In engineering, it is survival. And in the age of AI, it may be the single trait that keeps us relevant.&lt;/p&gt;

&lt;p&gt;AI can autocomplete code. It can scaffold services. It can generate infrastructure templates, tests, and even architectural diagrams.&lt;br&gt;
But AI is trained on patterns. Curiosity questions patterns.&lt;br&gt;
That difference matters.&lt;/p&gt;

&lt;p&gt;The engineer who stops at “it works” will slowly be replaced by systems that can also make things “work.” The engineer who asks “why the queue saturates at this load”, “what the true bottleneck is”, “what assumption an abstraction is hiding”, and “what happens when the model is wrong” cannot be easily replaced.&lt;/p&gt;

&lt;p&gt;Curiosity operates below the surface layer where AI is strongest.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Every Deep Question Is an Extra Life&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you trace a request from the load balancer to the kernel to userspace and back, you gain a life.&lt;br&gt;
When you understand how WAL durability interacts with fsync semantics, you gain another.&lt;br&gt;
When you read the consensus paper instead of just using the library, that’s another.&lt;br&gt;
And another life when you model throughput using queueing theory instead of guessing.&lt;br&gt;
These are not just technical wins. They are resilience wins.&lt;/p&gt;
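&lt;p&gt;As a small illustration of what modelling throughput with queueing theory can look like in practice, here is a sketch of the steady-state M/M/1 estimates (Little's law included). The traffic numbers are hypothetical.&lt;/p&gt;

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 queue estimates (single server, Poisson
    arrivals, exponential service): a first-principles throughput
    model instead of a guess."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrivals outpace service capacity")
    utilisation = arrival_rate / service_rate
    in_system = utilisation / (1 - utilisation)   # mean requests in system
    time_in_system = in_system / arrival_rate     # Little's law: W = L / lambda
    return {"utilisation": utilisation,
            "in_system": in_system,
            "time_in_system": time_in_system}

# Hypothetical numbers: 80 req/s arriving at a server that can do 100 req/s.
print(mm1_metrics(80, 100))
```

&lt;p&gt;At 80 per cent utilisation, a request already spends five average service times in the system. That non-linearity near saturation is exactly the kind of fact guessing misses and a curious engineer's mental model catches.&lt;/p&gt;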

&lt;p&gt;AI can produce solutions. Curious engineers understand trade-offs.&lt;br&gt;
AI can refactor code. Curious engineers understand failure modes.&lt;br&gt;
AI can summarise documentation. Curious engineers build mental models.&lt;br&gt;
Mental models compound.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Real Threat Is Complacency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The danger is not that AI will replace engineers.&lt;br&gt;
The danger is that engineers will outsource their curiosity.&lt;br&gt;
If we let tools think for us, abstractions become opaque walls instead of glass windows. We become operators of systems we do not understand.&lt;br&gt;
The curious engineer uses AI differently.&lt;br&gt;
Not as a crutch but as a multiplier.&lt;br&gt;
They ask better questions.&lt;br&gt;
They interrogate outputs.&lt;br&gt;
They test assumptions.&lt;br&gt;
They dive into edge cases that the model glosses over.&lt;br&gt;
Curiosity turns AI from a replacement into an amplifier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Curiosity Makes You Antifragile&lt;/strong&gt;&lt;br&gt;
When production fails, the incurious engineer searches for a quick answer.&lt;br&gt;
The curious one traces signals.&lt;br&gt;
Why did the replication lag spike?&lt;br&gt;
Why did tail latency explode?&lt;br&gt;
What is the queue discipline actually doing?&lt;br&gt;
Is this a coordination limit or a physical limit?&lt;br&gt;
Each failure becomes another life added to the stack.&lt;br&gt;
Over time, systems stop looking like random collections of tools. They start revealing deeper patterns.&lt;br&gt;
Entropy and compression.&lt;br&gt;
Backpressure and channel capacity.&lt;br&gt;
Cache hierarchies and storage engines.&lt;br&gt;
Scheduling theory and latency.&lt;br&gt;
Once you see principles instead of products, you cannot be easily automated away.&lt;br&gt;
AI predicts tokens.&lt;br&gt;
Engineers reason about constraints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nine Lives in the AI Era&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technologies will change.&lt;br&gt;
Frameworks will expire.&lt;br&gt;
AI models will improve.&lt;br&gt;
But the engineer who remains relentlessly curious about fundamentals, failure, and first principles keeps regenerating.&lt;br&gt;
Each layer of understanding is another life.&lt;br&gt;
In an era where automation accelerates everything, the only durable edge is the willingness to keep asking ‘why?’ long after others have stopped.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Curiosity will not kill the engineer.&lt;br&gt;
It will give them nine more lives.&lt;br&gt;
And it may be exactly how we survive.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://youtu.be/IZOxzVn_T8Q?si=SfxMmyW_mqUs1YVq" rel="noopener noreferrer"&gt;Opeyemi Folorunsho&lt;/a&gt; leads the Engineering Research &amp;amp; Development team at Moniepoint.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>software</category>
    </item>
    <item>
      <title>How to Build Engineering Teams That Think in Systems, Not Tickets</title>
      <dc:creator>Celestina Dike</dc:creator>
      <pubDate>Wed, 25 Feb 2026 17:41:21 +0000</pubDate>
      <link>https://dev.to/celz_of_moniepoint_engineers/how-to-build-engineering-teams-that-think-in-systems-not-tickets-4jfp</link>
      <guid>https://dev.to/celz_of_moniepoint_engineers/how-to-build-engineering-teams-that-think-in-systems-not-tickets-4jfp</guid>
      <description>&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                    _Commits by John_
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;The Performance Conversation That Changed the Room&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During a performance review, an engineering manager highlighted that an engineer had spent several late nights debugging production issues and pushing features. It was presented as proof of performance beyond expectation.&lt;br&gt;
The room expected agreement.&lt;br&gt;
Instead, I said something that made it uncomfortable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Spending more time is not the same as performing better. Effort alone is not performance. Outcome is.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Yes, features were shipped quickly. But months later, we were still fixing subtle bugs tied to that release. Every small enhancement required touching core architectural decisions. During the product review, assumptions were not questioned. The design was implemented verbatim.&lt;br&gt;
&lt;strong&gt;The engineer looked busy. The system was fragile.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the same period, another engineer worked normal hours. No heroics. No drama. Since his release, there have been no production incidents in that module. He questioned the PRD. He challenged design decisions. He wanted to understand how the feature affected customers. As we scaled, extending his work became easier, not harder.&lt;br&gt;
He was rated beyond expectations.&lt;br&gt;
That contrast exposed something deeper — a flaw not in the engineers, but in what we had trained ourselves to value.&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We Reward What Is Visible, Not What Is Valuable&lt;/strong&gt;&lt;br&gt;
Most engineering performance systems are biased toward visible work.&lt;br&gt;
Hours spent. Story points completed. Tickets closed. Production calls attended.&lt;br&gt;
These are observable. They are measurable. They make managers feel progress is happening.&lt;br&gt;
But systems thinking is often invisible.&lt;br&gt;
Clean architecture is invisible. Prevented incidents are invisible. Reduced complexity is invisible. A system that rarely breaks does not receive applause.&lt;br&gt;
Over time, engineers optimise for busyness instead of durability.&lt;br&gt;
When that happens, ticket completion becomes the goal instead of system integrity.&lt;br&gt;
And nowhere is this more apparent than in how we treat incidents.&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prevention Should Matter More Than Recovery&lt;/strong&gt;&lt;br&gt;
Recovery is important. Incident response discipline matters. Mean time to recovery matters.&lt;br&gt;
But prevention should carry more weight.&lt;br&gt;
If I were to assign weights, prevention should count for 80 per cent and recovery for 20 per cent. It is cheaper to prevent than to recover.&lt;/p&gt;

&lt;p&gt;Reducing incidents from 10 per month to 1 is more valuable than reducing recovery time from 2 hours to 30 minutes.&lt;br&gt;
When teams focus primarily on recovery, they build better firefighters.&lt;/p&gt;

&lt;p&gt;When teams focus on prevention, they build better architects.&lt;br&gt;
If you celebrate the firefighter more than the architect, you are building a culture that keeps starting fires.&lt;br&gt;
The root cause of this pattern is almost always the same — teams that have been conditioned to think in tickets.&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ticket Thinking Creates Avoidable Work&lt;/strong&gt;&lt;br&gt;
Ticket thinking creates the illusion of velocity while increasing fragility.&lt;/p&gt;

&lt;p&gt;Architecture gets reworked after release. Hotfixes stabilise features that should have been designed more carefully. Refactoring becomes necessary because shortcuts were taken to meet sprint targets. Adding new modules becomes increasingly expensive because the foundation is unstable.&lt;/p&gt;

&lt;p&gt;Velocity looks impressive in the short term. But complexity compounds quietly.&lt;br&gt;
Every new feature becomes harder to ship safely. Every release carries anxiety. Engineers spend more time fixing than building.&lt;br&gt;
That is not scale. That is activity disguised as progress.&lt;br&gt;
So how do you break the cycle?&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Build Teams That Think in Systems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Shifting from ticket thinking to system thinking requires deliberate leadership.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Redefine performance.&lt;/strong&gt; Stop rewarding just hours spent or story points completed. Start recognising engineers who reduce incident frequency, simplify architecture, and make future changes easier. Promotion should reflect ownership of outcomes, not volume of output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change what you measure.&lt;/strong&gt; Track incident recurrence, change failure rate, and the safety of extending the system. Measure the reduction of avoidable work. If every new feature destabilises the system, velocity is meaningless.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Involve engineers earlier.&lt;/strong&gt; Engineers should not simply implement requirements. They should challenge assumptions, clarify ambiguities, and consider second-order effects. An engineer who takes a PRD verbatim is implementing tasks. An engineer who questions it owns outcomes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reward complexity reduction.&lt;/strong&gt; Promote engineers who simplify flows, clarify domain boundaries, and reduce cognitive load. Complexity is invisible until it explodes. Recognise those who prevent that explosion.&lt;/p&gt;

&lt;p&gt;When you start doing all of this, one question inevitably comes up — especially from engineers who have been grinding hard but not seeing the results they expect.&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Promotion Is About Mastery, Not Busyness&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Many engineers ask why they are not being promoted despite long hours and visible effort.&lt;br&gt;
Promotion panels are not looking for busyness. They are looking for depth.&lt;/p&gt;

&lt;p&gt;Companies are looking to promote masters.&lt;br&gt;
Mastery in engineering is not about writing more code or responding to more incidents. It is about understanding systems deeply enough to design them so that problems do not repeatedly occur. It is about building foundations that scale and optimising for the long term.&lt;br&gt;
Speed and quality are not enemies; both flow from mastery.&lt;/p&gt;

&lt;p&gt;•  •  •&lt;/p&gt;

&lt;p&gt;This is ultimately a leadership problem. If we want to build engineering teams that think in systems, not tickets, we must reshape how we define performance, what we measure, and what we celebrate.&lt;/p&gt;

&lt;p&gt;Tickets close.&lt;br&gt;
Systems endure.&lt;/p&gt;

&lt;p&gt;And endurance is what separates coding from engineering.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;John Ojetunde leads multiple Engineering Teams at Moniepoint.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>software</category>
      <category>leadership</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
