<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nek.12</title>
    <description>The latest articles on DEV Community by Nek.12 (@nek12).</description>
    <link>https://dev.to/nek12</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1377290%2F3c1e6586-daf8-4799-9012-4fc9db5fef0f.png</url>
      <title>DEV Community: Nek.12</title>
      <link>https://dev.to/nek12</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nek12"/>
    <language>en</language>
    <item>
      <title>How Does My Agent Survive 37+ Compactions in a Row? A Deep Dive into Proactive Compact in Builder</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Thu, 09 Apr 2026 11:11:43 +0000</pubDate>
      <link>https://dev.to/nek12/how-does-my-agent-survive-37-compactions-in-a-row-a-deep-dive-into-proactive-compact-in-builder-5cmn</link>
      <guid>https://dev.to/nek12/how-does-my-agent-survive-37-compactions-in-a-row-a-deep-dive-into-proactive-compact-in-builder-5cmn</guid>
      <description>&lt;h2&gt;
  
  
  How Does Compaction Work in Builder?
&lt;/h2&gt;

&lt;p&gt;Today I shipped the 0.10.0 update to Builder CLI, and it landed huge improvements to compaction and cache efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Builder's Compact Is Better Than Native
&lt;/h3&gt;

&lt;p&gt;By default, when you go through onboarding, Builder recommends its own local compact, which is far more detailed than the native one. Native compact can still be selected or configured in the config file if your provider supports it. I recommend the local one - here's why.&lt;/p&gt;

&lt;p&gt;The native compact algorithm for Claude Code and Codex was &lt;a href="https://x.com/Kangwook_Lee/status/2028955292025962534?s=20" rel="noopener noreferrer"&gt;reverse-engineered&lt;/a&gt; a long time ago, and it's literally 5-7 lines of instructions along the lines of: "write a description of the work, idk, make no mistakes." I reverse-engineered the native compact myself to make sure of this and to pull out the best parts.&lt;/p&gt;

&lt;p&gt;The only real advantage of native compact is preserving reasoning traces from the previous conversation, but that can be achieved another way (described later). &lt;strong&gt;The local compact in Builder uses a carefully crafted prompt that I've been polishing for several months, covering all the important details that affect the agent's ability to continue working.&lt;/strong&gt; Because of this, the compact can survive literally dozens of sessions without losing the overall task context. I have large refactors that ran autonomously through more than 37 compacts in a row, all night long - without any issues.&lt;/p&gt;

&lt;p&gt;All of Builder's code is open source, so you can read the prompt. But the prompt alone won't get you very far: it relies on the harness improvements I made alongside it, and official harnesses don't let you swap in a custom compact prompt anyway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2xtm9uofgoastfuzn5m.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2xtm9uofgoastfuzn5m.webp" alt="Builder running through 22 compacts" width="800" height="469"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Example: Builder ran through 22 compacts&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The compact in Builder is also better than the original because &lt;strong&gt;it saves the entire conversation history no matter how many compacts you've had - but the model only sees the fresh history.&lt;/strong&gt; That means you can roll back 19 compacts at any point, start a new conversation, fork it, and go off into a separate branch. On top of that, unlike some providers, Builder preserves cache, so it ends up being much cheaper. This is offset by the fact that my compact is more detailed - but as I've said, Builder's main goal is quality, not price tag or speed. In future versions I plan to optimize the compact so it contains nothing unnecessary and costs less than other providers' compaction.&lt;/p&gt;
&lt;h3&gt;
  
  
  Proactive Compact - New in 0.10.0
&lt;/h3&gt;

&lt;p&gt;Beyond the prompt itself, the compact in Builder is proactive. Version 0.10.0 ships the first experimental tool that lets the model decide on its own when to run a compact. &lt;strong&gt;The model in Builder knows it has limited memory, knows how much space is left, knows when to make checkpoints - and gets a notification about remaining context.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enable via:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="nn"&gt;[tools]&lt;/span&gt;
&lt;span class="py"&gt;trigger_handoff&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this isn't implemented the way it is in Claude Code, where it freaks the model out and makes it refuse to work. Nor the way Codex does it, where it constantly stops to "take a breather." In Builder, it's just one of the agent's autonomous processes - one that doesn't cause unnecessary pauses or degrade work quality!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proactive compact prevents lost changes, broken builds or tests, and overwritten decisions&lt;/strong&gt; in situations where the model runs out of context and compact gets triggered at a bad moment - causing it to forget that it has a partially edited file or a half-implemented spec. In my tests, the model knows on its own when it needs to hand off to the next agent, and prepares the workspace for that, which makes agent-to-agent collaboration across compacts much better.&lt;/p&gt;

&lt;p&gt;Plus, the main agent can pass what's called a &lt;em&gt;letter to the future&lt;/em&gt; to the next agent. The general compact prompt, especially the native one, often doesn't capture specifics from the model's internal reasoning - things it knows but hasn't gotten around to implementing yet. Since reasoning traces aren't saved, my solution lets the model save them itself and pass them forward.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tpmzjvc46cpctckxtoq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1tpmzjvc46cpctckxtoq.webp" alt="Builder prepared for compact" width="800" height="469"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Builder prepared for compact, passed instructions for saving important info, triggered native compact, and continued working&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-Compact Before Starting a Task
&lt;/h3&gt;

&lt;p&gt;Another UX improvement you won't find in other harnesses: pre-compact before starting a task. Whenever you send a command to the agent, the harness checks whether there's enough context to complete the task and, based on heuristic analysis, decides: &lt;strong&gt;should the task run now, or should it compact first and then move on to the next part of the plan?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This way your command is saved directly, doesn't go through an LLM summarization loop, doesn't waste tokens, and the agent starts fresh with clean context and no hallucinations. This lets you just talk to the model without thinking about compaction - the harness figures it all out alongside the model, freeing you from having to manually trigger slash commands or worry about hitting the "dumb zone." This is enabled by default.&lt;/p&gt;

&lt;h3&gt;
  
  
  Queues Across Compacts
&lt;/h3&gt;

&lt;p&gt;The last improvement is one I really love: &lt;strong&gt;queue support across compacts&lt;/strong&gt;. Your prompts, slash commands, and any processes, including background subagents, can survive compacts and transfer ownership to the next agent without any loss.&lt;/p&gt;

&lt;p&gt;For example, right after a task completes you can queue up a compact with a custom prompt, then a slash command, and also spin up three background subagents. When the compact finishes, the model automatically receives your prompt, the summary, the message from the previous agent, your slash command, and the output from all three subagents - and picks up from that point as if nothing happened. &lt;strong&gt;No lost information, no stops, no limits on what you can do while the agent is running.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This seriously simplifies the workflow: give the agent a task, immediately schedule a compact, and then tell it to open a pull request and fix the comments in it with background agents for planning and execution. You won't find this in any harness out of the box. Maybe some allow it through hooks, but Builder's advantage is that all of this is implemented natively - with maximum integration out of the box.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://opensource.respawn.pro/builder/quickstart/" rel="noopener noreferrer"&gt;Try Builder here&lt;/a&gt;&lt;/p&gt;

</description>
      <category>builder</category>
      <category>compaction</category>
      <category>claude</category>
      <category>codex</category>
    </item>
    <item>
      <title>I got tired of every existing coding agent. So I built my own - Builder.</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Tue, 07 Apr 2026 15:39:29 +0000</pubDate>
      <link>https://dev.to/nek12/i-got-tired-of-every-existing-coding-agent-so-i-built-my-own-builder-4391</link>
      <guid>https://dev.to/nek12/i-got-tired-of-every-existing-coding-agent-so-i-built-my-own-builder-4391</guid>
      <description>&lt;p&gt;Today I'm excited to show my new project I've been working on for the past 2 months - &lt;a href="https://opensource.respawn.pro/builder/" rel="noopener noreferrer"&gt;Builder&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's a free and open-source coding agent built specifically for professional agentic engineers, and it works with your existing token subscription.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frehvxrgc4e6ddpsp0zqw.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frehvxrgc4e6ddpsp0zqw.webp" alt="Image" width="800" height="697"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why build your own when CC and Codex exist?
&lt;/h2&gt;

&lt;p&gt;I used Claude Code for over five months, then spent a long time with Codex CLI and the desktop app, Opencode, Junie, and others. And all of these major projects share three core problems for me.&lt;/p&gt;

&lt;h3&gt;
  
  
  Opacity
&lt;/h3&gt;

&lt;p&gt;Existing tools are opaque to a professional engineer. In Junie, instead of the actual commands the agent runs, you only see one-line descriptions of what's happening. In Codex, command calls are aggressively hidden and only accessible in transcript mode. And in Claude Code, instead of letting the model work freely through bash commands, it uses its own opaque custom toolset for reading, writing files, and searching.&lt;/p&gt;

&lt;p&gt;Maybe that's fine for people who don't want to look at code and don't want to understand what's going on. &lt;strong&gt;But for an engineer who understands the process and wants to work with the model collaboratively - like with a pair programmer - it's a bad fit.&lt;/strong&gt; And not a single agent wrapper I know of has ever focused on engineers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bloat
&lt;/h3&gt;

&lt;p&gt;Many agentic harnesses pack in a ton of features for vibecoders that only make sense for specific workflows. For day-to-day work, agentic engineers find them more of a hindrance than a help. I'm talking about things like planning mode, 38+ persistent notifications in Claude Code, orchestration modes, Swarm, plugins, Explorer agents, and so on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;All of this not only gets in the model's way and pollutes the context, it also ruins the user experience&lt;/strong&gt; - and nobody has actually proven the value of these "improvements" compared to straightforward prompting and iterative work with an agent. So I wanted an agent without a planning mode (which only constrains the model), and without the flashy gimmicks that prettify responses while making the model hallucinate (looking at you, Junie).&lt;br&gt;
MCP deserves a special callout - it's been considered an anti-pattern in agentic development for a while now: it pollutes the context with useless tools, and most of them can't be used alongside bash tools.&lt;/p&gt;

&lt;p&gt;All of this gets replaced by more effective strategies for engineers who direct the model as a coding agent, rather than delegating their own thinking to it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Instability and closed nature
&lt;/h3&gt;

&lt;p&gt;These kinds of harnesses often have their own opinions on how things should be done - and those opinions keep changing. With every update you can expect your workflow to change in some way too. It's unclear what's going on under the hood: what the system prompt looks like or how compaction works. Settings are missing, and there's no real way to switch models or providers. You can't tell what ends up in the context, when cache invalidation happens, or what changed in the latest update that might have silently broken your workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I simply got fed up with sitting down to work one fine day only to find my agent suddenly hallucinating - and it turning out to be yet another hidden bug in the harness that invalidated something under the hood and was feeding the model bad data.&lt;/strong&gt; So I decided to write my own harness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6488idujvbzmelz0rhfn.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6488idujvbzmelz0rhfn.webp" alt="Image" width="800" height="697"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Builder CLI's background shell management&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What makes Builder better
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Collaborative architecture work
&lt;/h4&gt;

&lt;p&gt;Many harnesses don't let models work together with the user on product and architectural design. One of the key things in my wrapper is giving the agent the ability to ask a question whenever it runs into a problem, a blocker, or finds something in the code it wants to refactor or improve.&lt;/p&gt;

&lt;p&gt;In practice, this results in dramatically better output quality: &lt;strong&gt;agents no longer try to bulldoze their way through every problem&lt;/strong&gt; and end up implementing terrible, broken solutions or trashing the architecture. Instead, they ask a question and solve the problem together with the user.&lt;/p&gt;

&lt;h4&gt;
  
  
  Instrumented workflow instead of blind trust
&lt;/h4&gt;

&lt;p&gt;The second key thing is evolving agentic harnesses beyond a simple loop where the agent spins and is just trusted to do everything right - toward a clearly instrumented workflow that prevents the agent from making dumb mistakes. Many agents start hallucinating - and that's normal for the current state of models. Even people forget things, lose context, miss small details that can actually matter. &lt;strong&gt;That's why in Builder, a separate parallel agent always watches over your agent's work, checking its output for quality and compliance with your project's rules.&lt;/strong&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Quality over token savings
&lt;/h4&gt;

&lt;p&gt;Large agent wrappers right now are optimized for quick fixes, minimal viable solutions, conserving context and tokens. Essentially, you're paying with output quality and your own time spent cleaning up after the model - all to save a few thousand tokens and run 20% faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Builder's design, I'm focused first and foremost on output quality: proper architecture, code safety, performance, first-principles solutions&lt;/strong&gt; - not the hacks that are currently the default for virtually every agent.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's already there
&lt;/h2&gt;

&lt;p&gt;I've already fully replaced Codex CLI with Builder and don't open it anymore. I use Builder for everything, including work tasks.&lt;/p&gt;

&lt;p&gt;Current features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent loop equivalent to Codex CLI&lt;/li&gt;
&lt;li&gt;Background tasks, background agents, subagents, orchestration&lt;/li&gt;
&lt;li&gt;Supervisor - a parallel agent that continuously watches the model and improves its results&lt;/li&gt;
&lt;li&gt;Auto and manual context compaction in multiple modes; native Codex compaction is supported via settings, but I've long since switched to Builder's own compaction, which is several times better in quality&lt;/li&gt;
&lt;li&gt;Two display modes: compact and detailed. In detailed mode, full information about what the model did is available - something like a transcript view, but with far more data than in Codex or Claude Code&lt;/li&gt;
&lt;li&gt;Questions from the model to the user&lt;/li&gt;
&lt;li&gt;Model steering and prompt queuing&lt;/li&gt;
&lt;li&gt;Native image and PDF viewing&lt;/li&gt;
&lt;li&gt;Native web search&lt;/li&gt;
&lt;li&gt;Prompt and session history&lt;/li&gt;
&lt;li&gt;Notifications (including system notifications) when the model finishes&lt;/li&gt;
&lt;li&gt;AGENTS.md standard support&lt;/li&gt;
&lt;li&gt;Agent Skills support&lt;/li&gt;
&lt;li&gt;Code syntax highlighting&lt;/li&gt;
&lt;li&gt;Custom slash commands and many built-in ones&lt;/li&gt;
&lt;li&gt;Conversation editing and forking&lt;/li&gt;
&lt;li&gt;Lots of keyboard shortcuts&lt;/li&gt;
&lt;li&gt;Toggles for OpenAI: turbo mode, model verbosity, thinking mode&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm currently actively working on worktree management and also laying the groundwork for a native desktop app.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3580scbzmrj4edk49xc.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3580scbzmrj4edk49xc.webp" alt="Image" width="800" height="621"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Finished the code review&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What won't be there
&lt;/h2&gt;

&lt;p&gt;So you can decide whether Builder is right for you, here's what conflicts with its philosophy and won't be implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP support&lt;/strong&gt; - pollutes the context and is incompatible with bash tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Planning mode&lt;/strong&gt; - a legacy leftover from Anthropic's models that constrains the model&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI bells and whistles&lt;/strong&gt; for people who aren't doing serious programming&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Micro-compaction and anything that invalidates caches&lt;/strong&gt; - it costs a lot of money&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sandbox&lt;/strong&gt; - Claude Code, Codex, and Junie still don't have proper sandboxing, and I've been using them without it for a long time. As a professional engineer I simply don't end up in situations where the model deletes something. Safety is handled through proper prompting right now; file editing is configurable and access can be restricted. Sandboxing won't save you on any OS from destructive actions - so I decided not to put on the tinfoil hat.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;WebFetch or equivalent&lt;/strong&gt; - there's already a CLI, Markdown standards, and skills for that. No need for an extra tool that hands the model an LLM-processed Medium article. Just use any CLI script like jina.ai if the agent needs internet access.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Gemini and Anthropic subscriptions&lt;/strong&gt; - because those are now illegal for us.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Current state
&lt;/h2&gt;

&lt;p&gt;The product is quite ready to use, but I realize I focused primarily on the features I use myself to start dogfooding it as quickly as possible. So I haven't tested the agent on Windows or Linux, haven't tested it with a bare API key, and haven't implemented support for other model providers.&lt;/p&gt;

&lt;p&gt;Please report any issues and missing features - open an issue on GitHub and I'll do my best to address everything. Any feedback is very welcome.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opensource.respawn.pro/builder/quickstart/" rel="noopener noreferrer"&gt;Getting started guide&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I'd also love some stars on &lt;a href="https://github.com/respawn-app/builder" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>agenticcoding</category>
      <category>opensource</category>
      <category>codingagent</category>
      <category>builder</category>
    </item>
    <item>
      <title>I switched to a tiling window manager on macOS. Full breakdown: Aerospace, Amethyst, and Yabai</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Fri, 27 Mar 2026 11:08:25 +0000</pubDate>
      <link>https://dev.to/nek12/i-switched-to-a-tiling-window-manager-on-macos-full-breakdown-aerospace-amethyst-and-yabai-1c88</link>
      <guid>https://dev.to/nek12/i-switched-to-a-tiling-window-manager-on-macos-full-breakdown-aerospace-amethyst-and-yabai-1c88</guid>
      <description>&lt;p&gt;A week ago I ran into a problem. I started using more and more parallel workflows and more and more terminal tabs. I use Ghostty, but keeping up with that many tabs - even with its pretty solid UX - became simply impossible. I had 3 - 4 windows open with two to four tabs each running in parallel, and the entire workflow depended purely on window order and which one was currently in focus. &lt;strong&gt;For working with agents across multiple projects and multiple clients simultaneously - that approach just doesn't work anymore.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I tried Cmux - a new terminal emulator built on top of Ghostty - but it's still rough around the edges: no native tabs, ugly UI, no blur. The concept is solid, but the implementation needs more time. That said, I know plenty of people who are very happy with Cmux - if you're not as picky about UI/UX as I am, you'll really like its workspace-based model: each workspace is a separate tab with a single terminal session. You could also try Codex App, but since I have my own agent, I didn't want to switch to a desktop Electron app lacking the features I need just because I couldn't manage my windows properly. The solution for me turned out to be tiling window managers, which I'd been wanting to try for a long time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a tiling window manager
&lt;/h2&gt;

&lt;p&gt;A tiling window manager is something that came from Linux. It's a non-traditional window management system: instead of manually dragging and resizing windows, the window manager does all of that for you using various algorithms when a window is opened or its boundaries change. &lt;strong&gt;You say: I have X windows, I want them to fill the entire screen in the most convenient arrangement without overlap - and the manager does it for you.&lt;/strong&gt; A tiling WM simply won't allow a single pixel of empty space on the screen: it constantly resizes windows so they stay snapped together with a sensible distribution.&lt;/p&gt;

&lt;p&gt;On Mac there are only three options, because macOS has serious limitations - you can't replace the window manager entirely like you can on Linux. Your experience will always be second-rate compared to Linux, but there are working solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The main macOS bug you need to know about
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;There's a major bug in macOS that will seriously affect your workflow.&lt;/strong&gt; That's exactly why I say the experience is second-rate. All native tabs in macOS apps are treated as new windows by window managers - there's no way to tell whether it's a tab or a window. macOS always sends a signal that a new window was opened, even if what actually happened was a new tab opening in an existing window. This means your tiling WM will treat all open tabs as new windows, and empty space will keep multiplying on your screen depending on how many tabs you have open.&lt;/p&gt;

&lt;p&gt;There's no fix for this, and any app using native macOS tabs will behave badly in a tiling WM. Ghostty uses native tabs - which is exactly why I had to ditch the tab concept in my terminal entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  My current workflow
&lt;/h2&gt;

&lt;p&gt;I disabled tabs in Ghostty and even the window affordances, and moved as fully as possible to keyboard-driven control - part of a broader focus on moving away from the mouse. Right now I can't even see the close and minimize buttons in my terminal. Other apps keep them - like Android Studio, which spams modal windows and needs to be maximized to full screen, or Telegram, which I want to float on top of everything.&lt;/p&gt;

&lt;p&gt;For the terminal, I open, close, and minimize windows with hotkeys that are flexibly configured in Ghostty. &lt;strong&gt;The screen splitting logic works like this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I open the first terminal - it takes up the full screen.&lt;/li&gt;
&lt;li&gt;I open a second one via &lt;code&gt;CMD+N&lt;/code&gt; - the space immediately splits in half, each terminal takes exactly 50%.&lt;/li&gt;
&lt;li&gt;I open a third window - the half where focus was splits again. The new terminal opens in the same folder I was in at the moment of opening.&lt;/li&gt;
&lt;li&gt;Each additional window divides the space using binary space partitioning: the focused region splits along its longer side, depending on whether there's more horizontal or vertical space in that particular section of the screen.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This lets you spawn child windows from a single window in the same directory and launch things like Vim or multiple copies of a dev agent inside them.&lt;/p&gt;
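&lt;p&gt;The Ghostty side of this keyboard-driven setup can be sketched as a config like the one below. To be clear, these option names are my own assumptions about Ghostty's configuration, not taken from this article - verify each one against Ghostty's configuration reference before copying.&lt;/p&gt;

```ini
# Sketch of ~/.config/ghostty/config - option names are assumptions,
# check Ghostty's configuration reference for your version
window-decoration = false                 # hide close/minimize affordances
macos-titlebar-style = hidden             # no titlebar; the tiling WM owns placement
window-inherit-working-directory = true   # new windows open in the current folder
keybind = super+n=new_window              # CMD+N spawns a sibling window, not a tab
```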

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnek12.dev%2Fmedia%2Ftilingwm-1-1774604800.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnek12.dev%2Fmedia%2Ftilingwm-1-1774604800.webp" alt="Only up to 1/8th here" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At some point, as you can see in the screenshot above, I'm already close to the limit of what my brain can handle. So I separate workflows by project using standard macOS Spaces: a dedicated Space for my agent, for Respawn, for each client project - and one extra for general tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnek12.dev%2Fmedia%2Ftilingwm-2-1774605402.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fnek12.dev%2Fmedia%2Ftilingwm-2-1774605402.webp" alt="Spaces" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Three tiling WM options for macOS
&lt;/h2&gt;

&lt;p&gt;One thing upfront: all the configuration was done for me by my assistant OpenClaw, not by me. Tiling window managers are a pretty niche product aimed at developers - configuration is done via config files in various folders or through bash macros for hotkey management. I would have taken forever figuring it out on my own.&lt;/p&gt;

&lt;h3&gt;
  
  
  Aerospace
&lt;/h3&gt;

&lt;p&gt;The first option I wanted to try. It has a fairly convenient TOML configuration, though I still couldn't be bothered to dig into it. &lt;strong&gt;It's a clone of the i3 window manager from Linux&lt;/strong&gt; - those who know, know.&lt;/p&gt;

&lt;p&gt;The main problem for me: it's manual. Opening any window doesn't trigger an automatic relayout of the rest. I needed the window manager to handle all windows for me automatically, so I had to drop Aerospace.&lt;/p&gt;

&lt;p&gt;That said, it's stable, highly configurable, it has handy workspace functionality with fast switching and even some semblance of a user interface on Mac. Aerospace has window groups, each with its own layout mode. One group is split vertically into columns, another is split horizontally in half, a third isn't split at all. You manage them manually through hotkeys. The concept is really cool if you're willing to learn how to work with them and memorize all the necessary hotkeys. &lt;strong&gt;If you've already used a tiling WM on Linux - I'd recommend Aerospace first, it's the closest thing to a proper professional window manager experience on macOS.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Amethyst
&lt;/h3&gt;

&lt;p&gt;A completely different approach - more simplified and user-friendly. &lt;strong&gt;Amethyst has a full graphical interface&lt;/strong&gt; via an icon in the menu bar, where you can configure everything, though YAML configs are also supported and take priority.&lt;/p&gt;

&lt;p&gt;For me the biggest issue was that it doesn't support mouse control - everything is keyboard only. Sometimes my layout is complex enough that I just want to drag a window with the mouse instead of fumbling with arrow key combos. It also has poor support for modal windows and dialogs - they're treated as separate windows, and when a tiny "Do you want to send a voice message?" popup appears in Telegram, the entire screen gets wrecked: all windows shuffle around to make room for it. So I just stopped using it. &lt;strong&gt;But I'd recommend trying Amethyst first if you want to get into the world of tiling WMs&lt;/strong&gt; - the GUI makes the entry point much gentler.&lt;/p&gt;

&lt;h3&gt;
  
  
  Yabai
&lt;/h3&gt;

&lt;p&gt;The last contender, and the one I settled on. As far as I know, it's the oldest tiling WM project on macOS. It supports the Fibonacci layout (Binary Space Partitioning) that I wanted, and generally has all the features of the previous two combined.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Two downsides.&lt;/strong&gt; First, Yabai has no graphical interface at all - all configuration is through bash scripts and shell commands, it runs as a background service, and it's started and stopped via the terminal. On one hand, that's a barrier to entry; on the other, I just told my OpenClaw to set up the configuration I needed, and it set everything up in 10 seconds. A one-time setup overhead is much better than tolerating some other persistent bug.&lt;/p&gt;

&lt;p&gt;Second, Yabai has no built-in hotkey support - you need a separate utility that translates hotkeys into shell commands. OpenClaw recommended &lt;code&gt;skhd&lt;/code&gt;. I honestly don't even care what that is - I just asked the agent to set everything up and sent it a list of the hotkeys I needed, and it was all ready in a minute. An unexpected bonus of yabai was that it works really well with AI agents: my Gemini already knew absolutely everything about its configuration options, so setting it up from a single voice message in Telegram took 10 seconds.&lt;/p&gt;
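&lt;p&gt;To illustrate how &lt;code&gt;skhd&lt;/code&gt; glues hotkeys to yabai, here's a small hypothetical &lt;code&gt;.skhdrc&lt;/code&gt; - each line maps a key chord to a shell command:&lt;/p&gt;

```shell
# ~/.skhdrc - hypothetical bindings, adjust to taste

# focus windows with alt + h/j/k/l
alt - h : yabai -m window --focus west
alt - j : yabai -m window --focus south
alt - k : yabai -m window --focus north
alt - l : yabai -m window --focus east

# swap the focused window with its left neighbor
shift + alt - h : yabai -m window --swap west

# toggle floating for the focused window
alt - t : yabai -m window --toggle float
```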

&lt;p&gt;&lt;strong&gt;Yabai has a lot going for it.&lt;/strong&gt; Beyond supporting all the features I needed, it has a handy visual highlight when dragging windows with the mouse: you can immediately see exactly how the space will be split and where the window will end up. This makes mouse-based rearranging so simple and intuitive that, I'll admit, I now often skip the hotkeys and just use the mouse.&lt;/p&gt;

&lt;p&gt;Additionally, Yabai supports extended features when you partially disable macOS system security - you can remove shadows, rounded corners, and the slow transition animations between Spaces. &lt;strong&gt;The main advantage - it uses native macOS Spaces instead of its own&lt;/strong&gt;, which means support for different Spaces on different monitors. All the other options require you to disable this feature or they break. I didn't install the system extensions myself: every update requires a reboot and re-disabling security, which is more annoying than the slow animations.&lt;/p&gt;

&lt;p&gt;Yabai can also be disabled with a single shell command - handy when you want to play games or let someone else use the computer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have an agent that can configure everything for you and you want the best option with maximum control - go with Yabai.&lt;/strong&gt; But I wouldn't install it just to try it out: by default it ships with no configuration and barely works until you create the config files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Will I keep using a tiling window manager? Absolutely - at least until the problem of managing dozens of windows while working with agents gets solved. Maybe cmux will get polished up and I'll switch to that, but every day I like tiling WM more, and I notice I'm using the mouse less and less. &lt;strong&gt;I'll never go back to native macOS features like Stage Manager&lt;/strong&gt; - objectively, that thing is unusable.&lt;/p&gt;

</description>
      <category>macos</category>
      <category>tilingwm</category>
      <category>yabai</category>
      <category>aerospace</category>
    </item>
    <item>
      <title>Case Study: How I Sped Up Android App Start by 10x</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Thu, 29 Jan 2026 13:48:44 +0000</pubDate>
      <link>https://dev.to/nek12/case-study-how-i-sped-up-android-app-start-by-10x-1c03</link>
      <guid>https://dev.to/nek12/case-study-how-i-sped-up-android-app-start-by-10x-1c03</guid>
      <description>&lt;p&gt;At my last job, we had a problem with long load times, especially for the first launch of our Android app. ~18% of people were leaving before the app even opened. I was tasked with fixing this situation and achieving an app load time of &lt;strong&gt;under 2 seconds&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;At first glance, the task seemed impossible, because the app on startup hits the backend more than four times, registers a new anonymous user, exchanges keys for push notifications, initializes three different analytics SDKs, downloads remote configuration, downloads feature flags, downloads the first page of the home screen feed, downloads several videos that play on app start during feed scrolling, initializes multiple ExoPlayers at once, sends data to Firebase, and downloads assets (sounds, images, etc.) needed for the first game. How can you fit such a huge volume of work into less than two seconds?!&lt;/p&gt;

&lt;p&gt;After two weeks of meticulous work, I finally did it! And here's a complete breakdown of how I made it happen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Audit and Planning
&lt;/h2&gt;

&lt;p&gt;I conducted a full audit of the codebase and all logic related to app startup, profiled everything the app does on start using Android Studio tooling, ran benchmarks, wrote automated tests, and developed a complete plan for how to achieve a 2-second load time without sacrificing anything I described above.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing all of this took just one week&lt;/strong&gt; thanks to the fact that I planned everything out, and the team could parallelize the work among several developers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Did
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Switching from Custom Splash Screen to Android Splash Screen API
&lt;/h3&gt;

&lt;p&gt;We switched from a custom splash screen, which was a separate Activity, to the official Android Splash Screen API and integrated with the system splash screen. I've said this many times in my posts and in answers to questions, whenever I see developers drag in yet another custom splash Activity, or a separate loading screen in navigation: &lt;strong&gt;this is an antipattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Our Splash Activity contained a huge ViewModel with thousands of lines, had become a God Object where developers just dumped all the garbage they needed to use, and forced all the rest of the app logic to wait while it loaded. &lt;strong&gt;The problem with custom Activities is that they block the lifecycle and navigation, and take time to create and destroy.&lt;/strong&gt; Plus, to the user they look like a sharp, janky transition with the system animation that Android adds between Activities. This increases not only the actual load time, but also the load time as &lt;strong&gt;perceived&lt;/strong&gt; by the user.&lt;/p&gt;

&lt;p&gt;We completely removed the Splash Activity and deleted all two thousand lines of code it had. We switched to the Splash Screen API, which let us integrate with the system splash screen that Android shows starting from version 12, add an amazing animation there, and our own custom background.&lt;/p&gt;

&lt;p&gt;Because we were no longer blocking data loading for the main screen behind this custom Activity, the change gave a significant boost in actual performance. &lt;strong&gt;But the biggest win was that people stopped perceiving the app loading as actual loading.&lt;/strong&gt; They just saw a beautiful splash animation and thought their launcher was organizing the app start so nicely for them. And even if they thought the app was taking a long time to load, they were more likely to blame the system or the load on their phone (and most often, that's exactly what it is), not the app - because the system splash screen looks like part of the OS, not of the app.&lt;/p&gt;
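&lt;p&gt;The integration itself is tiny. A minimal sketch using the official &lt;code&gt;androidx.core:core-splashscreen&lt;/code&gt; library - &lt;code&gt;MainViewModel&lt;/code&gt; and its &lt;code&gt;isLoading&lt;/code&gt; flow are hypothetical names for illustration:&lt;/p&gt;

```kotlin
// MainActivity.kt - requires the androidx.core:core-splashscreen dependency
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.viewModels
import androidx.core.splashscreen.SplashScreen.Companion.installSplashScreen

class MainActivity : ComponentActivity() {

    // MainViewModel is assumed to expose isLoading: StateFlow<Boolean>
    private val viewModel: MainViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        // must be called before super.onCreate()
        val splashScreen = installSplashScreen()
        super.onCreate(savedInstanceState)

        // keep the system splash visible until the first data is ready,
        // instead of hosting a separate Splash Activity
        splashScreen.setKeepOnScreenCondition { viewModel.isLoading.value }
    }
}
```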

&lt;h3&gt;
  
  
  2. Developing a Startup Background Task System
&lt;/h3&gt;

&lt;p&gt;To get rid of this huge Splash Activity, I needed to develop a custom system of startup jobs that execute when the app launches. Pretty much any app has a lot of things that need to be done on startup: asynchronous remote config updates, reading data, initializing SDKs, feature flags, sending device or session analytics, loading services, checking background task status, checking push notifications, syncing data with the backend, authorization.&lt;/p&gt;

&lt;p&gt;For this, I made an integration with DI, where &lt;strong&gt;a smart Scheduler collects all jobs from all DI modules in the app and efficiently executes them with batching, retries, and error handling, while sending analytics and measuring the performance of all this.&lt;/strong&gt; Afterwards, we monitored which jobs took a lot of time in the background or failed often, and diagnosed and fixed the issues.&lt;/p&gt;

&lt;p&gt;Another architectural advantage of the system I developed is that developers no longer had to dump everything in one pile in the Splash Activity ViewModel. They got access to registering background jobs from anywhere in the app, from any feature module, for example. &lt;strong&gt;I believe that problems with app behavior aren't a question of developer skill, it's a question of the system&lt;/strong&gt;. This way, I helped the business by creating an efficient system for executing work on startup that's fully asynchronous and scales to hundreds of tasks, many years into the future.&lt;/p&gt;
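&lt;p&gt;A minimal sketch of the core idea - the names are mine, and the production system described above additionally had batching, retries, and DI multibinding wiring:&lt;/p&gt;

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.launch
import kotlin.system.measureTimeMillis

// Each feature module contributes jobs to a DI-collected set
interface StartupJob {
    val name: String
    suspend fun run()
}

// Runs everything off the critical path with error handling and timing
class StartupScheduler(
    private val jobs: Set<StartupJob>, // gathered from all DI modules
    private val scope: CoroutineScope,
) {
    fun start() = jobs.forEach { job ->
        scope.launch {
            val millis = measureTimeMillis {
                runCatching { job.run() }
                    .onFailure { /* report to analytics, schedule a retry */ }
            }
            // report "${job.name} took $millis ms" to your metrics backend
        }
    }
}
```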

&lt;h3&gt;
  
  
  3. Switching to Reactive Data Loading Model
&lt;/h3&gt;

&lt;p&gt;We historically used old patterns of imperative programming and one-time data loading. This was probably the most difficult part of the refactoring. But fortunately, we didn't have that much tied to imperative data loading specifically on app startup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I migrated to &lt;strong&gt;asynchronous data loading using Jetpack DataStore.&lt;/strong&gt; It has a nice non-blocking, coroutine-based API, and this significantly sped up loading of the config, user data, and auth tokens.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, I migrated to a reactive user management system. This was the hardest part at this stage. Our user object was being read from preferences on the main thread, and if it didn't exist, every screen had to fall back to the Splash Screen, blocking everything until a user account was created or retrieved from the backend and the tokens were updated.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;I redesigned this system to an asynchronous stream of updates for the user account, which automatically starts loading them on first access as early as possible on app startup.&lt;/strong&gt; And changed all the logic from blocking function calls that get the user to observing this stream.&lt;/p&gt;
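&lt;p&gt;A sketch of what such a stream can look like with plain coroutines - this is my simplified assumption of the shape, not the actual production code, and &lt;code&gt;UserApi&lt;/code&gt; is a hypothetical backend client:&lt;/p&gt;

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.flow.SharedFlow
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.flow.shareIn

data class User(val id: String)

// hypothetical backend client
interface UserApi {
    suspend fun getOrCreateUser(): User
}

class UserRepository(api: UserApi, scope: CoroutineScope) {
    // Starts loading on first collection, caches the result, and replays
    // it to late subscribers - screens observe instead of blocking
    val user: SharedFlow<User> = flow { emit(api.getOrCreateUser()) }
        .shareIn(scope, started = SharingStarted.Lazily, replay = 1)
}
```

&lt;p&gt;With this shape, push registration, for example, simply collects the flow and acts once the User ID arrives, without ever blocking startup.&lt;/p&gt;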

&lt;p&gt;Thus, also thanks to the fact that we use &lt;a href="https://github.com/respawn-app/FlowMVI" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt; - a reactive architecture, &lt;strong&gt;we got the ability to delegate loading status display to individual elements on the screen.&lt;/strong&gt; For example, the user avatar &amp;amp; sync status on the main screen loaded independently while the main content was loading asynchronously, and didn't block the main content from showing. And also, for example, push registration could wait in the background for the User ID to arrive from the backend before sending the token, instead of blocking the entire loading process.&lt;/p&gt;

&lt;p&gt;In the background, we were also downloading game assets - various images and sounds - but they were hidden behind the splash screen because they were required for the first game launch. Yet we didn't know how many videos a person would scroll through before deciding to play their first game, so we often had plenty of time to download these assets asynchronously and gate the game launch, not the app launch. The total asset load time could thus often be reduced to zero just by cleverly shifting not the loading, but the &lt;strong&gt;waiting&lt;/strong&gt;. I redesigned the asset loading architecture to use the newly developed background job system, and the game loading logic itself to asynchronously wait for these assets to finish downloading, using coroutines.&lt;/p&gt;
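&lt;p&gt;The "shift the waiting" trick boils down to a &lt;code&gt;Deferred&lt;/code&gt;. A minimal sketch with illustrative names:&lt;/p&gt;

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Deferred
import kotlinx.coroutines.async

class GameAssetLoader(scope: CoroutineScope) {
    // download starts immediately on app startup, in the background
    private val assets: Deferred<Unit> = scope.async { downloadAssets() }

    // game launch suspends only if the download hasn't finished yet;
    // by the time the user taps "play", it usually has
    suspend fun awaitAssets() = assets.await()

    private suspend fun downloadAssets() {
        // fetch sounds, images, etc. (omitted)
    }
}
```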

&lt;h3&gt;
  
  
  4. Working with the Backend
&lt;/h3&gt;

&lt;p&gt;Based on my profiling results, we had very slow backend calls, specifically when loading the video feed on the main screen.&lt;br&gt;
I checked the analytics and saw that most of our users were using the app with unstable internet connections. This is a social network, and people often watched videos or played games, for example, on the bus, when they had a minute of free time.&lt;/p&gt;

&lt;p&gt;I determined from benchmark results that our main bottleneck wasn't in the backend response time, but in how long data transfer took.&lt;/p&gt;

&lt;p&gt;I worked with the backend team, developed a plan for them, and helped execute it. We switched to HTTP/3 and TLS 1.3, added deflate compression, and implemented a new schema for the main page request, which reduced the amount of data transferred by over 80%, halved connection setup time, and increased data loading speed by ~2.3x.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Other Optimizations
&lt;/h3&gt;

&lt;p&gt;I also optimized all other aspects, such as:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Code precompilation: configured Baseline Profiles, Startup Profiles, Dex Layout Optimizations. Net ~300ms win, but only on slow devices and first start;&lt;/li&gt;
&lt;li&gt;Switched to lighter layouts in Compose to reduce UI thread burst load;&lt;/li&gt;
&lt;li&gt;Made a smart ExoPlayer caching system that creates them asynchronously on demand and stores them in a common pool;&lt;/li&gt;
&lt;li&gt;Implemented a local cache for paginated data, which allowed us to instantly show content, with smart replacement of still-unviewed items with fresh ones from the backend response. Huge win for UX.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On another project, in addition to all this, I managed to move analytics library loading - especially Firebase - to a background thread, which cut another ~150 milliseconds, but more on that in future newsletter posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Thus, I was able to reduce the app's cold start time by more than 10 times.&lt;/strong&gt; The cold first app start went from 17 seconds to ~1.7.&lt;/p&gt;

&lt;p&gt;After that, I tracked the impact of this change on the business, and the results were obvious. &lt;strong&gt;Instead of losing 18% of our users before onboarding started, we started losing less than 1.5%.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;Optimizing app startup time is quite delicate work and highly personalized to specific business needs and existing bottlenecks. Doing all this from scratch can take teams a lot of time and lead to unexpected regressions in production, so I now help teams optimize app startup in the format of a short-term audit. After analysis (2-5 days), the business gets a clear point-by-point plan that can be immediately given to developers/agencies + all the pitfalls to pay attention to. I can also implement the proposed changes as needed.&lt;/p&gt;

&lt;p&gt;If you want to achieve similar results, send your app name to &lt;a href="mailto:me@nek12.dev"&gt;me@nek12.dev&lt;/a&gt;, and I'll respond with three personalized startup optimization opportunities for your case.&lt;/p&gt;

</description>
      <category>android</category>
      <category>performance</category>
      <category>optimization</category>
      <category>kotlin</category>
    </item>
    <item>
      <title>AGP 9.0 is Out, and It's a Disaster. Here's a Full Migration Guide so you don't have to suffer</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Tue, 20 Jan 2026 13:20:25 +0000</pubDate>
      <link>https://dev.to/nek12/agp-90-is-out-and-its-a-disaster-heres-full-migration-guide-so-you-dont-have-to-suffer-p4f</link>
      <guid>https://dev.to/nek12/agp-90-is-out-and-its-a-disaster-heres-full-migration-guide-so-you-dont-have-to-suffer-p4f</guid>
      <description>&lt;p&gt;Yesterday I migrated a big 150,000-line project from AGP 8 to AGP 9. This was painful. &lt;strong&gt;This is probably the biggest migration effort that I had to undergo this year.&lt;/strong&gt; So, to save you from the pain and dozens of wasted hours that I had to spend, I decided to write a full migration guide for you. &lt;/p&gt;

&lt;p&gt;Be prepared: this migration will take some time, so you'd better start early. With AGP 9.0 already released, Google somehow expects you to have started using it yesterday. And they explicitly state that &lt;strong&gt;many of the existing APIs and workarounds you can employ right now to delay the migration will stop working in summer 2026.&lt;/strong&gt; So for big apps, you don't have much time left.&lt;/p&gt;

&lt;p&gt;Before we start, keep in mind that despite AGP 9.0 being a production release, &lt;strong&gt;a lot of official plugins, such as the Hilt plugin and KSP, do not support AGP 9.0.&lt;/strong&gt; If you use Hilt or KSP in your project, you will not be able to migrate without severe workarounds for now. If you're reading this later than January 2026, make sure to double-check whether Hilt and KSP have already shipped AGP 9.0 support.&lt;/p&gt;

&lt;p&gt;If you're not blocked and still here, here is what you need to do to migrate your KMP project to AGP 9.0.&lt;/p&gt;

&lt;h2&gt;
  
  
  The biggest migration point: Dropped support for build types
&lt;/h2&gt;

&lt;p&gt;Previously, we didn't have build types on other platforms in KMP, but Android still had them. In my opinion, they were one of the best features for security and performance that we had, but now they are no longer supported and there is no replacement. You literally have to remove all build type-based code.&lt;/p&gt;

&lt;p&gt;At first glance, this seems like a small problem, because teams usually don't split a lot of code between source sets. It usually revolves around some debug performance and security checks. But there is a hidden caveat. &lt;strong&gt;BuildConfig values will stop working completely, because they are using the build types under the hood.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My codebase had dozens and dozens of places with a static top-level variable &lt;code&gt;isDebuggable&lt;/code&gt;, delegating to &lt;code&gt;BuildConfig.DEBUG&lt;/code&gt;, which I checked all over the place to add extra rendering, debugging, and logging code, and to disable many of the security checks that only apply to release builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I used a static variable instead of something like &lt;code&gt;context.isDebuggable&lt;/code&gt; because R8, when optimizing the release build of the app, can then remove all of that extra debug code&lt;/strong&gt; without the need to create extra source sets, etc. This works well for KMP, where the release/debug split wasn't fully supported in the IDE for a long time. &lt;/p&gt;

&lt;p&gt;But now this is completely impossible. This is a huge drawback for me personally, because &lt;strong&gt;I had to execute a humongous migration to replace all of those static global variable usages with a DI-injected interface,&lt;/strong&gt; which is still implemented using the build configuration, but now in the application module, e.g.:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="c1"&gt;// in common domain KMP module&lt;/span&gt;
&lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;AppConfiguration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;debuggable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;backendUrl&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkDomain&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkPath&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// in android app module&lt;/span&gt;
&lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;AndroidAppConfiguration&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;AppConfiguration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;debuggable&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BuildConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;backendUrl&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BuildConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;BACKEND_URL&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkDomain&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BuildConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEEPLINK_DOMAIN&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkSchema&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BuildConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEEPLINK_SCHEMA&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;deeplinkPath&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BuildConfig&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;DEEPLINK_PATH&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This may result in a significant refactor, because I personally used the static &lt;code&gt;isDebuggable&lt;/code&gt; flag in places where context/DI isn't available. So, I had to sprinkle some terrible hacks with a global DI singleton object retrieval just to make the app work and then refactor the code.&lt;/p&gt;

&lt;p&gt;When you're done with this step, you must have &lt;strong&gt;zero usages of BuildConfig, build types, or manifest placeholders in library modules&lt;/strong&gt;. Note that codegen for build-time constants is still fine - just not the per-build-type Android one. If you want it, you can create a custom Gradle task that generates a Kotlin file for you in ~20 lines.&lt;/p&gt;
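&lt;p&gt;For reference, here is a rough sketch of such a codegen task in &lt;code&gt;build.gradle.kts&lt;/code&gt;. The task name, package, paths, and constant values are all illustrative:&lt;/p&gt;

```kotlin
// build.gradle.kts - generate build-time constants without BuildConfig
val generateBuildConstants by tasks.registering {
    val outDir = layout.buildDirectory.dir("generated/constants")
    outputs.dir(outDir)
    doLast {
        outDir.get().file("BuildConstants.kt").asFile.apply {
            parentFile.mkdirs()
            writeText(
                """
                package com.example.config

                object BuildConstants {
                    const val BACKEND_URL = "https://api.example.com"
                }
                """.trimIndent()
            )
        }
    }
}

kotlin {
    sourceSets.commonMain {
        // passing the task provider also wires up the task dependency
        kotlin.srcDir(generateBuildConstants)
    }
}
```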

&lt;p&gt;I know that devs love &lt;code&gt;BuildConfig.DEBUG&lt;/code&gt;, and I also used it to manage deep link domains and backend URL substitution for debug and release builds. All of that had to be refactored, which is why I urge you to stop using code patterns with these static &lt;code&gt;isDebuggable&lt;/code&gt; flags right now. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Also avoid the &lt;code&gt;Context.isDebuggable&lt;/code&gt; boolean property, because that's a runtime check which fraudulent actors can override, so it isn't reliable.&lt;/strong&gt; Don't rely on it for security. Remember: debug code should only be included in debug &lt;strong&gt;builds&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Remove all NDK and JNI code from library modules
&lt;/h2&gt;

&lt;p&gt;The next step you have to take is remove all the NDK and JNI code that you have in library modules. I have a couple of places where I need to run some C++ code in my app, and those were previously located in the Android source set of the KMP library module where they were needed, because Apple source set didn't need that native code, but Android did. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;An &lt;a href="https://issuetracker.google.com/issues/439746703#comment6" rel="noopener noreferrer"&gt;official statement&lt;/a&gt; from Google says that NDK usage, C++ code, and JNI in library modules will not be supported at all since AGP 9.0.&lt;/strong&gt; So now the only way to preserve that code is to move it to the application module. Again, this is a huge drawback for me, but because Google didn't give us any alternatives and didn't want to listen, you have to comply if you do not want to get stuck on a deprecated AGP forever. &lt;/p&gt;

&lt;p&gt;So before you even try to migrate to AGP 9.0, &lt;strong&gt;make sure you create an interface abstraction in your library module that will act as a proxy for all your NDK-enabled code.&lt;/strong&gt; Then the implementation of that interface can live in the application module along with all the C++ code and inject the implementation into the DI graph so that your library module in the KMP code can just use that interface. At least this is what I did. This is the simplest solution to the problem, but if you know a better one, let me know.&lt;/p&gt;
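&lt;p&gt;Sketched out, the proxy looks like this - the interface, method, and library names are hypothetical, not from a real API:&lt;/p&gt;

```kotlin
// In the KMP library module: no NDK here, just a contract
interface NativeCrypto {
    fun digest(data: ByteArray): ByteArray
}

// In the Android application module, next to the C++ sources
class JniNativeCrypto : NativeCrypto {
    override fun digest(data: ByteArray): ByteArray = nativeDigest(data)

    // implemented in C++ and bound via JNI
    private external fun nativeDigest(data: ByteArray): ByteArray

    companion object {
        init { System.loadLibrary("appcrypto") } // hypothetical .so name
    }
}

// Then bind NativeCrypto -> JniNativeCrypto in the DI graph of the app
// module, so library modules only ever see the interface.
```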

&lt;h2&gt;
  
  
  The actual migration: Remove the old Kotlin Android plugin
&lt;/h2&gt;

&lt;p&gt;Now we are finally finishing up with all the refactorings and approaching the actual migration. Start by removing the old Kotlin Android plugin and migrating to the new one. I had convention plugins set up, so it was reasonably easy for me. Read &lt;a href="https://developer.android.com/build/migrate-to-built-in-kotlin" rel="noopener noreferrer"&gt;this docs page&lt;/a&gt; for what exactly to do. &lt;/p&gt;

&lt;p&gt;When you remove it, also add the new plugin for Android Kotlin Multiplatform support: &lt;code&gt;com.android.kotlin.multiplatform.library&lt;/code&gt;. Without it, your build will stop working: the new DSL we need to migrate to is only provided by this plugin.&lt;/p&gt;

&lt;p&gt;To fix Gradle sync:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Update from the deprecated Android top level DSL &lt;code&gt;android { }&lt;/code&gt; AND the deprecated &lt;code&gt;kotlin.androidLibrary {}&lt;/code&gt; DSL to the new unified &lt;code&gt;kotlin.android { }&lt;/code&gt; DSL.&lt;/strong&gt; You should be able to copy-paste all of your previous configuration, like compile SDK, minimum SDK, and all of the other Android setup options which you previously had in the top-level Android block, and merge it with the code that you previously had in the &lt;code&gt;kotlin.androidLibrary&lt;/code&gt; KMP setup. So now it's just a single place. Note that library modules no longer support target SDK, which will only be governed by the app module.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;     id("sharedBuild")
     id("detektConvention")
     kotlin("multiplatform")
&lt;span class="gd"&gt;-    id("com.android.library")
&lt;/span&gt;&lt;span class="gi"&gt;+    id("com.android.kotlin.multiplatform.library")
&lt;/span&gt; }
&lt;span class="err"&gt;
&lt;/span&gt; kotlin {
     configureMultiplatform(this)
 }
&lt;span class="err"&gt;
&lt;/span&gt;&lt;span class="gd"&gt;-android {
-    configureAndroidLibrary(this) 
-}
-
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See how I had an extension function from my convention plugin, &lt;code&gt;configureAndroidLibrary&lt;/code&gt;, and removed it? We can now completely ditch it - everything lives inside the &lt;code&gt;kotlin&lt;/code&gt; block (&lt;code&gt;configureMultiplatform&lt;/code&gt; in the example above).&lt;/p&gt;

&lt;p&gt;Next up, let's update that &lt;code&gt;configureMultiplatform&lt;/code&gt; function. This is based on &lt;a href="https://developer.android.com/kotlin/multiplatform/plugin#migrate" rel="noopener noreferrer"&gt;this official doc page&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;-    if (android) androidTarget {
-        publishLibraryVariants("release")
&lt;/span&gt;&lt;span class="gi"&gt;+    if (android) android {
+        namespace = this@configureMultiplatform.namespaceByPath()
+        compileSdk = Config.compileSdk
+        minSdk = Config.minSdk
+        androidResources.enable = false
+        lint.warning.add("AutoboxingStateCreation")
+        packaging.resources.excludes.addAll(
+            listOf(
+                "/META-INF/{AL2.0,LGPL2.1}",
+                "DebugProbesKt.bin",
+                "META-INF/versions/9/previous-compilation-data.bin",
+            ),
+        )
+        withHostTest { isIncludeAndroidResources = true }
&lt;/span&gt;         compilerOptions {
             jvmTarget.set(Config.jvmTarget)
             freeCompilerArgs.addAll(Config.jvmCompilerArgs)
         }
&lt;span class="gi"&gt;+        optimization.consumerKeepRules.apply {
+            publish = true
+            file(Config.consumerProguardFile)
+        }
&lt;/span&gt;     }
     // ... 
     sourceSets {
         commonTest.dependencies {
             implementation(libs.requireBundle("unittest"))
         }
&lt;span class="gd"&gt;-        if (android) androidUnitTest {
-            dependencies {
&lt;/span&gt;&lt;span class="gi"&gt;+        if (android) {
+            val androidHostTest = findByName("androidHostTest")
+            androidHostTest?.dependencies {
&lt;/span&gt;                 implementation(libs.requireLib("kotest-junit"))
             }
         }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In summary, what has changed here is that we had an &lt;code&gt;androidTarget&lt;/code&gt; block which contained a small portion of our library module setup. That was replaced by the &lt;code&gt;android&lt;/code&gt; block (not top-level, I know, confusing). And now we just put everything from our previous Android top-level block in here, and we removed the target SDK configuration, which was previously available here. Some syntax changed a bit, but this is only because I'm using convention plugins, so they don't have all the same nice DSLs that you would have if you just configured this manually in your target module. &lt;/p&gt;

&lt;p&gt;As you see, I put the new packaging excludes workarounds that have been there for ages into this new place. I moved the Lint warning configuration (that was used by Compose). &lt;strong&gt;Don't forget to disable Android resources explicitly in this block because most of your KMP modules will not actually need Android resources,&lt;/strong&gt; so I highly recommend you enable them on-demand in your feature modules where you actually need them. This will speed up the build times. &lt;/p&gt;

&lt;p&gt;You can also see that instead of &lt;code&gt;androidUnitTest&lt;/code&gt; configuration that we had, we just have &lt;code&gt;androidHostTest&lt;/code&gt;, which is basically the same Android unit tests you're used to. Host means that they run on the host machine, which is your PC. This is just a small syntax change, annoying but bearable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't forget to apply the consumer keep rules here,&lt;/strong&gt; because a widely used best practice is to keep the consumer rules that are used by a particular library module together in the same place instead of dumping all of that into the application module. I was personally not happy about moving all of my consumer rules to the ProGuard rules file of the application module, so I just enabled consumer keep rules for every library module I have. This is especially useful for stuff like network modules, database modules, where I still have custom keep rules, and for modules which are supposed to use NDK and C++ code. &lt;strong&gt;If you don't do this, the new plugin will no longer recognize and use your consumer keep rules,&lt;/strong&gt; even if you place them there, so this is pretty important, as it will only surface on a release build, in runtime (possibly even in prod).&lt;/p&gt;

&lt;p&gt;Now, as you might have guessed, the top-level &lt;code&gt;android&lt;/code&gt; block is no longer available to you. There are no build variants or build flavors in those KMP library modules. So if you followed my instructions earlier and already refactored all of those usages, moved them to the application module, and injected the necessary flags and variables via DI, you will hopefully not have a lot of trouble with this. But if you still use some BuildConfig values, there is now no place to declare them. The same goes for res values, manifest placeholders, etc. None of that is supported anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  Important note for Compose Multiplatform resources
&lt;/h2&gt;

&lt;p&gt;Previously, we disabled Android resources. But &lt;strong&gt;you must still enable Android resource processing in every KMP module that uses CMP resources - your feature and UI modules - or your app will crash at runtime.&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;kotlin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;androidLibrary&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;androidResources&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;enable&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add this block to every module that uses Compose Multiplatform resources. I had a convention plugin for feature modules, which made this super easy. More details are in the &lt;a href="https://youtrack.jetbrains.com/issue/CMP-9547" rel="noopener noreferrer"&gt;bug ticket on YouTrack&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replace Android Unit Test with Android Host Test
&lt;/h2&gt;

&lt;p&gt;The next step is to replace Android unit test dependency declarations with Android host test declarations. You can do this with an IDE search-and-replace using a simple regex.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight diff"&gt;&lt;code&gt;&lt;span class="gd"&gt;-    androidUnitTestImplementation(libs.bundles.unittest)
&lt;/span&gt;&lt;span class="gi"&gt;+    androidHostTestImplementation(libs.bundles.unittest)
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You'll have to do this for every single module that has Android unit test dependencies. I unfortunately didn't think of a convention plugin, so I had to run this on literally every single build.gradle file.&lt;/p&gt;

&lt;p&gt;I also had to refactor Gradle files a little bit because I used top-level &lt;code&gt;implementation&lt;/code&gt; and &lt;code&gt;api&lt;/code&gt; dependency declaration DSL functions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;dependencies&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;implementation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// wrong&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This wasn't correct anyway, and it was incredibly confusing, because that "implementation" meant the Android implementation, not the KMP one - so this is a good change.&lt;/p&gt;

&lt;p&gt;I'm also using &lt;a href="https://github.com/respawn-app/FlowMVI" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt; in my project. Unfortunately, the FlowMVI debugger relies on Ktor, serialization, and some other relatively heavy dependencies that were previously included only in the Android debug source set. I had to ditch that and install the FlowMVI debugger behind the runtime-gated flag from DI that I mentioned above. This doesn't make me happy; in the future I may improve it by moving the plugin installation to the Android app module, since FlowMVI makes extending business logic super easy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add build script dependency on Kotlin
&lt;/h2&gt;

&lt;p&gt;Finally, I recommend adding a new build script dependency on Kotlin, to keep your build-time and runtime Kotlin versions aligned. I wanted that because I have a single version catalog definition. You do it in the &lt;strong&gt;top-level build.gradle.kts&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nf"&gt;buildscript&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;dependencies&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;classpath&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;libs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;kotlin&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;gradle&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// org.jetbrains.kotlin:kotlin-gradle-plugin&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Small quick optional wins at the end
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ditch &lt;code&gt;android.lint.useK2Uast=true&lt;/code&gt; if you had it - it's deprecated now.&lt;/li&gt;
&lt;li&gt;An optional step is to adopt the new R8 optimizations described in the document I linked above. We had manual ProGuard rules for removing Kotlin null checks; now this ships with AGP, so I just migrated to the new syntax (&lt;code&gt;-processkotlinnullchecks remove&lt;/code&gt;).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Honestly, this migration was a huge pain for me. I'm not going to claim I have perfect code, but my Gradle setup was decent. &lt;strong&gt;If you're a developer and this all sounds incredibly overwhelming and like a huge effort - you're right.&lt;/strong&gt; Because I've already done it, I can help your team migrate to the new AGP much faster and save you the effort. I recently started taking on projects as a consultant and KMP migration advisor, so consider pointing your boss to &lt;a href="https://nek12.dev" rel="noopener noreferrer"&gt;nek12.dev&lt;/a&gt; if you liked this write-up and want my help.&lt;/p&gt;

</description>
      <category>android</category>
      <category>kotlin</category>
      <category>agp</category>
      <category>gradle</category>
    </item>
    <item>
      <title>What are AI agent skills and how to use them - complete breakdown with examples</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Mon, 12 Jan 2026 19:09:46 +0000</pubDate>
      <link>https://dev.to/nek12/what-are-ai-agent-skills-and-how-to-use-them-complete-breakdown-with-examples-3e0e</link>
      <guid>https://dev.to/nek12/what-are-ai-agent-skills-and-how-to-use-them-complete-breakdown-with-examples-3e0e</guid>
      <description>&lt;h2&gt;
  
  
  What are agent skills and why do you need them?
&lt;/h2&gt;

&lt;p&gt;A relatively new thing in the world of AI agents is the so-called Skills system.&lt;/p&gt;

&lt;p&gt;Recently I started seriously developing skills. I even created a &lt;a href="https://github.com/respawn-app/claude-plugin-marketplace" rel="noopener noreferrer"&gt;marketplace&lt;/a&gt; of Claude plugins for Respawn, where I keep a skill for &lt;a href="https://github.com/respawn-app/ksrc" rel="noopener noreferrer"&gt;ksrc&lt;/a&gt; and a skill for &lt;a href="https://opensource.respawn.pro/FlowMVI/" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt;. I'm increasingly using and creating different skills, and many of you are asking: "What even is this?". I also see articles online that explain skills incorrectly and give bad advice on creating and using them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills were originally invented by Anthropic as part of their SDK for Claude Code.&lt;/strong&gt; Essentially, agent skills don't bring anything revolutionary - they're still just folders with markdown files. &lt;strong&gt;What matters is how they work: through so-called progressive disclosure of your agent's context.&lt;/strong&gt; I've already said that the most important thing when working with agents is context engineering, and this is another way to use context more effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do skills work?
&lt;/h2&gt;

&lt;p&gt;A skill is defined by one main file (&lt;code&gt;SKILL.md&lt;/code&gt;) with a specific frontmatter structure. The frontmatter contains the skill name (what it teaches) and a (very!) short description that ideally explains when to use the skill and what it can teach the model.&lt;/p&gt;
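
&lt;p&gt;As an illustration, here is roughly what that frontmatter looks like - only the &lt;code&gt;name&lt;/code&gt; and &lt;code&gt;description&lt;/code&gt; fields are part of the format; the wording is my own:&lt;/p&gt;

```markdown
---
name: flowmvi
description: Use this skill when writing code with FlowMVI - creating contracts, stores, or custom plugins.
---

# FlowMVI

Everything below the frontmatter is read only after the model decides to use the skill.
```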

&lt;p&gt;&lt;strong&gt;When your agent wrapper notices that you have such a file in your skills folder, it parses its header and includes it immediately in the agent's context&lt;/strong&gt; (literally as part of agents.md or claude.md). This way you get a hook for the LLM: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Use this skill when you're writing code with FlowMVI.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Models have become smart enough to understand, from a single line of text, in which development context they need to read and use this skill.&lt;/strong&gt; The skills system takes advantage of this. For me it's like casting a fishing line: the model sees the bobber on the surface, and then it can pull up a whole pile of documentation if needed and search it for anything it wants.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skill structure
&lt;/h3&gt;

&lt;p&gt;The header is written in the &lt;code&gt;SKILL.md&lt;/code&gt; file. In this file you describe your skill's structure: what folders exist, what files exist, and the main points about usage.&lt;/p&gt;

&lt;p&gt;For example, my skill for &lt;a href="https://github.com/respawn-app/FlowMVI" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt; gives the model the ability to view the documentation index, where all the nuances of using a specific feature (state management, creating plugins) are laid out. But the &lt;code&gt;SKILL.md&lt;/code&gt; file itself, which the model reads fully if it decides to use the skill, covers the basics: "FlowMVI is an architectural framework, here's how to quickly make a contract, here's how to write features, here's the DSL, and here's where to look up function signatures".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So context disclosure happens in stages:&lt;/strong&gt; the model first sees a super-brief two-line header, then reads the main &lt;code&gt;SKILL.md&lt;/code&gt; file (a few hundred lines), and only then decides: "Aha, I understand, now I need to read the next file - say, the one on creating custom plugins". The model then goes to the appropriate folder next to &lt;code&gt;SKILL.md&lt;/code&gt; or, as in my case, makes internet requests to fetch fresh documentation.&lt;/p&gt;
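
&lt;p&gt;On disk, a skill that follows this staged disclosure might be laid out like this (file names other than &lt;code&gt;SKILL.md&lt;/code&gt; are illustrative):&lt;/p&gt;

```text
flowmvi-skill/
├── SKILL.md      # two-line frontmatter header + a few hundred lines of quick-start
├── plugins.md    # deep dive on custom plugins, read only on demand
├── state.md      # deep dive on state management, read only on demand
└── scripts/      # optional helpers the model can run (e.g. curl fresh docs)
```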

&lt;p&gt;This way we achieve minimal context spending on knowledge for the model, unlike, for example, MCP or AGENTS.md, which are just thrown into the model's context as one huge chunk of text, regardless of whether they're needed or not.&lt;/p&gt;

&lt;p&gt;Why does your model need to know how to deploy your backend to production if it's currently doing minor fixes after review? That's the whole point of skills: &lt;strong&gt;don't give the model everything at once and clutter the context; gradually reveal only the information it needs&lt;/strong&gt;, and let the model use its excellent search and command-line skills to find exactly what it's looking for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why do you need to create skills?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;You need skills to transfer specialized or fresh knowledge to the model in progressive form - knowledge that's not yet included in the model's training data.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You can create skills for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proprietary SDKs and how to work with them (in that case, store the skill in your repository)&lt;/li&gt;
&lt;li&gt;New APIs that came out only a few months ago, and the model still can't handle working with them&lt;/li&gt;
&lt;li&gt;Niche frameworks that aren't yet in training data&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What NOT to include in skills
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The most important thing is what you shouldn't create skills for: things the model likely already knows.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You don't need to create a skill for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"How to compile Kotlin code"&lt;/li&gt;
&lt;li&gt;"How to write SwiftUI"&lt;/li&gt;
&lt;li&gt;"How to work with OpenAI API"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Models already know this perfectly well based on millions of lines of code and all the documentation that exists on the internet. I've seen skills with absolutely useless content. And if the model reads such a skill, it will only work worse, because its context will be clogged with irrelevant or repetitive information that doesn't help the model but distracts it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before creating a skill, think: can the model already know what I'm trying to tell it? And completely cut out everything the model already knows from your skill.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For example, with &lt;a href="https://github.com/respawn-app/ksrc" rel="noopener noreferrer"&gt;ksrc&lt;/a&gt; the model doesn't need to know that ksrc is written in Go, how to use escape sequences, how to use sed syntax, how to use ripgrep, and how to make bash command chains. The model does this perfectly, and it doesn't need to be repeated. &lt;strong&gt;But what the model doesn't know is how and why to work with ksrc.&lt;/strong&gt; So that's exactly what I included in the &lt;code&gt;skill.md&lt;/code&gt; file for ksrc, and nothing more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you need to include something the model already knows, you can just reference it in one or two words.&lt;/strong&gt; For example, instead of describing in detail how grep search syntax works and what arguments are supported, just write: "Ripgrep arguments are fully supported" or "Add --rg-args at the end to filter results". That will be enough.&lt;/p&gt;
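
&lt;p&gt;In &lt;code&gt;SKILL.md&lt;/code&gt; form, that whole section can collapse to a couple of lines (wording is illustrative):&lt;/p&gt;

```markdown
## Filtering results

Ripgrep arguments are fully supported: add `--rg-args` at the end of a search to filter results.
```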

&lt;p&gt;The skill for FlowMVI works the same way. Models have long known what MVI frameworks are, but they don't yet know FlowMVI's specific syntax and might not know what changed in new versions. So my skill contains not a single word about what intents and side effects are, or what belongs in MVI state - the internet has been saturated with that for decades, and the model knows it perfectly. Instead, the skill consists of function signatures, the configuration parameters available in various DSL functions, and the common mistakes the model makes, in my experience, when working with the library. That is, &lt;strong&gt;it specifically covers the model's weak spots while spending a minimal number of tokens.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  When should you create skills?
&lt;/h2&gt;

&lt;p&gt;Usually this is needed when your &lt;code&gt;agents.md&lt;/code&gt; files are growing, or when you're releasing some framework that's expected to be used by models too.&lt;/p&gt;

&lt;p&gt;For example, ksrc is intended for use only by models, developers don't need to use it. FlowMVI is used by both developers and models, so the skill is more of a nice bonus. &lt;strong&gt;But in any case, if a model will use this utility, it would be great if you shipped a skill that can be installed right away.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why? The model will either not use the utility at all (simply because it doesn't know about its existence), or won't use it correctly (because without reading documentation it won't know the syntax). &lt;strong&gt;This is a good way to reduce &lt;code&gt;agents.md&lt;/code&gt; and reduce tokens spent on manual documentation search by models.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you see that the model is stumbling on something - for example, writing code that doesn't compile, or incorrectly using the new Compose API, or can't make a Glance widget correctly - you can gather the documentation, pack it into a skill, and if you do it right, this can solve your problems with model performance when working with a specific technology.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create skills
&lt;/h2&gt;

&lt;p&gt;Skills are essentially repackaged documentation for a framework. So start by creating a good header and &lt;code&gt;SKILL.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Most often, your coding agent (for example, Codex) already has a built-in flow for creating skills. You just use a skill for creating skills (so meta 😅). Codex, for example, will create the whole skill with everything you need: you just tell it where to get the framework documentation, and it will work on packing it into a skill.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;One caveat: after creating a skill, you still need to go through all the files it created and clean them up.&lt;/strong&gt; Or prompt the model up front to specifically describe the things that you already know from practice are pain points or difficult, complex features to use. By default, the model will just rewrite the documentation into &lt;code&gt;SKILL.md&lt;/code&gt; and may also miss many points.&lt;/p&gt;

&lt;p&gt;For example, with FlowMVI I had to redo a lot, because I wanted the model to pull the most up-to-date documentation itself via curl, leaving only function signatures in &lt;code&gt;SKILL.md&lt;/code&gt; and the skill folder.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with the template that the model will create for you in your wrapper, and then refine it.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;My opinion is that skills are a very cool way to save context. &lt;strong&gt;And this feature landed on the LLM capability curve really well, because models have become so easy to prompt that a single line is enough for them to execute instructions perfectly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;So you just create a skill, write one line in it - "Use ksrc to search sources" - and you can count on the model being smart enough to understand on its own when to use it. &lt;strong&gt;This costs you very little, saves significant context, and improves model performance.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I often hear people complaining: "My model doesn't write code that compiles" or "It can't work with some niche libraries" - and the answer was on the surface all along: as usual, it's just markdown files.&lt;/p&gt;

</description>
      <category>aiagents</category>
      <category>llm</category>
      <category>skills</category>
      <category>contextengineering</category>
    </item>
    <item>
      <title>Agents and Gradle Don't Get Along - I Fixed It in Two Commands</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Tue, 06 Jan 2026 17:37:58 +0000</pubDate>
      <link>https://dev.to/nek12/agents-and-gradle-dont-get-along-i-fixed-it-in-two-commands-b2e</link>
      <guid>https://dev.to/nek12/agents-and-gradle-dont-get-along-i-fixed-it-in-two-commands-b2e</guid>
      <description>&lt;p&gt;Folks, today I'm excited to introduce my new project!&lt;/p&gt;

&lt;p&gt;First, I should say that I primarily write in Kotlin. &lt;strong&gt;In Kotlin, we have a problem with viewing and exploring the source code of third-party libraries.&lt;/strong&gt; I've used TypeScript, Go, and Kotlin, and I envy those who code in TypeScript: agents working with it can simply dive into &lt;code&gt;node_modules&lt;/code&gt;, ripgrep that directory, and find the needed code literally instantly, right in the downloaded caches.&lt;/p&gt;

&lt;p&gt;Compared to this, Kotlin - especially multiplatform - is torture. &lt;strong&gt;Agents previously couldn't view source code at all; they just hallucinated it.&lt;/strong&gt; Now they've gotten smarter and try to solve the problem themselves via filesystem search when they don't know a library's API or need to find the right function overload. But even with all permissions, warm caches, and all dependencies already downloaded, this is very difficult for them. Finding a single dependency can take 10-15k context tokens, so...&lt;/p&gt;

&lt;h2&gt;
  
  
  Introducing &lt;a href="https://github.com/respawn-app/ksrc" rel="noopener noreferrer"&gt;ksrc&lt;/a&gt;!
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;This is a CLI utility that lets agents view the source code of any Kotlin library in a single line.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With ksrc, your agent will check source code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;ksrc search &lt;span class="s2"&gt;"pro.respawn.apiresult:core*"&lt;/span&gt; &lt;span class="nt"&gt;-q&lt;/span&gt; &lt;span class="s2"&gt;"recover"&lt;/span&gt;
pro.respawn.apiresult:core:2.1.0!/commonMain/pro/respawn/apiresult/ApiResult.kt:506:42:public inline infix fun &amp;lt;T&amp;gt; ApiResult&amp;lt;T&amp;gt;.recover&lt;span class="o"&gt;(&lt;/span&gt;
...

&lt;span class="nv"&gt;$ &lt;/span&gt;ksrc &lt;span class="nb"&gt;cat &lt;/span&gt;pro.respawn.apiresult:core:2.1.0!/commonMain/pro/respawn/apiresult/ApiResult.kt &lt;span class="nt"&gt;--lines&lt;/span&gt; 480,515
...
@JvmName&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"recoverTyped"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
public inline infix fun &amp;lt;reified T : Exception, R&amp;gt; ApiResult&amp;lt;R&amp;gt;.recover&lt;span class="o"&gt;(&lt;/span&gt;
    another: &lt;span class="o"&gt;(&lt;/span&gt;e: T&lt;span class="o"&gt;)&lt;/span&gt; -&amp;gt; ApiResult&amp;lt;R&amp;gt;
&lt;span class="o"&gt;)&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2 commands -&amp;gt; source found, with filtering by version and dependency, and automatic downloading and unpacking.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What did it look like without ksrc?
&lt;/h2&gt;

&lt;p&gt;Without ksrc, in practice the search looked like this for my agents:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;rg &lt;span class="nt"&gt;--files&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"ApiResult.kt"&lt;/span&gt; /Users/nek/.gradle/caches

&lt;span class="nv"&gt;$ &lt;/span&gt;rg &lt;span class="s2"&gt;"ApiResult&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;.recover|recover&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;("&lt;/span&gt; /Users/nek/Developer/Respawn/Backend

&lt;span class="nv"&gt;$ &lt;/span&gt;rg &lt;span class="nt"&gt;--files&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"*apiresult*"&lt;/span&gt; /Users/nek/.gradle/caches

&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /Users/nek/.gradle/caches
9.2.1       CACHEDIR.TAG    journal-1
build-cache-1   jars-9      modules-2

&lt;span class="nv"&gt;$ &lt;/span&gt;rg &lt;span class="nt"&gt;--files&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"*apiresult*"&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1

&lt;span class="nv"&gt;$ &lt;/span&gt;rg &lt;span class="nt"&gt;--files&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; &lt;span class="s2"&gt;"*apiresult*"&lt;/span&gt; /Users/nek/.gradle/caches/jars-9

&lt;span class="nv"&gt;$ &lt;/span&gt;fd &lt;span class="nt"&gt;-i&lt;/span&gt; apiresult /Users/nek/.gradle/caches/modules-2
/Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/
/Users/nek/.gradle/caches/modules-2/metadata-2.107/descriptors/pro.respawn.apiresult/
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult
core            core-iosarm64       core-jvm
core-android        core-iossimulatorarm64  core-wasm-js
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm
2.1.0
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm/2.1.0
193901bf1e2ecee192d92363d99b2e056467be28
938d7fb2b3cbd2806baac501f75182b9734ee5e1
ac2afbf602985d4257dcae7a6b90713585291627
b8101c9a149083295b708f4010e7c501840c5d8d
&lt;span class="nv"&gt;$ &lt;/span&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm/2.1.0/193901bf1e2ecee192d92363d99b2e056467be28
core-jvm-2.1.0-sources.jar
&lt;span class="nv"&gt;$ &lt;/span&gt;jar tf /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm/2.1.0/193901bf1e2ecee192d92363d99b2e056467be28/core-jvm-2.1.0-sources.jar | rg &lt;span class="s2"&gt;"ApiResult"&lt;/span&gt;
commonMain/pro/respawn/apiresult/ApiResult.kt
&lt;span class="nv"&gt;$ &lt;/span&gt;unzip &lt;span class="nt"&gt;-p&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm/2.1.0/193901bf1e2ecee192d92363d99b2e056467be28/core-jvm-2.1.0-sources.jar commonMain/pro/respawn/apiresult/ApiResult.kt | rg &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"recover"&lt;/span&gt;
...
&lt;span class="nv"&gt;$ &lt;/span&gt;unzip &lt;span class="nt"&gt;-p&lt;/span&gt; /Users/nek/.gradle/caches/modules-2/files-2.1/pro.respawn.apiresult/core-jvm/2.1.0/193901bf1e2ecee192d92363d99b2e056467be28/core-jvm-2.1.0-sources.jar commonMain/pro/respawn/apiresult/ApiResult.kt | &lt;span class="nb"&gt;nl&lt;/span&gt; &lt;span class="nt"&gt;-ba&lt;/span&gt; | &lt;span class="nb"&gt;sed&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s1"&gt;'490,510p'&lt;/span&gt;
...
public inline infix fun &amp;lt;reified T : Exception, R&amp;gt; ApiResult&amp;lt;R&amp;gt;.recover&lt;span class="o"&gt;(&lt;/span&gt;
   another: &lt;span class="o"&gt;(&lt;/span&gt;e: T&lt;span class="o"&gt;)&lt;/span&gt; -&amp;gt; ApiResult&amp;lt;R&amp;gt;
&lt;span class="o"&gt;)&lt;/span&gt;
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;15 (!) steps, tons of thinking tokens, tons of garbage in context, and random unarchived junk files in your system - just to see a single method!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All because of Gradle's "brilliant" cache layout: agents have to dig through the hashed folders Gradle creates - thousands of directories under &lt;code&gt;modules-2&lt;/code&gt; and so on. The process looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the needed dependency knowing only the package name (the artifact's location often differs from it)&lt;/li&gt;
&lt;li&gt;Navigate to that folder, find the downloaded version&lt;/li&gt;
&lt;li&gt;Select the version that's specifically used in the project (to do this, you need to check which dependencies already exist in the project)&lt;/li&gt;
&lt;li&gt;Find the ZIP archive with sources, if it exists (if it doesn't, you need to write your own Gradle task to download them - anew for each project)&lt;/li&gt;
&lt;li&gt;Unarchive the downloaded archive to a temporary directory&lt;/li&gt;
&lt;li&gt;Only after all that, grep through it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if there are no sources at all, you have to resort to something like &lt;code&gt;javap&lt;/code&gt; to disassemble the class files, just to see what a single function in some library from Google looks like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My utility packs all the steps described above into two commands: &lt;code&gt;ksrc search&lt;/code&gt; and &lt;code&gt;ksrc cat&lt;/code&gt;&lt;/strong&gt; - and outputs a beautifully formatted result that an agent can combine with other commands and enhance with scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Integration with AI Agents
&lt;/h2&gt;

&lt;p&gt;I've also prepared a Claude plugin with a skill for your agents, so they can use ksrc on their own the moment they need it, without your participation or prompting - plus a skill for Codex.&lt;/p&gt;

&lt;p&gt;Codex wrote this utility for itself, completely independently, in Go - a language I understand absolutely nothing about and had never written or read a single line of in my life. It packaged everything into a single file that you just download using the &lt;a href="https://github.com/respawn-app/ksrc" rel="noopener noreferrer"&gt;script on GitHub&lt;/a&gt;, and it configured the agent integration for you.&lt;/p&gt;

&lt;p&gt;In the near future, I'll work on publishing through Homebrew and some option for Linux. I'd be happy to hear your feedback on social media. For those who develop in Kotlin, I hope this will be as useful as it is for me.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>ai</category>
      <category>cli</category>
      <category>aiagents</category>
    </item>
    <item>
      <title>I spent 400 hours working with AI agents and found the best one - here it is.</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Mon, 01 Dec 2025 13:01:37 +0000</pubDate>
      <link>https://dev.to/nek12/i-spent-400-hours-working-with-ai-agents-and-found-the-best-one-here-it-is-5h1b</link>
      <guid>https://dev.to/nek12/i-spent-400-hours-working-with-ai-agents-and-found-the-best-one-here-it-is-5h1b</guid>
      <description>&lt;h1&gt;
  
  
  Codex vs Claude Code: Complete Comparison of AI Coding Agents
&lt;/h1&gt;

&lt;p&gt;This comparison took me a very long time because the last two weeks in AI have been absolutely insane. A ton of new releases literally every few days: Gemini 3, Opus 4.5, Codex GPT 5.1, GPT 5.1 Codex Max. It was complete madness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It's impossible to make a comparison that won't be outdated in a month&lt;/strong&gt; - that's how fast development of all these CLI tools and agents is going. The frontier is constantly shifting, so disclaimer right away: I'll be updating this comparison and will likely make second and third parts every few months. But these articles will become outdated very quickly.&lt;/p&gt;

&lt;p&gt;For now, I'll try to make as complete a comparison as I can, and at the end I'll tell you which subscription I've finally settled on for late 2025 - early 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm Not Considering Other Tools
&lt;/h2&gt;

&lt;p&gt;First, it's worth mentioning all the other agentic programming tools that people often ask about, but which I don't want to talk about.&lt;/p&gt;

&lt;h3&gt;
  
  
  Gemini CLI - Not a Competitor
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Gemini CLI is not a competitor at all to Codex and Claude Code in my eyes.&lt;/strong&gt; They still have everything written in TypeScript, and it's crazy vibe-code with tons of bugs. Their agent harness still seriously lags behind the other tools. And Gemini 3 as a model is very weak at programming.&lt;/p&gt;

&lt;p&gt;When there was only Gemini 2.5, I didn't even want to mention Gemini CLI because it was impossible to use. Seriously, if you want Gemini to &lt;a href="https://www.reddit.com/r/google_antigravity/comments/1p82or6" rel="noopener noreferrer"&gt;wipe your entire system&lt;/a&gt; or destroy your codebase - use Gemini CLI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In my eyes, it has already earned a reputation as a destroyer.&lt;/strong&gt; I won't be using Gemini CLI for the next six months to a year. The internet is full of reports of Antigravity recently deleting someone's entire disk with no possibility of recovery, and of Gemini CLI destroying codebases, publicly wiping git history, and pushing empty repositories.&lt;/p&gt;

&lt;p&gt;I personally used Gemini CLI for exactly one week. After that, I decided never to touch it again, at least until they fix all the problems.&lt;/p&gt;

&lt;p&gt;The problem is that even if you deny it write permissions, it will find a way to destroy your files and replace them with hallucinations. Right in front of me, Gemini replaced files full of perfectly adequate code with requests to kill it. I don't know what's happening with this model or why it's so insane, but Gemini 2.5 was genuinely begging me to "kill" it.&lt;/p&gt;

&lt;p&gt;About Gemini 3 - I see now that as an assistant Gemini 3 is very good in thinking capabilities, and possibly really leads in science. &lt;strong&gt;But this model is still useless for programming and does nonsense at the first opportunity.&lt;/strong&gt; I wouldn't recommend anyone use Gemini for programming.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenCode, Factory CLI and Others
&lt;/h3&gt;

&lt;p&gt;We have AMP, Factory CLI, OpenCode and others. They're actually pretty good. Factory CLI is good in terms of pricing, they have decent Terminal-Based UI, decent tooling. OpenCode is also good as an open source replacement, not tied to any specific model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you need a bunch of models, I'd consider OpenCode as the only normal alternative right now.&lt;/strong&gt; But such tools still have a huge problem: they're not optimized at all and can't be optimized for a specific provider.&lt;/p&gt;

&lt;p&gt;Codex literally has a separate model post-trained to write code specifically in the Codex harness (Codex Max), and it performs better there by default. So what are you hoping for when you use GPT-5.1 in Factory CLI or Junie?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor lock will force you to choose one provider and stick with it.&lt;/strong&gt; I'd be happy to use different models to always be on the frontier, but due to the efforts of large corporations that want to vendor-lock you to one provider, and the objective benefits this brings to the workflow, I can't do that.&lt;/p&gt;

&lt;p&gt;Think for yourself: why might you need to constantly change models? Yes, some model might be better at planning, but the difference between models now is literally 1-2% by benchmarks. Apply the Pareto principle and save yourself time, for example, by leveling up your prompting for one model or optimizing your development approach for this model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Don't switch to a whole other agent harness with a bunch of its own flaws just because you need plus 2% efficiency in planning.&lt;/strong&gt; That's nonsense in my eyes.&lt;/p&gt;

&lt;p&gt;I now just use GPT-5.1 Pro if I need a solution to a very complex task or a detailed plan, and then return it to Codex and tell it to implement. That's it, that's enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Philosophy of Working with AI Agents
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem with agentic programming is that you never know where the point is beyond which you're making things worse for yourself, not better.&lt;/strong&gt; For example, it's very hard to understand when it would be more profitable to write a feature in Respawn yourself than to trust Codex to write it, and then prompt it 35 times to fix some minor issues.&lt;/p&gt;

&lt;p&gt;So instead of calculating this every time (which is impossible, because it's an unpredictable system), just set rules for yourself and save the time you'd spend making these decisions, rather than chasing an extra 3%.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Experience with Claude Code
&lt;/h2&gt;

&lt;p&gt;I used Claude Code for many months because it was the first terminal agent on the market and gained huge popularity. I started using it in May and switched to the most expensive subscription in literally less than a month.&lt;/p&gt;

&lt;p&gt;I used it successfully until September, then became disappointed: it wasn't enough for me anymore. I switched to Codex and kept testing both side by side, unable to choose which one I'd end up using.&lt;/p&gt;

&lt;p&gt;Let's start with Claude Code. I didn't have much time to test it with Opus 4.5, so I'll base this mainly on terminal benchmarks and on how people review it online, after reading tons of Reddit threads and similar comparisons.&lt;/p&gt;

&lt;h2&gt;
  
  
  Main Insight: You're Choosing the Model's Character
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The difference isn't even in what models or their versions exist, but in that you get a new identity, a new character for your agent when you come to a specific provider.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Anthropic, OpenAI, and Google models have diverged drastically in how it feels to work with them:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anthropic models&lt;/strong&gt; - a monkey with a Kalashnikov, a junior who will do everything you tell it, immediately. Don't tell it to plan in the prompt and it won't plan: it'll produce a random from-scratch implementation that duplicates tons of existing code in your project, then push it on top of all your code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI models&lt;/strong&gt; - more methodical, autistic, emotionless, like a robot, executing any task. I really catch the vibes with Codex because GPT-5.1 is as autistic as I am, and we think very similarly. I like Codex's communication style.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primarily now, in early 2026, you're not choosing between model capabilities, but first and foremost between the harness that wraps the model, its quality, and your model's character.&lt;/strong&gt; What character do you want to see at work? How do you want to see your communication with the model? This is the most important feature now.&lt;/p&gt;

&lt;p&gt;The model you'll catch the vibe with, which you'll quickly learn to work with and understand, is much more profitable than trying to constantly change models or looking for who has better MCP support right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code: Pros
&lt;/h2&gt;

&lt;p&gt;Here are all the pros I identified for myself compared to Codex:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. More Convenient Pricing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The $100 subscription is perhaps the perfect option for most developers.&lt;/strong&gt; I was on the $100 subscription after Sonnet 4.5 came out, and it was more than enough for running one to three parallel agents, working whole days plus weekends. I couldn't even hit the limits. I was literally the bottleneck in &lt;em&gt;their&lt;/em&gt; work.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Much Cooler Agent Harness
&lt;/h3&gt;

&lt;p&gt;With Claude Code you can completely replace the system prompt with your own custom one. &lt;strong&gt;This is a killer feature because it lets you create not just coding agents but, in literally 10-30 minutes, a business partner.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I really miss my business partner and might buy an Anthropic subscription just to revive it for my next product. It genuinely makes for a very useful cofounder, or at least an analyst.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output styles&lt;/strong&gt; - something in between a fully custom system prompt and user instructions. Also a killer feature: it lets you instantly switch the model's character and purpose.&lt;/p&gt;

&lt;p&gt;Codex doesn't have this and probably won't, because they verify the system prompt hash on the backend. They'd need to remove that check before they could ever let you change the system prompt. And I really miss this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I believe developers should have access to the model's system prompt to change it however they want.&lt;/strong&gt; I tried changing the behavior of GPT-series models through user instructions, and they just don't listen to me. You can see in the thoughts how they argue about whose instructions have priority - the system prompt or user prompt.&lt;/p&gt;

&lt;p&gt;If you're not a developer or want to use it for something else, like creating a whole team for yourself that will work on your projects - unfortunately Codex won't work. Only Claude Code or other agents (like OpenCode).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Interactive Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The model can and will ask questions.&lt;/strong&gt; This is great, because Codex has worn me out here. You tell it to ask questions, but its system prompt tells it not to ask questions. So it ignores all my instructions and starts writing nonsense, and I have to interrupt it and take away its write access so it shuts up and starts asking questions.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Subagents
&lt;/h3&gt;

&lt;p&gt;I still don't have enough context space with Codex, and it's inconvenient for me to use crutches like tmux. &lt;strong&gt;I just want to have a convenient ability to launch a subagent, including my killer planning subagent, which I still miss after switching to Codex.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;OpenAI doesn't seem to be in a hurry to add them at all. I see that they already have everything ready to add subagents, but they're dragging their feet. I really hope they'll add it within a couple of months.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is a killer feature not because of the capabilities subagents add, but because they wildly save context and organize the agent's work more correctly.&lt;/strong&gt; I had a hook requiring that any task the user sets be planned first. That hook worked great with Claude Code and noticeably improved its results.&lt;/p&gt;

&lt;p&gt;Codex has neither normal hooks nor subagents. I can't accept this; it's a needed feature. I'm waiting for them to add it, but right now they're lagging far behind Claude Code here.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Support for Basic Technologies
&lt;/h3&gt;

&lt;p&gt;Codex only recently got MCP support. How long did that take? I switched to Codex only after MCP support landed, because without it my workflow doesn't work.&lt;/p&gt;

&lt;p&gt;Don't get me wrong, I'm not saying MCP is some magnificent, irreplaceable feature. &lt;strong&gt;But it saves a lot of time on building infrastructure.&lt;/strong&gt; MCP is a very convenient one-line wrapper around any API.&lt;/p&gt;

&lt;p&gt;Yes, you can write Python scripts that completely replace these MCPs. For example, &lt;a href="//steipete.me"&gt;Peter Steinberger&lt;/a&gt; has a script on GitHub that converts an MCP into a plain script automatically. You can use that, and it will be even more efficient. But it adds overhead every time: you have to instruct the model and give it a way to discover the available tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In terms of convenience Claude Code wins hands down.&lt;/strong&gt; But for me this is still not a killer feature that requires switching to Claude Code. Why? Because it's temporary. I know perfectly well that all agent harnesses will level out sooner or later, and this will happen soon.&lt;/p&gt;

&lt;h2&gt;
  
  
  Claude Code: Cons
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Main Disadvantage - The Model
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Anthropic's model is simply weak at organizing its own work.&lt;/strong&gt; I literally find it more pleasant to work with GPT models because I like how they behave.&lt;/p&gt;

&lt;p&gt;Claude Code wore me out at one point by generating 9000+ Markdown files and huge walls of text for every small question. Codex, meanwhile, responds concisely, to the point, and talks to me like a normal person.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I already hate the phrase "you're absolutely right".&lt;/strong&gt; I hear it 10-20 times a day, and each time it means Claude Code fucked up again.&lt;/p&gt;

&lt;p&gt;If Codex screwed up, at least it says: "Listen, I'm sorry, I fucked up, here's what can be done to fix the situation, do you want me to roll back the changes, what do you want to do?". &lt;strong&gt;This is literally how a normal senior developer should behave in communication.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What Claude Code does is more like a circus, and by the end of August it was pissing me off so much that I stopped opening Claude Code at all. By the end of the subscription I wouldn't even open it, despite having $50 of it burning away. I was that sick of what was happening.&lt;/p&gt;

&lt;p&gt;You're choosing an infuriating, annoying model that makes mistakes, that lies, that constantly sucks up to you, that's too verbose and that thinks worse.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fake Thinking at Anthropic
&lt;/h3&gt;

&lt;p&gt;Yes, it's cool that with Claude Code we can directly see thinking traces, and Anthropic doesn't hide them. &lt;strong&gt;But I've always been amazed that at OpenAI the model thinks fundamentally differently, unlike other providers.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Essentially, thinking at Anthropic is just a piece of text. I know this from working with the API and logs: &lt;strong&gt;thinking at Anthropic is a pair of XML &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; tags, between which the model is supposed to place a simulation of how a human would think.&lt;/strong&gt; Everything inside those tags is regurgitation: a retelling of what the user said. This isn't real thinking, it's nonsense, and it doesn't help the model at all.&lt;/p&gt;

&lt;p&gt;Opus 4.5 literally scores the same on benchmarks with thinking as without it. That tells me the thinking is useless tokens thrown in the trash.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The only real thinking I've seen is in GPT-5.1.&lt;/strong&gt; It's roughly how thinking happens in people, just a bit more adequately formatted. We humans don't really think in words; we don't voice our thoughts in our heads. At most there are some instant images, concepts that we combine together.&lt;/p&gt;

&lt;p&gt;I recommend &lt;a href="https://www.antischeming.ai/snippets" rel="noopener noreferrer"&gt;reading&lt;/a&gt; how O3 thought (and GPT-5.1 is essentially a refined O3). The stuff happening in that thinking is so crazy it's scary: a jumble of words, roughly like what happens in my head:&lt;/p&gt;

&lt;p&gt;"user requests TDD impl, task, we need plan. Plan first, test? TDD, not recommended by instructions, but user ask tdd. user asked code, they meant for tdd? how that possible means? no plan, craft plan. we plan? We make plan, not now. user asked plan. we craft task. No, didn't ask question tdd. Ask question. prompt said not ask question. we craft question. craft document? No. no document. Read file. read file, then plan. todo. Todo-list. Read plan and we start crafting code".&lt;/p&gt;

&lt;p&gt;A madman's random jumble of loosely connected concepts. But if you look past the strange style that's meant to save tokens, you can read in this madness that &lt;strong&gt;Codex really is thinking when given a task.&lt;/strong&gt; It's not regurgitating, not summarizing, not retelling what the user said in other words. It's genuinely preparing for execution and examining the problem from different angles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For me this is a killer feature, because with Codex I don't have to spell out every necessary detail before giving it a task.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the Anthropic model you need the dozens of patches Anthropic has layered on top of their models' shortcomings: detailed file-exploration agents, search agents, planning agents, warnings, notes, default automatic hooks. All of that exists to compensate for one flaw: the model itself doesn't prepare for work and doesn't think through its actions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;These crutches aren't needed for Codex.&lt;/strong&gt; If you give it a task, it will really think through: what it's missing now, what information is needed, what files need to be read, what its action plan is, what controversial points exist, what the user said wrong.&lt;/p&gt;

&lt;p&gt;You can say "commit" and expect Codex to figure out itself how to properly commit across the whole repository, in what style and so on. Literally comparing with Anthropic - it will just call &lt;code&gt;git commit -am&lt;/code&gt;, done. Even if there's not a single file in the diff, it doesn't care about this, absolutely.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bugs, Lags and Vibe-code
&lt;/h3&gt;

&lt;p&gt;What pisses me off about Claude Code is how slowly it loads, how it constantly lags, how it constantly has some glitches, bugs, non-working vibe-code. &lt;strong&gt;You can immediately tell that Anthropic delegates everything to Claude for development and doesn't properly approach software testing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They rolled out a feature with a prompt editor, but the editor destroys the GUI when you open prompt editing in Vim. Why they needed to roll this out is unclear. Three months later they still haven't fixed it.&lt;/p&gt;

&lt;p&gt;They rolled out MCP? I used it for a month, and then they broke it. At one point they dropped support for perfectly working MCPs and replaced them with an "MCP not supported" error, which is impossible to see anywhere except hidden debug-mode logs. I have no words for how much this pissed me off.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Output Style, one of the killer features - they just took it and deprecated it, said they'd completely remove it in a week.&lt;/strong&gt; What kind of idiot do you have to be to make such a decision? Hundreds of people in the community were enraged, and only then did they decide to return it for a while. But this is nonsense, their behavior is madness.&lt;/p&gt;

&lt;p&gt;Their TypeScript front-end is constantly, terribly laggy, flickering so badly it hurts your eyes. And typing is unpleasant because your characters appear with a delay.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I can say the same about any terminal agents on TypeScript:&lt;/strong&gt; OpenCode, Gemini CLI. I don't even want to use terminal agents written in TypeScript anymore.&lt;/p&gt;

&lt;p&gt;Recently Junie came out, written in Kotlin. Yes, it's too early to rely on it, but I can see that its terminal UI works very well. Which tells me the problem is specifically TypeScript and vibe-code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Codex: Pros
&lt;/h2&gt;

&lt;p&gt;I'll continue by describing what the pros of Codex are for me specifically:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Great Terminal-Based UI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The fact that they have everything written in Rust - this is awesome, god.&lt;/strong&gt; I got interested in Rust purely because of how great their UI works.&lt;/p&gt;

&lt;p&gt;I have practically no problems with Codex's TUI, there are no glitches, and I've never encountered a major bug that made everything explode, unlike Anthropic.&lt;/p&gt;

&lt;p&gt;The only bug I noticed is that when scrolling, if you don't enable transcription mode, a piece of the prompt will be cut off. This is a bit confusing, but it's a known problem, and possibly related to the fact that I use ghostty on Mac.&lt;/p&gt;

&lt;p&gt;This is the only bug I know about, unlike dozens that were at Anthropic.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Great Sandboxing
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;I just like it more, you can tell people made it for senior developers, not for vibe-coders.&lt;/strong&gt; You have a selector: read-only, write with good proper restrictions, and YOLO mode. And this is exactly what I need.&lt;/p&gt;

&lt;p&gt;I almost completely use OpenAI's default settings, and everything suits me. &lt;strong&gt;I even trust Codex more that it won't destroy my git folder because it's a more adequate model in behavior, more predictable and thoughtful.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Claude I ran in a very restricted mode with lots of hooks and restrictions so it wouldn't &lt;code&gt;@Suppress&lt;/code&gt; everything everywhere, wouldn't pile up technical debt, wouldn't work without a plan, wouldn't write nonsense, and wouldn't wipe out my entire codebase. With Codex I need none of this. And the most interesting thing is that I don't really regret it.&lt;/p&gt;

&lt;p&gt;Yes, at first it was scary, but then you understand that Codex doesn't need any of this. Worst case, it rolls back its own work, and even that can be recovered with git restore. &lt;strong&gt;I've never had Codex wipe out my code, rewrite my code without permission, cut out something important, or manually go into the &lt;code&gt;.git&lt;/code&gt; folder and wreck everything there.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's not just that the model itself leans more toward safe behavior; the sandbox harness OpenAI built is genuinely thought through. I don't get pissed off when something is blocked.&lt;/p&gt;

&lt;p&gt;For example, Codex can't do &lt;code&gt;git reset&lt;/code&gt;; access to the &lt;code&gt;.git&lt;/code&gt; folder is blocked. On one hand it's annoying; on the other, how good that they put up this fence so nothing happens to my repository.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Proper Feature Implementation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;When they shipped MCP support, they didn't throw in an unfinished, half-baked version like Anthropic did, then ignore for six months the request to limit the number of tools an MCP exposes.&lt;/strong&gt; In Codex, support for limiting the number of tools appeared the same day MCP support did.&lt;/p&gt;

&lt;p&gt;That made my MCP ToolFilter essentially useless, because OpenAI did it right from the start. Yes, it took longer, but honestly, I prefer that to them quickly rolling out some crap like Anthropic did.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For me speed isn't a priority because in the long run quality will win.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Model and Working with Files
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Codex's work with the file system is more interestingly thought out.&lt;/strong&gt; Most agents ship a bunch of separate tools: read file, scroll, read more of file, read multiple files, write file, edit file, delete file, search files, etc. Research has already shown that all of this is excessive for models.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The OpenAI team's innovation is a very minimal harness.&lt;/strong&gt; Codex doesn't even have a file-reading tool. Really, I was shocked when I found out. The model already knows it has &lt;code&gt;cat&lt;/code&gt;, &lt;code&gt;ripgrep&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, and &lt;code&gt;awk&lt;/code&gt;: everything needed for search and file-system work. Reading a file is just &lt;code&gt;cat&lt;/code&gt; from the terminal into context.&lt;/p&gt;
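&lt;p&gt;As a sketch of the idea (my own illustration, not OpenAI's actual implementation; the file path is made up), the usual "file tools" collapse into standard utilities once the agent simply has a shell:&lt;/p&gt;

```shell
# "Write file" is just an output redirect; no dedicated tool needed.
printf 'package demo\nimport kotlin.math.max\nfun main() = println(max(1, 2))\n' > /tmp/demo.kt

cat /tmp/demo.kt              # "read file" is cat
sed -n '3p' /tmp/demo.kt      # "read line 3" is a sed address
grep -n 'fun ' /tmp/demo.kt   # "search in file" is grep (or ripgrep)
```

&lt;p&gt;One generic shell tool covers reading, searching, and writing, which is exactly why the harness can stay minimal.&lt;/p&gt;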

&lt;p&gt;I also thought at first that you need all these layered hooks, but in reality everything depends on the model. &lt;strong&gt;The harness should work for the model: not block its way, not interfere with its work, and not, the other way around, put it on rails lest it make some mistake.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you tell Codex "find the right file", it won't call 15 tools (search repository, scroll files, read lines), which Anthropic layered on top of their model to save tokens.&lt;/p&gt;

&lt;h4&gt;
  
  
  Lyrical Digression: The Story with Comments
&lt;/h4&gt;

&lt;p&gt;I'm shocked what kind of idiot you have to be to do this. &lt;strong&gt;Anthropic couldn't deal with the fact that Sonnet and Opus constantly added useless comments to code.&lt;/strong&gt; Everyone hated it, but no matter what prompting they used, nothing helped. The model just spammed stupid comments.&lt;/p&gt;

&lt;p&gt;How did they solve this problem? They built a script hook into the file-editing tool, globally for all users, so that any comment the model adds to code gets cut out immediately. &lt;strong&gt;Because of this, Claude now physically cannot leave a comment at all.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Recently I watched for 10 minutes, laughing, as Claude tried at my request to add a comment, but the diff kept getting cancelled because the script cut the comment out. It complained about how it hates its life, that nothing works out for it, that it's stupid.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you write a single line in the &lt;code&gt;agents.md&lt;/code&gt; file telling Codex "don't leave comments", it will follow that instruction perfectly&lt;/strong&gt; (as long as it doesn't contradict its system prompt). It has no stupid restrictions putting it on rails. Everything is done through a normal model and normal prompting.&lt;/p&gt;
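&lt;p&gt;For illustration - the wording here is my own, not from any official doc - such a file really can be this short:&lt;/p&gt;

```markdown
# AGENTS.md (in the project root)

Don't leave comments in the code.
```

&lt;p&gt;Codex picks up &lt;code&gt;AGENTS.md&lt;/code&gt; from the repository and folds it into the model's instructions, which is why one plain sentence is enough.&lt;/p&gt;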

&lt;p&gt;Codex doesn't have the problem where, if a Kotlin file has 150 lines of imports, the model first calls a command to count lines so it can find where the imports end and skip them, saving 2.5 tokens. Then it reads only a piece of the file, edits it, everything explodes, nothing compiles because it didn't add the import, and you spend 10 minutes redoing everything just to add one import and change one line of code. All when it could have reused the function sitting literally 10 lines further down in the file, which it never read because it was saving tokens (after all, GPT has 400k context while Claude still has 200k).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This situation doesn't exist with Codex, because it just calls terminal commands&lt;/strong&gt; (it even wrote itself a Python script once) to skip the imports and read only the needed part of the file. And all of this happens without the accompanying problems.&lt;/p&gt;
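&lt;p&gt;I didn't keep the script Codex wrote for itself, so here's my own reconstruction of the idea using the same terminal tools it has at hand (the sample file is made up):&lt;/p&gt;

```shell
# A sample Kotlin file with an import block.
printf 'package demo\nimport kotlin.math.abs\nimport kotlin.math.max\nfun main() = println(max(1, abs(-2)))\n' > /tmp/Sample.kt

# Print the file with consecutive imports collapsed into one placeholder line,
# so reading it into context spends tokens on code rather than imports.
awk '
  /^import / { n++; next }                                  # count and skip import lines
  n { printf "// ... %d imports omitted ...\n", n; n = 0 }  # emit the placeholder once
  { print }
  END { if (n) printf "// ... %d imports omitted ...\n", n }
' /tmp/Sample.kt
```

&lt;p&gt;The whole file stays visible in spirit (the placeholder says how many lines were dropped), but the context cost of the import block goes to one line.&lt;/p&gt;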

&lt;p&gt;Anthropic added a moronic feature they still haven't cut out: literally every command call injects a warning into context, "attention, you have 45 thousand tokens left, attention!!!111". &lt;strong&gt;Codex doesn't have this; it just works like a normal person.&lt;/strong&gt; Because Claude gets hundreds of these warnings, it's impossible to work with: it constantly complains about how everything is running out - tokens are running out, time is running out, strength is running out...&lt;/p&gt;

&lt;h2&gt;
  
  
  Pricing and Attitude Toward Users
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Anthropic is known for their crazy changes in model pricing.&lt;/strong&gt; Once they had model degradation. They did nothing about it, ignored it. They waited two months while the community went crazy. They give their users the silent treatment.&lt;/p&gt;

&lt;p&gt;They don't clearly disclose what the limits are or how much you can use. &lt;strong&gt;At one point they just cut Opus limits, drastically reducing what you could do. The price didn't go down: for the same money you get 2-3 times less agent usage.&lt;/strong&gt; This is inhumane, this is abnormal.&lt;/p&gt;

&lt;p&gt;I can understand that a couple of people were bankrupting them through abuse, but I can't justify restricting everyone over it. Moreover, they clearly didn't just close those holes; they piled on additional restrictions, which is why people in the community were furious and still hate Anthropic. (Don't forget that Claude Opus is 5+ times more expensive than GPT-5.1 on the API.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;On Reddit, Anthropic's most popular post is "I cancelled my subscription".&lt;/strong&gt; Literally every third post is about someone cancelling their subscription or being pissed off.&lt;/p&gt;

&lt;h3&gt;
  
  
  OpenAI - Different Attitude
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;In contrast with OpenAI: the $20 subscription gives about 70% of the limits that the $100 subscription at Anthropic gives in terms of work volume done.&lt;/strong&gt; The choice is obvious.&lt;/p&gt;

&lt;p&gt;For me it's more profitable to be on the $20 subscription, and it covers practically all my work. After the release of GPT-5.1 Codex Max it became extremely economical in tokens and work efficiency: they have better caching and better infrastructure, they can absorb more of the cost of distributing work, and they built cool load balancing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;They really work to give people more for less money, unlike Anthropic, who just think they can cut limits and this will solve all their problems.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you only have $20 and want one agent that covers all your needs - get the $20 ChatGPT subscription, which you likely already have. Just install Codex and use it. You'll have enough limits on Codex Max with medium thinking, and the same performance as, or better than, the $100 subscription from Anthropic.&lt;/p&gt;

&lt;h4&gt;
  
  
  Two Examples of Humane Attitude
&lt;/h4&gt;

&lt;p&gt;I'll give two examples when I was shocked in a good sense.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First:&lt;/strong&gt; on Twitter some product lead at OpenAI for Codex posted a tweet: "Guys, we made this new model and infra over here, so y'all now have twice as much usage for the same money, enjoy".&lt;/p&gt;

&lt;p&gt;For the same money they gave twice as much, thanks to leveled-up infrastructure, improved caching, and a new model that's 30% more token-efficient. &lt;strong&gt;Any normal VC-sponsored corporation would have quietly made these improvements and pocketed the savings, not doubled everyone's usage.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Maybe this is some cunning plan to conquer the audience, and then they'll still rugpull us, but that will be in the future. If you compare with Anthropic - heaven and earth.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second:&lt;/strong&gt; I was scrolling the feed, it was Sunday. Something went down for them in the morning literally for 10-30 minutes. It affected 5-10% of users who had glitches with models, and caching broke (I didn't even notice).&lt;/p&gt;

&lt;p&gt;The guy posted on Twitter: "We had problems for 30 minutes, there was a bit of service degradation. We reset all weekly limits for everyone. Enjoy".&lt;/p&gt;

&lt;p&gt;As a result, my weekly limits were reset because of this. I used twice as much in the same week. That is, they gave me $8, you could say.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is a normal attitude toward clients.&lt;/strong&gt; There was a problem, it affected people. No dragging it out, no making you write to support or cancel your subscription to get anything done, unlike Anthropic, where thousands of people were cancelling subscriptions over problems with the model.&lt;/p&gt;

&lt;p&gt;They're like: "Sorry, here's openly what the issue was. Fixed it fast, as quickly as possible. Rolled it out, reset your limits, sorry for the inconvenience". This is how you need to treat people, with respect.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Verdict
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;There's no perfect tool right now, no clear leader.&lt;/strong&gt; Everyone should choose based on their own needs and expectations.&lt;/p&gt;

&lt;p&gt;But for myself I've understood that I'm staying on Codex. &lt;strong&gt;My mental health, convenience, and the pleasure of using the agent matter more than a few features I can cover with scripts or by writing my own wrapper anyway.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yesterday I bought the $200 Codex subscription with a clear conscience and no regret at all, and got transparent limits that are exactly 10 times those of the $20 subscription. You could literally see it: I had spent my limits down to zero, bought the $200 subscription, and immediately had 90% free. &lt;strong&gt;Literally 10 times more limits, which are more than enough for me; I'll never spend them.&lt;/strong&gt; Plus lots of other cool perks like GPT-5.1 Pro, the smartest thinking model in the world (debatable). And I can trust that they won't steal my money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The point is that despite Codex's issues, I'm staying on it, because the model's character, the harness quality, and the company's attitude toward users matter more to me than subagents or custom prompts. My research showed that most of that extra tooling is excessive anyway.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>codex</category>
      <category>claudecode</category>
      <category>aiagents</category>
      <category>codingassistants</category>
    </item>
    <item>
      <title>I compared 17 Kotlin MVI libraries across 103 criteria - here are THE BEST 4</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Mon, 24 Nov 2025 21:54:16 +0000</pubDate>
      <link>https://dev.to/nek12/i-compared-17-kotlin-mvi-libraries-across-103-criteria-here-are-the-best-4-5g89</link>
      <guid>https://dev.to/nek12/i-compared-17-kotlin-mvi-libraries-across-103-criteria-here-are-the-best-4-5g89</guid>
      <description>&lt;p&gt;Let me preface this by saying that there is no clear-cut winner and no single "best" solution. Multiple solutions stand out to me as feature-rich, and each has its own philosophy. We can never say that there is &lt;em&gt;the&lt;/em&gt; best architectural library you should use, because it all depends on the team's needs.&lt;/p&gt;

&lt;p&gt;I would suggest always picking the best technical solution for the business needs and not the other way around - i.e. &lt;strong&gt;don't optimize for newness, cool tech, capabilities, or your intrinsic interest in some technology. The business and team needs should always be the driving factor for choosing a technical solution.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over the years, whenever I was faced with a decision to choose a particular dependency, I felt there was no "single source of truth" that compared as many libraries and solutions as possible across as many different criteria as possible. Picking the best library for my needs always felt like a task that required hours of research (reading random Medium articles about a particular library).&lt;/p&gt;

&lt;p&gt;Usually the best way to approach this is to try multiple libraries. &lt;strong&gt;I strongly encourage you to try at least the top few libraries listed in this article, and then decide based on what best fits your use case and your application.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this article I'm going to assume that you already know what MVI is and have experience with Kotlin app architecture. This isn't a guide on how to implement MVI from scratch, but a comparison of existing solutions.&lt;/p&gt;

&lt;p&gt;So without further ado, let's list the &lt;strong&gt;top 4 architectural frameworks in 2026&lt;/strong&gt; and their pros and cons. &lt;/p&gt;

&lt;h1&gt;
  
  
  Best Kotlin MVI / state management libraries (2025 - 2026):
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;MVIKotlin
&lt;/li&gt;
&lt;li&gt;FlowMVI
&lt;/li&gt;
&lt;li&gt;Orbit MVI
&lt;/li&gt;
&lt;li&gt;Ballast
&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  &lt;a href="https://arkivanov.github.io/MVIKotlin/" rel="noopener noreferrer"&gt;MVIKotlin&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;This is probably the most mature and popular architectural library in the ecosystem right now. It has been around for a very long time and it boasts a wide range of sample apps, plus a small ecosystem built around it (the Decompose navigation framework and the Essenty multi-platform utility library). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MVIKotlin is known for its simple, "no BS", strongly opinionated design that encourages separation of concerns and following the Redux flow.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How to implement MVI architecture with MVIKotlin
&lt;/h3&gt;

&lt;p&gt;For each new feature or screen that you implement, you need to create these entities, at a minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;State&lt;/code&gt;: data class with loading / content / error properties.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Intent&lt;/code&gt;: user events from UI (e.g., &lt;code&gt;Refresh&lt;/code&gt;, &lt;code&gt;Retry&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Label&lt;/code&gt;: one-off side effects to the UI (e.g., &lt;code&gt;ShowToast&lt;/code&gt;), optional.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Action&lt;/code&gt;: bootstrap actions fired on &lt;code&gt;Store&lt;/code&gt; init (optional).&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Message&lt;/code&gt;: internal reducer inputs. One or more are produced in response to &lt;code&gt;Intents&lt;/code&gt; and will result in &lt;code&gt;State&lt;/code&gt; updates through the &lt;code&gt;Reducer&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Executor&lt;/code&gt;: does side‑effects; takes &lt;code&gt;Intent&lt;/code&gt;/&lt;code&gt;Action&lt;/code&gt;, calls repo, dispatches &lt;code&gt;Messages&lt;/code&gt; and &lt;code&gt;Labels&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arkivanov.github.io/MVIKotlin/store/#reducer" rel="noopener noreferrer"&gt;&lt;code&gt;Reducer&lt;/code&gt;&lt;/a&gt;: pure function mapping &lt;code&gt;(State + Message) -&amp;gt; new State&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://arkivanov.github.io/MVIKotlin/store/" rel="noopener noreferrer"&gt;&lt;code&gt;Store&lt;/code&gt;&lt;/a&gt;: built via &lt;code&gt;DefaultStoreFactory&lt;/code&gt; or DSL from the pieces above.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And optionally (for custom startup and creation) you'll need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://arkivanov.github.io/MVIKotlin/store/#bootstrapper" rel="noopener noreferrer"&gt;&lt;code&gt;Bootstrapper&lt;/code&gt;&lt;/a&gt; implementation. Bootstrapper is MVIKotlin's hook for firing initial (or periodic) actions when a &lt;code&gt;Store&lt;/code&gt; is initialized, before any user intents arrive.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;StoreFactory&lt;/code&gt; implementation. &lt;code&gt;StoreFactory&lt;/code&gt; is an optional way to decorate or wrap &lt;code&gt;Store&lt;/code&gt;s that it creates, or to provide custom implementations of the interface. &lt;code&gt;DefaultStoreFactory&lt;/code&gt; from the library just creates a store directly.&lt;/li&gt;
&lt;/ul&gt;
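&lt;p&gt;To make the lists above concrete, here is a minimal, dependency-free sketch of what these contracts might look like for a simple "load a list" screen. The names are hypothetical illustrations, not types from the MVIKotlin API - the library only ever sees them as generic parameters:&lt;/p&gt;

```kotlin
// Hypothetical contract for a simple "load a list" screen.
// These are plain Kotlin types; MVIKotlin treats them as opaque generics.

sealed interface LceState {
    object Loading : LceState
    data class Content(val items: List<String>) : LceState
    data class Error(val message: String) : LceState
}

// User events coming from the UI.
sealed interface LceIntent {
    object Refresh : LceIntent
    object Retry : LceIntent
}

// One-off side effects delivered to the UI.
sealed interface LceLabel {
    data class ShowError(val message: String?) : LceLabel
}

// Fired on Store init, before any user intents arrive.
sealed interface LceAction {
    object Bootstrap : LceAction
}

// Internal reducer inputs, produced by the Executor.
sealed interface LceMessage {
    object Loading : LceMessage
    data class Success(val items: List<String>) : LceMessage
    data class Failure(val throwable: Throwable) : LceMessage
}

fun main() {
    // The contract is pure data - it can be constructed and compared freely.
    check(LceState.Content(listOf("a")).items == listOf("a"))
    check(LceMessage.Failure(RuntimeException("boom")).throwable.message == "boom")
}
```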

&lt;p&gt;This library enforces &lt;code&gt;Messages&lt;/code&gt; - an extra indirection layer on top of MVI, usually seen in the Elm/TEA architecture. Here's why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executors often need to turn one intent into multiple state updates (e.g., emit &lt;code&gt;Loading&lt;/code&gt; then &lt;code&gt;Success&lt;/code&gt;/&lt;code&gt;Failure&lt;/code&gt;); splitting out &lt;code&gt;Message&lt;/code&gt; keeps the reducer pure and single‑purpose.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Messages&lt;/code&gt; can be normalized domain results (e.g., &lt;code&gt;Loaded(items)&lt;/code&gt;, &lt;code&gt;Failed(error)&lt;/code&gt;) while intents stay UI‑shaped (e.g., &lt;code&gt;Retry&lt;/code&gt;, &lt;code&gt;Refresh&lt;/code&gt;, &lt;code&gt;ItemClicked(id)&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Executors can also react to bootstrap &lt;code&gt;Action&lt;/code&gt;s; both &lt;code&gt;Action&lt;/code&gt;s and &lt;code&gt;Intent&lt;/code&gt;s funnel into &lt;code&gt;Message&lt;/code&gt;s so reducers handle one shape.&lt;/li&gt;
&lt;li&gt;This separation lets you reuse reducers across different executors or tests by dispatching &lt;code&gt;Message&lt;/code&gt;s directly without running side‑effects.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll have a dedicated testable &lt;code&gt;Reducer&lt;/code&gt; function/object:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;lceReducer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Reducer&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
      &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;
          &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Success&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Failure&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;throwable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="s"&gt;"Unknown error"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
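&lt;p&gt;Because the reducer is a pure function, it can be unit-tested without a &lt;code&gt;Store&lt;/code&gt;, an &lt;code&gt;Executor&lt;/code&gt;, or even the library itself. Here is a dependency-free sketch of that idea, rewriting the reducer above as a plain function (hypothetical types, not the MVIKotlin API):&lt;/p&gt;

```kotlin
// Dependency-free sketch: the reducer as a plain pure function.
sealed interface LceState {
    object Loading : LceState
    data class Content(val items: List<String>) : LceState
    data class Error(val message: String) : LceState
}

sealed interface LceMessage {
    object Loading : LceMessage
    data class Success(val items: List<String>) : LceMessage
    data class Failure(val throwable: Throwable) : LceMessage
}

// Like the article's reducer, this one ignores the previous state:
// each message fully determines the next state.
fun reduce(state: LceState, msg: LceMessage): LceState = when (msg) {
    is LceMessage.Loading -> LceState.Loading
    is LceMessage.Success -> LceState.Content(msg.items)
    is LceMessage.Failure -> LceState.Error(msg.throwable.message ?: "Unknown error")
}

fun main() {
    // No Store, no Executor, no coroutines - just inputs and outputs.
    check(reduce(LceState.Loading, LceMessage.Success(listOf("a"))) == LceState.Content(listOf("a")))
    check(reduce(LceState.Loading, LceMessage.Failure(RuntimeException())) == LceState.Error("Unknown error"))
}
```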



&lt;p&gt;And then an &lt;code&gt;Executor&lt;/code&gt; to dispatch &lt;code&gt;Message&lt;/code&gt;s, &lt;code&gt;Label&lt;/code&gt;s, and &lt;code&gt;Action&lt;/code&gt;s:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LceExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;mainContext&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CoroutineContext&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Dispatchers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Main&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CoroutineExecutor&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceIntent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceAction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceLabel&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;mainContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;executeAction&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;action&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceAction&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;         &lt;span class="c1"&gt;// bootstrap path&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;executeIntent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceIntent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;         &lt;span class="c1"&gt;// Refresh/Retry path&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;launch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;runCatching&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onSuccess&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Success&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onFailure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nf"&gt;dispatch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Failure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                    &lt;span class="nf"&gt;publish&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceLabel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ShowError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you can create the &lt;code&gt;Store&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;createLceStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;storeFactory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;StoreFactory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;DefaultStoreFactory&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// or wrap with LoggingStoreFactory/TimeTravelStoreFactory&lt;/span&gt;
    &lt;span class="n"&gt;autoInit&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;Store&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceIntent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceLabel&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;storeFactory&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"LceStore"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;initialState&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;bootstrapper&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SimpleBootstrapper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Bootstrap&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// provide Action on startup&lt;/span&gt;
    &lt;span class="n"&gt;executorFactory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;LceExecutor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;reducer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;lceReducer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;autoInit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;autoInit&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this is pretty verbose but is straightforward to understand and operates on familiar concepts like factories, bootstrappers, executors and stores.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w6lzgrkaw1icspyekrq.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5w6lzgrkaw1icspyekrq.webp" alt="Image" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  MVIKotlin pros / benefits
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The main pros of using MVIKotlin are that it enforces a particular structure onto your code, following the Redux pattern more closely than other libraries.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It doesn't use any platform/third-party dependencies and doesn't tie your logic to the UI, making your business logic more generic and detached from any platform quirks or framework dependencies. &lt;/p&gt;

&lt;p&gt;MVIKotlin is the only popular MVI framework that doesn't depend on Kotlin coroutines and allows you to plug your own reactivity solution such as Reaktive, RxJava or Compose state. MVIKotlin is simple and easy to understand because every screen and feature will follow the same conventions. It leaves little to no room for creativity or leeway in how features can be implemented, which can be both a good thing and a bad thing depending on your needs. &lt;/p&gt;

&lt;p&gt;The library internals are simple to understand, replicate and work with. There is no black magic under the hood and no surprising behavior to expect; everything is clearly documented. MVIKotlin has been maintained and stable for many years, so it is unlikely to be abandoned or to suffer a drastic change of direction. It also has extensive test coverage, and any code you write with it is highly testable by design.&lt;/p&gt;

&lt;p&gt;MVIKotlin has a mature and feature-rich &lt;a href="https://arkivanov.github.io/MVIKotlin/time_travel/" rel="noopener noreferrer"&gt;time-travel&lt;/a&gt; debugging plugin, which works and integrates pretty seamlessly with your code. So you can expect powerful debugging capabilities and even a Chrome extension with the same functionality for web apps.&lt;/p&gt;

&lt;p&gt;The library provides a huge catalog of different sample apps and implementations showcasing integration with various DI frameworks, navigation libraries, UI frameworks and even languages (Swift), and I found a significant number of other usages in OSS apps.&lt;/p&gt;

&lt;h3&gt;
  
  
  MVIKotlin cons and downsides
&lt;/h3&gt;

&lt;p&gt;MVIKotlin doesn't only implement the MVI pattern, it also builds on top of it by introducing an extra indirection layer in the form of &lt;code&gt;Message&lt;/code&gt;s. This is mostly in the name of testability, but this isn't the only way to make sure your reducers are testable. It introduces a noticeable amount of boilerplate and verbosity.&lt;/p&gt;

&lt;p&gt;The library has extra classes, interfaces and constructs that you have to implement or use that aren't strictly "needed", such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Bootstrappers&lt;/strong&gt;, which can be implemented via an interceptor architecture or a dedicated stage in the lifecycle.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store factories&lt;/strong&gt;, which don't need to be explicit objects and can just be builders or convenient DSLs and can remain an optional concept instead of being a first-party architectural pattern to follow.&lt;/li&gt;
&lt;li&gt;The dedicated &lt;code&gt;Reducer&lt;/code&gt; object, which is just a pure function and can be provided inline or via a DSL or directly inside the store instead of requiring a separate concept and being limited in terms of what the reducer can do.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testability can be achieved in other ways, so if you care about conciseness, flexibility, feature richness or modern development practices, you may not like the extra structure that MVIKotlin adds on top of MVI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The library isn't close to many other frameworks in terms of features, or in how quickly it helps you iterate and ship. So while it's a good fit for mature projects, long-term development visions, and big enterprises, it isn't great for fast-paced teams, startups, hobby projects, or smaller apps with tightly-knit teams, where some leeway can be not only acceptable but beneficial.&lt;/p&gt;

&lt;p&gt;Based on my analysis across 100+ features, it lacks a significant portion of what other frameworks provide (only 29 features vs the top library having 76 out of ~100 total). It doesn't have state persistence, interceptors, decorators, DSLs, any subscriber management, coroutine-first integration, and many more extras.&lt;/p&gt;

&lt;p&gt;The library's philosophy and simplicity dictate requirements on threading as well. &lt;strong&gt;The library is supposed to be used on the main thread only&lt;/strong&gt;, with only specific places where execution can move off the main thread, and even then it requires explicit context switching for some operations, such as reducing the state. The library has no thread-safety features and no built-in functionality for parallelism, long-running task management, job execution, background work, or participation in a generic event-bus system, and it doesn't implement chain of responsibility or similar patterns out of the box. All of that has to be built on top of the library in-house, and often, due to its intentionally limited design, will not be possible or desirable.&lt;/p&gt;
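&lt;p&gt;For a sense of what "building it on top in-house" means: thread confinement itself is a simple pattern. Here is a hypothetical, dependency-free sketch of a store that confines all state mutations to one dedicated thread - the same kind of guarantee MVIKotlin gets by requiring reduction on the main thread (illustration only, not MVIKotlin code):&lt;/p&gt;

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Hypothetical sketch: a thread-confined state holder.
// All mutations are queued onto one dedicated thread, so the
// reducer never needs locks or atomics.
class ConfinedStore<S>(initial: S) {
    private val executor = Executors.newSingleThreadExecutor()

    @Volatile
    var state: S = initial
        private set

    // Queue a state mutation; it runs serially on the confined thread.
    fun dispatch(reduce: (S) -> S) {
        executor.execute { state = reduce(state) }
    }

    // Drain the queue and stop the confined thread.
    fun shutdown() {
        executor.shutdown()
        executor.awaitTermination(5, TimeUnit.SECONDS)
    }
}

fun main() {
    val store = ConfinedStore(0)
    repeat(1000) { store.dispatch { s -> s + 1 } }
    store.shutdown()
    check(store.state == 1000) // serial execution: no lost updates
}
```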

&lt;h3&gt;
  
  
  Who is MVIKotlin for?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;MVIKotlin is for teams and businesses that want a very small, conservative core with no third-party concepts, no tight coupling to a specific framework, and no relationship to UI code or implementation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;MVIKotlin is for you if you want maximum architectural freedom to design your own layers or build something on top of it, or you want a library that is neutral in terms of its reactivity implementation. The library also features mature and stable debugging tools with great functionality and an IDE plugin. &lt;/p&gt;

&lt;p&gt;If your team is big or your app is mature and you want something that will be easily understood by many different developers and is already widely adopted, there is a high chance that if you onboard someone familiar with MVI onto your team they will also be familiar with MVIKotlin in particular. This saves you time, simplifies hiring, and reduces the leeway in how developers approach changes or especially addition of new code - so you can expect an easier time following the standards, which means fewer bugs and problems in big teams.&lt;/p&gt;




&lt;h2&gt;
  
  
  &lt;a href="https://opensource.respawn.pro/FlowMVI/" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;In short, FlowMVI is the polar opposite of MVIKotlin. &lt;strong&gt;FlowMVI leans hard into the concept of freedom.&lt;/strong&gt; Its core philosophy is based on the premise that an architectural library should not constrain you, but enhance your degrees of freedom.&lt;/p&gt;

&lt;p&gt;The library's signature feature is its plugin system - a sort of merger between the chain-of-responsibility, decorator, and interceptor patterns. Plugins permeate every layer of the library, which is both a pro and a con.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to use Kotlin FlowMVI
&lt;/h3&gt;

&lt;p&gt;The minimum amount of code to write is limited to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Store property.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it - everything else (state, intents, etc.) is optional.&lt;/p&gt;

&lt;p&gt;But more likely, for every feature you build, you usually define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Intent&lt;/code&gt; sealed interface family. This is optional, you can use functions instead.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;State&lt;/code&gt; sealed family (the library encourages sealed, but a single class is also possible). State is also optional.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Action&lt;/code&gt; sealed family. These are one-off "Side effects" in FlowMVI, also optional.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Store&lt;/code&gt; object. The library doesn't let you extend the &lt;code&gt;Store&lt;/code&gt; interface, or at least doesn't encourage it, and instead uses a lambda-driven DSL with nice syntax for creating stores, which makes your code look declarative. So the library smartly avoids any sort of inheritance in its implementation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. You may have noticed that &lt;strong&gt;everything here is optional&lt;/strong&gt;. The restrictions the library places on you are minimal to nonexistent: a single object gives you the full functionality of the library, which means you can do whatever you want right off the bat.&lt;/p&gt;

&lt;p&gt;Your feature logic will look something like this (equivalent to the MVIKotlin example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;typealias&lt;/span&gt; &lt;span class="nc"&gt;Ctx&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PipelineContext&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LCEState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LCEIntent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LCEAction&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LCEContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LCERepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;store&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;store&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LCEState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="nf"&gt;recover&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;LCEState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LCEAction&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ShowError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="k"&gt;null&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nf"&gt;init&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="nf"&gt;reduce&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;intent&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;ClickedRefresh&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="nf"&gt;launch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="nc"&gt;LCEState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nc"&gt;Ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;LCEState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, this is much more concise, already provides some extra features out of the box, and reads like English. There is, however, a lot of black magic going on, with advanced constructs like this &lt;code&gt;PipelineContext&lt;/code&gt;, coroutines, and lambdas all over the place.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vohh5oax0w66yinz6km.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vohh5oax0w66yinz6km.webp" alt="FlowMVI diagram" width="719" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  FlowMVI benefits &amp;amp; pros
&lt;/h3&gt;

&lt;p&gt;FlowMVI is an absolute beast, providing a huge amount of functionality out of the box: a whopping &lt;strong&gt;76 different features and enhancements&lt;/strong&gt; that try to cover as many needs as possible.&lt;/p&gt;

&lt;p&gt;The plugin architecture of FlowMVI allows you to inject new behaviors, decompose logic, handle exceptions anywhere, and adjust almost any behavior at any point and stage of your business logic component's lifecycle. This architecture is what gives the library so many features.&lt;/p&gt;
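&lt;p&gt;As a hedged sketch of what that looks like (the &lt;code&gt;plugin&lt;/code&gt; / &lt;code&gt;onIntent&lt;/code&gt; / &lt;code&gt;onException&lt;/code&gt; names follow FlowMVI's documented plugin DSL, but verify the exact signatures against the current docs before relying on them), a cross-cutting logging plugin could be written once and installed into any store:&lt;/p&gt;

```kotlin
// Approximate FlowMVI-style plugin (check signatures against the FlowMVI docs).
// Each callback can observe, transform, or swallow what passes through the chain.
val logging = plugin<LCEState, LCEIntent, LCEAction> {
    onIntent { intent ->
        println("intent: $intent")
        intent // pass the intent to the next plugin; returning null would swallow it
    }
    onException { e ->
        println("unhandled: ${e.message}")
        e // let the next plugin (or the default handler) process the exception
    }
}

// Installed like any other plugin; plugins form a chain in declaration order:
// val store = store(initial = LCEState.Loading) { install(logging) }
```

&lt;p&gt;The same mechanism is how the built-in features (logging, analytics, undo/redo, and so on) are implemented, so your own plugins are not second-class citizens.&lt;/p&gt;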

&lt;p&gt;The library excels across the major criteria I took into consideration. It:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports Compose, XML Views, serialization, saved state natively and with a pretty clean DSL
&lt;/li&gt;
&lt;li&gt;Doesn't enforce usage of any third-party concept like AndroidX &lt;code&gt;ViewModel&lt;/code&gt;s
&lt;/li&gt;
&lt;li&gt;Has multiple sample apps (not as big as MVIKotlin's or Orbit's OSS ecosystem, but still there)&lt;/li&gt;
&lt;li&gt;Runs regular benchmarks which show excellent performance
&lt;/li&gt;
&lt;li&gt;Provides a testing harness
&lt;/li&gt;
&lt;li&gt;Supports all 9+ Kotlin Multiplatform targets
&lt;/li&gt;
&lt;li&gt;Has high test coverage
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although the library currently has no UI tests or end-to-end tests, my research suggests that end-to-end testing is not a common practice among architectural libraries anyway. &lt;/p&gt;

&lt;p&gt;Coroutines are now first-class citizens both in the Kotlin language and in Compose, and &lt;strong&gt;FlowMVI is intentionally built with coroutines.&lt;/strong&gt; Whether this matters depends on your existing stack, but if you are using coroutines, you will be delighted to learn that pretty much all of the FlowMVI API is suspendable, and many operations can be performed safely with structured concurrency in mind. The library doesn't force disposables, listeners, callbacks, or anything similar on you, delegating all of that to coroutines. &lt;/p&gt;

&lt;p&gt;The library treats concurrency and parallelism as first-class citizens. The core philosophy is to let you write asynchronous and reactive apps really fast. It gives you the ability to run your logic in parallel with great thread safety and out-of-the-box protection from data races, while still maintaining some of the best performance in single-threaded scenarios. &lt;strong&gt;This is actually unique among architecture libraries&lt;/strong&gt;, because very few of the libraries I studied actually encourage you to write concurrent and reactive code - unlike MVIKotlin, for example, which forcibly restricts you to the main thread, or Orbit MVI, which claims to support background execution and parallelism but doesn't actually provide the helpers, utilities, or synchronization needed to make your multithreaded code safe.&lt;/p&gt;

&lt;p&gt;This is the only library I've seen that allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatically send every exception to Crashlytics without any handling code
&lt;/li&gt;
&lt;li&gt;Track analytics, allowing you to automatically send user actions, screen visits, and session times
&lt;/li&gt;
&lt;li&gt;Use a long-running job management framework with extras like batching operations, backpressure control, retry, filtering and more
&lt;/li&gt;
&lt;li&gt;Collect actually useful metrics such as how long it takes for your stores to load data, start up, or how many inputs your business logic produces
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to enforce constraints on your code - for example, if you unit test your reducers and want them to be pure - you can do that through the library by creating your own plugin. Shaping the API surface of the library for your needs is easy, because the library just doesn't want interfaces, factories, wrappers, etc. from you - just one object named &lt;code&gt;Store&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Contrary to what a first impression may suggest, the library doesn't actually force you to use it with UI-level architecture or a particular framework. You are free to use it with many different UI libraries, or even in non-UI code. While being feature-rich, the library also manages not to be opinionated about the structure of your code. It doesn't force you into a particular structure, it doesn't require you to use or even have side effects, or to handle errors in a particular way - in fact, it offers multiple approaches through neighboring libraries such as &lt;a href="https://opensource.respawn.pro/ApiResult" rel="noopener noreferrer"&gt;ApiResult&lt;/a&gt;. FlowMVI doesn't smell of "Android" or "overengineering".&lt;/p&gt;

&lt;h3&gt;
  
  
  FlowMVI downsides &amp;amp; problems
&lt;/h3&gt;

&lt;p&gt;I guess the biggest problem with FlowMVI is that &lt;strong&gt;it can feel like black magic everywhere.&lt;/strong&gt; When you first jump into the library, it claims that you can start using it in 10 minutes. But to fully understand what's going on under the hood and all the quirks that the library's APIs have, and how they interact with coroutines, structured concurrency, and each other, you have to really dig into the sources and read a bunch of documentation - way more than with MVIKotlin or Orbit MVI.&lt;/p&gt;

&lt;p&gt;Because the library has such an extensive amount of features, you can try any one of them and find 15 more that you have to research, choose from, and understand. Many pieces of functionality in the library can be done in multiple ways, and sometimes it's not really clear which way is the best or future-proof. &lt;strong&gt;So if you're going for simplicity and want all of your team members following a single process, this isn't the library for you.&lt;/strong&gt; There is always room for imagination and creativity with FlowMVI. Unless your team explicitly agrees on standards and understands all of the advanced concepts of FlowMVI, you'll face chaos and bugs due to misuse of its capabilities.&lt;/p&gt;

&lt;p&gt;The library's flexibility is intentional, but it is also its drawback. The official documentation states that a single extension function can act as a "plugin", and depending on where you put it in the file (on which line of code you "install" it), it may completely change the behavior of your logic - changing when and how intents are handled, swallowing exceptions, or disabling logging and repository calls. That's a pretty big responsibility coming with this power - I wouldn't let juniors run amok with such tools in their hands.&lt;/p&gt;
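&lt;p&gt;To make the order sensitivity concrete, here is a tiny self-contained model of such a chain - plain Kotlin, &lt;em&gt;not&lt;/em&gt; FlowMVI's actual API - where each "plugin" may pass an intent on or swallow it, so reordering two plugins changes what the rest of the chain observes:&lt;/p&gt;

```kotlin
// A minimal stand-in for a plugin chain (NOT FlowMVI's real API): each plugin
// receives an intent and returns it (possibly transformed), or null to swallow it.
fun runChain(plugins: List<(String) -> String?>, intent: String): String? =
    plugins.fold(intent as String?) { acc, plugin -> acc?.let(plugin) }

fun main() {
    val seen = mutableListOf<String>()
    val logger: (String) -> String? = { seen += it; it } // observes, passes through
    val filter: (String) -> String? = { if (it == "Spam") null else it } // swallows "Spam"

    runChain(listOf(logger, filter), "Spam") // logger first: it still sees "Spam"
    runChain(listOf(filter, logger), "Spam") // filter first: the logger never runs

    println(seen) // the logger fired only once - same plugins, different behavior per order
}
```

&lt;p&gt;Conceptually, FlowMVI's real plugins compose the same way: each installed plugin decorates the store's pipeline in declaration order, which is why moving an &lt;code&gt;install&lt;/code&gt; call changes behavior.&lt;/p&gt;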

&lt;p&gt;Also, &lt;strong&gt;if you are not using coroutines, it will be pretty much impossible for you to use the library&lt;/strong&gt;: it is built entirely on coroutines, so not only must you be familiar with them, there are also no adapters for frameworks such as RxJava or Reaktive like MVIKotlin has. It's pretty much coroutines or nothing. I don't personally perceive that as a drawback, since coroutines are native to Kotlin, but this library definitely locks you into them even more than Compose does.&lt;/p&gt;

&lt;p&gt;As a nitpick, I found the time travel and logging plugin in FlowMVI lackluster compared to MVIKotlin's.&lt;/p&gt;

&lt;p&gt;And lastly, I have to mention once again that this library is much newer than the others, so my searches yielded very few open source usages, samples, and integrations. You will probably have a harder time finding relevant samples and implementations of what you need, and establishing best practices for your team, than with the other libraries in this list. &lt;strong&gt;Expect some exploration, experiments, and documenting your own way to use this library.&lt;/strong&gt; &lt;/p&gt;

&lt;h3&gt;
  
  
  FlowMVI library use cases
&lt;/h3&gt;

&lt;p&gt;I think FlowMVI is a great fit for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small teams, where you can spread information quickly and control the code through code review
&lt;/li&gt;
&lt;li&gt;Teams that are really fast-paced, ship features, iterate quickly, and don't yet have an established product that requires superb "codebase stability"
&lt;/li&gt;
&lt;li&gt;Solo developers making hobby projects or their own products
&lt;/li&gt;
&lt;li&gt;Big teams that don't shy away from flexibility and really want to stay on top of things, pursue modern technologies, and build new solutions on a single all-encompassing stack
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For those, FlowMVI will be a better fit than any other library. If only because of the sheer number of features it gives you, &lt;strong&gt;you can write code with FlowMVI incredibly fast, stop worrying about many issues (crashes/analytics/thread safety/data races/debugging/log collection), and rely on it at every step of your journey.&lt;/strong&gt; Whatever product requirement you get, you can almost surely find something in FlowMVI that will help you. And even if you don't, the library is structured in such a way that, with a few lines of code, you can extend the business logic anywhere in your app - or even everywhere at once - without extra refactoring or adjusting your codebase.&lt;/p&gt;

&lt;p&gt;If you have an established product with a big team, or you are hiring engineers who aren't versed in FlowMVI (it's a newer framework) and you aren't willing to spend time on developer education, then you should probably avoid FlowMVI. Every developer onboarding onto FlowMVI will need to read its documentation, dive into the sources, have clear usage examples, understand its internals to some degree, and be well versed in coroutines and skilled in general, because FlowMVI builds on many architectural patterns that developers must genuinely understand to use it effectively.&lt;/p&gt;

&lt;p&gt;And finally, &lt;strong&gt;if you are stuck with RxJava or a Java project, then you're out of luck&lt;/strong&gt; here since FlowMVI is pretty much unusable with RxJava and Java in general.&lt;/p&gt;




&lt;h2&gt;
  
  
  Orbit MVI
&lt;/h2&gt;

&lt;p&gt;This is probably the most well-known and popular Kotlin MVI framework in existence right now.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to use Orbit MVI in 2025/2026
&lt;/h3&gt;

&lt;p&gt;For any given feature you implement, you'll want to create:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;State&lt;/code&gt;: both sealed families and single data class styles work.&lt;/li&gt;
&lt;li&gt;Sealed interface for side effects (optional).&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;ViewModel&lt;/code&gt;: &lt;code&gt;ContainerHost&lt;/code&gt; with &lt;code&gt;container(initialState = Loading)&lt;/code&gt; to bootstrap.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LceViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ViewModel&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="nc"&gt;ContainerHost&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceSideEffect&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;container&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;container&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceSideEffect&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// bootstrap; runs as an implicit intent &lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;intent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;reduce&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nf"&gt;runCatching&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onSuccess&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;items&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;reduce&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Content&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onFailure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
                &lt;span class="nf"&gt;reduce&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="nf"&gt;postSideEffect&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceSideEffect&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ShowError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, &lt;strong&gt;the library's code is incredibly lean - I would even say leaner than FlowMVI's.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Orbit MVI benefits and advantages
&lt;/h3&gt;

&lt;p&gt;Orbit is conceptually the closest to how we used to write code in the MVVM era, and probably the easiest to understand of the libraries compared here.&lt;/p&gt;

&lt;p&gt;It specifically leans into the &lt;strong&gt;"MVVM with extras"&lt;/strong&gt; mental model, and indeed we can see familiar concepts in the code, such as viewmodels and their functions. Intents are simple lambdas containing code blocks, rather than the more convoluted hierarchies that model-driven MVI implementations have. Although FlowMVI also supports the MVVM+ style, Orbit MVI operates on concepts more familiar from MVVM, such as view models, and structures code in a much simpler way. &lt;/p&gt;

&lt;p&gt;Orbit is a mature, widely referenced framework. I found more than 130 open source usages of Orbit, which makes it really easy for anyone to learn how to use it and how it works. It has been in production since at least 2019, so it is stable, and you shouldn't expect huge changes to it in the future. There are numerous other articles on how to use and integrate Orbit MVI, so I won't dive too deep into guidance here.&lt;/p&gt;

&lt;p&gt;The library also supports Kotlin Multiplatform. Although I found that the documentation heavily references Android, that seems more like a legacy quirk than the library actually favoring Android, and I find that it works pretty well for multiplatform apps - especially since it doesn't really get in the way of your other code, such as UI-bound code (unlike MVIKotlin, which encourages Decompose, or FlowMVI, which hides everything behind magic DSLs). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Orbit MVI is probably the only popular framework that supports Android UI testing natively&lt;/strong&gt;, so that's a big upside if you really lean into UI tests and integration tests on Android specifically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Orbit MVI downsides and problems
&lt;/h3&gt;

&lt;p&gt;Despite being so popular and widely adopted, Orbit MVI isn't evolving very actively anymore. I was surprised to find the documentation still referencing Android extensively and being pretty minimal in general. The actual range of features and possibilities Orbit MVI offers is much wider than what the documentation states. That surprised me because, based on my analysis, the library scores pretty well - on par with MVIKotlin in functionality, albeit leaning in a slightly different direction. &lt;/p&gt;

&lt;p&gt;Both FlowMVI and Orbit share a similar philosophy - a lean library that focuses on features and gets out of your way when writing code - but FlowMVI currently offers much more, and in nicer packaging. It looks to me like Orbit MVI still carries some legacy from the Android era that hinders its progress in implementing interesting new features. Or maybe that just isn't in the scope of the library's authors. &lt;/p&gt;

&lt;p&gt;So you can't expect feature parity or even anything remotely comparable to FlowMVI. &lt;strong&gt;If you're only thinking about features and ease of use, it's hard to recommend Orbit MVI over FlowMVI going into 2026.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Unlike MVIKotlin and FlowMVI, Orbit doesn't ship with remote debugger support or an IDE plugin, so that may be a deal breaker for you if you favor debuggability and developer tooling. &lt;/p&gt;

&lt;h3&gt;
  
  
  When to use Orbit MVI
&lt;/h3&gt;

&lt;p&gt;I would say the surest reason to pick Orbit MVI over any other library is if your team is already familiar with MVVM - especially if you have an MVVM- or MVVM+-based app, perhaps with an in-house implementation, and now want to migrate to a well-maintained architectural framework as your main solution rather than in-house code. In that case, you would definitely choose Orbit MVI simply because of how familiar and at home you will feel, letting you transition to MVI gradually and smoothly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrating to Orbit MVI will probably be the easiest&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MVIKotlin would require extensive refactoring which can't really be automated and will at least require deploying an AI agent and then thoroughly reviewing all of the code.&lt;/li&gt;
&lt;li&gt;FlowMVI, although easy to migrate to - maybe even in an automated way - is not that conceptually similar to a traditional &lt;code&gt;ViewModel&lt;/code&gt;-based approach and doesn't lean into the MVVM vibe as hard as Orbit does. So with both of these libraries you will still be swimming somewhat against the current in an existing MVVM-based codebase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you're starting a new project, I would probably only recommend Orbit if you have a team of developers who are familiar with MVVM and you really want to get up to speed quickly and start coding features. Otherwise, I would probably recommend either choosing FlowMVI or MVIKotlin to secure a better future.&lt;/p&gt;




&lt;h2&gt;
  
  
  Ballast
&lt;/h2&gt;

&lt;p&gt;Ballast is a lesser-known architectural framework, and I'm not sure why, because it is a great contender and a great architectural library to use in 2026 and onwards. It features a simple, opinionated API without much fluff while staying flexible enough, and it has an impressive range of features - the second strongest after FlowMVI in terms of raw functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to use the Kotlin Ballast library
&lt;/h3&gt;

&lt;p&gt;To implement the LCE example from above, you need to create the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;data class State(...)&lt;/code&gt; holding a loading flag (or &lt;code&gt;Cached&amp;lt;T&amp;gt;&lt;/code&gt;), the data, and an error. Ballast encourages a single data class for its state.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sealed interface Inputs&lt;/code&gt;, which is Ballast's name for intents.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sealed interface Events&lt;/code&gt; for UI one-off events (e.g., &lt;code&gt;ShowError&lt;/code&gt;) (optional).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then the logic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;InputHandler&lt;/code&gt; implementation. This does exactly what it says - handles intents. It can not only update state but also suspend and do other things, so it's more than just a reducer.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;EventHandler&lt;/code&gt; implementation. If your view model has side effects (&lt;code&gt;Event&lt;/code&gt;s), then Ballast encourages separation of those side effects from the actual UI code such as composables you write. So you would issue navigation commands, for example, in an event handler that has a different lifecycle than the view model. &lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ViewModel&lt;/code&gt; - a &lt;code&gt;BasicViewModel&lt;/code&gt; (or &lt;code&gt;AndroidViewModel&lt;/code&gt; on Android) with &lt;code&gt;BallastViewModelConfiguration.Builder().withViewModel(initialState, inputHandler, name)&lt;/code&gt;. The library leans into view models as the container for business logic, although you don't have to put anything besides your setup there.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;LceInputHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;InputHandler&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;InputHandlerScope&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Load&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isLoading&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nf"&gt;sideJob&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"load"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nf"&gt;runCatching&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onSuccess&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;postInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loaded&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;onFailure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;postInput&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Failed&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Loaded&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isLoading&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;items&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Failed&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isLoading&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;error&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="s"&gt;"Unknown error"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="nf"&gt;postEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ShowError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;error&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="o"&gt;?:&lt;/span&gt; &lt;span class="s"&gt;"Unknown error"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;LceEventHandler&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;EventHandler&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// In Compose, collect events and show snackbar/nav inside LaunchedEffect (use platform integrations)&lt;/span&gt;
    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;EventHandlerScope&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;.&lt;/span&gt;&lt;span class="nf"&gt;handleEvent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;createLceViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;LceRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CoroutineScope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;BallastViewModel&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceEvent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BasicViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;config&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BallastViewModelConfiguration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Builder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;withViewModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;initialState&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LceState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;isLoading&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;inputHandler&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LceInputHandler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;repo&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"LceViewModel"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;eventHandler&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;LceEventHandler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;coroutineScope&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;also&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;trySend&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LceInput&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Load&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, the library doesn't lie when it claims to be opinionated. The structure is very interesting, but let's see what it gives us. &lt;/p&gt;

&lt;h3&gt;
  
  
  Benefits of Ballast
&lt;/h3&gt;

&lt;p&gt;I would say this library doesn't try to please everyone. It isn't try-hard like FlowMVI, or "junior-friendly" like Orbit, or boasting its "structure" like MVIKotlin. &lt;strong&gt;Instead, you will enjoy Ballast if you catch its drift, period.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ballast ranked second among the 70 libraries I compared in terms of features and functionality. It gives you a ready-made solution with some enhancements and cool stuff for every layer of your application architecture. It gives you tools and tricks for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The UI layer
&lt;/li&gt;
&lt;li&gt;The view model layer
&lt;/li&gt;
&lt;li&gt;Even the repository layer
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ballast features native integration with Firebase Analytics and can integrate with any other analytics service. It has a rich concept of interceptors and decorators, similar to how FlowMVI does them, while staying fairly simple and true to the MVI principle. &lt;/p&gt;
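&lt;p&gt;Ballast's actual interceptor interface is beyond the scope of this article, but the underlying idea is the classic decorator pattern. Here's a minimal, library-agnostic sketch (plain Java, all names made up for illustration):&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Library-agnostic sketch of the interceptor/decorator idea:
// wrap a handler so cross-cutting concerns (logging, analytics)
// run around every input without touching the handler itself.
public class InterceptorSketch {
    // A "handler" turns (state, input) into a new state; state is an int counter here.
    interface Handler { int handle(int state, String input); }

    // Decorate a handler: record every input, then delegate.
    static Handler logging(Handler inner, List<String> log) {
        return (state, input) -> {
            log.add("input: " + input);
            return inner.handle(state, input);
        };
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        Handler base = (state, input) -> input.equals("Increment") ? state + 1 : state;
        Handler decorated = logging(base, log);
        System.out.println(decorated.handle(0, "Increment") + " " + log);
    }
}
```

Interceptor systems in MVI libraries are variations on this shape; they mostly differ in how much of the pipeline they expose to the wrapper.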

&lt;p&gt;I like Ballast because it is very inspiring. &lt;strong&gt;It has many features while also staying down to business. It doesn't try as hard as FlowMVI to be "different".&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Ballast has a unique feature which allows you to synchronize your state and intents to an actual remote server, which no other library has. And that's not including some other interesting stuff you'll find if you dive into the docs. You should definitely check it out and get inspired by what you can build.&lt;/p&gt;

&lt;p&gt;Ballast's developer tooling is also great, featuring its own time-travel and debugging plugin with quite a number of uses, and a rich ecosystem built around creating apps quickly and easily. I would say that's the core philosophy of Ballast - it's a complete, batteries-included solution for building actual apps. The whole idea strikes me as kinda cool.&lt;/p&gt;

&lt;h3&gt;
  
  
  Downsides of Ballast
&lt;/h3&gt;

&lt;p&gt;As often happens, Ballast being so opinionated is also its biggest drawback. The library tries to do everything, but only in one particular way. So what if you aren't on the same wavelength as the authors of Ballast? Then you're going to have a really bad time, obviously.&lt;/p&gt;

&lt;p&gt;For example, I found that the Firebase Analytics integration only sends events in a particular way and isn't really flexible enough to send them in any other way or specialize them for any given page in the app, unlike what FlowMVI provides with its more generic but also more flexible implementation. The same can be said for the repository and caching functionality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you have the same use cases for caching and structure your repositories the way Ballast encourages you to (after reading that big chunk of documentation on the official Ballast website), you will be golden. But as soon as you get a use case that doesn't fit the pattern, or you need to make a change, you may have to ditch what Ballast has built for you and start working on something else.&lt;/strong&gt; This can lead to fragmentation and, frankly, to rewriting the same stuff with slight changes. You may find the use cases Ballast covers insufficient, or its features not flexible enough. That's a real issue.&lt;/p&gt;

&lt;p&gt;I've also found that the library seems to be maintained but not really super actively promoted or developed. Because the library is really opinionated, if the authors don't have any of the same use cases as you do, then there is no real incentive for them - and it isn't really in their general philosophy - to build something extra on top of what already exists.&lt;/p&gt;

&lt;p&gt;My third problem is that the library mixes and matches a bunch of styles. &lt;strong&gt;FlowMVI is really consistent in its style&lt;/strong&gt; - it gives off this Gen Z vibe of "let's do everything with lambdas". Ballast, however, combines Java-style builders with DSLs, lambdas, and factories. It has something that looks like a reducer but isn't really one - it's an &lt;code&gt;InputHandler&lt;/code&gt;. It uses the concept of view models, but its view models aren't really view models - they are just wrappers for view model builders, whatever that means for Ballast. I find the terminology confusing at times. Even though it makes sense conceptually, the library could benefit from more consistency in its overall style.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ballast Library Use Cases
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;You will really enjoy Ballast if you have similar use cases to the library's authors.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The people who benefit most from this library are small teams that build a specific type of app and aren't afraid to adopt someone else's architectural patterns. If you really do have similar use cases and considerations, you will benefit greatly: you'll save a lot of time and code and get an amazing feature set out of the library. &lt;/p&gt;

&lt;p&gt;But if you don't follow the patterns Ballast gives you - say you have a huge app or a big enterprise solution, or, at the other extreme, a really simple app and no desire to invest in understanding everything Ballast offers - then you're not a good fit for Ballast. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the first case (huge app / enterprise), you're better off using something either structured like MVIKotlin or flexible like FlowMVI, depending on your philosophy and needs.
&lt;/li&gt;
&lt;li&gt;In the second case (very simple app), you're better off just using Orbit as the simplest-to-understand solution for learning and to get up to speed quickly.
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So Ballast is kind of in this middle space of not really leaning into anything in particular.&lt;/p&gt;




&lt;h2&gt;
  
  
  Bonus: A spreadsheet comparing 70 architecture libraries
&lt;/h2&gt;

&lt;p&gt;I will admit I kind of went overboard when researching all of these libraries. &lt;strong&gt;I found more than 70 different state management / MVI / architectural libraries and compared them over 100+ criteria.&lt;/strong&gt; This write-up is based mostly on that research. &lt;/p&gt;

&lt;p&gt;I'd like to thank my colleague Artyom for originally doing the first part of this research for the Mobius conference we spoke at. Recently I decided to update those findings to include more criteria and features, and to re-evaluate every single library, because many have had major releases since then.&lt;/p&gt;

&lt;p&gt;And of course I will keep the spreadsheet updated as long as I can. Some of it updates automatically, such as maintenance status, and I will keep adding new comparison criteria. &lt;strong&gt;Please email me if I made any mistakes, if you want your library added, or if you just have something to say.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The spreadsheet has all the honorable mentions - definitely check them out! I'm sorry if I didn't include your library here - this article is huge as-is!&lt;/p&gt;

&lt;p&gt;Keep in mind that the scores in the spreadsheet are purely subjective and use a very simple formula that multiplies each feature's weight by its checkbox status. The weights were selected for an "average Joe" developer and are thus pure speculation, so you should check the full list and find the specific features that are important to &lt;strong&gt;your team&lt;/strong&gt;. &lt;/p&gt;
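&lt;p&gt;In code, that scoring boils down to a simple weighted sum - here's the idea, with weights and checkbox values invented purely for illustration:&lt;/p&gt;

```java
// The spreadsheet's scoring as described: total = sum over features of
// (weight * checkbox). Weights and features below are made up for illustration.
public class WeightedScore {
    static double score(double[] weights, boolean[] hasFeature) {
        double total = 0;
        for (int i = 0; i < weights.length; i++) {
            if (hasFeature[i]) total += weights[i]; // checkbox acts as 0 or 1
        }
        return total;
    }

    public static void main(String[] args) {
        double[] weights = {3.0, 1.5, 2.0};           // e.g. DSL, docs, tooling
        boolean[] features = {true, false, true};     // one library's checkboxes
        System.out.println(score(weights, features)); // 5.0
    }
}
```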

&lt;h3&gt;
  
  
  &lt;a href="https://docs.google.com/spreadsheets/d/1-8TH7d0tYZvGnVBlZl5uEsmsuCUaB1Ur78he9q9pSNI" rel="noopener noreferrer"&gt;Comparison of 70 Kotlin Architecture Libraries Over 100 Criteria&lt;/a&gt;
&lt;/h3&gt;

</description>
      <category>kotlin</category>
      <category>mvi</category>
      <category>architecture</category>
      <category>android</category>
    </item>
    <item>
      <title>I Found the #1 Cause of Freezes in Your App, and Here's the Proof</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Tue, 11 Nov 2025 15:50:45 +0000</pubDate>
      <link>https://dev.to/nek12/i-found-the-1-cause-of-freezes-in-your-app-and-heres-the-proof-59h7</link>
      <guid>https://dev.to/nek12/i-found-the-1-cause-of-freezes-in-your-app-and-heres-the-proof-59h7</guid>
      <description>&lt;p&gt;A lot of people ask me why I hate SharedPreferences, and at my job some people are even arguing with me that SharedPreferences are a good thing and that they don't lead to any problems whatsoever. &lt;br&gt;
&lt;strong&gt;But from my six years of development experience and more than 15 projects, I know that SharedPreferences are literally the number one cause of ANRs in many popular apps and third-party frameworks.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;You will always have ANRs because of them, no matter what you do (no, &lt;code&gt;edit&lt;/code&gt; doesn't help!). And in this post I will expose why you should remove SharedPreferences from your project ASAP.&lt;/p&gt;
&lt;h2&gt;
  
  
  Problem #1: SharedPreferences are fundamentally flawed and you can't fix that with any code you write
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The problem isn't how SharedPreferences are implemented internally - it's the paradigm of providing synchronous access to disk data.&lt;/strong&gt; A priori, this will always lead to problems: by definition of synchronous access, you cannot avoid reading the resource on the same thread you call it from. SharedPreferences create a fake facade of asynchronicity. They provide APIs like &lt;code&gt;apply&lt;/code&gt; and &lt;code&gt;commit&lt;/code&gt;, but under the hood they still read from disk on the main thread.&lt;/p&gt;

&lt;p&gt;Why, you ask? Because it's impossible to do otherwise. You have to read a file, and reading a file takes an arbitrary amount of time - sometimes more than 5 seconds. I will talk about why later; for now, let's take that for granted. So you try to read a file. Where are those 5 seconds going to come from? &lt;strong&gt;If you're trying to read from SharedPreferences (or create their object) directly during application startup, or in composable code, or while the application is initializing its dependency injection graph, you will cause an ANR.&lt;/strong&gt; If the file system is contended and you have to wait, then guess which thread waits for that file read? The main thread, of course, because that's where you invoke SharedPreferences constructors and operations.&lt;/p&gt;

&lt;p&gt;To avoid that, you might try to get rid of all references to SharedPreferences on the main thread. You will face a huge issue, because &lt;strong&gt;the SharedPreferences API assumes that you can, for some reason, create and call them safely on whatever thread you run, and it doesn't indicate in any way that it will perform a synchronous file creation and read on whatever thread you create the object on.&lt;/strong&gt; That's obviously a huge flaw in the design and should never have been allowed. With SharedPreferences, it's just too easy to make this stupid mistake: there is no suspend modifier on any calls, no proper flow-based, coroutine-based API for observing changes, no reactive way to retrieve the parameters. That's not an AOSP bug, but a design flaw. And a spoiler, so you don't actually try moving everything off the main thread: &lt;strong&gt;that won't help&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The reason it is designed that way is that we didn't know better in 2011. SharedPreferences have been there since the first version of Android, back when we used Java. We didn't care about the main thread and didn't know what problems reading files on it could even cause. Nobody cared about ANRs because users were used to all kinds of errors, and we didn't have coroutines that let us write concurrent code procedurally. But now we have all these things, and much higher standards for our applications. Why would we still use SharedPreferences nowadays?&lt;/p&gt;
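&lt;p&gt;For contrast, here's roughly what an honest API shape looks like - a sketch in plain Java, not Android's actual API (on Android you'd reach for DataStore's coroutine-based equivalent). The read is visibly asynchronous, so the caller can never block the main thread by accident:&lt;/p&gt;

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Sketch of an honest preferences API: the slow disk read happens off the
// calling thread, and the asynchrony is visible in the return type.
public class AsyncPrefs {
    // Simulate a slow disk read on a background thread.
    static CompletableFuture<Map<String, String>> load() {
        return CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return Map.of("isDarkMode", "true");
        });
    }

    public static void main(String[] args) {
        // The caller composes instead of blocking; join() here is only for the demo.
        String v = load().thenApply(m -> m.getOrDefault("isDarkMode", "false")).join();
        System.out.println(v);
    }
}
```

With a shape like this, "read on the main thread" becomes a deliberate act (an explicit `join()`) rather than an invisible default.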

&lt;p&gt;If you want proof, here's source code from literally the &lt;a href="https://android.googlesource.com/platform/frameworks/base.git/%2B/master/core/java/android/app/SharedPreferencesImpl.java" rel="noopener noreferrer"&gt;Android SDK&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Override&lt;/span&gt;
&lt;span class="nd"&gt;@Nullable&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="nf"&gt;getString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nd"&gt;@Nullable&lt;/span&gt; &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;defValue&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;synchronized&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mLock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;awaitLoadedLocked&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="n"&gt;mMap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;key&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt; &lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;v&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;defValue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;awaitLoadedLocked&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;mLoaded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Raise an explicit StrictMode onReadFromDisk for this&lt;/span&gt;
        &lt;span class="c1"&gt;// thread, since the real read will be in a different&lt;/span&gt;
        &lt;span class="c1"&gt;// thread and otherwise ignored by StrictMode.&lt;/span&gt;
        &lt;span class="nc"&gt;BlockGuard&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getThreadPolicy&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;onReadFromDisk&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;mLoaded&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;mLock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;InterruptedException&lt;/span&gt; &lt;span class="n"&gt;unused&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mThrowable&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="n"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;IllegalStateException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;mThrowable&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The StrictMode violation there is even &lt;strong&gt;explicit&lt;/strong&gt;! If the file hasn't fully loaded yet, this code awaits the full "load" of the SharedPreferences on whatever thread you try to get the value from (&lt;code&gt;awaitLoadedLocked&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What that means is that when you happily call &lt;code&gt;sharedPreferences.getBoolean("isDarkMode", false)&lt;/code&gt; in your composable code, you are essentially adding an ANR and a freeze to your app under file system contention.&lt;/strong&gt; Not so confident now?&lt;/p&gt;

&lt;p&gt;The only reason you haven't encountered this freeze is that you were lucky. You created SharedPreferences early enough, and far enough apart from your actual reads, that the system managed to read the file in time &lt;strong&gt;on your specific set of devices&lt;/strong&gt; - or your app simply froze for less than the five seconds needed to trigger an ANR, say two and a half. But if you ever find yourself wondering where the mysterious freeze and lag complaints in your negative Play Store reviews from users on cheap devices come from - maybe this is the place?&lt;/p&gt;
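&lt;p&gt;You can reproduce this blocking behavior in isolation. The toy model below (plain Java, heavily simplified - not the real &lt;code&gt;SharedPreferencesImpl&lt;/code&gt;) shows the same wait loop: the getter stalls whichever thread calls it until the background "disk load" signals completion:&lt;/p&gt;

```java
// Toy reproduction of the awaitLoadedLocked pattern: a getter that blocks
// its caller until a background "disk load" finishes and notifies the lock.
public class BlockingLoad {
    private final Object lock = new Object();
    private boolean loaded = false;
    private String value;

    void startLoad(long diskMillis) {
        new Thread(() -> {
            try { Thread.sleep(diskMillis); } catch (InterruptedException e) { return; }
            synchronized (lock) {
                value = "dark";
                loaded = true;
                lock.notifyAll(); // wake any thread stuck in get()
            }
        }).start();
    }

    String get() {
        synchronized (lock) {
            while (!loaded) { // same wait loop as awaitLoadedLocked()
                try { lock.wait(); } catch (InterruptedException ignored) { }
            }
            return value;
        }
    }

    public static void main(String[] args) {
        BlockingLoad prefs = new BlockingLoad();
        prefs.startLoad(100); // pretend the disk takes 100 ms
        long t0 = System.nanoTime();
        String v = prefs.get(); // the "main thread" stalls right here
        long waitedMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println(v + ", waited ~" + waitedMs + " ms");
    }
}
```

If the load takes five seconds because the file system is contended, `get()` holds the calling thread - in a real app, the main thread - for all five of them.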

&lt;h2&gt;
  
  
  Problem #2: The QueuedWork hack that hides ANRs from you
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Let's talk about another place where SharedPreferences cause ANRs.&lt;/strong&gt; I get why many people don't believe SharedPreferences cause their ANRs: the mechanics are so convoluted that I didn't believe it at first either. The main reason the ANRs happen is that when you call &lt;code&gt;apply&lt;/code&gt; on SharedPreferences, you get a fake promise of asynchronous work. The authors of SharedPreferences wanted to make those operations fast on the main thread and offload the work to some other place where it would be "safer" to block. For that, they created the so-called &lt;a href="https://android.googlesource.com/platform/frameworks/base/%2B/refs/heads/main/core/java/android/app/QueuedWork.java?autodive=0%2F" rel="noopener noreferrer"&gt;QueuedWork&lt;/a&gt; class:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="cm"&gt;/**
 * Trigger queued work to be processed immediately. The queued work is processed on a separate
 * thread asynchronous. While doing that run and process all finishers on this thread. The
 * finishers can be implemented in a way to check weather the queued work is finished.
 *
 * Is called from the Activity base class's onPause(), after BroadcastReceiver's onReceive,
 * after Service command handling, etc. (so async work is never lost)
 */&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="n"&gt;static&lt;/span&gt; &lt;span class="n"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;waitToFinish&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;long&lt;/span&gt; &lt;span class="n"&gt;startTime&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;currentTimeMillis&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;boolean&lt;/span&gt; &lt;span class="n"&gt;hadMessages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nf"&gt;synchronized&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sLock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;hadMessages&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getHandler&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;hasMessages&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;QueuedWorkHandler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MSG_RUN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nf"&gt;handlerRemoveMessages&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;QueuedWorkHandler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;MSG_RUN&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt; &lt;span class="p"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="n"&gt;hadMessages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;d&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LOG_TAG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"waiting"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="c1"&gt;// We should not delay any work as this might delay the finishers&lt;/span&gt;
        &lt;span class="n"&gt;sCanDelay&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// author's note - [1]&lt;/span&gt;
    &lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;ThreadPolicy&lt;/span&gt; &lt;span class="n"&gt;oldPolicy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;allowThreadDiskWrites&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;processPendingWork&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;StrictMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;setThreadPolicy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;oldPolicy&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;Runnable&lt;/span&gt; &lt;span class="n"&gt;finisher&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="nf"&gt;synchronized&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sLock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;finisher&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;sFinishers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;poll&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;finisher&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="k"&gt;break&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="n"&gt;finisher&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;finally&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;sCanDelay&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;synchronized&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sLock&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;long&lt;/span&gt; &lt;span class="n"&gt;waitTime&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;System&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;currentTimeMillis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="n"&gt;startTime&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;waitTime&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="n"&gt;hadMessages&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;mWaitTimes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;Long&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;valueOf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;waitTime&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;intValue&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
            &lt;span class="n"&gt;mNumWaits&lt;/span&gt;&lt;span class="p"&gt;++;&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;DEBUG&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="n"&gt;mNumWaits&lt;/span&gt; &lt;span class="p"&gt;%&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt; &lt;span class="p"&gt;||&lt;/span&gt; &lt;span class="n"&gt;waitTime&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;MAX_WAIT_TIME_MILLIS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;mWaitTimes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LOG_TAG&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"waited: "&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Take a look at [1]! This is literally a hack that hides the disk-write problem from you.&lt;/strong&gt; The creators of SharedPreferences lie to you - they temporarily disable the StrictMode policy you worked so hard to enable, just because they don't want you to see them using this hack.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This class exists specifically to hide asynchronous file writes and flush them in certain lifecycle callbacks.&lt;/strong&gt; When SharedPreferences executes &lt;code&gt;commit&lt;/code&gt; or &lt;code&gt;apply&lt;/code&gt;, instead of writing to the file immediately, it queues the work through this class. The Android framework then calls specific methods of this class to commit, or "finalize", the pending work to disk - on the main thread. The justification in the sources is that "the work is never lost". But as we already established, that premise is outdated and false. It was invented somewhere around the 2010s to give consumers a false sense of security that their file writes would never be lost. &lt;strong&gt;This is a flawed approach: instead of requiring the consumer to actually await the call that commits to the file, or using eager writes with write-ahead logging, or even a simple copy-on-write algorithm, SharedPreferences just postpones the consequences of the delayed asynchronous work to some other method&lt;/strong&gt; where it was assumed "safe to block" the main thread because the UI wouldn't care. That assumption no longer holds: the application and activity lifecycles have changed significantly over the years, and we no longer use the lifecycle callbacks the way we did back then.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This also derails your attempts to diagnose those ANRs, because the stack trace will never point to the actual SharedPreferences call&lt;/strong&gt; - which is what I warned about in my &lt;a href="https://dev.to/blog/i-achieved-0-anr-in-my-android-app-spilling-beans-on-how-i-did-it-part-1"&gt;previous post&lt;/a&gt; on ANRs.&lt;/p&gt;

&lt;p&gt;Here are the concrete points in the app lifecycle where these stack traces will show up (&lt;a href="https://android.googlesource.com/platform/frameworks/base/%2B/d630f105e8bc0021541aacb4dc6498a49048ecea/core/java/android/app/ActivityThread.java" rel="noopener noreferrer"&gt;AOSP ActivityThread.java&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Activity pause&lt;/strong&gt; (older apps targeting pre-Honeycomb): after &lt;code&gt;performPauseActivity(...)&lt;/code&gt;, &lt;code&gt;QueuedWork.waitToFinish()&lt;/code&gt; is invoked. Expect stacks like &lt;code&gt;ActivityThread.handlePauseActivity → QueuedWork.waitToFinish&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Activity stop&lt;/strong&gt; (modern apps): after &lt;code&gt;performStopActivityInner(...)&lt;/code&gt;, the framework runs &lt;code&gt;QueuedWork.waitToFinish()&lt;/code&gt;. Expect stacks like &lt;code&gt;ActivityThread.handleStopActivity → QueuedWork.waitToFinish&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sleeping&lt;/strong&gt;: &lt;code&gt;handleSleeping&lt;/code&gt; also calls &lt;code&gt;QueuedWork.waitToFinish()&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service start args&lt;/strong&gt;: &lt;code&gt;handleServiceArgs&lt;/code&gt; drains queued work before reporting &lt;code&gt;serviceDoneExecuting(...)&lt;/code&gt;. Expect stacks like &lt;code&gt;ActivityThread.handleServiceArgs → QueuedWork.waitToFinish&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service stop&lt;/strong&gt;: &lt;code&gt;handleStopService&lt;/code&gt; also drains. Expect stacks like &lt;code&gt;ActivityThread.handleStopService → QueuedWork.waitToFinish&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Bonus: &lt;code&gt;commit()&lt;/code&gt; can still write on the caller (UI) thread&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For &lt;code&gt;commit()&lt;/code&gt; (sync), &lt;code&gt;enqueueDiskWrite(..., /*postWriteRunnable=*/ null)&lt;/code&gt; treats it as a synchronous commit and may run &lt;code&gt;writeToDiskRunnable.run()&lt;/code&gt; on the current thread (see the &lt;code&gt;isFromSyncCommit&lt;/code&gt; and &lt;code&gt;wasEmpty&lt;/code&gt; fast-path). If you ever call &lt;code&gt;commit()&lt;/code&gt; from the main thread, that's immediate UI-thread I/O.&lt;/p&gt;
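
&lt;p&gt;If you're stuck with SharedPreferences for now, at least keep &lt;code&gt;commit()&lt;/code&gt; off the main thread. A minimal sketch (the helper name is mine, not a framework API):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import android.content.SharedPreferences
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext

// Hypothetical helper: runs the synchronous commit on an I/O dispatcher,
// so the blocking write + fsync happens off the main thread and stays
// attributable to this call site instead of a random lifecycle callback.
suspend fun SharedPreferences.commitOffMain(
    block: SharedPreferences.Editor.() -&amp;gt; Unit,
): Boolean = withContext(Dispatchers.IO) {
    val editor = edit()
    editor.block()
    editor.commit() // blocking, but now on an I/O thread
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;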

&lt;h2&gt;
  
  
  "But it's only 2 milliseconds!"
&lt;/h2&gt;

&lt;p&gt;And here's the last point that people who argue with me usually bring up:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Oh, but it's only a 2 millisecond read. It's nothing. It's just an atomic file operation on main thread... I saw StrictMode violation and it says the duration was 5 milliseconds. Nobody is ever going to notice that! We are not gonna fix it because it is just a small delay. You're just making a mountain out of a molehill!"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As we already established, the first point is that &lt;code&gt;apply&lt;/code&gt; isn't really asynchronous. It queues the work, then registers a finisher that runs in various lifecycle callbacks. So if that write ever happens to be slow, you get an ANR in one of those callbacks or at application start. By now it should be obvious how that manifests in practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The second point is that the actual on-disk flush is a blocking fsync/sync - and Android's own source tracks its tail latency.&lt;/strong&gt; The write path ends in &lt;code&gt;FileUtils.sync(out) → out.getFD().sync()&lt;/code&gt;, i.e., a blocking flush. &lt;code&gt;SharedPreferencesImpl&lt;/code&gt; even logs/records fsync durations and warns when they exceed a threshold, which exists because fsync can be slow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In simple terms: on cheap devices, with slow flash or external storage such as SD cards, and at moments of heavy system load (e.g. the user is transcoding media or playing a high-quality video), the system calls that issue disk writes are very likely to enter contention for resources.&lt;/strong&gt; This doesn't reproduce in sterile QA environments or debug builds, especially if the team tests on an emulator or an expensive device with enough I/O bandwidth to absorb all the file writes. And especially since the data stored in SharedPreferences on debug builds is usually much smaller than it gets in production. (You're not writing unbounded lists to a single XML file, are you?)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What that means is that this is expected behavior from the system. When the OS syscall enters contention and suspends the thread to perform a write/read-and-sync op on the file system, it may wait for an indefinite amount of time&lt;/strong&gt; - for other operations to finish, or for the system security service to grant the app access to its data directories or external storage - which in rare cases makes file operations take far longer than usual. This is very hard to reproduce, because you aren't running stress or load tests of your app on a $100 budget Xiaomi device, are you? But this is how real-world devices operate. For example, if you perform file r/w in a &lt;code&gt;BOOT_COMPLETED&lt;/code&gt; broadcast, like I mentioned in my previous post, you have the "best" chance of catching this contended state, because hundreds of other apps are handling the same broadcast at the same time - and as you know, broadcasts are executed on the main thread. So if you ever wondered why your broadcast stack traces show up in ANR reports in Crashlytics, this is your answer.&lt;/p&gt;

&lt;p&gt;And even though raw file system contention isn't as bad as it used to be, that improvement has been largely offset by the introduction of scoped storage and the new SAF and FUSE layers on the Android side, which are much slower than direct reads. So when the user's device is writing gigabytes of data and your app happens to be open, guess who bears the burden of the huge journal flush triggered by an fsync call? For skeptics, here's a paper that measured &lt;a href="https://www.usenix.org/system/files/conference/atc17/atc17-park.pdf" rel="noopener noreferrer"&gt;real delays&lt;/a&gt; in ext* file systems.&lt;/p&gt;

&lt;p&gt;And folks, I'm not claiming credit for discovering this. If you think it's some new information nobody knew about, I'm honestly just expanding on the official &lt;a href="https://android-developers.googleblog.com/2020/09/prefer-storing-data-with-jetpack.html" rel="noopener noreferrer"&gt;Android developers blog post&lt;/a&gt;. So for those who say "we will only do what Google recommends, not what some random guy on the internet says" - please, be my guest and implement a better I/O architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The invisible problem: freezes that don't show up in Crashlytics
&lt;/h2&gt;

&lt;p&gt;My problem when discussing this is that - I'll be honest with you - I can't give you charts showing fsync latency going off the rails. There is no public data for these metrics (believe me, I searched): they are internal Android analytics, and it isn't in Google's interest to disclose how bad file I/O can get. I can point you to a hundred different stack traces from my own apps and to discussions of thousands of other people complaining about this exact issue, but I cannot tell you definitively how many users are affected.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you target cheaper devices, which are still very popular around the world, you are bound to have ANRs because of this.&lt;/strong&gt; Or, worse in my opinion, you won't get real ANRs reported to Crashlytics at all. &lt;strong&gt;What you're likely suffering from is significant delays and freezes that users notice and perceive, but that aren't long enough to trigger an actual ANR.&lt;/strong&gt; This ruins your app's user experience but never surfaces in Crashlytics or bug reports, because it's not as evident. All you'll see is the occasional user complaining that the app is slow, laggy, or constantly freezes, and you'll dismiss their review as an outlier. And then you'll wonder why your users churn so much.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So don't take this lightly. If you believe the official documentation, the real-world evidence, and this explanation, use something other than SharedPreferences. It isn't that hard, I promise.&lt;/strong&gt; For example, DataStore offers a safe suspending API with a copy-on-write algorithm under the hood. I know some people don't like it, saying "it isn't as convenient to use" because it doesn't give you a synchronous API to invoke on the main thread. But maybe DataStore is designed that way for a reason? We should think about why we face friction during development and sometimes go along with it, instead of reaching for the easiest solution possible.&lt;/p&gt;
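
&lt;p&gt;For reference, here's roughly what the DataStore replacement looks like - a minimal sketch with illustrative key and property names, not a drop-in for your app:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import android.content.Context
import androidx.datastore.preferences.core.booleanPreferencesKey
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// One DataStore per file. Reads are a Flow, writes are suspending,
// and the atomic copy-on-write happens off the main thread.
val Context.settings by preferencesDataStore(name = "settings")

val ONBOARDING_DONE = booleanPreferencesKey("onboarding_done")

fun Context.onboardingDone(): Flow&amp;lt;Boolean&amp;gt; =
    settings.data.map { it[ONBOARDING_DONE] ?: false }

suspend fun Context.setOnboardingDone(done: Boolean) {
    settings.edit { it[ONBOARDING_DONE] = done }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Note there is no synchronous main-thread getter - and that's the point: the API forces you to read reactively or suspend.&lt;/p&gt;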

&lt;p&gt;&lt;strong&gt;A proper asynchronous API for disk operations and strong developer ethics will not just solve your ANR problem, they will teach you how to work with data reactively and efficiently.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>android</category>
      <category>anr</category>
      <category>sharedpreferences</category>
      <category>datastore</category>
    </item>
    <item>
      <title>I achieved 0% ANR in my Android app. Spilling beans on how I did it - part 1</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Sun, 09 Nov 2025 17:13:44 +0000</pubDate>
      <link>https://dev.to/nek12/i-achieved-0-anr-in-my-android-app-spilling-beans-on-how-i-did-it-part-1-2b38</link>
      <guid>https://dev.to/nek12/i-achieved-0-anr-in-my-android-app-spilling-beans-on-how-i-did-it-part-1-2b38</guid>
      <description>&lt;p&gt;After a year of effort, I finally achieved 0% ANR in Respawn. Here's a complete guide on how I did it.&lt;/p&gt;

&lt;p&gt;Let's start with 12 tips you need to address first, and in the next post I'll talk about three hidden sources of ANR that my colleagues still don't believe exist.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Add event logging to Crashlytics
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Crashlytics allows you to record any logs in a separate field to see what the user was doing before the ANR.&lt;/strong&gt; Libraries like &lt;a href="https://opensource.respawn.pro/FlowMVI/" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt; let you do this automatically. Without this, you won't understand what led to the ANR, because their stack traces are absolutely useless.&lt;/p&gt;
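
&lt;p&gt;A minimal sketch of what such breadcrumb logging can look like (the wrapper function is illustrative; FlowMVI can install logging like this for you automatically):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import com.google.firebase.crashlytics.FirebaseCrashlytics

// Breadcrumbs are attached to the next crash/ANR report,
// giving you the user's actions leading up to it.
fun logBreadcrumb(message: String) {
    FirebaseCrashlytics.getInstance().log(message)
}

// e.g. call it for every intent / state transition:
// logBreadcrumb("intent: AddTodoClicked")
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;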

&lt;h2&gt;
  
  
  2. Completely remove SharedPreferences from your project
&lt;/h2&gt;

&lt;p&gt;Especially encrypted ones. &lt;strong&gt;They are the #1 cause of ANRs.&lt;/strong&gt; Use DataStore with Kotlin Serialization instead. I'll explain why I hate prefs so much in a separate post later.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Experiment with handling UI events in a background thread
&lt;/h2&gt;

&lt;p&gt;If you're dealing with a third-party SDK that blocks the main thread, this won't remove the delay itself, but it will mask the ANR by moving the long operation off the main thread earlier.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Avoid using GMS libraries on the main thread
&lt;/h2&gt;

&lt;p&gt;These are prehistoric Java libraries with callbacks, inside which there's no understanding of even the concept of threads, let alone any action against ANRs. Create coroutine-based abstractions and call them from background dispatchers.&lt;/p&gt;
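
&lt;p&gt;A minimal sketch of such an abstraction, assuming the &lt;code&gt;kotlinx-coroutines-play-services&lt;/code&gt; artifact for &lt;code&gt;Task.await()&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import com.google.android.gms.tasks.Task
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.tasks.await
import kotlinx.coroutines.withContext

// Turn a callback-based GMS Task into a suspending call that
// completes off the main thread, so its callbacks never block UI.
suspend fun &amp;lt;T&amp;gt; Task&amp;lt;T&amp;gt;.awaitOffMain(): T =
    withContext(Dispatchers.Default) { await() }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;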

&lt;h2&gt;
  
  
  5. Check your Bitmap / Drawable usage
&lt;/h2&gt;

&lt;p&gt;Bitmap images when placed incorrectly (e.g., not using drawable-nodpi) can lead to loading images that are too large and cause ANRs.&lt;/p&gt;

&lt;p&gt;Non-obvious point: &lt;strong&gt;this is actually an OOM crash, but an OutOfMemoryError can manifest not as a crash, but as an ANR!&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Enable StrictMode and aggressively fix all I/O operations on the main thread
&lt;/h2&gt;

&lt;p&gt;You'll be shocked at how many you have. Always keep StrictMode enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important: enable StrictMode in a content provider with priority Int.MAX_VALUE, not in Application.onCreate().&lt;/strong&gt; In the next post I'll reveal libraries that push ANRs into content providers so you don't notice.&lt;/p&gt;
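
&lt;p&gt;A sketch of such a provider (the priority is set via &lt;code&gt;android:initOrder&lt;/code&gt; on the manifest entry; higher values initialize first):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import android.content.ContentProvider
import android.content.ContentValues
import android.database.Cursor
import android.net.Uri
import android.os.StrictMode

// Declared in the manifest with android:initOrder="2147483647"
// so it runs before Application.onCreate() and other providers.
class StrictModeInitProvider : ContentProvider() {
    override fun onCreate(): Boolean {
        StrictMode.setThreadPolicy(
            StrictMode.ThreadPolicy.Builder().detectAll().penaltyLog().build()
        )
        return true
    }

    // The rest is a no-op: this provider exists only for early init.
    override fun query(u: Uri, p: Array&amp;lt;String&amp;gt;?, s: String?, a: Array&amp;lt;String&amp;gt;?, o: String?): Cursor? = null
    override fun getType(u: Uri): String? = null
    override fun insert(u: Uri, v: ContentValues?): Uri? = null
    override fun delete(u: Uri, s: String?, a: Array&amp;lt;String&amp;gt;?): Int = 0
    override fun update(u: Uri, v: ContentValues?, s: String?, a: Array&amp;lt;String&amp;gt;?): Int = 0
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;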

&lt;h2&gt;
  
  
  7. Look for memory leaks
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Never use bare coroutine scope constructors (&lt;code&gt;CoroutineScope(Job())&lt;/code&gt;).&lt;/strong&gt; Add timeouts to all suspend functions that do I/O. Add error handling. Use LeakCanary. Profile memory usage. Analyze the analytics from step 1 to find user actions that lead to ANRs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;80% of my ANRs were caused by memory leaks and occurred during huge GC pauses.&lt;/strong&gt; If you're seeing mysterious ANRs in the console during long sessions, it's extremely likely that it's just a GC pause due to a leak.&lt;/p&gt;
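
&lt;p&gt;As a sketch, here's one way to bound suspending I/O so a stuck call can't pin a coroutine (and everything it captures) forever - the helper name is mine:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import kotlinx.coroutines.TimeoutCancellationException
import kotlinx.coroutines.withTimeout

// Wrap every suspending I/O call in a timeout so a hung disk or
// network operation cancels instead of leaking its whole call stack.
suspend fun &amp;lt;T&amp;gt; withIoTimeout(
    timeoutMs: Long = 10_000,
    block: suspend () -&amp;gt; T,
): T? = try {
    withTimeout(timeoutMs) { block() }
} catch (e: TimeoutCancellationException) {
    null // report to analytics here instead of hanging forever
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;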

&lt;h2&gt;
  
  
  8. Don't trust stack traces
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;They're misleading, always pointing at some random code. Don't believe it - 90% of ANRs are caused by &lt;em&gt;your code&lt;/em&gt;.&lt;/strong&gt; I reached 0.01% ANR after I got serious about finding them and stopped blaming &lt;code&gt;MessageQueue.nativePollOnce&lt;/code&gt; for all my problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. Avoid loading files into memory
&lt;/h2&gt;

&lt;p&gt;Ban the use of &lt;code&gt;File().readBytes()&lt;/code&gt; completely. &lt;strong&gt;Always stream JSON, binary data, files, database rows, and backend responses, and encrypt data through Output/InputStreams.&lt;/strong&gt; Never call &lt;code&gt;readText()&lt;/code&gt;, &lt;code&gt;readBytes()&lt;/code&gt;, or their equivalents.&lt;/p&gt;
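
&lt;p&gt;A minimal streaming sketch with kotlinx.serialization (&lt;code&gt;decodeFromStream&lt;/code&gt; is still marked experimental at the time of writing):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import java.io.File
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.json.Json
import kotlinx.serialization.json.decodeFromStream

// Decode straight from the stream: memory use stays bounded
// instead of holding the entire file via readText()/readBytes().
@OptIn(ExperimentalSerializationApi::class)
inline fun &amp;lt;reified T&amp;gt; readJson(file: File): T =
    file.inputStream().buffered().use { input -&amp;gt;
        Json.decodeFromStream&amp;lt;T&amp;gt;(input)
    }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;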

&lt;h2&gt;
  
  
  10. Use Compose and avoid heavy layouts
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Some devices are so bad that rendering UI causes ANRs.&lt;/strong&gt; &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make the UI lightweight and load it gradually. &lt;/li&gt;
&lt;li&gt;Employ progressive content loading to stagger UI rendering. &lt;/li&gt;
&lt;li&gt;Watch out for recomposition loops - they're hard to notice.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  11. Call goAsync() in broadcast receivers
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Set a timeout (mandatory!) and execute work in a coroutine.&lt;/strong&gt; This will help avoid ANRs because broadcast receivers are often executed by the system under huge load (during &lt;code&gt;BOOT_COMPLETED&lt;/code&gt; hundreds of apps are firing broadcasts), and you can get an ANR simply because the phone lagged.&lt;/p&gt;

&lt;p&gt;Don't perform any work in broadcast receivers synchronously. This way you have less chance of the system blaming you for an ANR.&lt;/p&gt;
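
&lt;p&gt;A sketch of such a receiver (the 8-second budget is my own safety margin under the system's limit; a fresh scope is acceptable here because a receiver has no lifecycle owner to tie it to):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.launch
import kotlinx.coroutines.withTimeout

abstract class AsyncBroadcastReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val pending = goAsync() // keep the process alive while we work
        CoroutineScope(SupervisorJob() + Dispatchers.Default).launch {
            try {
                withTimeout(8_000) { onReceiveAsync(context, intent) }
            } finally {
                pending.finish() // always release, or you leak the broadcast
            }
        }
    }

    protected abstract suspend fun onReceiveAsync(context: Context, intent: Intent)
}
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;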

&lt;h2&gt;
  
  
  12. Avoid service binders altogether (bindService())
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;It's better to send events through the application class instead.&lt;/strong&gt; Binders to services will always cause ANRs no matter what you do. This is native code that, on Xiaomi "flagships for the money", will enter contention for system calls on their ancient chipset - and you'll be the one getting blamed.&lt;/p&gt;




&lt;p&gt;If you did all of this, you just eliminated 80% of ANRs in your app. &lt;br&gt;
Next I'll talk about non-obvious problems that we'll need to solve if we want truly 0% ANR.&lt;br&gt;
Subscribe to social media or the newsletter so you don't miss part two.&lt;/p&gt;

</description>
      <category>android</category>
      <category>anr</category>
      <category>performance</category>
      <category>kotlin</category>
    </item>
    <item>
      <title>How I built a game engine using MVI in Kotlin and avoided getting fired</title>
      <dc:creator>Nek.12</dc:creator>
      <pubDate>Fri, 07 Nov 2025 11:41:56 +0000</pubDate>
      <link>https://dev.to/nek12/how-i-built-a-game-engine-using-mvi-in-kotlin-and-avoided-getting-fired-38hn</link>
      <guid>https://dev.to/nek12/how-i-built-a-game-engine-using-mvi-in-kotlin-and-avoided-getting-fired-38hn</guid>
      <description>&lt;p&gt;In this article, I'll tell you a story about &lt;strong&gt;how our team created a multiplatform, full-fledged game engine using MVI architecture, fully in Kotlin!&lt;/strong&gt; You will also learn about how I implemented some &lt;em&gt;insane&lt;/em&gt; requirements from our customer when working on said engine. So let's jump right in!&lt;/p&gt;

&lt;p&gt;I'm working on an app called &lt;a href="https://overplay.com" rel="noopener noreferrer"&gt;Overplay&lt;/a&gt;. It's similar to TikTok, but the videos you see are actually games that you can play as you scroll. One day, I was painting another button when the customer came to me to discuss the app's performance and the experience of starting and finishing a game. In short, the problem we had for years was the legacy game engine.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It was still using XML on Android and contained 7 thousand lines of legacy code&lt;/strong&gt;, most of which was dead but still executed, tanking performance. The experience was not fluid, games loaded slowly (20 seconds to load a game was a regular occurrence for us), and everything was laggy. We also had a lot of nasty crashes related to concurrency and &lt;strong&gt;state management&lt;/strong&gt;, because dozens of different parts of the engine wanted to send events and update the game state simultaneously. &lt;strong&gt;The team had no idea how to solve those issues&lt;/strong&gt; - our simple &lt;strong&gt;MVVM architecture was not holding up&lt;/strong&gt; at all. The ViewModel alone contained 2000 lines of code, and any change exploded something else.&lt;/p&gt;

&lt;p&gt;So the customer said - time to make the game engine great again. But the new requirements he wanted implemented were just &lt;strong&gt;bonkers&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The game engine must be &lt;strong&gt;embedded directly&lt;/strong&gt; into the feed of games to let the user scroll away once they finish the game. It means it has to be inside another page and bring all the logic with it!&lt;/li&gt;
&lt;li&gt;The game engine must &lt;strong&gt;start games in less than 2 seconds flat&lt;/strong&gt;. This means that &lt;strong&gt;everything&lt;/strong&gt; has to be managed in parallel and in the background as the user scrolls!&lt;/li&gt;
&lt;li&gt;If the user replays or restarts the game, the loading must be &lt;strong&gt;instant&lt;/strong&gt;. Thus, we have to keep the engine running and manage the resources dynamically.&lt;/li&gt;
&lt;li&gt;Every single action of the user must be &lt;strong&gt;covered by analytics&lt;/strong&gt; to keep improving it in the future.&lt;/li&gt;
&lt;li&gt;The game engine must support all sorts of videos, including local ones for when someone wants to make their own game and play it.&lt;/li&gt;
&lt;li&gt;Since the user scrolls through videos like on TikTok, we need to efficiently free and &lt;strong&gt;reuse our media codecs&lt;/strong&gt; and video players to seamlessly jump back and forth between playing a game video and other items of the feed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All errors must always be handled, reported, and recovered from&lt;/strong&gt; to ensure we no longer ruin the users' experience with crashes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I'm gonna be honest, I thought I was gonna get fired.&lt;/p&gt;

&lt;p&gt;"There's no way to implement this crazy logic in 1 sprint" - I thought. &lt;strong&gt;Half of the app must be easily embeddable and the state must always be consistent&lt;/strong&gt;, with &lt;strong&gt;hundreds&lt;/strong&gt; of state updates going on at the same time: the device sensors, our graphic engine, the video player, and more. Everything has to be reused everywhere and loaded in parallel. To put the last nail in the coffin, the amount of code has to be kept small as well to let the team make future changes to the engine without shooting themselves in the foot.&lt;/p&gt;

&lt;p&gt;But I had to do it, there was no way to avoid it this time. Of course, I couldn't have done this alone. Huge props to the team:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One member took our graphics engine and made it compatible with Compose since there was no way we were doing that without Compose.&lt;/li&gt;
&lt;li&gt;Another developer spent time making a module for the Game Loop which sends events and orchestrates the graphics engine.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Preparations
&lt;/h2&gt;

&lt;p&gt;So I was responsible for game loading and overall integration in the end. And I thought - well, &lt;strong&gt;these requirements are not about features, they are about architecture&lt;/strong&gt;. My task was to implement the &lt;strong&gt;architecture&lt;/strong&gt; that supports all of those. Easier said than done though...&lt;/p&gt;

&lt;p&gt;Here's a simplified diagram of what my final architecture looked like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuq5wodyt5hd50bajqf5.webp" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxuq5wodyt5hd50bajqf5.webp" alt="Image" width="800" height="759"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An important thing to understand before we begin is that &lt;strong&gt;to implement the new architecture, I got inspired by Ktor and their amazing system of "Plug-Ins"&lt;/strong&gt; that form a chain of responsibility and intercept any incoming and outgoing events. Why not use this for &lt;strong&gt;any business logic&lt;/strong&gt;, I thought? This is a new approach to app architecture because we used to only do this kind of thing with CQRS on the backend or in networking code.&lt;/p&gt;

&lt;p&gt;Luckily, this was already implemented in the architectural framework we were using - &lt;a href="https://github.com/respawn-app/FlowMVI" rel="noopener noreferrer"&gt;FlowMVI&lt;/a&gt; - so I didn't need to write any new code for this, I just needed to use the plugin system creatively now. But the framework was meant for UI, not game engines! I had to make some changes to it if I didn't want to get fired.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So over the next two weeks, I spent time implementing the supporting infrastructure:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I added a bunch of new plug-ins that will allow me to inject any code into any place in the game engine's lifecycle. We'll talk about those in a moment.&lt;/li&gt;
&lt;li&gt;I ran benchmarks hundreds of times, comparing the performance with the fastest solutions to ensure we get maximum performance. I worked on the code until I optimized the library to the point that it became top-5 in performance among 35+ frameworks benchmarked, and as fast as using a simple Channel (from coroutines).&lt;/li&gt;
&lt;li&gt;I implemented a new system for watching over the chain of plugin invocations which allowed me to monitor processes in any business logic transparently, which I very creatively named "Decorators."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also set a requirement for myself - &lt;strong&gt;ANY piece of logic must be a separate entity in the engine's code that can be removed or modified on demand.&lt;/strong&gt; The code must not all live in one class. My goal: keep the engine's code &lt;strong&gt;under 400 lines.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This felt like arming myself to the teeth as some secret ops dude from a movie. I was ready to &lt;strong&gt;crush this&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Let's go.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started - Contract
&lt;/h2&gt;

&lt;p&gt;First of all, let's define a simple family of MVI states, intents and side-effects for our engine. I used FlowMVI's IDE plugin, typed &lt;code&gt;fmvim&lt;/code&gt; in a new file, and got this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;GameEngineState&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MVIState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;Stopped&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;
    &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Exception&lt;/span&gt;&lt;span class="p"&gt;?)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;
    &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;progress&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Float&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0f&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;
    &lt;span class="kd"&gt;data class&lt;/span&gt; &lt;span class="nc"&gt;Running&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;game&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameInstance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;player&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MediaPlayer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;isBuffering&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Boolean&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;GameEngineIntent&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MVIIntent&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;GameEngineAction&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;MVIAction&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="kd"&gt;object&lt;/span&gt; &lt;span class="nc"&gt;GoBack&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameEngineAction&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also added a &lt;code&gt;Stopped&lt;/code&gt; state (since our engine can exist even when not playing), and a progress value to the loading state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring our Engine
&lt;/h2&gt;

&lt;p&gt;I started by creating a singleton called Container, which will host the dependencies. &lt;strong&gt;We have to keep it as a singleton and start/stop all its operations on demand to support instant replay of games and caching.&lt;/strong&gt; We're going to try and install a bunch of plugins in it to manage our logic. So, to create it, I typed &lt;code&gt;fmvic&lt;/code&gt; in an empty file and then added some configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;internal&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;GameEngineContainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;appScope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;ApplicationScope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;userRepo&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;UserRepository&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;StoreConfiguration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;pool&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;PlayerPool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;// ...&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Container&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;GameEngineIntent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;GameEngineAction&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;,&lt;/span&gt; &lt;span class="nc"&gt;GameLauncher&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;store&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;lazyStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Stopped&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;configure&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;configuration&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;"GameEngine"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
        &lt;span class="nf"&gt;configure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;stateStrategy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Atomic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reentrant&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;
            &lt;span class="n"&gt;allowIdleSubscriptions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;
            &lt;span class="n"&gt;parallelIntents&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt; &lt;span class="c1"&gt;// 3&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This way, we can easily inject the dependencies here. The "Store" is the object that will host our &lt;code&gt;GameState&lt;/code&gt;, respond to &lt;code&gt;GameIntents&lt;/code&gt;, and send events to the UI (&lt;code&gt;GameActions&lt;/code&gt;).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Here I am transparently injecting some stuff into the store using DI; more on that in a bit.&lt;/li&gt;
&lt;li&gt;During my benchmarks, I found out that &lt;strong&gt;reentrant&lt;/strong&gt; state transactions (which I discussed in my &lt;a href="https://nek12.dev/blog/en/how-to-update-state-in-mvi-and-mvvm-with-coroutines-best-state-management-approach" rel="noopener noreferrer"&gt;previous article&lt;/a&gt;) were tanking performance. &lt;strong&gt;They are 15x slower than non-reentrant ones!&lt;/strong&gt; The time is still measured in microseconds, so reentrant transactions make sense for simple UI, but we had to squeeze every last drop of CPU power out of the engine. I added support for non-reentrant transactions in the latest update, which reduced the time to nanoseconds per event!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything in the game engine had to run in parallel to keep it fast&lt;/strong&gt;, so I enabled parallel processing. But if we don't synchronize state access, we'll have the same race conditions we had before! &lt;strong&gt;By enabling this flag while keeping atomic state transactions, I achieved the best of both worlds: speed and safety!&lt;/strong&gt;
&lt;/li&gt;
&lt;/ol&gt;
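&lt;p&gt;To see why atomic transactions matter once intents run in parallel, here's a self-contained toy model using plain threads and a lock. This is an illustration of the idea, not FlowMVI's actual internals:&lt;/p&gt;

```kotlin
import kotlin.concurrent.thread

// Toy model: "intents" run on parallel threads, but every state transaction
// is synchronized, so no update is ever lost or applied to a stale state.
class AtomicState(initial: Int) {
    private val lock = Any()
    var value = initial
        private set

    fun update(transform: (Int) -> Int) = synchronized(lock) { value = transform(value) }
}

fun main() {
    val state = AtomicState(0)
    // four "parallel intents", each applying 10_000 state transactions
    val workers = List(4) { thread { repeat(10_000) { state.update { it + 1 } } } }
    workers.forEach { it.join() }
    // with synchronization this is always 40000; without it, increments get lost
    println(state.value)
}
```

&lt;p&gt;Drop the &lt;code&gt;synchronized&lt;/code&gt; block and the final count comes out short on most runs: that's the race condition the atomic strategy prevents while &lt;code&gt;parallelIntents&lt;/code&gt; stays enabled.&lt;/p&gt;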

&lt;p&gt;So far, we've got ourselves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Speed&lt;/li&gt;
&lt;li&gt;Thread-safety&lt;/li&gt;
&lt;li&gt;Ability to keep resources loaded on demand&lt;/li&gt;
&lt;li&gt;Analytics and Crash reporting&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;"Wait", you may ask, "but there isn't a single line of analytics code in the snippet!", and I will answer - &lt;strong&gt;the magic is in the injected &lt;code&gt;configuration&lt;/code&gt; parameter.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It installs a bunch of plug-ins transparently. We can add any logic to any container using the concept of plug-ins, so why not use those &lt;strong&gt;with DI&lt;/strong&gt;? That function installs an error handler plugin that catches exceptions and sends them to analytics without affecting the rest of the engine's code, tracks user actions (Intents), and reports when users enter and leave the game engine screen. &lt;strong&gt;Having the huge game engine polluted by analytics junk is a no-no for us&lt;/strong&gt; because we had this problem with MVVM - the stuff just piles on and on until it becomes unmaintainable. &lt;strong&gt;No more.&lt;/strong&gt;&lt;/p&gt;
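&lt;p&gt;To illustrate the idea, here is a minimal, self-contained sketch of what "installing plug-ins via an injected configuration" amounts to. The &lt;code&gt;Plugin&lt;/code&gt;, &lt;code&gt;Store&lt;/code&gt;, and &lt;code&gt;analyticsPlugin&lt;/code&gt; shapes below are simplified stand-ins of my own, not FlowMVI's real API:&lt;/p&gt;

```kotlin
// A plugin is just a bundle of callbacks; the injected configuration installs
// cross-cutting ones (analytics, crash reporting) without the engine code
// ever mentioning them.
class Plugin(
    val onStart: () -> Unit = {},
    val onIntent: (Any) -> Unit = {},
)

class Store(vararg val plugins: Plugin) {
    fun start() = plugins.forEach { it.onStart() }
    fun intent(intent: Any) = plugins.forEach { it.onIntent(intent) }
}

// Hypothetical analytics plugin: tracks screen visits and user actions transparently.
fun analyticsPlugin(track: (String) -> Unit) = Plugin(
    onStart = { track("game_screen_opened") },
    onIntent = { track("intent: $it") },
)

fun main() {
    val log = StringBuilder()
    val store = Store(analyticsPlugin { log.appendLine(it) })
    store.start()
    store.intent("Jump")
    print(log) // game_screen_opened, then intent: Jump
}
```

&lt;p&gt;The engine's own code never references analytics; DI just hands the store an extra plugin.&lt;/p&gt;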

&lt;h2&gt;
  
  
  Starting and Stopping the Engine
&lt;/h2&gt;

&lt;p&gt;Okay, so we created our Container lazily. How do we clean up and keep track of resources now?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The thing about FlowMVI is that it's the only framework I know of that allows you to stop and restart the business logic component (&lt;code&gt;Store&lt;/code&gt;) on demand.&lt;/strong&gt; Each store has a &lt;code&gt;StoreLifecycle&lt;/code&gt; that lets you control and observe the store using a &lt;code&gt;CoroutineScope&lt;/code&gt;. If the scope is canceled, the store is too, but the store can also be stopped separately, ensuring our parent-child hierarchy is always respected.&lt;/p&gt;

&lt;p&gt;My colleagues were skeptical about this feature at first, and for a while, I thought it was useless, but this time it literally saved my ass from getting fired: &lt;strong&gt;we can just use the global application scope to run our logic and stop the engine when we don't need it, so it stops consuming resources!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the implementation, we're just going to let the Container implement an interface called &lt;code&gt;GameLauncher&lt;/code&gt; that will access the lifecycle for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;awaitShutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;awaitUntilClosed&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;close&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;GameParameters&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;old&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getAndUpdate&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;when&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;!&lt;/span&gt;&lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;isActive&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;appScope&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;awaitStartup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
        &lt;span class="n"&gt;old&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;intent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ReplayedGame&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 3&lt;/span&gt;
            &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;closeAndWait&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;store&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;appScope&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;awaitStartup&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
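&lt;p&gt;The &lt;code&gt;GameLauncher&lt;/code&gt; interface itself isn't shown above, so here's roughly what it could look like based on the overrides in the snippet. The exact shape (suspend modifiers, &lt;code&gt;GameParameters&lt;/code&gt; fields) is my assumption; a trivial fake shows how other modules drive it:&lt;/p&gt;

```kotlin
class GameParameters(val gameId: String) // hypothetical shape

interface GameLauncher {
    fun start(params: GameParameters) // suspend in the real code, plus awaitShutdown()
    fun shutdown()
}

// A fake implementation for callers/tests; the real one delegates to the store's lifecycle.
class FakeLauncher : GameLauncher {
    var running = false
        private set

    override fun start(params: GameParameters) { running = true }
    override fun shutdown() { running = false }
}

fun main() {
    val launcher: GameLauncher = FakeLauncher()
    launcher.start(GameParameters(gameId = "trivia")) // the user tapped "play"
    launcher.shutdown()                               // the user scrolled away
}
```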



&lt;p&gt;Then the code from other modules will just use the interface to stop the engine when it doesn't need the game to keep running (e.g. the user has scrolled away, left the app, etc.), and call &lt;code&gt;start&lt;/code&gt; each time a client wants to play the game. But this feature would be only marginally useful if the store had no way to react when it is shut down. So let's talk about resource management next.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Resources
&lt;/h2&gt;

&lt;p&gt;We have &lt;strong&gt;a lot&lt;/strong&gt; of stuff to initialize in parallel when the game starts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Remote configuration for feature flags&lt;/li&gt;
&lt;li&gt;Game assets, like textures, that need to be downloaded and cached&lt;/li&gt;
&lt;li&gt;Game Configuration and the game JSON data&lt;/li&gt;
&lt;li&gt;Media codec initialization&lt;/li&gt;
&lt;li&gt;Video file buffering and caching&lt;/li&gt;
&lt;li&gt;And more...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Almost nothing here can simply be garbage-collected. &lt;strong&gt;We need to close file handles, unload codecs, release resources held by native code, and return the video player to the pool for reuse&lt;/strong&gt;, as player creation is a very heavy process.&lt;/p&gt;

&lt;p&gt;Some resources also depend on others: the video file, for example, depends on the game configuration it comes from. How do we handle that?&lt;/p&gt;

&lt;p&gt;Well, for starters, I created a plug-in that will use the callback mentioned above to &lt;strong&gt;create a value when the engine starts, and clean the value up when the engine stops&lt;/strong&gt; (simplified code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;cached&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="nc"&gt;PipelineContext&lt;/span&gt;&lt;span class="p"&gt;.()&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;CachedValue&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CachedValue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;cachePlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CachedValue&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;plugin&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;onStart&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="nf"&gt;onStop&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;clear&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;CachedValue&lt;/code&gt; is just like &lt;code&gt;lazy&lt;/code&gt; but with thread-safe control of when to clear and init the value. In our case, it calls &lt;code&gt;init&lt;/code&gt; when the store starts, and clears the reference when the store stops. Super simple!&lt;/p&gt;
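&lt;p&gt;For illustration, here's a rough stand-in for such a value. The real &lt;code&gt;CachedValue&lt;/code&gt; is generic and coroutine-aware; this sketch only shows the thread-safe init/clear idea:&lt;/p&gt;

```kotlin
// Like `lazy`, but the owner controls when the value is created and when the
// reference is dropped (names and signatures here are simplified guesses).
class CachedValue(private val create: () -> Any) {
    private var value: Any? = null

    @Synchronized fun init() { if (value == null) value = create() } // called from onStart
    @Synchronized fun clear() { value = null }                       // called from onStop
    @Synchronized fun require(): Any = checkNotNull(value) { "Store is not started" }
}

fun main() {
    val cached = CachedValue { "heavy resource" }
    cached.init()
    println(cached.require()) // heavy resource
    cached.clear()            // reference dropped; require() would now throw
}
```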

&lt;p&gt;But that plugin still has a problem because it &lt;strong&gt;pauses the entire store&lt;/strong&gt; until the initialization is complete, which means our loading would be sequential instead of parallel. To fix that, we can simply use &lt;code&gt;Deferred&lt;/code&gt; and run the initialization in a separate coroutine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;inline&lt;/span&gt; &lt;span class="k"&gt;fun&lt;/span&gt; &lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;asyncCached&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CoroutineContext&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;EmptyCoroutineContext&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;CoroutineStart&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;CoroutineStart&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;UNDISPATCHED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="k"&gt;crossinline&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;suspend&lt;/span&gt; &lt;span class="nc"&gt;PipelineContext&lt;/span&gt;&lt;span class="p"&gt;.()&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nc"&gt;CachedValue&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Deferred&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;T&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;cached&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;async&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we just pass our &lt;code&gt;asyncCached&lt;/code&gt; instead of the regular one when installing the &lt;code&gt;cache&lt;/code&gt; plugin. Sprinkle some DSL on top of that, and we get the following game-loading logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;store&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;lazyStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Stopped&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;configure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;gameClock&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;cache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;GameClock&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coroutineScope&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;player&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;cache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;playerPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;borrow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;requireParameters&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;playerType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;remoteConfig&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;asyncCache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;
        &lt;span class="n"&gt;remoteConfigRepo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;updateAndGet&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;graphicsEngine&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;asyncCache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;GraphicsEngine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;GraphicsRemoteConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;from&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;remoteConfig&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;gameData&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;asyncCache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;gameRepository&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getGameData&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;requireParameters&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;gameId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;game&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;asyncCache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nc"&gt;GameLoop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;graphics&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graphicsEngine&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;remoteConfig&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;remoteConfig&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;clock&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;gameClock&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;gameData&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
            &lt;span class="n"&gt;params&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;requireParameters&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;let&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;GameInstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;it&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;asyncInit&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 3&lt;/span&gt;
        &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loadVideo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;gameData&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="n"&gt;videoUrl&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Running&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;game&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;game&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
                &lt;span class="n"&gt;player&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;clock&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;start&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;deinit&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 4&lt;/span&gt;
        &lt;span class="n"&gt;graphicsEngine&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;release&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stop&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
        &lt;span class="n"&gt;playerPool&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Our game clock runs an event loop and synchronizes game time with video time. It requires a coroutine scope to run its loop in, and that loop should only be active during the game. Luckily, we already have one! &lt;code&gt;PipelineContext&lt;/code&gt;, the context of the &lt;code&gt;Store&lt;/code&gt;'s execution, is provided to plugins and implements &lt;code&gt;CoroutineScope&lt;/code&gt;. We can just use it in our &lt;code&gt;cache&lt;/code&gt; plugin to start the game clock, which will automatically stop when we shut down the engine.&lt;/li&gt;
&lt;li&gt;You can see we used a bunch of &lt;code&gt;asyncCache&lt;/code&gt;s to &lt;strong&gt;parallelize loading&lt;/strong&gt;, and with the Graphics Engine, we were also able to depend on the remote config inside it (as an example; in reality it depends on lots of things). &lt;strong&gt;This greatly simplifies our logic, because the dependencies between components are implicit now&lt;/strong&gt;, and the requesting party that wants just the graphics engine doesn't have to manage its dependencies! The invoke operator (parentheses) is a shorthand for &lt;code&gt;Deferred.await()&lt;/code&gt; for that extra sweet taste.&lt;/li&gt;
&lt;li&gt;We have also used an &lt;code&gt;asyncInit&lt;/code&gt; which essentially launches a background job in the current game engine's gameplay scope to load the game. Inside the job, we do final preparations, wait for all of the dependencies, and start the game clock.&lt;/li&gt;
&lt;li&gt;We have used the built-in &lt;code&gt;deinit&lt;/code&gt; plugin to put all of our cleanup logic in a callback that is invoked as soon as the game engine is stopped (and its scope is canceled). It runs &lt;strong&gt;before&lt;/strong&gt; our cached values are cleaned up (because it was installed &lt;strong&gt;later&lt;/strong&gt;), but &lt;strong&gt;after&lt;/strong&gt; our &lt;strong&gt;jobs&lt;/strong&gt; have been &lt;strong&gt;canceled&lt;/strong&gt;, so we can do whatever we need, and the &lt;code&gt;cache&lt;/code&gt; plugin will then clear the remaining references without us worrying about leaks.&lt;/li&gt;
&lt;/ol&gt;
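&lt;p&gt;The ordering rule from point 4 can be sketched in a few lines. This is a toy illustration of "stop-time callbacks run in reverse installation order", which is my reading of the behavior described above, not FlowMVI's actual pipeline:&lt;/p&gt;

```kotlin
fun main() {
    val order = StringBuilder()
    // installation order: cache plugin first, deinit second
    val onStop = mutableListOf({ order.append("cache-cleared") })
    onStop.add { order.append("deinit ") }
    // stopping the store runs the callbacks in reverse installation order,
    // so deinit sees the cached values before they are cleared
    onStop.asReversed().forEach { it() }
    println(order) // deinit cache-cleared
}
```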

&lt;p&gt;&lt;strong&gt;Overall, these 50 lines of code replaced 1,500 lines of our old game engine's implementation!&lt;/strong&gt; I had to pick my jaw up off the floor when I realized how powerful these patterns are for business logic.&lt;/p&gt;

&lt;p&gt;But we're still lacking one thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Handling
&lt;/h2&gt;

&lt;p&gt;A lot of things in the engine can go wrong during gameplay:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A game author forgot to add a frame to an animation&lt;/li&gt;
&lt;li&gt;The player lost their connection during the game&lt;/li&gt;
&lt;li&gt;Shaders failed to render due to a platform bug, and more...&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Usually, apps handle only the main errors, such as failed API calls, with wrappers like &lt;code&gt;ApiResult&lt;/code&gt; or some kind of try/catch. But imagine wrapping every single line of the Game Engine's code in a try-catch... That would mean hundreds of lines of try-catch-finally garbage!&lt;/p&gt;

&lt;p&gt;Well, you probably know what comes next. Since we can intercept any event now, let's make an &lt;strong&gt;error-handling plug-in!&lt;/strong&gt; I named it &lt;em&gt;recover&lt;/em&gt;, and now our code looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;store&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;lazyStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Stopped&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;configure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;player&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;cache&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;recover&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;debuggable&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nf"&gt;updateState&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
            &lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;when&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;
            &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;StoreTimeoutException&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;GLException&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Unit&lt;/span&gt;
            &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;MediaPlaybackException&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;player&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;retry&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;AssetCorruptedException&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;assetManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;refreshAssetsSync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="nc"&gt;BufferingTimeoutException&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;action&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ShowSlowInternetMessage&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="c1"&gt;// ...&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;shutdown&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// 3&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;null&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;If our store is configured to be debuggable (&lt;code&gt;config&lt;/code&gt; is available in store plug-ins), we can show a full-screen overlay with the stack trace to let our QA team easily report errors to devs before they get to production. &lt;a href="https://en.wikipedia.org/wiki/Fail-fast_system" rel="noopener noreferrer"&gt;Fail Fast&lt;/a&gt; principle in action.&lt;/li&gt;
&lt;li&gt;In production, however, we will handle some errors by retrying, skipping an animation, or warning the user about their connection without interrupting the gameplay.&lt;/li&gt;
&lt;li&gt;If we can't handle an error and recover, we shut down the engine and let the user try the game again, without crashing the app or showing obscure messages (those go to Crashlytics).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;With this, we've got ourselves error-handling for any existing and new code a developer may ever add to our game engine, with 0 try-catches.&lt;/strong&gt;&lt;/p&gt;
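&lt;p&gt;To make point 1 above concrete, here's a minimal sketch of what the debuggable branch of a recover handler could look like. This is illustrative only: &lt;code&gt;DisplayFatalErrorOverlay&lt;/code&gt; is a hypothetical action invented for this example, and the rest mirrors the recover block shown above:&lt;/p&gt;

```kotlin
// Sketch only: DisplayFatalErrorOverlay is a hypothetical action for this
// example; the recoverable/unrecoverable branches mirror the snippet above.
recover { e ->
    if (config.debuggable) {
        // Fail Fast: surface the full stack trace to QA as a full-screen
        // overlay instead of silently swallowing the error.
        action(DisplayFatalErrorOverlay(e.stackTraceToString()))
    } else when (e) {
        is MediaPlaybackException -> player.retry() // recoverable: retry
        else -> shutdown() // unrecoverable: stop the engine gracefully
    }
    null // returning null marks the exception as handled
}
```

&lt;p&gt;The nice part is that this branch lives in one place: every plugin and handler in the chain gets the same debug-vs-production behavior for free.&lt;/p&gt;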

&lt;h2&gt;
  
  
  Final touches
&lt;/h2&gt;

&lt;p&gt;We're almost done! This article is getting long, so I'll blitz through some additional plug-ins I had to install to support our use cases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight kotlin"&gt;&lt;code&gt;&lt;span class="k"&gt;override&lt;/span&gt; &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;store&lt;/span&gt; &lt;span class="k"&gt;by&lt;/span&gt; &lt;span class="nf"&gt;lazyStore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;GameEngineState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Stopped&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;configure&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;subs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;awaitSubscribers&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="c1"&gt;// 1&lt;/span&gt;
    &lt;span class="kd"&gt;val&lt;/span&gt; &lt;span class="py"&gt;jobs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;manageJobs&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;GameJobs&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;()&lt;/span&gt; &lt;span class="c1"&gt;// 2&lt;/span&gt;

    &lt;span class="nf"&gt;initTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 3&lt;/span&gt;
        &lt;span class="n"&gt;subs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;await&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;whileSubscribed&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="c1"&gt;// 4&lt;/span&gt;
        &lt;span class="n"&gt;assetManager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;loadingProgress&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;collect&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;progress&lt;/span&gt; &lt;span class="p"&gt;-&amp;gt;&lt;/span&gt;
            &lt;span class="n"&gt;updateState&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;Loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;progress&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;progress&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;install&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nf"&gt;autoStopPlugin&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jobs&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// 5&lt;/span&gt;
        &lt;span class="nf"&gt;resetStatePlugin&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// 6&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;A developer can make the mistake of starting the game but never displaying the actual gameplay (the user left, a bug, plans changed, etc.), so I am using the pre-made &lt;code&gt;awaitSubscribers&lt;/code&gt; plugin, combined with the timeout in step 3, to check whether subscribers appear within 5 seconds of starting the game. If they don't, we close the store and &lt;strong&gt;auto-clean up the held resources&lt;/strong&gt; to prevent leaks. Boom!&lt;/li&gt;
&lt;li&gt;I'm using another plug-in, &lt;code&gt;JobManager&lt;/code&gt;, to run long-running operations in the background. The code that uses it didn't fit into this article, but essentially it tracks whether the user is currently playing.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;InitTimeout&lt;/code&gt; is a custom plugin that verifies that the game finishes loading within 5 seconds; if it doesn't, we pass an error to our &lt;code&gt;recover&lt;/code&gt; plugin, which decides what to do and reports the issue to analytics.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;whileSubscribed&lt;/code&gt; plugin launches &lt;strong&gt;a job that is only active while subscribers&lt;/strong&gt; (in our case, the UI) &lt;strong&gt;are present&lt;/strong&gt;, so we update the loading-progress visuals only while the user is actually looking at the loading screen. This lets us easily &lt;strong&gt;avoid resource leaks&lt;/strong&gt; when the game engine is covered up or hidden.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;autoStopPlugin&lt;/code&gt; uses our job manager to watch the game's load and gameplay progress. It checks whether we have subscribers so it can pause the game when the user leaves, then &lt;strong&gt;stops the engine once it hasn't been used for a while&lt;/strong&gt;, eliminating the risk of leaking memory.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;resetStatePlugin&lt;/code&gt; is a built-in plugin I installed to auto-clean up state when the game ends. By default, &lt;strong&gt;stores will not have their state reset&lt;/strong&gt; when they stop. That is fine for regular UI, but not in our case - we want the engine to go back to the &lt;code&gt;Stopped&lt;/code&gt; state when the game ends.&lt;/li&gt;
&lt;/ol&gt;
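&lt;p&gt;For the curious, here's roughly how a custom plugin like &lt;code&gt;initTimeout&lt;/code&gt; from step 3 could be assembled. This is a sketch under assumptions: the &lt;code&gt;plugin { }&lt;/code&gt; builder, its &lt;code&gt;onStart&lt;/code&gt; callback, and &lt;code&gt;GameLoadTimeoutException&lt;/code&gt; are illustrative names, so check the library docs for the real plugin DSL:&lt;/p&gt;

```kotlin
import kotlin.time.Duration
import kotlinx.coroutines.TimeoutCancellationException
import kotlinx.coroutines.launch
import kotlinx.coroutines.withTimeout

// Hypothetical exception type for this sketch.
class GameLoadTimeoutException(cause: Throwable) : Exception(cause)

// A watchdog plugin: fail the store if `await` doesn't complete within
// `timeout`. The thrown exception is then routed to the `recover` chain,
// which decides how to react and reports the issue to analytics.
fun initTimeout(timeout: Duration, await: suspend () -> Unit) = plugin {
    onStart {
        launch {
            try {
                withTimeout(timeout) { await() }
            } catch (e: TimeoutCancellationException) {
                throw GameLoadTimeoutException(e)
            }
        }
    }
}
```

&lt;p&gt;With a builder like this, the call site from the snippet above - &lt;code&gt;initTimeout(5.seconds) { subs.await() }&lt;/code&gt; - stays a one-liner while the watchdog logic remains reusable across stores.&lt;/p&gt;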

&lt;p&gt;All of those plugins were already in the library, so using them was a piece of cake.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;It was a wild ride, but after all this, I not only managed to keep my job - I think the overall solution turned out pretty great. &lt;strong&gt;The engine went from 7,000+ lines to just 400 lines of readable, linear, structured, performant, extensible code&lt;/strong&gt;, and the users are already enjoying the results:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Loading time&lt;/strong&gt; went from ~20 seconds to just 1.75 seconds!&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crashed games&lt;/strong&gt; fell from 8% to 0.01%!&lt;/li&gt;
&lt;li&gt;We improved the &lt;strong&gt;throughput&lt;/strong&gt; of the game event processing by 1700%&lt;/li&gt;
&lt;li&gt;Video &lt;strong&gt;buffering&lt;/strong&gt; occurrences during games went from ~31% to &amp;lt;10% due to our caching&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Battery consumption&lt;/strong&gt; during gameplay was reduced by orders of magnitude&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ANRs&lt;/strong&gt; during gameplay fell to a statistical zero&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GC pressure&lt;/strong&gt; decreased by 40% during gameplay&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Hopefully, by this point, I've shown why the patterns we used to hate, like Decorators, Interceptors, and Chain of Responsibility, can be insanely helpful&lt;/strong&gt; not just for backend services, networking code, or specialized use cases, but also for regular application logic, including UI and state management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With the power Kotlin gives us for building DSLs, we can turn fundamental patterns (used in software development for decades) from a mess of boilerplate, inheritance, and complicated delegation into fast, straightforward, compact, linear code that is fun and efficient to work with.&lt;/strong&gt; I encourage you to build something like this for your own app's architecture and reap the benefits.&lt;/p&gt;

&lt;p&gt;And if you don't want to dive into that and want something already available, or are curious to learn more, then consider checking out the original library where I implemented everything mentioned here on &lt;a href="https://github.com/respawn-app/FlowMVI" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;, or dive right into the &lt;a href="https://opensource.respawn.pro/FlowMVI/quickstart" rel="noopener noreferrer"&gt;quickstart&lt;/a&gt; guide to try it in 10 minutes.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>mvi</category>
      <category>gameengine</category>
      <category>android</category>
    </item>
  </channel>
</rss>
