<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Krun_Dev</title>
    <description>The latest articles on DEV Community by Krun_Dev (@krun_dev).</description>
    <link>https://dev.to/krun_dev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3779808%2F167472a7-6e0c-4755-801b-d65ef20c9000.png</url>
      <title>DEV Community: Krun_Dev</title>
      <link>https://dev.to/krun_dev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/krun_dev"/>
    <language>en</language>
    <item>
      <title>The Unofficial MojoWiki</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Mon, 11 May 2026 22:38:46 +0000</pubDate>
      <link>https://dev.to/krun_dev/the-unofficial-mojowiki-4gn2</link>
      <guid>https://dev.to/krun_dev/the-unofficial-mojowiki-4gn2</guid>
      <description>&lt;h2&gt;Don't Give Up on Mojo Just Yet: The Survival Manual Modular Forgot to Write&lt;/h2&gt;

&lt;p&gt;The honeymoon phase with the Mojo programming language is usually short. It starts with a clean install and ends abruptly when you hit your first undocumented toolchain crash, a silent PATH ghost, or a borrow checker error that makes Rust look like child's play. Most developers hit these walls, realize the official documentation is missing the "how-to-fix-this-mess" section, and immediately retreat to the safety of Python.&lt;/p&gt;

&lt;p&gt;But the problem isn't necessarily the language—it is the massive information gap between the clean, sanitized marketing hype and the gritty, production-grade reality of a toolchain still in heavy development. If you have spent more time debugging your environment than writing kernels, you are not alone. You have simply hit the "early adopter tax" that no one likes to talk about on stage.&lt;/p&gt;

&lt;p&gt;I have spent months documenting the rough neighborhood of Mojo development. While the official docs show you the dream of a unified AI stack, I have spent that time mapping the mines, the pitfalls, and the architectural dead-ends. The result is the Unofficial MojoWiki: an exhaustive engineering audit of over 50 real-world pitfalls, architectural traps, and "hour-one" failures that stop production builds in their tracks. This is the manual that helps you move past the "Hello World" stage and into actual systems engineering.&lt;/p&gt;

&lt;h2&gt;What they are not telling you in the official release notes&lt;/h2&gt;

&lt;p&gt;We are skipping the boilerplate and the fan-boy enthusiasm. This isn't a fan-site; it is a cold, technical autopsy of Mojo's current production state as of mid-2026. Official sources are great at telling you what a language can do, but they are notoriously bad at telling you what it cannot do yet, or where it fails in silence. In the full manual, we tackle the cynical reality of the current ecosystem:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Toolchain Instability and Ghost Commands:&lt;/strong&gt; Why the Modular CLI occasionally forgets your PATH after a minor update and how to manually re-link your environment when the auth server decides to time out without an error message.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;The Hidden Python Interop Tax:&lt;/strong&gt; Everyone talks about how easy it is to call NumPy, but no one mentions the hidden cost of tensor copies and the pointer-chasing overhead that can wipe out your performance gains before your specialized kernel even starts.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Memory Ownership Scars:&lt;/strong&gt; Navigating the brutal transition from Python’s carefree garbage collection to Mojo’s strict ownership model. We look at why your values "do not live long enough" and how to solve lifetime errors without resorting to memory-unsafe hacks.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;The SIMD Alignment Trap:&lt;/strong&gt; Why your "optimized" code might be running at scalar speeds because the compiler decided to fall back to a slower path in silence rather than flagging an alignment error.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Partial Concurrency Implementation:&lt;/strong&gt; The real state of async/await. We discuss why parallelize works like a charm but the high-level async model still faces stability issues in high-load production environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Engineering Guide for Those Who Stayed&lt;/h2&gt;

&lt;p&gt;If you are tired of the sanitized, perfect world described by PR departments and you actually need to know why your binary size just tripled or why the LSP is leaking 4GB of RAM in VS Code, this guide is for you. I have categorized every major bottleneck I found—from binary bloat and cryptic MLIR syntax errors to the nuances of struct vs class memory layout.&lt;/p&gt;

&lt;p&gt;This isn't just a list of bugs; it is a strategic map for engineers who have decided to stick with Mojo despite the friction. It is about understanding the "why" behind the crashes. We cover the environment variables that actually matter, the build flags that Modular keeps tucked away in their GitHub issues, and the workarounds for the current lack of a mature package manager.&lt;/p&gt;

&lt;p&gt;This is the Unofficial MojoWiki. It is rough, it is human, and it is built for developers who need to ship working binaries, not just read pitch decks. Mojo is a masterpiece of modern engineering, but it is a wild animal that needs to be tamed. I have documented the first 50 hours of hell so you can skip the frustration and get straight to the part where you build something world-changing.&lt;/p&gt;

&lt;p&gt;If you are ready to stop fighting the toolchain and start mastering the machine, the full engineering deep-dive is live and updated for the 2026 reality. Stop guessing, start engineering, and let's see what Mojo can really do when you know where the traps are hidden.&lt;/p&gt;

&lt;p&gt;For a more detailed breakdown, head over to &lt;a href="https://krun.pro/mojowiki/" rel="noopener noreferrer"&gt;MojoWiki&lt;/a&gt; — where we have covered nearly every solution to your engineering headaches and production pains.&lt;/p&gt;

&lt;p&gt;Keep an eye out for updates at krun.pro.&lt;/p&gt;

</description>
      <category>mojo</category>
      <category>mojowiki</category>
      <category>manual</category>
    </item>
    <item>
      <title>Rust Generator yield</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Sun, 10 May 2026 20:36:59 +0000</pubDate>
      <link>https://dev.to/krun_dev/rust-generator-yield-1j60</link>
      <guid>https://dev.to/krun_dev/rust-generator-yield-1j60</guid>
      <description>&lt;h2&gt;Rust Generator yield: What the Compiler Builds Under async/await&lt;/h2&gt;

&lt;p&gt;Every async function in Rust is a compiler-generated state machine built on generators. Yield is not a niche feature — it is the foundation of async/await. Most developers use it without realizing it can inflate memory usage and binary size significantly.&lt;/p&gt;

&lt;p&gt;The compiler transforms each await into a suspension point and builds an enum where every state represents a stage of execution. This enum is wrapped into a Future and driven by poll(). Execution resumes exactly where it stopped, using a match-based dispatch.&lt;/p&gt;
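&lt;p&gt;To make that concrete, here is a hand-written sketch of the kind of state machine the compiler emits. It is deliberately simplified (a single await point, an &lt;code&gt;Unpin&lt;/code&gt; inner future, no pin projections), and the names are ours, not rustc's:&lt;/p&gt;

```rust
use std::future::{self, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hand-desugared sketch of:  async fn double(inner) -> u32 { inner.await * 2 }
// One variant per stage of execution; poll() resumes via match dispatch.
enum Double<F: Future<Output = u32> + Unpin> {
    Awaiting(F), // suspended at the await; the inner future is saved in the state
    Done,        // finished; polling again is a contract violation
}

impl<F: Future<Output = u32> + Unpin> Future for Double<F> {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        match &mut *self {
            Double::Awaiting(inner) => match Pin::new(inner).poll(cx) {
                Poll::Pending => Poll::Pending, // stay in the Awaiting state
                Poll::Ready(v) => {
                    *self = Double::Done; // advance the state machine
                    Poll::Ready(v * 2)
                }
            },
            Double::Done => panic!("future polled after completion"),
        }
    }
}

// A waker that does nothing: just enough to poll without a real executor.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn poll_double() -> Poll<u32> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Double::Awaiting(future::ready(21_u32));
    Pin::new(&mut fut).poll(&mut cx)
}
```

&lt;p&gt;Polling the &lt;code&gt;Awaiting&lt;/code&gt; state once drives the inner future and transitions to &lt;code&gt;Done&lt;/code&gt;; that match arm is the resume point the compiler generates at every await.&lt;/p&gt;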

&lt;p&gt;The critical detail: every variable that is alive across an await is stored inside the state. It is not dropped; it is preserved. If a large object exists at that point, it becomes part of the state machine until completion. The size of the future equals the size of its largest state, even if that state is occupied only briefly.&lt;/p&gt;

&lt;p&gt;Nested async calls make this worse. Each function embeds the state machines of every future it awaits, so sizes compound as futures nest inside futures. Real-world reports show futures reaching hundreds of kilobytes, which can break memory-constrained systems.&lt;/p&gt;
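&lt;p&gt;You can observe this directly with &lt;code&gt;std::mem::size_of_val&lt;/code&gt; (buffer sizes below are arbitrary illustrations): a buffer that stays alive across an await is baked into the future's layout, while one dropped before the await is not:&lt;/p&gt;

```rust
use std::future::ready;
use std::mem::size_of_val;

// `buf` is still used after the await, so it must be saved in the state.
async fn big_state(seed: u8) -> u8 {
    let buf = [seed; 16 * 1024];
    ready(()).await;
    buf[0]
}

// Here the buffer is dropped before the await, so it never enters the
// state machine; only `first` (one byte) survives the suspension point.
async fn small_state(seed: u8) -> u8 {
    let first = {
        let buf = [seed; 16 * 1024];
        buf[0]
    };
    ready(()).await;
    first
}

// The futures are never polled: size_of_val reads the layout the
// compiler chose, which is dominated by the largest live state.
fn sizes() -> (usize, usize) {
    (size_of_val(&big_state(0)), size_of_val(&small_state(0)))
}
```

&lt;p&gt;The first future is at least 16 KB before it is ever polled; the second is a handful of bytes. Where you drop a value relative to the await point is a memory decision, not a style choice.&lt;/p&gt;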

&lt;p&gt;Execution is cooperative: no threads, no stack switching. The executor repeatedly polls the state machine, and a waker schedules it when I/O is ready. This is efficient — but only if state size is under control.&lt;/p&gt;

&lt;p&gt;Python and Rust generators look similar but behave very differently. Python generators are heap-based and managed by the runtime. Rust generators are stackless, strictly typed, and require pinning to guarantee memory stability. Resume arguments are enforced at compile time, not runtime.&lt;/p&gt;

&lt;p&gt;Async in Rust is effectively a generator wrapped as a Future. Yield maps to a pending state, completion maps to ready. This is not an abstraction — it is the actual implementation.&lt;/p&gt;

&lt;p&gt;The main issue in production is state size growth. Since the largest state defines memory usage, deeply nested async flows can create oversized futures. The common fix is heap allocation via boxing, which limits size but adds overhead.&lt;/p&gt;
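&lt;p&gt;A minimal sketch of the boxing fix (function names are illustrative): wrapping the oversized inner future in &lt;code&gt;Box::pin&lt;/code&gt; means the parent state machine stores only a pointer-sized handle instead of embedding the whole inner state:&lt;/p&gt;

```rust
use std::future::{ready, Future};
use std::mem::size_of_val;
use std::pin::Pin;

// `heavy` keeps a large scratch buffer alive across an await,
// so its future is at least that large.
async fn heavy() -> u64 {
    let scratch = [1u8; 32 * 1024];
    ready(()).await;
    scratch[0] as u64
}

// Embeds heavy()'s entire state machine inside its own state.
async fn parent_inline() -> u64 {
    heavy().await
}

// Stores only a pinned heap handle (two words) across the await.
async fn parent_boxed() -> u64 {
    let fut: Pin<Box<dyn Future<Output = u64>>> = Box::pin(heavy());
    fut.await
}

fn compare() -> (usize, usize) {
    (size_of_val(&parent_inline()), size_of_val(&parent_boxed()))
}
```

&lt;p&gt;The trade-off is exactly as described above: the boxed parent is tiny, but every call now pays a heap allocation and a dynamic dispatch on poll.&lt;/p&gt;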

&lt;p&gt;Generators are still unstable due to self-referential memory problems. Pin ensures safety but adds complexity. Stable alternatives include iterator-based patterns, macro-based generators, and upcoming language features designed to simplify this model.&lt;/p&gt;

&lt;p&gt;Bottom line: Rust async is explicit state machines. If you ignore how state is built and stored, you risk hidden memory costs and performance issues.&lt;/p&gt;

&lt;p&gt;Learn more on this page &lt;a href="https://krun.pro/rust-generator-yield/" rel="noopener noreferrer"&gt;https://krun.pro/rust-generator-yield/&lt;/a&gt; — the full deep-dive on Rust Generator yield: What the Compiler Actually Builds Under async/await&lt;/p&gt;

</description>
      <category>rust</category>
      <category>yield</category>
      <category>async</category>
      <category>coroutines</category>
    </item>
    <item>
      <title>Python modern toolchain</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Fri, 08 May 2026 13:14:28 +0000</pubDate>
      <link>https://dev.to/krun_dev/python-modern-toolchain-afg</link>
      <guid>https://dev.to/krun_dev/python-modern-toolchain-afg</guid>
      <description>&lt;h2&gt;Your Python Toolchain Is Costing You Real Money — And You're Probably Fine With That&lt;/h2&gt;

&lt;p&gt;Let's be honest. Most Python teams in 2026 are still running flake8, black, isort, and Poetry side by side like it's 2021. Three separate lint processes, a formatter that argues with your import sorter, a dependency manager with a 40-second install time, and a pyproject.toml that looks like it was configured by a committee. Nobody chose this setup deliberately — it just accumulated, sprint by sprint, like technical debt usually does.&lt;/p&gt;

&lt;p&gt;The uncomfortable question: how much CI time does your team burn per week on tooling that has already been replaced by something an order of magnitude faster?&lt;/p&gt;

&lt;h2&gt;The Numbers Are Not in Your Favor&lt;/h2&gt;

&lt;p&gt;Ruff — a Python linter and formatter written in Rust — lints the entire CPython repository (680,000+ lines) in under 0.5 seconds. Flake8 takes 30–40 seconds on the same codebase. That's not a benchmark cherry-picked from a blog post. That's reproducible on your machine, today.&lt;/p&gt;

&lt;p&gt;Do the math for your team. If lint runs on every PR and your pipeline takes 35 seconds just for code quality checks — across 10 engineers, 4 PRs a day — you're burning roughly 23 minutes of CI compute daily on a solved problem. Ruff collapses that to under a minute. Ruff replaces flake8, black, and isort as a single binary with zero configuration conflicts, because there's only one tool making decisions.&lt;/p&gt;

&lt;p&gt;That's not hype. That's arithmetic.&lt;/p&gt;

&lt;h2&gt;Poetry Had Its Moment. uv Is What Happens Next.&lt;/h2&gt;

&lt;p&gt;uv is the dependency manager that Poetry should have been. Faster resolution, a readable lockfile, native PEP 735 support for dependency groups, and a single binary that also replaces pipx for tool management. The migration from Poetry to uv involves exactly three things: reformatting pyproject.toml to standard &lt;code&gt;[project.dependencies]&lt;/code&gt;, replacing &lt;code&gt;poetry install&lt;/code&gt; with &lt;code&gt;uv sync --frozen&lt;/code&gt;, and updating your CI cache path from Poetry's global store to &lt;code&gt;~/.cache/uv&lt;/code&gt;.&lt;/p&gt;
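&lt;p&gt;For orientation, the migrated &lt;code&gt;pyproject.toml&lt;/code&gt; ends up roughly like this (package name, versions, and groups are placeholders, not a canonical template):&lt;/p&gt;

```toml
# Illustrative pyproject.toml after migrating from Poetry to uv.
[project]
name = "my-service"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "fastapi>=0.110",
    "pydantic>=2.6",
]

# PEP 735 dependency groups, supported natively by uv.
[dependency-groups]
dev = [
    "pytest>=8.0",
    "ruff>=0.4",
]
```

&lt;p&gt;With a committed &lt;code&gt;uv.lock&lt;/code&gt;, &lt;code&gt;uv sync --frozen&lt;/code&gt; in CI installs exactly what the lockfile pins, without re-resolving anything.&lt;/p&gt;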

&lt;p&gt;The result is reproducible builds that don't require Poetry to be installed anywhere — just uv. On a cold CI runner, &lt;code&gt;uv sync&lt;/code&gt; on a medium-sized FastAPI service completes in under 8 seconds. Poetry on the same project: 45–60 seconds. This is the kind of improvement that makes DevOps people genuinely happy, which is rare enough to be worth mentioning.&lt;/p&gt;

&lt;h2&gt;Pyright Strict Mode Is Not a Punishment — It's a Safety Net You Didn't Know You Needed&lt;/h2&gt;

&lt;p&gt;The moment you enable Pyright strict mode on a real codebase, you get a red wall of errors. Most teams turn it off and go back to &lt;code&gt;typeCheckingMode = "basic"&lt;/code&gt; within an hour. That's the wrong move. The errors it surfaces — untyped decorators, missing return annotations, unknown member types from third-party libraries — are real bugs waiting to happen in production, not style pedantry.&lt;/p&gt;

&lt;p&gt;The trick is incremental adoption. Enable strict per-module, not globally. Fix &lt;code&gt;reportMissingTypeArgument&lt;/code&gt; first — it's mechanical and automatable. Use &lt;code&gt;ParamSpec&lt;/code&gt; to type decorators properly instead of reaching for &lt;code&gt;Any&lt;/code&gt;. Add stub packages for boto3, requests, redis from PyPI rather than sprinkling &lt;code&gt;# type: ignore&lt;/code&gt; everywhere and hoping for the best.&lt;/p&gt;

&lt;h2&gt;The Distroless Angle Nobody Talks About&lt;/h2&gt;

&lt;p&gt;Combining uv with a distroless Docker base image produces Python containers below 80 MB with no shell, no package manager, and no attack surface beyond what your application actually needs. The multi-stage Dockerfile pattern — uv in the builder stage, &lt;code&gt;gcr.io/distroless/python3-debian12&lt;/code&gt; as runtime — is reproducible, auditable, and passes most enterprise security scans without extra hardening. It also forces the architectural discipline of proper logging and tracing instead of &lt;code&gt;docker exec&lt;/code&gt; debugging, which is where containerized services should be anyway.&lt;/p&gt;

&lt;p&gt;The full write-up covers all of this in detail — real pyproject.toml configs you can copy directly, the exact Dockerfile with uv sync and nonroot user, Pyright strict error categories ranked by fix complexity, private PyPI index configuration in uv, and a minimal microservice template that ditches setup.py, requirements.txt, and every other config file that doesn't belong in a modern Python project.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Full article with configs, Dockerfiles, and Pyright fixes:&lt;/b&gt; the complete version is available on &lt;a href="https://krun.pro/python-modern-toolchain/" rel="noopener noreferrer"&gt;https://krun.pro/python-modern-toolchain/&lt;/a&gt; — written for developers who've already heard the pitch and want the implementation details.&lt;/p&gt;

</description>
      <category>python</category>
      <category>cpython</category>
      <category>ruff</category>
      <category>docker</category>
    </item>
    <item>
      <title>Developer Burnout: Meaningless Work</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Tue, 05 May 2026 22:08:44 +0000</pubDate>
      <link>https://dev.to/krun_dev/developer-burnout-meaningless-work-3a0k</link>
      <guid>https://dev.to/krun_dev/developer-burnout-meaningless-work-3a0k</guid>
      <description>&lt;h2&gt;We’re Not Burned Out — We’re Just Building Things Nobody Cares About&lt;/h2&gt;

&lt;p&gt;I’ve seen a pattern in software teams that nobody really wants to say out loud.&lt;/p&gt;

&lt;p&gt;Developers aren’t burning out because they work too much. Some of the most drained engineers I know are working 30–35 hours a week, fully remote, with “reasonable” deadlines.&lt;/p&gt;

&lt;p&gt;And still — they feel completely done with it.&lt;/p&gt;

&lt;p&gt;Not tired. Not stressed. Just… disconnected.&lt;/p&gt;

&lt;h3&gt;The Real Problem Isn’t Workload&lt;/h3&gt;

&lt;p&gt;We’ve been sold a very simple story: burnout = too much work.&lt;/p&gt;

&lt;p&gt;So the solution is always the same — take breaks, reduce hours, rest more.&lt;/p&gt;

&lt;p&gt;But that advice completely falls apart in real teams.&lt;/p&gt;

&lt;p&gt;Because people aren’t collapsing from exhaustion. They’re collapsing from doing work that doesn’t seem to matter.&lt;/p&gt;

&lt;h3&gt;You Ship Features. Nothing Changes.&lt;/h3&gt;

&lt;p&gt;You build something. It goes live. Everyone moves on.&lt;/p&gt;

&lt;p&gt;No one talks about it again. No visible impact. No real feedback.&lt;/p&gt;

&lt;p&gt;Sometimes it quietly gets disabled a few months later and nobody even notices.&lt;/p&gt;

&lt;p&gt;That’s not burnout from overwork. That’s burnout from irrelevance.&lt;/p&gt;

&lt;h3&gt;Feature Factories Are Killing Motivation&lt;/h3&gt;

&lt;p&gt;Most teams don’t build products. They produce output.&lt;/p&gt;

&lt;p&gt;Features are shipped because they were planned — not because they were proven to matter.&lt;/p&gt;

&lt;p&gt;Success is measured in velocity, not in whether anything actually improved for a user.&lt;/p&gt;

&lt;p&gt;So developers keep shipping, keep closing tickets, keep moving… but nothing meaningful changes.&lt;/p&gt;

&lt;h3&gt;That’s Where Motivation Dies&lt;/h3&gt;

&lt;p&gt;Most developers don’t hate coding.&lt;/p&gt;

&lt;p&gt;They hate coding when it doesn’t connect to anything real.&lt;/p&gt;

&lt;p&gt;When effort and outcome are disconnected long enough, motivation doesn’t fade slowly — it just shuts off.&lt;/p&gt;

&lt;p&gt;At some point, you stop caring not because you’re lazy, but because caring doesn’t change anything.&lt;/p&gt;

&lt;h3&gt;This Isn’t a “Take a Break” Problem&lt;/h3&gt;

&lt;p&gt;This isn’t solved by vacations, meditation, or better time management.&lt;/p&gt;

&lt;p&gt;Because the issue isn’t energy. It’s meaning.&lt;/p&gt;

&lt;p&gt;And you can’t rest your way out of something that feels pointless.&lt;/p&gt;

&lt;h3&gt;The Uncomfortable Truth&lt;/h3&gt;

&lt;p&gt;A lot of what we call “developer burnout” is actually just long-term exposure to meaningless work.&lt;/p&gt;

&lt;p&gt;And once you start seeing it, it’s hard to unsee it.&lt;/p&gt;

&lt;p&gt;Not every team has this problem. But many do — quietly, structurally, and consistently.&lt;/p&gt;

&lt;h3&gt;Full Article&lt;/h3&gt;

&lt;p&gt;If this hits close to home, I broke it down in detail here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://krun.pro/developer-burnout-meaningless/" rel="noopener noreferrer"&gt;Read the full article&lt;/a&gt;&lt;/p&gt;

</description>
      <category>developer</category>
      <category>burnout</category>
      <category>burned</category>
    </item>
    <item>
      <title>Why AI Code That Looks Correct Still Breaks Real Backend Systems</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Sat, 02 May 2026 20:42:18 +0000</pubDate>
      <link>https://dev.to/krun_dev/why-ai-code-that-looks-correct-still-breaks-real-backend-systems-577k</link>
      <guid>https://dev.to/krun_dev/why-ai-code-that-looks-correct-still-breaks-real-backend-systems-577k</guid>
      <description>&lt;h2&gt;AI Code Not Working in Real Projects&lt;/h2&gt;

&lt;p&gt;
The gap between “code that works in isolation” and code that survives in production is where most AI-assisted development failures begin. Tools like ChatGPT or Copilot generate code in a simplified execution model: no real middleware chains, no legacy constraints, no hidden coupling, no operational history.
&lt;/p&gt;

&lt;p&gt;
Production systems are the opposite. They are stateful, layered, and full of implicit contracts that are never explicitly described in prompts. This is why AI-generated code often compiles and passes local tests but breaks when integrated into a real backend.
&lt;/p&gt;

&lt;h3&gt;Why AI Generated Code Works in Isolation but Fails in Production&lt;/h3&gt;

&lt;p&gt;
Isolation tests create a misleading signal of correctness. A function may validate tokens, transform data, or query a mock database perfectly in a sandbox environment. However, once placed into a real execution chain, it interacts with middleware, caching layers, authentication pipelines, and side effects that were never part of the prompt context.
&lt;/p&gt;

&lt;p&gt;
The failure is not logical — it is environmental. The AI has no visibility into execution order, shared state, or framework-specific lifecycle hooks, so it generates correct logic for a system that does not match your actual architecture.
&lt;/p&gt;

&lt;h3&gt;AI Code Breaks Backend Assumptions&lt;/h3&gt;

&lt;p&gt;
Backend systems rely on implicit contracts between layers: services, repositories, controllers, and error-handling middleware. These contracts are rarely visible in code snippets but are critical for system stability.
&lt;/p&gt;

&lt;p&gt;
AI-generated implementations often violate these boundaries by returning null instead of throwing typed exceptions, or by bypassing centralized error handlers. These issues rarely crash the system — instead, they silently corrupt observability and make debugging significantly harder.
&lt;/p&gt;

&lt;h3&gt;Silent Failure Patterns in Data Access Layers&lt;/h3&gt;

&lt;p&gt;
One of the most common issues appears in repository and service layers. AI-generated code tends to simplify error handling, often catching exceptions and returning fallback values without propagating failure states properly.
&lt;/p&gt;

&lt;p&gt;
This breaks system-wide assumptions about consistency and error propagation. The frontend or downstream services may interpret invalid states as valid responses, resulting in incorrect rendering or silent logic failures.
&lt;/p&gt;

&lt;h3&gt;Business Logic Is the First Thing AI Gets Wrong&lt;/h3&gt;

&lt;p&gt;
AI models struggle most with domain-specific rules. Pricing engines, discount logic, and permission systems often depend on internal constraints that are not visible in training data or prompt context.
&lt;/p&gt;

&lt;p&gt;
As a result, generated implementations may look correct in unit tests but diverge from the authoritative business rules in production systems, especially when edge cases or enterprise-specific rules are involved.
&lt;/p&gt;

&lt;h3&gt;Context Collapse in Large Codebases&lt;/h3&gt;

&lt;p&gt;
Even modern LLMs with extended context windows cannot fully represent a real production system. A backend with hundreds of modules, services, and dependencies far exceeds practical prompt limits.
&lt;/p&gt;

&lt;p&gt;
This forces the model to infer missing structure, which leads to statistically plausible but architecturally incorrect assumptions. Over time, this produces duplicated logic and inconsistent patterns across the codebase.
&lt;/p&gt;

&lt;h3&gt;Inconsistent Output Across Sessions and Files&lt;/h3&gt;

&lt;p&gt;
Because each AI interaction is stateless, identical tasks can produce different architectural decisions depending on what context is included in the prompt. This creates fragmentation when multiple developers or multiple sessions generate code for the same system.
&lt;/p&gt;

&lt;p&gt;
The result is inconsistent patterns for error handling, service abstraction, and data flow — all of which appear locally correct but diverge globally.
&lt;/p&gt;

&lt;h3&gt;How AI Code Damages System Architecture&lt;/h3&gt;

&lt;p&gt;
The most expensive impact of AI-generated code is not functional bugs, but architectural degradation. Small shortcuts accumulate over time and erode separation of concerns.
&lt;/p&gt;

&lt;h3&gt;AI Introduces Tight Coupling Between Layers&lt;/h3&gt;

&lt;p&gt;
Without full architectural context, AI tends to collapse boundaries between layers. Controllers may directly access databases, or UI logic may depend on raw API responses instead of stable domain abstractions.
&lt;/p&gt;

&lt;p&gt;
Each individual change appears harmless, but collectively they remove the system’s ability to evolve safely.
&lt;/p&gt;

&lt;h3&gt;Invisible Production Bugs and State Corruption&lt;/h3&gt;

&lt;p&gt;
The most dangerous failures are not exceptions, but subtle inconsistencies: race conditions, partial updates, and incorrect assumptions about data presence.
&lt;/p&gt;

&lt;p&gt;
These issues only surface under real load conditions, distributed execution, or concurrent operations — making them difficult to reproduce in local environments.
&lt;/p&gt;

&lt;h3&gt;Preventing AI-Induced System Drift&lt;/h3&gt;

&lt;p&gt;
The solution is not avoiding AI tools, but constraining them with system-aware infrastructure: codebase indexing, architecture enforcement, and strict type validation.
&lt;/p&gt;

&lt;p&gt;
Without these guardrails, AI will always optimize for local correctness instead of global system consistency.
&lt;/p&gt;

&lt;p&gt;You can learn more on the page &lt;a href="https://krun.pro/why-ai-code-breaks/" rel="noopener noreferrer"&gt;https://krun.pro/why-ai-code-breaks/&lt;/a&gt;, an in-depth analysis and breakdown by Krun Dev.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>code</category>
      <category>projects</category>
    </item>
    <item>
      <title>Why Mojo Fails Before Benchmark</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Tue, 28 Apr 2026 15:30:56 +0000</pubDate>
      <link>https://dev.to/krun_dev/why-mojo-fails-before-benchmark-50m0</link>
      <guid>https://dev.to/krun_dev/why-mojo-fails-before-benchmark-50m0</guid>
      <description>&lt;h2&gt;Why Your Mojo System Design Fails Before the First Benchmark&lt;/h2&gt;

&lt;p&gt;Running Mojo but seeing benchmarks that mirror your old Python code is not a coincidence—it’s architectural debt. Mojo exposes low-level performance tools, but carrying over Python habits will hurt you fast. The closer you get to the metal, the more your script-like patterns backfire.&lt;/p&gt;

&lt;h2&gt;Quick Takeaways&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Remaining in Python-interop mode incurs reference counting overhead that wipes out Mojo's performance benefits.&lt;/li&gt;
  &lt;li&gt;Object lists generate cache misses; SIMD-aligned Structs avoid them, delivering measurable speed-ups.&lt;/li&gt;
  &lt;li&gt;Mojo's borrow checker errors almost always result from ownership violations: &lt;code&gt;owned&lt;/code&gt;, &lt;code&gt;borrowed&lt;/code&gt;, and &lt;code&gt;inout&lt;/code&gt; are explicit and essential.&lt;/li&gt;
  &lt;li&gt;&lt;code&gt;parallelize()&lt;/code&gt; is safe only when workers do not share mutable memory. Otherwise, you invite race conditions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Python Brain Trap&lt;/h2&gt;

&lt;p&gt;Developers transitioning from Python often assume performance comes automatically. It doesn’t. Mojo provides SIMD, manual memory control, and zero-cost abstractions, but you must actively apply them. Writing Mojo code that mimics Python structures leads to severe slowdowns. PythonObject types carry reference counting costs—tight numeric loops can lose 40–60% of execution time.&lt;/p&gt;

&lt;h2&gt;Python-Interop: A Performance Dead Zone&lt;/h2&gt;

&lt;p&gt;The Python interop layer exists for migration convenience, not throughput. Using Python lists, Python functions, or PythonObject types inside Mojo kernels turns your code into a thin CPython wrapper. Every attribute access, length check, or loop iteration passes through Python’s runtime, killing performance.&lt;/p&gt;

&lt;h2&gt;Reference Counting: The Hidden Tax&lt;/h2&gt;

&lt;p&gt;Reference counting in PythonObject types introduces unpredictable micro-stalls in loops. Production Mojo code should convert Python data at the boundary and immediately switch to native Mojo types like &lt;code&gt;DTypePointer&lt;/code&gt;, &lt;code&gt;Tensor&lt;/code&gt;, or SIMD vectors for internal computation.&lt;/p&gt;

&lt;h2&gt;Memory Layout: Keeping the CPU Happy&lt;/h2&gt;

&lt;p&gt;Cache locality dominates performance. L1 caches are small (32–64 KB), so sequential memory access in contiguous arrays drastically reduces cache misses. Lists of heap-allocated objects scatter data, causing costly cache misses and slowing loops.&lt;/p&gt;

&lt;h2&gt;Structs vs Object Lists&lt;/h2&gt;

&lt;p&gt;Mojo beginners often model data as lists of structs (AoS). Iterating fields in such lists forces the CPU to load entire objects. A struct of arrays (SoA) keeps fields contiguous, enabling SIMD operations and prefetching, often yielding 4–8x speed improvements on numeric kernels.&lt;/p&gt;

&lt;pre&gt;
struct ParticleSystem:
    var x_positions: DTypePointer[DType.float32]
    var y_positions: DTypePointer[DType.float32]
    var masses: DTypePointer[DType.float32]
    var count: Int
&lt;/pre&gt;

&lt;h2&gt;Heap Allocation in Loops&lt;/h2&gt;

&lt;p&gt;Allocating inside hot loops is costly. Each allocation invokes the memory allocator and triggers eventual garbage collection. Pre-allocate buffers outside loops and reuse them for maximum performance.&lt;/p&gt;

&lt;h2&gt;Borrow Checker and Ownership&lt;/h2&gt;

&lt;p&gt;Mojo enforces ownership similar to Rust, preventing segmentation faults and silent corruption. Variables can be &lt;code&gt;owned&lt;/code&gt;, &lt;code&gt;borrowed&lt;/code&gt;, or &lt;code&gt;inout&lt;/code&gt;. Misusing them, especially referencing a value after an &lt;code&gt;owned&lt;/code&gt; transfer, surfaces as compile-time ownership errors rather than silent runtime corruption.&lt;/p&gt;

&lt;h2&gt;Safe Parallelism&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;parallelize()&lt;/code&gt; is easy but dangerous. Safe use means partitioning data into isolated chunks, one per worker. Each worker should write to its own buffer, then reduce results sequentially. Shared mutable memory leads to unpredictable race conditions and inconsistent outcomes.&lt;/p&gt;
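&lt;p&gt;The isolation rule itself is language-agnostic. Here is a sketch in safe Rust as an analogue of a correct &lt;code&gt;parallelize()&lt;/code&gt; partition: each worker gets an exclusive slice of the output, and all workers are joined before any sequential reduction:&lt;/p&gt;

```rust
use std::thread;

// Each worker owns a disjoint chunk of the output buffer, so no mutable
// state is ever shared between threads and no locks are needed.
fn parallel_square(input: &[f32], workers: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; input.len()];
    let workers = workers.max(1);
    // Ceiling division; .max(1) keeps chunks() happy on empty input.
    let chunk = ((input.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        for (in_chunk, out_chunk) in input.chunks(chunk).zip(out.chunks_mut(chunk)) {
            s.spawn(move || {
                for (o, i) in out_chunk.iter_mut().zip(in_chunk) {
                    *o = i * i; // writes stay inside this worker's own chunk
                }
            });
        }
    }); // scope() joins every worker here; reduce sequentially afterwards
    out
}
```

&lt;p&gt;The compiler enforces the partitioning: if two workers could touch the same chunk, the code would not build. That is the property you must enforce manually when using &lt;code&gt;parallelize()&lt;/code&gt;.&lt;/p&gt;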

&lt;h2&gt;Five Best Practices for Mojo System Design&lt;/h2&gt;

&lt;ol&gt;
  &lt;li&gt;
&lt;strong&gt;Use @value for pure data structs only:&lt;/strong&gt; Avoid shallow copies of heap pointers.&lt;/li&gt;
  &lt;li&gt;
&lt;strong&gt;Pre-allocate outside hot loops:&lt;/strong&gt; Reuse buffers to avoid repeated allocation costs.&lt;/li&gt;
  &lt;li&gt;
&lt;strong&gt;Pointer is powerful but dangerous:&lt;/strong&gt; Use &lt;code&gt;Pointer[T]&lt;/code&gt; only when the ownership system cannot express your requirements.&lt;/li&gt;
  &lt;li&gt;
&lt;strong&gt;Understand decorator semantics:&lt;/strong&gt; Overuse of &lt;code&gt;@always_inline&lt;/code&gt; or &lt;code&gt;@parameter&lt;/code&gt; can backfire.&lt;/li&gt;
  &lt;li&gt;
&lt;strong&gt;Profile first:&lt;/strong&gt; Focus on memory layout and cache misses before algorithm tweaks.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Closing Takeaways&lt;/h2&gt;

&lt;p&gt;Mojo's system-level design challenges come down to a few themes: Python habits carry hidden costs, memory layout drives cache efficiency, and ownership and parallelism must be managed deliberately. Apply these principles and a script-like Mojo prototype can become a high-performance kernel.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Krun Dev&lt;/em&gt;&lt;br&gt;
&lt;a href="https://krun.pro/mojo-system-design/" rel="noopener noreferrer"&gt;&lt;em&gt;krun.pro&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mojo</category>
      <category>benchmark</category>
      <category>systems</category>
      <category>design</category>
    </item>
    <item>
      <title>Garbage Collection in Rust Without a Single unsafe Block</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Fri, 24 Apr 2026 20:16:50 +0000</pubDate>
      <link>https://dev.to/krun_dev/garbage-collection-in-rust-without-a-single-unsafe-block-l1f</link>
      <guid>https://dev.to/krun_dev/garbage-collection-in-rust-without-a-single-unsafe-block-l1f</guid>
      <description>&lt;h2&gt;Mastering Garbage Collection in Rust without a single unsafe block&lt;/h2&gt;

&lt;p&gt;Let’s be real: most Rust GC libraries have a dirty secret buried in their &lt;code&gt;Cargo.toml&lt;/code&gt;. They claim to be "memory safe" but immediately reach for &lt;code&gt;unsafe&lt;/code&gt; blocks the moment a cyclic graph appears. They’ll tell you that managing back-references and complex object ownership is "impossible" within the borrow checker’s constraints. But that’s just a lack of imagination. You don't need raw pointers to build a high-performance collector; you need a better architecture. By swapping raw pointer manipulation for arena-based indices, we move the safety burden from your tired brain to the Rust compiler, where it belongs.&lt;/p&gt;

&lt;p&gt;The core shift is simple: use a &lt;code&gt;Vec&lt;/code&gt;-backed arena. Instead of juggling &lt;code&gt;*mut T&lt;/code&gt; and praying you don't hit a use-after-free, you operate with &lt;code&gt;u32&lt;/code&gt; indices. This isn't just a workaround—it’s a robust design pattern that turns pointer arithmetic into bounds-checked lookups. It’s clean, it’s readable, and it’s 100% compliant with the strict rules of Safe Rust. No "trust me, bro" comments required in the source tree.&lt;/p&gt;

&lt;h3&gt;Eliminate raw pointer manipulation using Vec-backed arenas&lt;/h3&gt;

&lt;p&gt;The foundation of this zero-unsafe approach is the &lt;code&gt;Heap&lt;/code&gt; struct, which acts as the sole owner of all objects. When you allocate something, you don't get a direct reference; you get a &lt;code&gt;Gc&amp;lt;T&amp;gt;&lt;/code&gt; handle. This handle is a lightweight &lt;code&gt;Copy&lt;/code&gt; type containing an index and a generation counter. By using indices instead of pointers, we let Rust’s bounds checks and the handle’s generation counter do the heavy lifting of memory validation. If you try to access a stale handle, the system doesn't segfault—it simply returns &lt;code&gt;None&lt;/code&gt;. LGTM, right? It’s the kind of stability that lets you ship on Fridays without breaking a sweat.&lt;/p&gt;
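&lt;p&gt;Here is a minimal, self-contained sketch of that handle-plus-generation idea. The names (&lt;code&gt;Heap&lt;/code&gt;, &lt;code&gt;Gc&lt;/code&gt;) follow the article, but this toy version is single-type and manually freed—just enough to show why a stale handle yields &lt;code&gt;None&lt;/code&gt; instead of a dangling pointer:&lt;/p&gt;

```rust
// Toy generational arena: handles are (index, generation) pairs, so a stale
// handle resolves to None instead of dangling. Illustrative only; the
// article's real Heap is per-type and garbage-collected.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Gc {
    index: u32,
    generation: u32,
}

struct Slot<T> {
    generation: u32,
    value: Option<T>,
}

struct Heap<T> {
    slots: Vec<Slot<T>>,
}

impl<T> Heap<T> {
    fn new() -> Self {
        Heap { slots: Vec::new() }
    }

    fn alloc(&mut self, value: T) -> Gc {
        // Reuse a free slot if one exists, bumping its generation so that
        // any old handles to this slot stop matching.
        if let Some(i) = self.slots.iter().position(|s| s.value.is_none()) {
            self.slots[i].generation += 1;
            self.slots[i].value = Some(value);
            return Gc { index: i as u32, generation: self.slots[i].generation };
        }
        self.slots.push(Slot { generation: 0, value: Some(value) });
        Gc { index: (self.slots.len() - 1) as u32, generation: 0 }
    }

    fn get(&self, handle: Gc) -> Option<&T> {
        let slot = self.slots.get(handle.index as usize)?;
        // A generation mismatch means the slot was freed and reused.
        if slot.generation != handle.generation {
            return None;
        }
        slot.value.as_ref()
    }

    fn free(&mut self, handle: Gc) {
        if let Some(slot) = self.slots.get_mut(handle.index as usize) {
            if slot.generation == handle.generation {
                slot.value = None;
            }
        }
    }
}

fn main() {
    let mut heap: Heap<String> = Heap::new();
    let a = heap.alloc("hello".to_string());
    assert_eq!(heap.get(a).map(String::as_str), Some("hello"));
    heap.free(a);
    assert_eq!(heap.get(a), None); // stale handle: no segfault, just None
    let b = heap.alloc("world".to_string()); // reuses the slot, new generation
    assert_eq!(heap.get(a), None); // the old handle stays dead
    assert_eq!(heap.get(b).map(String::as_str), Some("world"));
    println!("ok");
}
```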

&lt;h3&gt;Implementing mark-and-sweep algorithm for heterogeneous object graphs&lt;/h3&gt;

&lt;p&gt;How do we reclaim memory without falling back on &lt;code&gt;Arc&lt;/code&gt; or &lt;code&gt;Rc&lt;/code&gt;? We build a classic two-phase &lt;strong&gt;mark-and-sweep&lt;/strong&gt; collector. In the marking phase, we start from the "roots"—the &lt;code&gt;Root&amp;lt;T&amp;gt;&lt;/code&gt; handles living on your stack—and traverse the graph. Each heap-allocated type implements a &lt;code&gt;Trace&lt;/code&gt; trait, which is the contract telling the collector: "Hey, here are the other indices I’m holding." We use a simple &lt;code&gt;Vec&amp;lt;u32&amp;gt;&lt;/code&gt; as a worklist to color the graph. Since we're just iterating over vectors and hash sets, the borrow checker remains perfectly happy. No pointer magic, just pure logic.&lt;/p&gt;
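&lt;p&gt;A sketch of that marking phase under the same simplifications (one object type, edges stored as plain indices; the &lt;code&gt;Trace&lt;/code&gt; name follows the article, the rest is illustrative). Note how the marked set lets the worklist handle cycles that would leak under reference counting:&lt;/p&gt;

```rust
use std::collections::HashSet;

// Each object reports the indices it holds via a Trace trait, and a
// Vec<u32> worklist colors the graph starting from the roots.
trait Trace {
    fn children(&self) -> Vec<u32>;
}

// A toy node that just stores outgoing edges as arena indices.
struct Node {
    edges: Vec<u32>,
}

impl Trace for Node {
    fn children(&self) -> Vec<u32> {
        self.edges.clone()
    }
}

fn mark(heap: &[Node], roots: &[u32]) -> HashSet<u32> {
    let mut marked = HashSet::new();
    let mut worklist: Vec<u32> = roots.to_vec();
    while let Some(i) = worklist.pop() {
        if marked.insert(i) {
            // Newly marked: queue everything this object can reach.
            worklist.extend(heap[i as usize].children());
        }
    }
    marked // anything not in this set would be swept
}

fn main() {
    // 0 -> 1 -> 2 -> 1 forms a cycle; 3 is unreachable garbage.
    let heap = vec![
        Node { edges: vec![1] },
        Node { edges: vec![2] },
        Node { edges: vec![1] },
        Node { edges: vec![] },
    ];
    let live = mark(&heap, &[0]);
    assert!(live.contains(&0) && live.contains(&1) && live.contains(&2));
    assert!(!live.contains(&3)); // the cycle is kept alive; the orphan is not
    println!("ok");
}
```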

&lt;h3&gt;Managing Rooted handles with RAII to prevent memory leaks&lt;/h3&gt;

&lt;p&gt;The real secret sauce is how we handle liveness. A &lt;code&gt;Root&amp;lt;T&amp;gt;&lt;/code&gt; isn't just a wrapper; it’s an RAII guard. As long as that &lt;code&gt;Root&lt;/code&gt; is in scope, the object is shielded from the collector. The moment it drops, it unregisters itself from the root set. We use Rust’s lifetime system (&lt;code&gt;'heap&lt;/code&gt;) to ensure you can't sneak a root past the lifetime of the heap itself. It’s a structural guarantee that prevents premature collection and dangling handles before they even happen. It's not just "safe"—it's architecturally impossible to mess up.&lt;/p&gt;

&lt;h2&gt;Scale memory management with generation counters and slots&lt;/h2&gt;

&lt;p&gt;Standard advice often pushes &lt;code&gt;Arc&amp;lt;RwLock&amp;lt;T&amp;gt;&amp;gt;&lt;/code&gt; for shared state, but if you’re building a scripting engine or a complex DOM, &lt;code&gt;Arc&lt;/code&gt; is a trap. It can't handle cycles, leading to silent memory leaks that bloat your process until it hits the OOM killer. Our arena-based GC solves this because it doesn't care about reference counts; it only cares about reachability. If the root set can't find you, you're gone. It’s a much more powerful liveness criterion that handles the "spaghetti graphs" of modern apps with ease.&lt;/p&gt;

&lt;h3&gt;Handling TypeId for dynamic dispatch in typed arenas&lt;/h3&gt;

&lt;p&gt;To keep things fast and type-safe, we utilize per-type arenas keyed by &lt;code&gt;TypeId&lt;/code&gt;. This avoids the "vtable tax" of wrapping everything in a &lt;code&gt;Box&amp;lt;dyn Trace&amp;gt;&lt;/code&gt;. When you call &lt;code&gt;heap.alloc::&amp;lt;T&amp;gt;()&lt;/code&gt;, the system dispatches to the correct typed vector. This keeps your data contiguous in memory, which is a massive win for cache locality. We’re not just building a safe collector; we’re building an efficient one that respects the hardware while playing by the rules of the language.&lt;/p&gt;

&lt;p&gt;At the end of the day, safe Rust is about making the right thing the easy thing. By moving your object graph into an arena, you trade a tiny bit of raw pointer speed for a massive gain in maintainability and correctness. Stop fighting the borrow checker and start using it to build better tools. Stay sharp, and keep those tags closed.&lt;/p&gt;

&lt;p&gt;
To master the technical implementation of &lt;a href="https://krun.pro/rust-garbage-collectio/" rel="noopener noreferrer"&gt;zero-unsafe GC architecture&lt;/a&gt; in Rust and see the code in action, visit my site.
&lt;/p&gt;

</description>
      <category>rust</category>
      <category>garbage</category>
      <category>arenas</category>
      <category>indices</category>
    </item>
    <item>
      <title>Kotlin 2.3.21 Fixes</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Thu, 23 Apr 2026 22:18:23 +0000</pubDate>
      <link>https://dev.to/krun_dev/kotlin-2321-fixes-3nl9</link>
      <guid>https://dev.to/krun_dev/kotlin-2321-fixes-3nl9</guid>
      <description>&lt;h2&gt;Kotlin 2.3.21: The Release That Finally Makes KMP Build Performance Predictable&lt;/h2&gt;

&lt;p&gt;Kotlin 2.3.21 is one of those releases you don’t notice from the changelog, but you feel immediately if you work with Kotlin Multiplatform in production. It doesn’t introduce flashy language features — it fixes the stuff that quietly destroys developer productivity: broken incremental builds, unstable Wasm compilation, and fragile iOS linking via SPM.&lt;/p&gt;

&lt;p&gt;For teams building high-load systems, shared codebases, or multi-target apps (Android, iOS, WebAssembly), this release is less about “new Kotlin features” and more about restoring trust in the build system. The K2 compiler finally behaves like a stable foundation instead of experimental infrastructure.&lt;/p&gt;

&lt;h3&gt;Kotlin Wasm incremental build performance improvements&lt;/h3&gt;

&lt;p&gt;One of the biggest pain points in Kotlin/Wasm was unpredictable incremental compilation. A small change in a shared module could trigger full rebuilds or backend failures due to klib metadata invalidation. Kotlin 2.3.21 introduces per-symbol fingerprinting inside klib metadata, which allows the incremental compilation engine to correctly detect what actually changed.&lt;/p&gt;

&lt;p&gt;In real-world KMP projects, this means a rebuild that previously took 20–45 seconds or failed entirely now completes in a few seconds. The important part is not just speed — it’s consistency. Developers can iterate without constantly falling back to clean builds, which fundamentally changes the feedback loop in Wasm development.&lt;/p&gt;

&lt;h3&gt;Kotlin KMP iOS SPM linking issues resolved&lt;/h3&gt;

&lt;p&gt;Another major improvement is in Kotlin/Native integration with Swift Package Manager. Before 2.3.21, static frameworks built with &lt;code&gt;isStatic = true&lt;/code&gt; often failed with unresolved arm64 symbols, forcing teams to manually inject &lt;code&gt;linkerOpts&lt;/code&gt; and maintain fragile build workarounds.&lt;/p&gt;

&lt;p&gt;The issue was rooted in incomplete parsing of SPM binary metadata. Kotlin 2.3.21 fixes this by properly resolving &lt;code&gt;.pbe&lt;/code&gt; package graphs and feeding correct dependency paths into the LLVM linker pipeline. As a result, most manual linker configuration hacks can now be removed.&lt;/p&gt;
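&lt;p&gt;For context, a typical pre-2.3.21 workaround looked something like this (illustrative Gradle Kotlin DSL; the paths and library names are placeholders, and the exact flags varied per project). With 2.3.21, the manual &lt;code&gt;linkerOpts&lt;/code&gt; line can usually be deleted:&lt;/p&gt;

```kotlin
kotlin {
    iosArm64 {
        binaries.framework {
            baseName = "Shared"
            isStatic = true
            // Pre-2.3.21 hack: hand-feed SPM artifact paths to the linker.
            // (Paths and library names here are placeholders.)
            linkerOpts("-L/path/to/DerivedData/spm-artifacts", "-lSomePackage")
        }
    }
}
```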

&lt;p&gt;For teams maintaining KMP iOS production apps, this directly reduces build complexity and eliminates one of the most frustrating classes of “works locally but fails in CI” errors.&lt;/p&gt;

&lt;h3&gt;Kotlin K2 compiler compatibility and multi-module visibility fixes&lt;/h3&gt;

&lt;p&gt;K2 introduced stricter compilation rules, but in earlier versions it overreached in some cases. One of the most disruptive issues was incorrect handling of protected companion object members in multi-module projects, where valid inheritance-based access patterns were rejected.&lt;/p&gt;

&lt;p&gt;Kotlin 2.3.21 aligns compiler behavior with the language specification. Subclasses can now correctly access protected companion members across modules without triggering false compilation errors. This removes the need for architectural workarounds introduced during K1 → K2 migration phases.&lt;/p&gt;

&lt;p&gt;For large codebases, this is not just a fix — it’s cleanup of accumulated technical debt caused by compiler inconsistency.&lt;/p&gt;

&lt;h3&gt;Gradle build performance and CI/CD optimization in Kotlin 2.3.21&lt;/h3&gt;

&lt;p&gt;Beyond compiler fixes, Kotlin 2.3.21 improves Gradle task behavior, particularly around Android build pipelines. Tasks like &lt;code&gt;MergeMappingFileTask&lt;/code&gt; previously re-executed unnecessarily due to overly broad input tracking. This led to wasted time in release builds, especially in large monorepos.&lt;/p&gt;

&lt;p&gt;The fix improves input/output declaration accuracy, allowing Gradle to properly reuse cached results and avoid redundant execution. In large-scale projects, this translates into noticeable CI/CD time savings and better remote cache hit rates.&lt;/p&gt;

&lt;p&gt;Combined with reduced compiler memory usage, this release makes parallel builds more efficient in shared CI environments, where resource constraints often become a bottleneck.&lt;/p&gt;

&lt;h2&gt;Why Kotlin 2.3.21 matters for Kotlin Multiplatform development in 2026&lt;/h2&gt;

&lt;p&gt;What makes Kotlin 2.3.21 important is not any single feature, but the cumulative effect of removing build system unpredictability. Kotlin Multiplatform has always been powerful but fragile at the edges — especially when dealing with Wasm, iOS linking, and incremental compilation across large module graphs.&lt;/p&gt;

&lt;p&gt;This release addresses those exact failure points. It doesn’t change how you write Kotlin code. It changes whether your builds behave consistently when your system scales.&lt;/p&gt;

&lt;p&gt;For teams running KMP in production, the upgrade is less about new capabilities and more about stability: faster incremental builds, fewer CI failures, and a compiler that finally behaves predictably under real-world load.&lt;/p&gt;

&lt;p&gt;A full breakdown of the Kotlin 2.3.21 update is available at &lt;a href="https://krun.pro/kotlin-2-3-21/" rel="noopener noreferrer"&gt;https://krun.pro/kotlin-2-3-21/&lt;/a&gt;. If you work with Kotlin Multiplatform or production systems, it’s worth reading to understand what actually changed under the hood and what the release means in a practical engineering context.&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>release</category>
      <category>kmp</category>
      <category>multiplatform</category>
    </item>
    <item>
      <title>Kotlin in Production Backend Systems</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Thu, 23 Apr 2026 15:10:20 +0000</pubDate>
      <link>https://dev.to/krun_dev/kotlin-in-production-backend-systems-12g6</link>
      <guid>https://dev.to/krun_dev/kotlin-in-production-backend-systems-12g6</guid>
      <description>&lt;h2&gt;What Actually Breaks When Kotlin Hits Production Load&lt;/h2&gt;


&lt;p&gt;
Kotlin feels great—until prod traffic shows up and starts asking uncomfortable questions. Most teams celebrate the migration from Java somewhere around “less boilerplate, nicer syntax,” and then move on. That’s exactly where &lt;a href="https://krun.pro/kotlin-production-backend/" rel="noopener noreferrer"&gt;Kotlin backend production issues&lt;/a&gt; begin, hiding behind clean code and waiting for real load to expose them.
&lt;/p&gt;

&lt;p&gt;
This isn’t about Kotlin being bad. It’s about misunderstanding what Kotlin actually is: a thin, elegant layer on top of the JVM. And the JVM doesn’t magically become nicer just because your code does. Garbage collection, heap pressure, thread pools, blocking I/O—those are still running the show. Kotlin just makes it easier to write code that accidentally stresses all of them faster.
&lt;/p&gt;

&lt;p&gt;
Coroutines are the biggest example. They look like lightweight magic—spin up thousands, no problem. Except they don’t run on magic, they run on a finite thread pool. Throw one blocking call into the mix (hello JDBC, hello legacy HTTP client), and suddenly your “highly concurrent” system is just a queue waiting for threads to free up. From the outside: random latency spikes. From the inside: you quietly DDoS’d your own thread pool.
&lt;/p&gt;
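&lt;p&gt;The mechanics don't need coroutines to demonstrate. A stdlib-only JVM sketch (pool size and sleep times are made up) shows what happens the moment blocking calls land on a finite pool—which is exactly what a dispatcher is underneath:&lt;/p&gt;

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// A coroutine dispatcher is, underneath, a finite pool like this one.
// Fill it with blocking calls and everything else just queues.
fun starvedPoolDemo(): Long {
    val pool = Executors.newFixedThreadPool(2) // tiny pool, like a saturated dispatcher
    val start = System.nanoTime()
    repeat(4) {
        pool.submit {
            Thread.sleep(200) // stand-in for a blocking JDBC call
        }
    }
    pool.shutdown()
    pool.awaitTermination(5, TimeUnit.SECONDS)
    // Four 200 ms blocking tasks on two threads: ~400 ms wall time, not ~200 ms.
    // The "concurrency" was capped the moment the calls blocked.
    return (System.nanoTime() - start) / 1_000_000
}

fun main() {
    println("elapsed: ~${starvedPoolDemo()} ms")
}
```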

&lt;p&gt;
Then comes observability. Classic logging assumes one request = one thread. Coroutines don’t play that game. Execution jumps threads, and your trace IDs don’t come along for the ride unless you explicitly force them to. The result? Logs that look complete but tell you nothing. Traces that start strong and then just… vanish. Not a bug—just missing context propagation that nobody wired up.
&lt;/p&gt;
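&lt;p&gt;A stdlib-only sketch of why the trace IDs vanish: MDC-style logging context is &lt;code&gt;ThreadLocal&lt;/code&gt;-backed, and a &lt;code&gt;ThreadLocal&lt;/code&gt; simply does not follow work onto another thread—the same thing that happens when a coroutine resumes on a different dispatcher thread:&lt;/p&gt;

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors

// Logging context (MDC, trace IDs) is typically stored in a ThreadLocal.
val traceId = ThreadLocal<String?>()

fun contextOnWorkerThread(): String? {
    traceId.set("req-42") // set on the caller thread, as a logging filter would
    val pool = Executors.newSingleThreadExecutor()
    // The worker thread has its own (empty) ThreadLocal slot.
    val seen = pool.submit(Callable { traceId.get() }).get()
    pool.shutdown()
    return seen // null: the trace id did not come along for the ride
}

fun main() {
    println("worker sees traceId = ${contextOnWorkerThread()}")
}
```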

&lt;p&gt;
And yes, Kotlin’s famous null safety. Works great—right until external data enters the system. Reflection-based tools like Jackson don’t care about your non-null types. They’ll happily inject nulls into places your compiler swore were safe. You won’t notice until runtime, under load, when something explodes far away from where the data came in.
&lt;/p&gt;
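&lt;p&gt;You don't even need Jackson on the classpath to see the failure mode. Plain reflection—which is exactly what such mappers use—can plant a &lt;code&gt;null&lt;/code&gt; behind a non-null type. A minimal sketch (the class is invented), with the NPE surfacing far from the injection point:&lt;/p&gt;

```kotlin
// Kotlin's null safety is a compile-time contract; reflection-based
// deserializers write straight into fields and can break it at runtime.
data class User(val name: String) // the compiler promises: never null

fun nullSmuggledIn(): Boolean {
    val user = User("alice")
    val field = User::class.java.getDeclaredField("name")
    field.isAccessible = true
    field.set(user, null) // what a mapper does with a missing JSON key
    return try {
        user.name.length // blows up here, far from where the bad data entered
        false
    } catch (e: NullPointerException) {
        true
    }
}

fun main() {
    println("non-null String held null at runtime: ${nullSmuggledIn()}")
}
```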

&lt;p&gt;
The pattern is consistent: Kotlin doesn’t introduce most of these problems—it hides them better. The teams that succeed with Kotlin backend systems treat it like what it is: JVM engineering with nicer syntax. They audit blocking calls, profile allocation rates, wire up context propagation early, and assume external data will break their type guarantees.
&lt;/p&gt;

&lt;p&gt;
Write Kotlin for humans. Debug it like a JVM system. That’s the difference between “it works on staging” and “it survives production.”
&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>backend</category>
      <category>systems</category>
      <category>production</category>
    </item>
    <item>
      <title>Senior Python Challenges</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Thu, 23 Apr 2026 12:51:19 +0000</pubDate>
      <link>https://dev.to/krun_dev/senior-python-challenges-4glb</link>
      <guid>https://dev.to/krun_dev/senior-python-challenges-4glb</guid>
      <description>&lt;h2&gt;Senior Python Challenges: What I Learned After Moving From Writing Code to Running Systems in Production&lt;/h2&gt;

&lt;p&gt;Working with Python as a senior developer feels very different from writing scripts or building small services. At scale, the language stops being “simple and forgiving” and starts exposing every architectural decision you made earlier. What used to be elegant code in development often becomes a performance bottleneck, a concurrency trap, or a silent reliability risk in production.&lt;/p&gt;

&lt;p&gt;This is a summary of the problems I repeatedly run into in real systems, why they happen, and how I approach them today—not in theory, but in production environments where downtime and latency actually matter.&lt;/p&gt;

&lt;h3&gt;Performance: When Clean Code Stops Being Fast Code&lt;/h3&gt;

&lt;p&gt;One of the first lessons I learned the hard way is that Python performance is rarely about syntax. It’s about how much work the interpreter is forced to do at runtime.&lt;/p&gt;

&lt;p&gt;A simple loop that looks harmless in code review can become a serious issue under load.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
import time

def slow_sum(n):
    result = 0
    for i in range(n):
        result += i
    return result

start = time.time()
slow_sum(10**7)
print(time.time() - start)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In isolation, this looks fine. In production, it becomes a measurable latency spike when called repeatedly. What changed my approach was learning to stop assuming “readable Python is always acceptable Python under scale.”&lt;/p&gt;

&lt;p&gt;The fix is rarely micro-optimization. It is choosing the right abstraction level from the start.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
def fast_sum(n):
    return sum(range(n))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The real senior-level shift here is discipline: I profile first, and only then decide whether pure Python is even justified.&lt;/p&gt;

&lt;h3&gt;Concurrency: The GIL Reality Check&lt;/h3&gt;

&lt;p&gt;Early in my career, I assumed threads meant parallelism. Python quickly corrected that assumption.&lt;/p&gt;

&lt;p&gt;The Global Interpreter Lock (GIL) changes how concurrency actually behaves in CPU-bound workloads. Adding threads often makes things worse, not better.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
import threading

def cpu_task(n):
    count = 0
    while count &amp;lt; n:
        count += 1

# Even with two threads, the GIL serializes the bytecode execution
threads = [threading.Thread(target=cpu_task, args=(10**7,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Even when I scale this with multiple threads, the result is still serialized execution under the interpreter.&lt;/p&gt;

&lt;p&gt;What I had to internalize is simple: threads in Python are not a performance tool for CPU-heavy work.&lt;/p&gt;

&lt;p&gt;Real scaling starts only when I move to separate processes:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
from multiprocessing import Pool

def cpu_task(n):
    return sum(i * i for i in range(n))

# The guard is required on platforms that spawn workers (Windows, macOS)
if __name__ == "__main__":
    with Pool(4) as p:
        results = p.map(cpu_task, [10**7] * 4)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This mental model shift—from “parallel threads” to “isolated processes”—is one of the most important transitions in senior Python work.&lt;/p&gt;

&lt;h3&gt;Async Systems: Where Bugs Stop Being Visible&lt;/h3&gt;

&lt;p&gt;Async Python is powerful, but it’s also one of the easiest places to introduce invisible production issues.&lt;/p&gt;

&lt;p&gt;What I’ve learned is that async failures rarely crash systems—they degrade them silently.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
import asyncio

async def fetch_data(n):
    await asyncio.sleep(n)
    return n

async def main():
    results = await asyncio.gather(fetch_data(1), fetch_data(2))
    print(results)

asyncio.run(main())
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The danger here is not the syntax. It is mixing blocking and non-blocking code in ways that only appear under load.&lt;/p&gt;

&lt;p&gt;My rule now is strict: if a function is async, nothing inside it should block. Ever.&lt;/p&gt;
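&lt;p&gt;When a blocking call is unavoidable inside an async path, one standard escape hatch is &lt;code&gt;asyncio.to_thread&lt;/code&gt; (Python 3.9+), which runs it on a worker thread instead of freezing the loop. A small sketch—the blocking function here is a stand-in:&lt;/p&gt;

```python
import asyncio
import time

def legacy_blocking_io():
    # Stand-in for a blocking driver call (DB client, requests, etc.)
    time.sleep(0.2)
    return "rows"

async def handler():
    # Wrong: calling legacy_blocking_io() directly would freeze the event loop.
    # Right: hand it to a worker thread and await the result.
    return await asyncio.to_thread(legacy_blocking_io)

async def main():
    start = time.perf_counter()
    results = await asyncio.gather(handler(), handler())
    # The two blocking calls overlapped: total time is ~0.2 s, not ~0.4 s.
    print(results, round(time.perf_counter() - start, 1))

asyncio.run(main())
```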

&lt;h3&gt;Memory: The Slowest Type of Production Failure&lt;/h3&gt;

&lt;p&gt;Memory issues are dangerous because they don’t fail immediately. They accumulate.&lt;/p&gt;

&lt;p&gt;Circular references, hidden caches, or long-lived objects can silently grow memory usage until the system becomes unstable.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
import gc

a = []
b = [a]
a.append(b)  # a and b now reference each other: a reference cycle

del a, b
# Reference counting alone cannot reclaim the cycle;
# only the cyclic garbage collector can
gc.collect()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;What I learned is that garbage collection is not a safety net for architecture mistakes. It is just cleanup, not control.&lt;/p&gt;

&lt;p&gt;In real systems, I now treat memory as a design constraint, not an implementation detail.&lt;/p&gt;

&lt;h3&gt;Testing: When Mocks Start Hiding Real Problems&lt;/h3&gt;

&lt;p&gt;One of the most misleading things in Python systems is over-mocked testing. It creates confidence that does not survive production.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
from unittest.mock import MagicMock

service = MagicMock()
service.fetch_data.return_value = {"id": 1}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This kind of test passes easily, but it often stops validating real behavior.&lt;/p&gt;

&lt;p&gt;What I rely on now is dependency injection and realistic integration paths instead of heavy mocking. It keeps tests closer to actual system behavior.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
class DataFetcher:
    def __init__(self, client):
        self.client = client

    def get_data(self):
        return self.client.fetch()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This structure is far more stable when systems evolve.&lt;/p&gt;

&lt;h3&gt;Dependencies: The Silent Source of Production Breakage&lt;/h3&gt;

&lt;p&gt;Dependency management is one of those problems that looks solved until it isn’t. Conflicts, transitive upgrades, and version drift can break systems without any code changes.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
# requirements.txt
Django==4.2
requests==2.32
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;What I’ve learned is that reproducibility matters more than convenience. I now treat environment consistency as part of system design, not setup.&lt;/p&gt;

&lt;p&gt;Tools that lock dependency graphs are not optional in production systems—they are mandatory.&lt;/p&gt;

&lt;h3&gt;Security: The Small Mistakes That Become Incidents&lt;/h3&gt;

&lt;p&gt;Security issues in Python are usually not complex. They are simple mistakes in unsafe assumptions.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
cursor.execute(f"SELECT * FROM users WHERE id={user_input}")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This kind of pattern is still surprisingly common in legacy systems.&lt;/p&gt;

&lt;p&gt;The safe version is always explicit and parameterized:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
cursor.execute("SELECT * FROM users WHERE id=?", (user_input,))
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;What I learned is that security is not a feature you add later—it is something you enforce in every layer of data handling.&lt;/p&gt;

&lt;h3&gt;Conclusion: Senior Python Work Is System Thinking, Not Syntax&lt;/h3&gt;

&lt;p&gt;At this level, Python is no longer about writing code that works. It is about building systems that stay stable under load, scale without surprises, and fail in predictable ways.&lt;/p&gt;

&lt;p&gt;Performance, concurrency, memory, testing, dependencies, and security are not separate topics—they are interconnected failure surfaces.&lt;/p&gt;

&lt;p&gt;The biggest shift in my own thinking was realizing this: Python does not hide complexity. It reveals it over time.&lt;/p&gt;

&lt;p&gt;Senior development is not about avoiding problems. It is about designing systems where problems are visible, isolated, and controllable before they reach production.&lt;br&gt;
Source: &lt;a href="https://krun.pro/mastering-senior-python-pitfalls/" rel="noopener noreferrer"&gt;https://krun.pro/mastering-senior-python-pitfalls/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>senior</category>
      <category>challenges</category>
      <category>code</category>
    </item>
    <item>
      <title>10 Python Pitfalls</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Wed, 22 Apr 2026 21:45:21 +0000</pubDate>
      <link>https://dev.to/krun_dev/10-python-pitfalls-3pm7</link>
      <guid>https://dev.to/krun_dev/10-python-pitfalls-3pm7</guid>
      <description>&lt;h2&gt;10 Python Pitfalls That Scream You Are a Junior Developer&lt;/h2&gt;

&lt;p&gt;Python looks easy at first, but when your project hits production and heavy load, small mistakes can become big problems. This article covers 10 common pitfalls that slow your code, waste memory, and reveal you as a junior. If you want Python that runs fast, stays stable, and scales in 2026, this guide is for you.&lt;/p&gt;

&lt;p&gt;One common trap is mutable default arguments. Using lists or dictionaries as default parameters might seem handy, but Python creates the object once when the function is defined, and it gets shared across all calls. Data from one request can leak into another. The fix is to use None and create the object inside the function so each call starts fresh.&lt;/p&gt;
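&lt;p&gt;The trap and the fix side by side (a minimal sketch; function names are invented):&lt;/p&gt;

```python
# The trap: the default list is built once, when the function is defined,
# and then shared by every call that relies on the default.
def add_item_buggy(item, bucket=[]):
    bucket.append(item)
    return bucket

# The fix: default to None and create a fresh object inside the function.
def add_item(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(add_item_buggy("a"))  # ['a']
print(add_item_buggy("b"))  # ['a', 'b']  <- state leaked from the first call
print(add_item("a"))        # ['a']
print(add_item("b"))        # ['b']       <- each call starts fresh
```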

&lt;p&gt;Performance bottlenecks are everywhere. Heavy for-loops trigger type checks, lookups, and memory tasks for every item. On large datasets, this slows everything down. List comprehensions, generator expressions, or NumPy vectorization are faster and more efficient.&lt;/p&gt;
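&lt;p&gt;The same computation three ways (a sketch; the data size is arbitrary), from heaviest to leanest:&lt;/p&gt;

```python
data = list(range(100_000))

# Explicit loop: one interpreted round-trip (lookup, call, append) per item.
squares_loop = []
for x in data:
    squares_loop.append(x * x)

# List comprehension: same result via a specialized, faster bytecode path.
squares_comp = [x * x for x in data]

# Generator expression: no intermediate list at all when only the sum is needed.
total = sum(x * x for x in data)

assert squares_loop == squares_comp
assert total == sum(squares_comp)
print(total)
```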

&lt;p&gt;The Global Interpreter Lock (GIL) is often misunderstood. It blocks multiple threads from running Python bytecode at the same time, which limits CPU-bound tasks. Using the multiprocessing module spins up separate processes for each core and bypasses GIL.&lt;/p&gt;

&lt;p&gt;Memory management is another issue. Loading large datasets into memory at once can crash production. Generators let you process items one at a time, keeping memory use low and predictable.&lt;/p&gt;
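&lt;p&gt;A sketch of the pattern (the row shape is invented): the generator keeps exactly one row alive at a time, so memory stays flat no matter how large &lt;code&gt;n&lt;/code&gt; gets:&lt;/p&gt;

```python
def read_rows(n):
    # Yields rows one at a time instead of materializing a giant list.
    for i in range(n):
        yield {"id": i, "payload": "x" * 100}

def count_even_ids(rows):
    # Consumes the stream lazily; nothing is held beyond the current row.
    return sum(1 for row in rows if row["id"] % 2 == 0)

print(count_even_ids(read_rows(1_000_000)))  # 500000
```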

&lt;p&gt;Type hinting is essential. Dynamic typing is fine for small projects, but in larger codebases, missing hints lead to bugs. Tools like Mypy or Pyright catch errors before runtime and improve IDE autocompletion. Treat type hints as contracts between parts of your code.&lt;/p&gt;

&lt;p&gt;Async code has its pitfalls. Blocking calls inside async functions stop the event loop. Use awaitable, non-blocking calls and libraries like httpx or motor to maintain concurrency.&lt;/p&gt;

&lt;p&gt;Pythonic encapsulation avoids unnecessary boilerplate. Instead of writing explicit getters and setters for everything, use property decorators to keep your classes clean and readable.&lt;/p&gt;
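&lt;p&gt;A small sketch of the idiom (class and field names invented): plain attribute syntax on the outside, validation on the inside:&lt;/p&gt;

```python
class Order:
    def __init__(self, amount):
        self._amount = amount  # leading underscore: internal by convention

    @property
    def amount(self):
        # Reads look like plain attribute access: order.amount
        return self._amount

    @amount.setter
    def amount(self, value):
        # Validation lives here instead of a Java-style set_amount() method.
        if value < 0:
            raise ValueError("amount cannot be negative")
        self._amount = value

order = Order(10)
order.amount = 25      # routed through the setter
print(order.amount)    # 25
```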

&lt;p&gt;Error handling matters. Catch only expected exceptions and use context managers to manage resources. Blindly swallowing errors hides bugs and can make production unstable.&lt;/p&gt;

&lt;p&gt;Advanced data structures matter for performance. Using dictionaries for millions of objects wastes memory. Data classes or named tuples reduce overhead, provide structure, and are easier to debug.&lt;/p&gt;

&lt;p&gt;Efficient iteration is key. Avoid complex nested loops and use the itertools module. Functions like chain let you iterate over multiple collections without creating temporary lists in memory.&lt;/p&gt;
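&lt;p&gt;For example, &lt;code&gt;chain&lt;/code&gt; walks several collections as one lazy stream (the list contents here are invented), so no temporary concatenated list is ever built:&lt;/p&gt;

```python
from itertools import chain

recent_orders = [101, 102]
archived_orders = [55, 56, 57]

# chain() iterates both lists lazily; unlike recent_orders + archived_orders,
# it never allocates a combined temporary list.
total = sum(1 for _ in chain(recent_orders, archived_orders))
first_over_56 = next(x for x in chain(recent_orders, archived_orders) if x > 56)
print(total, first_over_56)  # 5 101
```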

&lt;p&gt;Mastering these pitfalls will make your code more stable, readable, and ready for high-load systems. Writing clever code is fun, but writing code that runs well in production is what separates juniors from senior Python developers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://krun.pro/10-python-anti-patterns/" rel="noopener noreferrer"&gt;https://krun.pro/10-python-anti-patterns/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>pitfalls</category>
      <category>junior</category>
    </item>
    <item>
      <title>Kotlin Coroutines in Production</title>
      <dc:creator>Krun_Dev</dc:creator>
      <pubDate>Wed, 22 Apr 2026 21:36:15 +0000</pubDate>
      <link>https://dev.to/krun_dev/kotlin-coroutines-in-production-5757</link>
      <guid>https://dev.to/krun_dev/kotlin-coroutines-in-production-5757</guid>
      <description>&lt;h2&gt;Your Coroutines Work Locally. Then Production Happens.&lt;/h2&gt;

&lt;p&gt;You wrote the async code. It's elegant, non-blocking, and runs beautifully on your machine. Then you deploy — and somewhere around 3 AM, Grafana wakes you up with a memory graph that looks like a ski slope. Welcome to Kotlin Coroutines in Production.&lt;/p&gt;

&lt;p&gt;This guide skips the Hello World phase entirely. It's about what happens when real load hits — thread starvation, silent memory leaks, and exception handlers that don't actually handle anything. The kind of bugs that only show up at scale and only at the worst possible time.&lt;/p&gt;

&lt;h2&gt;Scopes, Supervisors, and Why the Wrong Choice Crashes Everything&lt;/h2&gt;

&lt;p&gt;Most developers treat &lt;code&gt;coroutineScope&lt;/code&gt; and &lt;code&gt;supervisorScope&lt;/code&gt; as roughly the same thing. They are not. With &lt;code&gt;coroutineScope&lt;/code&gt;, one failing child cancels the parent and every sibling — great for all-or-nothing operations, catastrophic for independent tasks. In production, &lt;code&gt;supervisorScope&lt;/code&gt; is almost always the right call. Understanding the difference between &lt;code&gt;coroutineContext&lt;/code&gt; vs &lt;code&gt;coroutineScope&lt;/code&gt; vs &lt;code&gt;supervisorScope&lt;/code&gt; is what separates code that survives partial failures from code that doesn't.&lt;/p&gt;

&lt;h2&gt;Exception Handling That Actually Works&lt;/h2&gt;

&lt;p&gt;Wrapping &lt;code&gt;await()&lt;/code&gt; in a try-catch is not enough. By the time your catch block runs, the parent scope may already be cancelling. Exceptions in coroutines behave differently depending on whether you used &lt;code&gt;launch&lt;/code&gt; or &lt;code&gt;async&lt;/code&gt; — and a "swallowed" exception in production means 500 errors with no logs on the backend, or an "App has stopped" dialog on Android. The right pattern is a &lt;code&gt;CoroutineExceptionHandler&lt;/code&gt; installed at every root scope, paired with &lt;code&gt;supervisorScope&lt;/code&gt; to contain the blast radius.&lt;/p&gt;

&lt;h2&gt;Thread Starvation and the Custom Dispatcher You Actually Need&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Dispatchers.IO&lt;/code&gt; is a reasonable default. It is not enough when you mix non-blocking code with slow legacy database drivers under serious load. The answer is a custom coroutine dispatcher for heavy IO — an isolated fixed thread pool for the slow stuff, so the rest of your app stays responsive. Pair that with &lt;code&gt;limitedParallelism(n)&lt;/code&gt; on &lt;code&gt;Dispatchers.Default&lt;/code&gt; to cap background CPU work, and you have a proper bulkhead that keeps your latency-sensitive paths alive when everything else is under pressure.&lt;/p&gt;
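&lt;p&gt;The bulkhead principle is plain &lt;code&gt;java.util.concurrent&lt;/code&gt; underneath; in coroutine code you would wrap each pool with &lt;code&gt;asCoroutineDispatcher()&lt;/code&gt;. A stdlib-only sketch (pool sizes and sleep times are made up):&lt;/p&gt;

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors

// Bulkhead: slow legacy IO gets its own small pool, so saturating it cannot
// steal threads from the latency-sensitive path.
fun fastPathStillResponsive(): Long {
    val slowIoPool = Executors.newFixedThreadPool(4)   // legacy blocking drivers
    val fastPathPool = Executors.newFixedThreadPool(4) // latency-sensitive work
    // Saturate the slow pool with blocking "driver" calls.
    repeat(16) { slowIoPool.submit { Thread.sleep(500) } }
    // The fast path never shares those threads, so it stays snappy.
    val start = System.nanoTime()
    fastPathPool.submit(Callable { 1 + 1 }).get()
    val elapsedMs = (System.nanoTime() - start) / 1_000_000
    slowIoPool.shutdownNow()
    fastPathPool.shutdown()
    return elapsedMs
}

fun main() {
    println("fast-path latency while slow pool is saturated: ${fastPathStillResponsive()} ms")
}
```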

&lt;h2&gt;Leaks, Ghosts, and the Danger of GlobalScope&lt;/h2&gt;

&lt;p&gt;A coroutine lives in memory as long as its &lt;code&gt;Job&lt;/code&gt; is active. Lose the reference, and you have a ghost — running, consuming resources, invisible. The most common cause is &lt;code&gt;GlobalScope&lt;/code&gt; used for "just a quick task." The diagnostic tool is &lt;code&gt;DebugProbes&lt;/code&gt; from &lt;code&gt;kotlinx-coroutines-debug&lt;/code&gt;: it dumps every active coroutine with a stack trace so you can see exactly what's suspended and why. The long-term fix is simpler — never break the parent-child hierarchy, and always bind coroutines to a lifecycle-aware scope.&lt;/p&gt;

&lt;h2&gt;If It Works on Your Machine, That Is Not Enough&lt;/h2&gt;

&lt;p&gt;Structured concurrency pitfalls in large-scale systems, state confinement without locks, the island effect that leaves thousands of zombie tasks burning CPU — it's all in the full article.&lt;br&gt;&lt;br&gt;Production won't wait for you to finish the docs. But reading this first might mean you actually sleep through the night.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://krun.pro/kotlin-coroutines-in-production/" rel="noopener noreferrer"&gt;https://krun.pro/kotlin-coroutines-in-production/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>kotlin</category>
      <category>coroutines</category>
      <category>async</category>
      <category>code</category>
    </item>
  </channel>
</rss>
