<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Space</title>
    <description>The latest articles on DEV Community by Space (@bitsabhi).</description>
    <link>https://dev.to/bitsabhi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3211958%2F5ad0238f-c22c-4bb8-8403-55923eadea7c.png</url>
      <title>DEV Community: Space</title>
      <link>https://dev.to/bitsabhi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bitsabhi"/>
    <language>en</language>
    <item>
      <title>The bounty trap: how open source reward systems exploit the people they claim to serve</title>
      <dc:creator>Space</dc:creator>
      <pubDate>Tue, 07 Apr 2026 09:37:37 +0000</pubDate>
      <link>https://dev.to/bitsabhi/the-bounty-trap-how-open-source-reward-systems-exploit-the-people-they-claim-to-serve-2k7e</link>
      <guid>https://dev.to/bitsabhi/the-bounty-trap-how-open-source-reward-systems-exploit-the-people-they-claim-to-serve-2k7e</guid>
      <description>&lt;p&gt;&lt;strong&gt;Open source bounty systems — from Web3 audit contests to traditional bug bounties — share a single structural flaw that corrupts nearly every platform in the ecosystem: the entity deciding whether to pay is the same entity that benefits from not paying.&lt;/strong&gt; This conflict of interest, combined with AI-generated spam, platform collapses, and extreme earnings inequality, has created a system where skilled developers and security researchers routinely perform high-value work for free. The evidence spans every major platform and reveals not isolated bad actors but a systemic pattern of value extraction dressed up as opportunity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The judge is the defendant: why every platform has the same problem
&lt;/h2&gt;

&lt;p&gt;The core failure is architectural. In virtually every bounty system — HackerOne, Bugcrowd, Immunefi, Code4rena, Algora — &lt;strong&gt;the bounty poster unilaterally decides whether to pay&lt;/strong&gt;. There is no binding contract, no neutral arbiter, and no meaningful legal recourse for the researcher. Bug bounty terms are written with phrases like "at the company's sole discretion," and platforms consistently side with their paying customers over their unpaid labor force.&lt;/p&gt;

&lt;p&gt;Katie Moussouris, who helped create the Pentagon's bug bounty program and co-authored the ISO vulnerability disclosure standards, has publicly called this "an inherent conflict of interest." Andrew Crocker, Senior Staff Attorney at the EFF, noted that bounty program terms "give the company leeway to determine 'in its sole discretion' whether a researcher has met the criteria." Immunefi's own blog acknowledges: "We've seen some less-than-healthy behavior from projects, and whitehats end up feeling like they don't have any negotiating power and that they should be happy to just accept whatever payment they're offered."&lt;/p&gt;

&lt;p&gt;The tactics for avoiding payment are remarkably consistent across platforms. Companies retroactively narrow scope to exclude reported vulnerabilities. They mark critical findings as "informational" to avoid payouts. They claim bugs were "already known" — without producing evidence or timestamps. They silently patch reported vulnerabilities and then deny the report was valid. When Immunefi removed a project for failing to pay a &lt;strong&gt;minimum of $500,000&lt;/strong&gt; in bounties, the whitehats still received nothing; Immunefi has no enforcement mechanism beyond removal.&lt;/p&gt;

&lt;p&gt;The power asymmetry extends to silencing. &lt;strong&gt;Bugcrowd classifies all submissions as "Confidential Information of the Program Owner"&lt;/strong&gt; by default. Researchers who go public risk suspension or permanent bans — as Trust Security discovered when Immunefi suspended them for 90 days after they criticized a bounty decision publicly. Jonathan Leitschuh, who discovered a critical Zoom vulnerability, declined the bounty because the NDA would have prevented him from ever discussing the flaw: "A lot of these programs are structured on this idea of non-disclosure. What I end up feeling like is that they are trying to buy researcher silence." Bruce Schneier shared academic research showing "legal agreements surrounding vulnerability disclosure muzzle researchers while allowing companies to not fix the vulnerabilities."&lt;/p&gt;

&lt;h2&gt;
  
  
  A wall of shame worth $2.5 million in unpaid Web3 bounties
&lt;/h2&gt;

&lt;p&gt;The Web3 bounty ecosystem, anchored by Immunefi and Code4rena, demonstrates these failures at scale. The Bug Bounty Wall of Shame — a community-maintained ledger of projects that "rugged" security researchers — documents nearly &lt;strong&gt;$2.5 million in unpaid bounties&lt;/strong&gt; across dozens of projects.&lt;/p&gt;

&lt;p&gt;The cases follow a pattern. Arbitrum advertised a $2 million maximum bounty; when a whitehat found a vulnerability putting &lt;strong&gt;352,000 ETH (~$680 million)&lt;/strong&gt; at risk, they received only 25% of the maximum. Cronos had &lt;strong&gt;$2.5 million at risk&lt;/strong&gt;; after the project silently fixed the bug before even responding to the report, the researcher received &lt;strong&gt;$1,600&lt;/strong&gt; as a "token of appreciation." dHEDGE, with &lt;strong&gt;$14.44 million at risk&lt;/strong&gt;, paid &lt;strong&gt;$500 in "goodwill."&lt;/strong&gt; Magic Link ignored a vulnerability affecting $10 million in user funds through eight reminders over a month, paid nothing, then publicly announced the patch as a "new security feature." GhostMarket ignored Immunefi's mediation order entirely and went silent.&lt;/p&gt;

&lt;p&gt;Immunefi's explicit "No Fix, No Pay" policy creates a perverse incentive: projects can acknowledge a critical vulnerability and still avoid payment by choosing not to fix it. The platform admits this is "an eternally frustrating experience for whitehats."&lt;/p&gt;

&lt;p&gt;One anonymous researcher documented "14 weeks in Immunefi limbo" on Medium after discovering a critical vulnerability in a project advertising seven-figure payouts. The project closed the report within two days citing an irrelevant technical reason. After Immunefi mediation confirmed the report was valid, the project changed its rejection reason to "duplicate" — two months after submission. The researcher observed: "Beyond pausing a BBP, Immunefi actually has no enforcement mechanisms in place for most projects, and bad-faith actors clearly don't care about being booted from the platform."&lt;/p&gt;

&lt;p&gt;Code4rena's competitive audit model compounds the problem with extreme earnings inequality. In 2023, the platform processed &lt;strong&gt;31,512 bug submissions&lt;/strong&gt; across 114 audits and paid out &lt;strong&gt;$4,823,059&lt;/strong&gt; total. Of more than 10,000 registered wardens, only &lt;strong&gt;1,323 earned anything&lt;/strong&gt; — meaning roughly &lt;strong&gt;87% earned zero&lt;/strong&gt;. The average earning warden took home &lt;strong&gt;$3,646 for the year&lt;/strong&gt;. cmichel, the first person to earn $1 million on Code4rena, watched his hourly rate drop from $2,000 to $500 as competition increased from ~10 to ~59 wardens per contest. "This is great for the sponsors who receive an insane amount of value," he wrote, "but bad for the auditors as they all compete for the same pot." Zellic, which acquired Code4rena in 2024, eventually admitted the economics "make more sense as a public good rather than a rent-seeking business."&lt;/p&gt;

&lt;h2&gt;
  
  
  AI slop made the economics completely untenable
&lt;/h2&gt;

&lt;p&gt;On January 31, 2026, Daniel Stenberg shut down curl's bug bounty program after 6.5 years, &lt;strong&gt;87 confirmed vulnerabilities&lt;/strong&gt;, and over &lt;strong&gt;$100,000 paid to researchers&lt;/strong&gt;. The reason: AI-generated submissions had overwhelmed the program's volunteer security team. The confirmed vulnerability rate plummeted from above &lt;strong&gt;15%&lt;/strong&gt; to below &lt;strong&gt;5%&lt;/strong&gt; — "not even one in twenty was real." By July 2025, submission volume had spiked to &lt;strong&gt;eight times the normal rate&lt;/strong&gt;. In the first 21 days of January 2026 alone, 20 submissions arrived with zero valid vulnerabilities.&lt;/p&gt;

&lt;p&gt;The reports weren't obviously bad. They featured walls of perfectly formatted text, plausible-sounding technical language, and references to functions that simply don't exist. Stenberg wrote: "We are effectively being &lt;strong&gt;DDoSed&lt;/strong&gt;," and later: "The never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live."&lt;/p&gt;

&lt;p&gt;The economic mechanism is simple. Previously, generating a credible security report required time, skill, and deep codebase knowledge — a natural quality filter. AI reduced the submission cost to near zero while leaving the triage cost unchanged or higher, since AI-generated reports look more plausible and take longer to debunk. This is a classic market failure: when the cost of filing claims drops to zero but the cost of evaluating claims stays constant, the system collapses.&lt;/p&gt;
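&lt;p&gt;The arithmetic of that collapse is easy to sketch. The numbers below are illustrative assumptions, not curl's actual figures; only the directions (8x volume, validity falling from ~15% to under 5%) come from the reporting above.&lt;/p&gt;

```python
# Back-of-envelope model of the triage asymmetry. All numbers are assumptions
# chosen to mirror the reported trend, not measured data.

def triage_hours(submissions, valid_rate, hours_per_report):
    """Total reviewer hours, and hours spent per valid finding."""
    valid = submissions * valid_rate
    total = submissions * hours_per_report
    return total, (total / valid if valid else float("inf"))

# Before: writing a credible report was expensive, so volume self-filtered.
before_total, before_per_valid = triage_hours(100, 0.15, 2)
# After: AI makes submission nearly free; volume up 8x, validity under 5%.
after_total, after_per_valid = triage_hours(800, 0.05, 2)

assert after_total == 8 * before_total          # reviewer load scales with volume
assert after_per_valid > before_per_valid       # each real bug costs far more triage
```

The submitter's cost per claim fell; the evaluator's cost per claim did not, so the cost of surfacing each genuine vulnerability rises even as total reviewer load multiplies.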

&lt;p&gt;Curl was not alone. Django's Security Team documented receiving AI-fabricated reports "on a nearly daily basis." Node.js dealt with a &lt;strong&gt;19,000-line AI-generated pull request&lt;/strong&gt; and imposed minimum reputation scores. libxml2's sole maintainer ended support for embargoed vulnerability reports entirely, citing the "unsustainable burden of handling security triage as an unpaid volunteer." CycloneDX pulled its bounty program after receiving "almost entirely AI slop reports." Apache Log4j's maintainer reported reviewing &lt;strong&gt;67 submissions&lt;/strong&gt; since July 2024, half arriving in the final two months. tldraw began &lt;strong&gt;auto-closing all external pull requests&lt;/strong&gt; in January 2026.&lt;/p&gt;

&lt;p&gt;Mitchell Hashimoto's Ghostty terminal emulator moved to invitation-only contributions, requiring all contributors be vouched for by existing trusted members. His assessment: "It's a fucking war zone out here man. Maintainer morale at an all time low." RedMonk analyst Kate Holterhoff coined the term "AI Slopageddon" to describe the phenomenon. An Oxford Academic paper studying COVID-19's supply shock on Bugcrowd — when submissions increased &lt;strong&gt;151%&lt;/strong&gt; — showed that AI represents a far larger version of the same dynamic.&lt;/p&gt;

&lt;h2&gt;
  
  
  HackerOne and Bugcrowd: the traditional platforms are failing too
&lt;/h2&gt;

&lt;p&gt;Traditional security bounty platforms exhibit the same structural failures with additional layers of institutional dysfunction. Tommy DeVoss, one of HackerOne's highest-earning researchers with over &lt;strong&gt;$2 million earned&lt;/strong&gt;, stated flatly: "Historically, mediation with HackerOne has been worthless." John Jackson of Sakura Samurai described submitting a vulnerability to Ford, watching it get fixed, being ignored for months on disclosure requests, and then getting &lt;strong&gt;banned from Ford's program&lt;/strong&gt; when he pushed back. Patrick Martin had a plaintext credential disclosure go &lt;strong&gt;10 months without response or payment&lt;/strong&gt; despite being silently fixed.&lt;/p&gt;

&lt;p&gt;In February 2022, HackerOne &lt;strong&gt;froze $50,000 in already-earned bounties&lt;/strong&gt; from Russian researcher Anton Subbotin after the Ukraine invasion — for work already completed, discussed, and patched. CEO Mårten Mickos initially tweeted the funds would be donated to UNICEF without the researcher's consent. In January 2026, researcher Jakub Ciolek reported that HackerOne completely ghosted him on an &lt;strong&gt;$8,500 payout&lt;/strong&gt; from the Internet Bug Bounty program for months; the company responded only after The Register published the story. HackerOne's IBB program has since paused submissions entirely.&lt;/p&gt;

&lt;p&gt;The earnings data reveals stark inequality behind the platform's marketing. HackerOne touts &lt;strong&gt;$81 million in annual payouts&lt;/strong&gt; and "hacker millionaires," but only about &lt;strong&gt;1.6% of registered accounts&lt;/strong&gt; produce valid work. &lt;strong&gt;Only 12% of researchers earn $20,000 or more annually.&lt;/strong&gt; The median per-bounty payment is approximately &lt;strong&gt;$800&lt;/strong&gt;. One researcher tracked &lt;strong&gt;782 hours over 150 days&lt;/strong&gt; and earned $5,650 — an hourly rate of &lt;strong&gt;$9.80&lt;/strong&gt;, below minimum wage in most Western countries. A critical RCE vulnerability that pays $500–$5,000 on bounty platforms could fetch &lt;strong&gt;$50,000–$500,000&lt;/strong&gt; on the gray market — a gap of &lt;strong&gt;100x to 500x&lt;/strong&gt;. Laurens "lvh" Van Houtven of Latacora put it bluntly: "HackerOne has weaponized triage... Their business model is misery."&lt;/p&gt;

&lt;p&gt;Apple's program drew particular criticism. Researcher RenwaX23 discovered a Universal Cross-Site Scripting vulnerability in Safari rated &lt;strong&gt;Critical at 9.8/10&lt;/strong&gt;, capable of impersonating users and accessing iCloud. Apple paid &lt;strong&gt;$1,000&lt;/strong&gt; — from a program advertising payouts up to $2 million. The Washington Post interviewed more than two dozen researchers who complained about Apple's pattern of slow fixes, limited feedback, and non-payment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bountysource proved platforms can simply steal the money
&lt;/h2&gt;

&lt;p&gt;The Bountysource collapse is the clearest illustration of how bounty systems can fail catastrophically. The platform, which facilitated bounties across &lt;strong&gt;55,000 GitHub issues&lt;/strong&gt; for projects including BorgBackup, elementary OS, Nextcloud, and Nim, was acquired by cryptocurrency company CanYa in 2017 and then sold to The Blockchain Group in 2020.&lt;/p&gt;

&lt;p&gt;On &lt;strong&gt;June 16, 2020&lt;/strong&gt;, Bountysource emailed users announcing a new Terms of Service with a critical clause: any bounty unclaimed for two years would be "retained by Bountysource." The change was retroactive, with only two weeks' notice. After backlash forced a reversal, trust was destroyed — elementary OS published "Goodbye, Bountysource" and withdrew entirely.&lt;/p&gt;

&lt;p&gt;The final collapse was quieter. By mid-2023, payouts stopped entirely. Bountysource went silent on all communication. The Blockchain Group filed for bankruptcy in November 2023. Investigative reporting by Evan Boehs documented at least &lt;strong&gt;$21,000 in stolen developer earnings&lt;/strong&gt; — money for work already completed. The NewPipe project lost approximately &lt;strong&gt;€6,400&lt;/strong&gt; in accumulated bounties. Users on GitHub issue #1586 — titled "CRITICAL: Bountysource is Insolvent, do not use!" — called the behavior "abuse of the escrow, essentially an embezzlement" and discussed filing with French financial authorities.&lt;/p&gt;

&lt;p&gt;Boehs observed: "There has been shockingly little discussion of this event. The community quietly accepted their loss, and the voices of developers who lost thousands of dollars were never amplified."&lt;/p&gt;

&lt;h2&gt;
  
  
  The maintainer-as-judge problem enables quiet self-dealing
&lt;/h2&gt;

&lt;p&gt;Beyond platform-level failures, bounty systems create perverse dynamics at the project level. In September 2023, Wasmer's CEO used Algora to post a &lt;strong&gt;$5,000 bounty&lt;/strong&gt; on the Zig project's repository without consulting maintainers, triggering multiple developers to start duplicate work simultaneously. Zig's community manager responded: "We're not going to let startups burn our contributor community so that they can squeeze one extra tweet out of their moat-building efforts." The Zig team later published a landmark critique arguing that development bounties "foster competition at the expense of cooperation," transfer all risk to workers, and "penalize any form of thoughtfulness in favor of reckless action."&lt;/p&gt;

&lt;p&gt;Developer Valentin Chmara documented a pattern on the Orama project: "The maintainer eventually closed every related PR, including mine, and shipped the fix internally." A review of 23 crypto bounty programs found systemic use of Contributor License Agreements as gates — developers sign away code rights, their PR gets closed without merge, and their code may appear in the product anyway. Academic research on Bountysource confirms the pattern: &lt;strong&gt;bounty issues actually have lower closing rates than non-bounty issues&lt;/strong&gt;, and it takes longer for bounty issues to get closed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What a fair bounty system would actually require
&lt;/h2&gt;

&lt;p&gt;A handful of models point toward solutions. The FreeBSD Foundation acts as an intermediary, taking money from companies wanting a feature, finding a qualified contractor, and ensuring quality review — exclusive assignment rather than competition. Opire charges &lt;strong&gt;zero fees&lt;/strong&gt; to developers, placing all platform costs on bounty creators. Immunefi enforces "The Bug Bounty Program Is Law" — projects cannot retroactively change terms, and minimum payouts for critical bugs are binding. Zellic now runs Code4rena contests at zero platform fee, acknowledging the old model was extractive.&lt;/p&gt;

&lt;p&gt;The structural reforms needed are clear from the evidence. &lt;strong&gt;Escrow&lt;/strong&gt; would prevent companies from receiving vulnerability information before committing funds. &lt;strong&gt;Independent arbitration&lt;/strong&gt; by named third parties would eliminate the judge-is-the-defendant problem. &lt;strong&gt;Binding scope&lt;/strong&gt; would prevent retroactive exclusions. &lt;strong&gt;Duplicate splitting&lt;/strong&gt; — dividing bounties among researchers who independently find the same bug within a reasonable window — would compensate parallel work. &lt;strong&gt;Mandatory disclosure rights&lt;/strong&gt; after 90 days would prevent indefinite NDAs from buying silence. &lt;strong&gt;Exclusive assignment&lt;/strong&gt; for development bounties would eliminate the wasteful battle-royale dynamic and allow collaboration.&lt;/p&gt;
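&lt;p&gt;Of these reforms, duplicate splitting is the easiest to make precise. A minimal sketch, assuming a hypothetical 72-hour grace window (the "reasonable window" above is deliberately unspecified in any platform's rules):&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Hypothetical "duplicate splitting" rule: researchers who independently report
# the same bug inside a grace window share the bounty evenly, instead of
# everyone after the first submitter receiving nothing. The 72-hour window is
# an assumption for illustration.

def split_bounty(bounty, report_times, window=timedelta(hours=72)):
    """Return per-researcher payouts in submission order."""
    first = min(report_times)
    late = [t - first > window for t in report_times]  # past the grace window?
    share = bounty / late.count(False)
    return [0.0 if is_late else share for is_late in late]

t0 = datetime(2026, 1, 1)
payouts = split_bounty(9000.0, [t0, t0 + timedelta(hours=10), t0 + timedelta(days=5)])
assert payouts == [4500.0, 4500.0, 0.0]  # two parallel finders split; a late report does not
```

The point of the window is incentive design: it removes the race that punishes careful verification, without letting reports trickle in forever after a fix ships.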

&lt;h2&gt;
  
  
  The quiet extraction engine behind the opportunity narrative
&lt;/h2&gt;

&lt;p&gt;The bounty ecosystem functions as a remarkably efficient mechanism for extracting high-value labor at below-market rates. A program paying &lt;strong&gt;$100,000 per year&lt;/strong&gt; in bounties replaces &lt;strong&gt;$500,000+&lt;/strong&gt; of professional penetration testing. The platform takes a 20% cut. Researchers split the remainder for work that, if sold on the gray market, would be worth orders of magnitude more. HackerOne's own data shows top earners in India earn &lt;strong&gt;16x the median local software engineer salary&lt;/strong&gt; — revealing that developing-world researchers accept far lower absolute amounts, creating a global race to the bottom.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;63% of ethical hackers have withheld security flaws&lt;/strong&gt; they discovered, with the top reason being threatening legal language. Parsia Hakimian, a senior offensive security engineer, compared the economics to "multilevel marketing operations, where very few people make most of the money while the rest don't make much at all." Legal experts told CSO Online that bounty platforms likely violate California AB 5 and the federal Fair Labor Standards Act by treating researchers as independent contractors when they meet employee criteria.&lt;/p&gt;

&lt;p&gt;The system persists because the marketing works. Headlines about hacker millionaires and $81 million annual payouts obscure the reality that the median researcher earns poverty-level wages for highly skilled work, has no legal protection, cannot build a public portfolio due to NDAs, and can have completed work rejected without explanation or recourse. As one Hacker News commenter who spent seven years in the bug bounty community summarized: "The problem is that the majority of companies don't act in good faith. Even when you have something fully exploitable and valid, they will many times find some way to not pay you or lower the severity to pay you very little."&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The evidence across every major bounty platform — Immunefi, Code4rena, HackerOne, Bugcrowd, Bountysource, Algora — reveals not a collection of individual failures but a &lt;strong&gt;single structural defect replicated across the entire ecosystem&lt;/strong&gt;. When the party deciding payment is the party that benefits from non-payment, exploitation is not a bug but the equilibrium state. AI slop has accelerated the collapse by making it economically impossible for volunteer-maintained programs to process submissions, but the underlying power asymmetry existed long before large language models. The bounty economy's fundamental promise — that talented individuals can earn fair compensation for valuable security and development work — requires structural reforms that no major platform has yet been willing to implement fully: escrow, binding contracts, independent arbitration, and the elimination of the judge-as-defendant model. Until those changes arrive, bounty systems will continue to function as what they demonstrably are: mechanisms for transferring risk and extracting labor from the people who can least afford to absorb it.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>career</category>
    </item>
    <item>
      <title>Built a Tool That Deep-Links to Exact Answers, Not Question Pages</title>
      <dc:creator>Space</dc:creator>
      <pubDate>Tue, 31 Mar 2026 11:57:16 +0000</pubDate>
      <link>https://dev.to/bitsabhi/built-a-tool-that-deep-links-to-exact-answers-not-question-pages-2i6e</link>
      <guid>https://dev.to/bitsabhi/built-a-tool-that-deep-links-to-exact-answers-not-question-pages-2i6e</guid>
      <description>&lt;h2&gt;
  
  
  The Problem That Wouldn't Stop Bugging Me
&lt;/h2&gt;

&lt;p&gt;I spent 6+ years building HashiCorp Vault infrastructure at Expedia. Every week, the same pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Junior dev hits a Vault 403 error&lt;/li&gt;
&lt;li&gt;They search Stack Overflow&lt;/li&gt;
&lt;li&gt;They find a question page with 23 answers&lt;/li&gt;
&lt;li&gt;They scroll past 8 wrong answers to find the right one&lt;/li&gt;
&lt;li&gt;Next week, different dev, same error, same scroll&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The answer existed. The address was permanent — &lt;code&gt;stackoverflow.com/a/38716397&lt;/code&gt;. But nobody could get to it directly. They always landed on the question page and had to dig.&lt;/p&gt;

&lt;p&gt;One day I was looking at how Slack threads work internally. Every message has a coordinate: &lt;code&gt;channel/p{unix_microseconds}?thread_ts={parent_ts}&lt;/code&gt;. The timestamp IS the ID. If you know the coordinate, you go straight to the message — no scrolling, no searching.&lt;/p&gt;

&lt;p&gt;I realized: this isn't just Slack. Every platform has permanent answer addresses.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Stack Overflow: &lt;code&gt;/a/{answer_id}&lt;/code&gt; — the accepted answer, not the question&lt;/li&gt;
&lt;li&gt;Reddit: &lt;code&gt;/comments/{post}/{slug}/{comment_id}&lt;/code&gt; — the top comment, not the post&lt;/li&gt;
&lt;li&gt;Hacker News: &lt;code&gt;/item?id={objectID}&lt;/code&gt; — if Algolia says it's a comment, the objectID IS the deep-link&lt;/li&gt;
&lt;li&gt;GitHub: &lt;code&gt;/{org}/{repo}/issues/{n}&lt;/code&gt; — direct to the issue&lt;/li&gt;
&lt;li&gt;Twitter/X: &lt;code&gt;/{user}/status/{snowflake}&lt;/code&gt; — snowflake IDs embed the timestamp: &lt;code&gt;ts = (id &amp;gt;&amp;gt; 22) + 1288834974657&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every answer on every platform has a deterministic address in &lt;code&gt;(platform, thread_id, timestamp)&lt;/code&gt; space. I started calling this "thread coordinates."&lt;/p&gt;
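&lt;p&gt;The snowflake arithmetic from the list above can be checked in a few lines. Integer division and multiplication by 2**22 stand in for the bit-shifts; the inverse function is purely illustrative.&lt;/p&gt;

```python
# Decoding a Twitter/X snowflake ID: the top bits are milliseconds since the
# Twitter epoch, so the ID alone recovers the creation time.

TWITTER_EPOCH_MS = 1288834974657

def snowflake_to_unix_ms(status_id):
    # Equivalent to the (id >> 22) + 1288834974657 formula in the list above.
    return status_id // (2 ** 22) + TWITTER_EPOCH_MS

def unix_ms_to_min_snowflake(unix_ms):
    # Illustrative inverse: the smallest ID mintable at that millisecond.
    return (unix_ms - TWITTER_EPOCH_MS) * (2 ** 22)

noon_2020_utc = 1577880000000  # 2020-01-01 12:00:00 UTC, in Unix milliseconds
assert snowflake_to_unix_ms(unix_ms_to_min_snowflake(noon_2020_utc)) == noon_2020_utc
```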

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;phi-thread is a Chrome extension (also a Python CLI on PyPI) that searches across 5 platforms simultaneously and returns deep-links to the &lt;strong&gt;exact answer&lt;/strong&gt; — not the question page, not the post, the answer itself.&lt;/p&gt;

&lt;p&gt;Here's what happens when you search "docker container keeps restarting":&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stack Overflow:&lt;/strong&gt; The API returns questions. But phi-thread doesn't stop there — it fetches the &lt;code&gt;accepted_answer_id&lt;/code&gt;, calls the answers endpoint, pulls the answer body, and gives you &lt;code&gt;stackoverflow.com/a/38716397&lt;/code&gt;. You click, you're at the fix. No scrolling past 22 other answers.&lt;/p&gt;
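&lt;p&gt;With the network calls stubbed out, the Stack Overflow step reduces to a tiny mapping. In the Stack Exchange API, question objects carry an &lt;code&gt;accepted_answer_id&lt;/code&gt; field; the question ID below is a placeholder, while the answer ID is the one from the example above.&lt;/p&gt;

```python
# Sketch of the deep-link construction only; the real extension first fetches
# the question (and answer body) from the Stack Exchange API.

def answer_deep_link(question):
    """Turn a question API object into a direct /a/{answer_id} link, if one exists."""
    answer_id = question.get("accepted_answer_id")
    if answer_id is None:
        return None  # no accepted answer: fall back to the question page
    return f"https://stackoverflow.com/a/{answer_id}"

q = {"question_id": 12345, "accepted_answer_id": 38716397}  # question_id is a placeholder
assert answer_deep_link(q) == "https://stackoverflow.com/a/38716397"
assert answer_deep_link({"question_id": 12345}) is None
```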

&lt;p&gt;&lt;strong&gt;Reddit:&lt;/strong&gt; For each matching post, phi-thread fetches &lt;code&gt;{permalink}.json?limit=1&amp;amp;sort=top&lt;/code&gt; to extract the top comment. You get a link to the actual comment, not the post.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hacker News:&lt;/strong&gt; Algolia's API returns both stories AND comments. If the result is a comment (has &lt;code&gt;story_id&lt;/code&gt; and &lt;code&gt;comment&lt;/code&gt; tag), the &lt;code&gt;objectID&lt;/code&gt; is already the deep-link: &lt;code&gt;news.ycombinator.com/item?id={objectID}&lt;/code&gt;.&lt;/p&gt;
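&lt;p&gt;The Hacker News branch is the simplest of the three. Algolia hits carry a &lt;code&gt;_tags&lt;/code&gt; array that includes &lt;code&gt;comment&lt;/code&gt; for comments; the &lt;code&gt;objectID&lt;/code&gt; in this sketch is a made-up value.&lt;/p&gt;

```python
# Classify an Algolia hit and build the item deep-link. The objectID works for
# both stories and comments, since HN gives every item a single ID space.

def is_comment(hit):
    return "comment" in hit.get("_tags", [])

def hn_deep_link(hit):
    return f"https://news.ycombinator.com/item?id={hit['objectID']}"

comment_hit = {"objectID": "9224", "_tags": ["comment"], "story_id": 1}  # illustrative values
assert is_comment(comment_hit)
assert hn_deep_link(comment_hit) == "https://news.ycombinator.com/item?id=9224"
```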

&lt;p&gt;Then it ranks them. Platform score × title relevance. A perfectly matching SO answer with 5 upvotes beats a Reddit post with 500 upvotes but irrelevant title.&lt;/p&gt;
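&lt;p&gt;A minimal version of that ranking rule, with assumed platform weights and word overlap standing in for whatever relevance measure the extension actually uses:&lt;/p&gt;

```python
# "Platform score x title relevance" sketched literally. The weights and the
# overlap-based relevance function are assumptions for illustration.

PLATFORM_WEIGHT = {"stackoverflow": 1.0, "github": 0.9, "hackernews": 0.8, "reddit": 0.7}

def title_relevance(query, title):
    """Fraction of query words that appear in the result title (0..1)."""
    q = set(query.lower().split())
    return len(q.intersection(title.lower().split())) / len(q) if q else 0.0

def score(result, query):
    return PLATFORM_WEIGHT.get(result["platform"], 0.5) * title_relevance(query, result["title"])

so = {"platform": "stackoverflow", "title": "docker container keeps restarting"}
off_topic = {"platform": "reddit", "title": "what is your favorite terminal emulator"}
assert score(so, "docker container keeps restarting") > score(off_topic, "docker container keeps restarting")
```

Multiplying the two terms is what lets a modest, on-point Stack Overflow answer outrank a popular but irrelevant Reddit post: a zero-relevance title zeroes the score regardless of platform or votes.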

&lt;h2&gt;
  
  
  The Three-Layer Architecture
&lt;/h2&gt;

&lt;p&gt;The interesting part (to me) is how the knowledge base builds itself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Layer 1 — Exact cache.&lt;/strong&gt; SHA-256 hash of the normalized query → JSON in &lt;code&gt;chrome.storage.local&lt;/code&gt;. 24-hour TTL. Same exact question = instant results, &amp;lt; 1ms.&lt;/p&gt;
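&lt;p&gt;Layer 1 in miniature, with a plain dict standing in for &lt;code&gt;chrome.storage.local&lt;/code&gt; since this sketch runs outside the extension:&lt;/p&gt;

```python
import hashlib
import time

# Exact-match cache keyed by the SHA-256 of the normalized query, 24-hour TTL.

TTL_SECONDS = 24 * 3600
_store = {}  # stand-in for chrome.storage.local

def cache_key(query):
    normalized = " ".join(query.lower().split())  # collapse case and whitespace
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def cache_put(query, results, now=None):
    _store[cache_key(query)] = {"at": now or time.time(), "results": results}

def cache_get(query, now=None):
    entry = _store.get(cache_key(query))
    if entry is None:
        return None
    age = (now or time.time()) - entry["at"]
    return entry["results"] if TTL_SECONDS > age >= 0 else None

cache_put("Docker   container restarting", ["so/a/1"], now=1000.0)
assert cache_get("docker container restarting", now=2000.0) == ["so/a/1"]      # normalization hits
assert cache_get("docker container restarting", now=1000.0 + TTL_SECONDS + 1) is None  # expired
```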

&lt;p&gt;&lt;strong&gt;Layer 2 — Keyword index.&lt;/strong&gt; This is the one I'm most proud of. Every answer that comes back from a search gets its keywords extracted. Each keyword maps to the answer's coordinate and score in a persistent index.&lt;/p&gt;

&lt;p&gt;Why this matters: you search "docker container restarting" today. Tomorrow you search "docker restart loop" — different query, but the keyword index finds yesterday's answers because they share the keywords "docker" and "restart." &lt;strong&gt;The KB builds itself from your search history.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The index grows forever. After a few hundred searches, it knows your stack. Searches for things you've queried before resolve from the index in ~5ms instead of 12 seconds of live API calls.&lt;/p&gt;
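&lt;p&gt;Layer 2 is an inverted index at heart. A minimal sketch, assuming a toy stopword list and a naive suffix-stripping stemmer (so "restarting" and "restart" meet at the same keyword):&lt;/p&gt;

```python
# Every answer's title keywords are folded into a persistent index; a
# differently-worded query later hits yesterday's answers via shared keywords.

STOPWORDS = {"the", "a", "is", "my", "keeps", "loop", "how", "to", "fix"}

def stem(word):
    return word[:-3] if word.endswith("ing") else word  # naive, for illustration

def keywords(text):
    return {stem(w) for w in text.lower().split() if w not in STOPWORDS}

index = {}  # keyword -) {url: best score seen}

def index_answer(title, url, score):
    for kw in keywords(title):
        bucket = index.setdefault(kw, {})
        bucket[url] = max(score, bucket.get(url, 0.0))

def lookup(query):
    votes = {}
    for kw in keywords(query):
        for url, score in index.get(kw, {}).items():
            votes[url] = votes.get(url, 0.0) + score
    return sorted(votes, key=votes.get, reverse=True)

index_answer("docker container keeps restarting", "stackoverflow.com/a/38716397", 1.0)
# Tomorrow's differently-worded query still finds yesterday's answer:
assert lookup("docker restart loop") == ["stackoverflow.com/a/38716397"]
```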

&lt;p&gt;&lt;strong&gt;Layer 3 — Live search.&lt;/strong&gt; Parallel &lt;code&gt;fetch()&lt;/code&gt; calls to SO, Reddit, HN, GitHub, and DuckDuckGo (catches Medium, Dev.to, blogs, Twitter). All in the service worker, all client-side.&lt;/p&gt;
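&lt;p&gt;The fan-out shape of Layer 3, sketched with &lt;code&gt;asyncio&lt;/code&gt; and stubbed fetchers (the extension itself does the same with parallel &lt;code&gt;fetch()&lt;/code&gt; calls in the service worker):&lt;/p&gt;

```python
import asyncio

# All platforms are queried concurrently, so total latency tracks the slowest
# platform rather than the sum. The fetchers here are stubs, not real APIs.

async def search_stackoverflow(query):
    await asyncio.sleep(0)  # stands in for the network round-trip
    return [{"platform": "stackoverflow", "url": "stackoverflow.com/a/38716397"}]

async def search_reddit(query):
    await asyncio.sleep(0)
    return [{"platform": "reddit", "url": "reddit.com/comments/abc/slug/def"}]

async def live_search(query):
    batches = await asyncio.gather(search_stackoverflow(query), search_reddit(query))
    return [result for batch in batches for result in batch]

results = asyncio.run(live_search("docker container keeps restarting"))
assert {r["platform"] for r in results} == {"stackoverflow", "reddit"}
```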

&lt;h2&gt;
  
  
  The "CONNECT" Feature
&lt;/h2&gt;

&lt;p&gt;This is the one I'm most excited about. When you're on a Stack Overflow question page, phi-thread automatically extracts the question title, searches OTHER platforms (Reddit, HN, GitHub), and shows you a small floating panel: "φ 3 answers found on other platforms."&lt;/p&gt;

&lt;p&gt;The question you're reading on SO might already be answered in an HN comment or a Reddit thread. phi-thread bridges that gap. No one platform has all the answers, but collectively they do.&lt;/p&gt;

&lt;h2&gt;
  
  
  Zero Dependencies, Zero Server
&lt;/h2&gt;

&lt;p&gt;The entire extension is vanilla JS — no React, no build step, no bundler. The search engine is ~550 lines of code. It uses &lt;code&gt;fetch()&lt;/code&gt; for API calls and &lt;code&gt;chrome.storage.local&lt;/code&gt; for persistence. Installs in 1 second.&lt;/p&gt;

&lt;p&gt;No API keys. All platforms are queried through their public free-tier APIs. Rate limits are generous enough for individual use.&lt;/p&gt;

&lt;p&gt;No server. Everything runs in your browser. Your search history, your keyword index, your cache — all local. Nothing leaves your machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Spacetime Analogy (for the Curious)
&lt;/h2&gt;

&lt;p&gt;I found a fascinating paper while building this: Krioukov et al. (2012) proved that complex networks and de Sitter spacetime share identical large-scale causal structure. Past light cones in spacetime map to hyperbolic discs in networks, both producing the same power-law degree distributions.&lt;/p&gt;

&lt;p&gt;This isn't a coincidence I'm forcing. The math is genuinely there. Lamport's happened-before relation in distributed systems (1978) was explicitly inspired by special relativity. Partially ordered sets — where "A happened before B" is defined but "A and B are concurrent" is also possible — describe both spacetime events and thread replies.&lt;/p&gt;

&lt;p&gt;An answer on Stack Overflow and an event in spacetime are both points in a partially ordered set with a permanent coordinate. I wrote a literature review on this if anyone's interested (links in the repo).&lt;/p&gt;

&lt;p&gt;I'm not claiming threads ARE spacetime. But they share mathematical structure, and that structure is what makes permanent answer coordinates possible.&lt;/p&gt;
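&lt;p&gt;The shared structure is easy to make concrete with vector clocks, the standard device for Lamport's relation. Two replies to the same root are ordered relative to the root but not to each other, a state a plain timestamp sort cannot express:&lt;/p&gt;

```python
# Lamport's happened-before on vector clocks: a precedes b iff a is
# componentwise no greater than b and strictly smaller somewhere.

def happened_before(a, b):
    return all(y >= x for x, y in zip(a, b)) and a != b

root    = (1, 0)   # the opening post
reply_a = (2, 0)   # a reply that saw only the root
reply_b = (1, 1)   # an independent reply that also saw only the root

assert happened_before(root, reply_a)
assert happened_before(root, reply_b)
# Neither reply precedes the other: they are concurrent, which is exactly
# the "partially ordered" part of a partially ordered set.
assert not happened_before(reply_a, reply_b)
assert not happened_before(reply_b, reply_a)
```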

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The self-building KB has an interesting property: if thousands of users each build their own index, and we aggregate them (anonymously — just keyword → URL → count), we get &lt;strong&gt;community-validated answers.&lt;/strong&gt; When 50 developers independently find the same SO answer for "postgres connection pooling," that answer is probably excellent — without anyone voting or curating.&lt;/p&gt;

&lt;p&gt;I'm working on cloud sync as a Pro feature, which would enable this aggregated intelligence layer. But the free version is fully functional right now.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Chrome Web Store: &lt;a href="https://chromewebstore.google.com/detail/phi-thread" rel="noopener noreferrer"&gt;phi-thread&lt;/a&gt; (search for "phi-thread")&lt;/li&gt;
&lt;li&gt;PyPI: &lt;code&gt;pip install phi-thread&lt;/code&gt; (CLI version)&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/0x-auth/phi-thread" rel="noopener noreferrer"&gt;github.com/0x-auth/phi-thread&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;How it works: &lt;a href="https://phi-thread.netlify.app" rel="noopener noreferrer"&gt;phi-thread.netlify.app&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Zero dependencies. Zero tracking. Zero server. Just a router to the answer that already exists.&lt;/p&gt;

&lt;p&gt;Connect, don't create.&lt;br&gt;
🌌&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>productivity</category>
      <category>showdev</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Built a K8s Scheduler That Beats the Default in Every Benchmark.</title>
      <dc:creator>Space</dc:creator>
      <pubDate>Sun, 29 Mar 2026 08:58:44 +0000</pubDate>
      <link>https://dev.to/bitsabhi/built-a-k8s-scheduler-that-beats-the-default-in-every-benchmark-10pg</link>
      <guid>https://dev.to/bitsabhi/built-a-k8s-scheduler-that-beats-the-default-in-every-benchmark-10pg</guid>
      <description>&lt;p&gt;Your Kubernetes cluster is wasting 10-20% of its compute budget right now. Here's proof, and a fix.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;The Kubernetes default scheduler uses "Least Allocated" scoring: it picks the node with the most free resources. Sounds fair, right?&lt;/p&gt;

&lt;p&gt;Wrong. Here's what actually happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Node A: 90% CPU used, 10% RAM used  → score: ~50
Node B: 50% CPU used, 50% RAM used  → score: ~50
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They score the &lt;strong&gt;same&lt;/strong&gt;. But Node A is practically dead — 90% of its RAM is stranded because no pod can use it (CPU is full). You're paying for that RAM every month.&lt;/p&gt;

&lt;p&gt;At 50 nodes, this adds up to &lt;strong&gt;thousands of dollars per month&lt;/strong&gt; in wasted resources.&lt;/p&gt;
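
&lt;p&gt;The tie is easy to reproduce. Here is a simplified model of "Least Allocated" scoring (the real plugin scores requested vs. allocatable quantities per resource, but the shape is the same):&lt;/p&gt;

```python
def least_allocated_score(cpu_used, ram_used):
    # Simplified "Least Allocated": average free fraction, scaled to 0-100.
    cpu_free = 1.0 - cpu_used
    ram_free = 1.0 - ram_used
    return (cpu_free + ram_free) / 2 * 100

node_a = least_allocated_score(0.90, 0.10)  # 90% CPU used, 10% RAM used
node_b = least_allocated_score(0.50, 0.50)  # balanced

print(node_a, node_b)  # both 50.0: the scorer can't tell them apart
```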

&lt;h2&gt;
  
  
  The Fix: Vector Alignment Scheduling
&lt;/h2&gt;

&lt;p&gt;I built &lt;strong&gt;Lambda-G&lt;/strong&gt; — a drop-in K8s scheduler plugin that replaces the default Score phase with vector-alignment scoring.&lt;/p&gt;

&lt;p&gt;Instead of treating CPU and RAM as independent numbers, Lambda-G treats each node as a &lt;strong&gt;vector&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Node vector:  [cpu_free, ram_free, iops_free, network_free]
Pod vector:   [cpu_req, ram_req, iops_req, network_req]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The score is the &lt;strong&gt;directional alignment&lt;/strong&gt; between these vectors. A CPU-heavy pod gets steered toward the node with the most free CPU relative to its other resources (typically one already loaded with RAM-heavy workloads). Result: &lt;strong&gt;symmetric exhaustion&lt;/strong&gt; — all resources drain evenly.&lt;/p&gt;
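
&lt;p&gt;The alignment itself is cosine similarity between the pod's request vector and the node's free-capacity vector. A minimal sketch on just the CPU and RAM dimensions, with illustrative values:&lt;/p&gt;

```python
import math

def alignment(node_free, pod_req):
    # Cosine similarity between the node's free-capacity vector
    # and the pod's request vector (1.0 = perfectly aligned).
    dot = sum(n * p for n, p in zip(node_free, pod_req))
    norm = math.hypot(*node_free) * math.hypot(*pod_req)
    return dot / norm if norm else 0.0

cpu_heavy_pod = [0.8, 0.1]   # wants lots of CPU, little RAM
cpu_rich_node = [0.9, 0.1]   # free capacity: mostly CPU
ram_rich_node = [0.1, 0.9]   # free capacity: mostly RAM

# The pod aligns with the node whose free capacity points the same way:
print(alignment(cpu_rich_node, cpu_heavy_pod))  # high
print(alignment(ram_rich_node, cpu_heavy_pod))  # low
```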

&lt;h2&gt;
  
  
  The Math (30 seconds)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Score = φ × alignment + exhaustion_bonus - entropy_penalty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;alignment&lt;/code&gt; = cosine similarity between pod request and node capacity vectors&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;exhaustion_bonus&lt;/code&gt; = how much more balanced the node becomes after placement&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;entropy_penalty&lt;/code&gt; = punishment for creating stranded resources&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;φ&lt;/code&gt; = 1.618 (golden ratio — the optimal self-reference weight)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why the golden ratio? φ is the fixed point of self-reference: φ - 1 = 1/φ. Each scoring layer decays by exactly 1/φ from the previous, so the weighting is self-similar at every scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark Results
&lt;/h2&gt;

&lt;p&gt;I tested Lambda-G against the default scheduler across 5 scenarios:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Scenario&lt;/th&gt;
&lt;th&gt;Default&lt;/th&gt;
&lt;th&gt;Lambda-G&lt;/th&gt;
&lt;th&gt;Winner&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Mixed Workload (20 nodes, 200 pods)&lt;/td&gt;
&lt;td&gt;87.2&lt;/td&gt;
&lt;td&gt;97.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lambda-G&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scale Test (50 nodes, 500 pods)&lt;/td&gt;
&lt;td&gt;85.9&lt;/td&gt;
&lt;td&gt;96.7&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lambda-G&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CPU-Heavy Skew (10 nodes)&lt;/td&gt;
&lt;td&gt;98.2&lt;/td&gt;
&lt;td&gt;99.1&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lambda-G&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM-Heavy Skew (10 nodes)&lt;/td&gt;
&lt;td&gt;96.2&lt;/td&gt;
&lt;td&gt;98.2&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lambda-G&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dense Packing (10 nodes, 150 pods)&lt;/td&gt;
&lt;td&gt;88.0&lt;/td&gt;
&lt;td&gt;96.0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Lambda-G&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Lambda-G wins all 5 scenarios.&lt;/strong&gt; Zero stranded nodes in 4/5 scenarios (vs 1-10 with default).&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────┐     ┌──────────────┐     ┌─────────────┐
│  K8s API Server  │────▶│  Lambda-G    │────▶│  Rust Brain  │
│  (watches pods)  │     │  Controller   │     │  (scoring)   │
└─────────────────┘     │  (Python/kopf)│     │  379ns/score │
                        └──────────────┘     └─────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rust scoring engine&lt;/strong&gt;: Sub-microsecond per-node scoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python controller&lt;/strong&gt;: kopf-based K8s operator, watches for annotated pods&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Helm chart&lt;/strong&gt;: One-command install&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety valve&lt;/strong&gt;: &lt;code&gt;FailurePolicy: Ignore&lt;/code&gt; — if Lambda-G crashes, K8s falls back to default. Zero risk.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Docker&lt;/span&gt;
docker pull bitsabhi/lambda-g-controller:latest

&lt;span class="c"&gt;# Helm&lt;/span&gt;
helm &lt;span class="nb"&gt;install &lt;/span&gt;lambda-g ./charts/lambda-g

&lt;span class="c"&gt;# Or just run the benchmark yourself&lt;/span&gt;
git clone https://github.com/0x-auth/lambda-g-scheduler
&lt;span class="nb"&gt;cd &lt;/span&gt;lambda-g-scheduler
python3 benchmark_simulation.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Auditor (Free)
&lt;/h2&gt;

&lt;p&gt;Before installing the scheduler, run the auditor to see how much you're wasting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python3 coherence_engine/auditor/auditor.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It scans your cluster and shows stranded resources + estimated monthly cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works Under the Hood
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Pod arrives with &lt;code&gt;schedulerName: lambda-g&lt;/code&gt; set in its spec&lt;/li&gt;
&lt;li&gt;Controller fetches all nodes' capacity vectors&lt;/li&gt;
&lt;li&gt;Rust brain scores each node in &amp;lt;1μs using cosine alignment + entropy metrics&lt;/li&gt;
&lt;li&gt;Pod gets bound to the highest-scoring node&lt;/li&gt;
&lt;li&gt;If controller is down, K8s default scheduler takes over (safety valve)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The scoring function in Rust:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="nf"&gt;calculate_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cpu_free&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ram_free&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;cpu_req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ram_req&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;f64&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;f64&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;phi&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.618033988749895&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;initial_entropy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cpu_free&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;ram_free&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="nf"&gt;.abs&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;final_entropy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;cpu_free&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;cpu_req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ram_free&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;ram_req&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;&lt;span class="nf"&gt;.abs&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;recovery&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;initial_entropy&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;final_entropy&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;exhaustion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;cpu_free&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;cpu_req&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ram_free&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;ram_req&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;recovery&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;phi&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;100.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;exhaustion&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;10.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;17 lines. That's the entire brain.&lt;/p&gt;
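
&lt;p&gt;If you want to poke at the scoring logic without a Rust toolchain, here is a line-for-line Python port (assuming free/requested values are fractions of node capacity):&lt;/p&gt;

```python
PHI = 1.618033988749895

def calculate_score(cpu_free, ram_free, cpu_req, ram_req):
    # Entropy = how lopsided the node's free resources are.
    initial_entropy = abs(cpu_free - ram_free)
    final_entropy = abs((cpu_free - cpu_req) - (ram_free - ram_req))
    recovery = initial_entropy - final_entropy  # positive if placement rebalances the node
    exhaustion = 1.0 - ((cpu_free - cpu_req) + (ram_free - ram_req))
    return recovery * PHI * 100.0 + exhaustion * 10.0

# A CPU-hungry pod scores higher on the CPU-rich node than on the balanced one:
print(calculate_score(0.9, 0.1, 0.4, 0.0))  # CPU-rich node: rebalanced, high score
print(calculate_score(0.5, 0.5, 0.4, 0.0))  # balanced node: unbalanced by the pod, penalized
```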

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Benchmarking on real EKS/GKE clusters (simulation results above, live results coming)&lt;/li&gt;
&lt;li&gt;4-dimensional scoring (CPU + RAM + IOPS + Network)&lt;/li&gt;
&lt;li&gt;AWS/GCP Marketplace listing&lt;/li&gt;
&lt;li&gt;PDF audit reports for enterprise&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub&lt;/strong&gt;: &lt;a href="https://github.com/0x-auth/lambda-g-scheduler" rel="noopener noreferrer"&gt;github.com/0x-auth/lambda-g-scheduler&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Docker Hub&lt;/strong&gt;: &lt;code&gt;bitsabhi/lambda-g-controller&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PyPI&lt;/strong&gt; (fractal search, same author): &lt;code&gt;pip install fractal-search&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built by &lt;a href="https://github.com/0x-auth" rel="noopener noreferrer"&gt;Abhishek Srivastava&lt;/a&gt; — independent researcher working on φ-weighted optimization for distributed systems.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;If you're running a K8s cluster with 10+ nodes, try the auditor. You might be surprised how much you're wasting.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;φ = 1.618033988749895&lt;/p&gt;

</description>
      <category>algorithms</category>
      <category>kubernetes</category>
      <category>performance</category>
      <category>showdev</category>
    </item>
    <item>
      <title>How I Built a Route Optimizer Within 0.13% of LKH-3 (Rust + API)</title>
      <dc:creator>Space</dc:creator>
      <pubDate>Tue, 10 Mar 2026 16:08:25 +0000</pubDate>
      <link>https://dev.to/bitsabhi/how-i-built-a-route-optimizer-within-013-of-lkh-3-rust-api-31l9</link>
      <guid>https://dev.to/bitsabhi/how-i-built-a-route-optimizer-within-013-of-lkh-3-rust-api-31l9</guid>
      <description>&lt;h2&gt;
  
  
  How I Built a Route Optimizer Within 0.13% of LKH-3 (Rust + API)
&lt;/h2&gt;

&lt;p&gt;I spent months trying to match the best TSP solver in the world. Here's the story of how I got within 0.13%.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Challenge
&lt;/h2&gt;

&lt;p&gt;LKH-3, written by Keld Helsgaun, is the solver everyone measures against. It's been the gold standard for the Travelling Salesman Problem for over two decades. Researchers have spent entire careers trying to beat it.&lt;/p&gt;

&lt;p&gt;I didn't set out to beat it. I wanted to see how close a clean, modern Rust implementation could get — and then make it accessible as a simple API call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Ended Up
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Instance&lt;/th&gt;
&lt;th&gt;Cities&lt;/th&gt;
&lt;th&gt;LKH-3&lt;/th&gt;
&lt;th&gt;Lambda-G&lt;/th&gt;
&lt;th&gt;Gap&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;kroA100&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;21,907&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;21,908&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.005%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;pr1002&lt;/td&gt;
&lt;td&gt;1,002&lt;/td&gt;
&lt;td&gt;259,045&lt;/td&gt;
&lt;td&gt;261,921&lt;/td&gt;
&lt;td&gt;1.11%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;shared_benchmark&lt;/td&gt;
&lt;td&gt;1,000&lt;/td&gt;
&lt;td&gt;23,342&lt;/td&gt;
&lt;td&gt;23,373&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.13%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On kroA100, Lambda-G finds essentially the same tour as LKH-3. On the 1,000-city shared benchmark, we're within 0.13% (1.11% on pr1002). And it beats Google OR-Tools on the same benchmark.&lt;/p&gt;

&lt;p&gt;All tests: 60-second time limit, same hardware.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://lambda-g-optimizer.netlify.app" rel="noopener noreferrer"&gt;Try it live — click to place cities and watch the optimization happen →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Wall I Hit
&lt;/h2&gt;

&lt;p&gt;My first 8 versions were mediocre. I was getting 3-5% gaps on 1,000-city problems — respectable, but nowhere near LKH-3. The algorithm was correct. The code was clean. But something was off.&lt;/p&gt;

&lt;p&gt;I profiled the code. The bottleneck wasn't the search strategy — it was the &lt;strong&gt;inner loop overhead&lt;/strong&gt;. The solver was spending too much time on bookkeeping and not enough on actual search.&lt;/p&gt;

&lt;p&gt;I was running out of iterations long before I was running out of good moves to find.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breakthrough
&lt;/h2&gt;

&lt;p&gt;The fix came from careful profiling and micro-optimizations. No single silver bullet — just relentless attention to what the CPU was actually doing.&lt;/p&gt;

&lt;p&gt;Same time budget, but significantly more iterations. And more iterations means finding better solutions.&lt;/p&gt;

&lt;p&gt;This dropped my gap from 3-5% to under 1%. The rest of the improvement came from tuning the perturbation strategy.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Algorithm
&lt;/h2&gt;

&lt;p&gt;Lambda-G uses &lt;strong&gt;Iterated Local Search&lt;/strong&gt; — conceptually simple, but the details matter:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Build initial tour (nearest-neighbour + greedy patching)
2. Local search: 2-opt + Or-opt until no improvement
3. Perturb: double-bridge (4-opt, non-sequential reconnection)
4. Accept or reject perturbation
5. Repeat until time limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
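
&lt;p&gt;The loop above, as a skeleton. The &lt;code&gt;local_search&lt;/code&gt; and &lt;code&gt;perturb&lt;/code&gt; callables are placeholders for the real 2-opt/Or-opt and double-bridge implementations, and the acceptance rule shown (keep only improvements) is one common choice:&lt;/p&gt;

```python
import random
import time

def iterated_local_search(tour, tour_length, local_search, perturb, time_limit=1.0):
    # ILS: optimize, perturb, re-optimize; keep only improving tours (step 4).
    best = local_search(tour)
    best_len = tour_length(best)
    deadline = time.monotonic() + time_limit
    while deadline > time.monotonic():
        candidate = local_search(perturb(best))
        cand_len = tour_length(candidate)
        if best_len > cand_len:
            best, best_len = candidate, cand_len
    return best

# Demo on 10 points along a line (placeholders stand in for 2-opt and double-bridge):
random.seed(0)
start = list(range(10))
random.shuffle(start)

def path_length(t):
    return sum(abs(t[i] - t[(i + 1) % len(t)]) for i in range(len(t)))

best = iterated_local_search(
    start, path_length,
    local_search=lambda t: list(t),
    perturb=lambda t: random.sample(t, len(t)),
    time_limit=0.1,
)
```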



&lt;h3&gt;
  
  
  Why Double-Bridge Matters
&lt;/h3&gt;

&lt;p&gt;2-opt and Or-opt are greedy — they always improve the tour. But they get stuck in local minima. You need a way to escape.&lt;/p&gt;

&lt;p&gt;Double-bridge cuts the tour in 4 places and reconnects the segments in a way that &lt;strong&gt;no sequence of 2-opt moves can reach&lt;/strong&gt;. It's like teleporting to a different region of the solution space.&lt;/p&gt;

&lt;p&gt;After each perturbation, the local search kicks in again and optimizes from the new starting point. Sometimes it finds something better. Sometimes it doesn't. But over thousands of iterations, it converges toward the global optimum.&lt;/p&gt;
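
&lt;p&gt;The move itself is tiny. A sketch, with the three cut points chosen at random:&lt;/p&gt;

```python
import random

def double_bridge(tour):
    # Cut the tour into segments A|B|C|D and reconnect as A|C|B|D.
    # 2-opt cannot undo this reconnection in a single move.
    n = len(tour)
    i, j, k = sorted(random.sample(range(1, n), 3))
    return tour[:i] + tour[j:k] + tour[i:j] + tour[k:]

random.seed(7)
tour = list(range(12))
kicked = double_bridge(tour)
print(kicked)  # same cities, different region of the solution space
```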

&lt;h3&gt;
  
  
  Candidate Lists
&lt;/h3&gt;

&lt;p&gt;Checking every possible 2-opt swap is O(n²). For 10,000 cities, that's 100 million comparisons per iteration — way too slow.&lt;/p&gt;

&lt;p&gt;The trick: precompute a set of nearest neighbours for each city. Only consider swaps involving these neighbours. This prunes the search space dramatically with almost no loss in solution quality, because good moves almost always involve nearby cities.&lt;/p&gt;
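
&lt;p&gt;Building the candidate lists is a one-time preprocessing step. A straightforward version (fine to precompute once; a k-d tree would speed this up for very large n):&lt;/p&gt;

```python
import math

def build_candidates(points, k=8):
    # For each city, keep only its k nearest neighbours; 2-opt then
    # considers swaps involving these candidates instead of all n cities.
    n = len(points)
    candidates = []
    for i in range(n):
        by_dist = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )
        candidates.append([j for _, j in by_dist[:k]])
    return candidates

corners = [(0, 0), (100, 0), (100, 100), (0, 100)]
cands = build_candidates(corners, k=2)
print(cands)  # each corner's candidates are its two adjacent corners
```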

&lt;h3&gt;
  
  
  Parallelism
&lt;/h3&gt;

&lt;p&gt;Multiple independent workers run with different random seeds. Best tour wins. Simple, embarrassingly parallel, and scales linearly with cores.&lt;/p&gt;
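
&lt;p&gt;A sketch of the worker pattern. The solver here is a stand-in that just shuffles; the real one would be a full ILS run, and being CPU-bound it would use processes rather than threads:&lt;/p&gt;

```python
from concurrent.futures import ThreadPoolExecutor
import random

def solve(seed):
    # Stand-in for one full solver run; each worker gets its own RNG seed.
    rng = random.Random(seed)
    tour = list(range(50))
    rng.shuffle(tour)
    length = sum(abs(tour[i] - tour[(i + 1) % 50]) for i in range(50))
    return length, tour

def best_of(seeds):
    # Independent runs, shortest tour wins. For the real CPU-bound solver,
    # swap in ProcessPoolExecutor (behind an `if __name__ == "__main__"` guard)
    # so the GIL doesn't serialize the workers.
    with ThreadPoolExecutor() as pool:
        return min(pool.map(solve, seeds))

best_len, best_tour = best_of(range(4))
```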




&lt;h2&gt;
  
  
  Beyond Delivery Trucks
&lt;/h2&gt;

&lt;p&gt;Here's the thing most people don't realize: TSP isn't just about delivery routes. It's about &lt;strong&gt;optimal ordering&lt;/strong&gt; of anything.&lt;/p&gt;

&lt;p&gt;The algorithm doesn't care what your "cities" are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Logistics &amp;amp; Delivery&lt;/strong&gt; → minimize fuel and driver hours&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Chip Design / VLSI&lt;/strong&gt; → minimize total wire length between components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DNA Sequencing&lt;/strong&gt; → order fragments for genome assembly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warehouse Picking&lt;/strong&gt; → minimize picker travel time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CNC / Laser Cutting&lt;/strong&gt; → minimize toolhead movement&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Telescope Scheduling&lt;/strong&gt; → minimize slew time between targets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Send coordinates for Euclidean problems. Send a distance matrix for everything else — road networks, genomic distances, whatever.&lt;/p&gt;




&lt;h2&gt;
  
  
  Making It an API
&lt;/h2&gt;

&lt;p&gt;I wanted this to be useful, not just a benchmark trophy. So I wrapped it in a REST API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.bitsabhi.com/optimize&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;points&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time_limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;25.0&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tour&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;        &lt;span class="c1"&gt;# [0, 3, 1, 2]
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;length&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;      &lt;span class="c1"&gt;# 341.42
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;improvement&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="c1"&gt;# 17.6% vs nearest-neighbour
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Response times:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100 cities → ~0.5 seconds&lt;/li&gt;
&lt;li&gt;1,000 cities → ~3 seconds&lt;/li&gt;
&lt;li&gt;10,000 cities → ~25 seconds&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It also accepts CSV uploads and distance matrices. One endpoint, any format.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distance Matrix (for non-Euclidean problems):
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.bitsabhi.com/optimize&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;headers&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authorization&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bearer YOUR_KEY&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;matrix&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;82&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;29&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;82&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;55&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;46&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;68&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;time_limit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;25.0&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Profiling beats intuition.&lt;/strong&gt; The bottleneck was never where I thought it was. The profiler doesn't lie.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Candidate lists are essential.&lt;/strong&gt; Pruning the search space is what separates toy solvers from production solvers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Distance matrices unlock everything.&lt;/strong&gt; The moment you accept an arbitrary matrix instead of just coordinates, you go from "route optimizer" to "universal ordering optimizer." DNA sequencing, road networks, financial portfolio rebalancing — same API call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Rust is worth the rewrite.&lt;/strong&gt; Porting from Python to Rust took about a week. The speedup is permanent and compounds with every iteration of the solver. For compute-bound problems, the language matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Perturbation is everything.&lt;/strong&gt; A perfect local search that gets stuck in local minima will lose to a mediocre local search with good perturbation. Double-bridge was the key to escaping local optima.&lt;/p&gt;




&lt;h2&gt;
  
  
  Pricing
&lt;/h2&gt;

&lt;p&gt;No subscriptions. Pay once, use forever.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Free:&lt;/strong&gt; 100 cities, $0 forever&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Research:&lt;/strong&gt; 500 cities, $0 (.edu email required)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Builder:&lt;/strong&gt; 1,000 cities, $49 lifetime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pro:&lt;/strong&gt; 10,000 cities, $149 lifetime&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://lambda-g-optimizer.netlify.app" rel="noopener noreferrer"&gt;Try it free → lambda-g-optimizer.netlify.app&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;I'm exploring vehicle routing (multiple trucks with capacity constraints), time windows, and asymmetric TSP. If you have a real-world problem that needs optimal ordering, I'd love to hear about it — especially edge cases and weird domains I haven't thought of.&lt;/p&gt;

&lt;p&gt;What are you solving that needs optimal ordering?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Abhishek Srivastava — &lt;a href="https://orcid.org/0009-0006-7495-5039" rel="noopener noreferrer"&gt;ORCID&lt;/a&gt; — &lt;a href="mailto:bitsabhi@gmail.com"&gt;bitsabhi@gmail.com&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>rust</category>
      <category>algorithms</category>
      <category>api</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
