<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: PicklePixel</title>
    <description>The latest articles on DEV Community by PicklePixel (@picklepixel).</description>
    <link>https://dev.to/picklepixel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3585975%2F47236e06-ce10-4027-8b2c-f004cd63ba81.jpeg</url>
      <title>DEV Community: PicklePixel</title>
      <link>https://dev.to/picklepixel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/picklepixel"/>
    <language>en</language>
    <item>
      <title>How I Reverse-Engineered Claude Code's Hidden Pet System</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Wed, 01 Apr 2026 17:33:55 +0000</pubDate>
      <link>https://dev.to/picklepixel/how-i-reverse-engineered-claude-codes-hidden-pet-system-8l7</link>
      <guid>https://dev.to/picklepixel/how-i-reverse-engineered-claude-codes-hidden-pet-system-8l7</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xav2rsy64lw4m88o1pl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xav2rsy64lw4m88o1pl.png" alt="The Buddy Creator web tool showing a shiny legendary cat with a tophat"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I was poking around Claude Code's source one evening and found something I wasn't supposed to see: a full gacha companion pet system, hidden behind a compile-time feature flag. A little ASCII creature that sits beside your terminal input, occasionally comments in a speech bubble, and is permanently bound to your Anthropic account. Your buddy is deterministic. Same account, same pet, every single time. No rerolls.&lt;/p&gt;

&lt;p&gt;Naturally, I wanted a legendary dragon. Here's how I cracked it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Actually in There
&lt;/h2&gt;

&lt;p&gt;The buddy system lives across four files inside Claude Code's codebase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;buddy/types.ts&lt;/code&gt; defines 18 species, 5 rarities, 6 eye styles, 8 hats, and 5 stats&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;buddy/companion.ts&lt;/code&gt; implements the PRNG, hash function, roll algorithm, and tamper protection&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;buddy/sprites.ts&lt;/code&gt; has ASCII art for every species (three animation frames each, a hat overlay system, and a render pipeline)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;buddy/prompt.ts&lt;/code&gt; holds a system prompt that gets injected into Claude so it knows how to coexist with the pet without impersonating it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The feature is gated behind a &lt;code&gt;BUDDY&lt;/code&gt; compile-time flag. When the flag is off, the entire thing gets dead-code-eliminated from the build. It was teased during the first week of April 2026 and is slated for a full launch in May. The &lt;code&gt;/buddy&lt;/code&gt; slash command activates it when the flag is on.&lt;/p&gt;

&lt;p&gt;Here's what a few of the species look like as ASCII sprites:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;DUCK                DRAGON              GHOST
    __              /^\  /^\            .----.
  &amp;lt;(· )___        &amp;lt;  ·  ·  &amp;gt;          / ·  · \
   (  ._&amp;gt;          (   ~~   )         |      |
    `--´            `-vvvv-´          ~`~``~`~
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Eighteen species total: duck, goose, blob, cat, dragon, octopus, owl, penguin, turtle, snail, ghost, axolotl, capybara, cactus, robot, rabbit, mushroom, and chonk. Each one has a compact face representation for inline display, three animation frames on a 500ms tick timer, and a hat overlay slot on line zero of the sprite.&lt;/p&gt;
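
&lt;p&gt;A minimal sketch of the animation timing described above, assuming nothing beyond what the article states (three frames, a 500ms tick, hat overlay on line zero). The function names are mine, not Claude Code's:&lt;/p&gt;

```javascript
// Illustrative sketch (not the actual source): frame selection on a 500ms
// tick, three frames per species, hat overlay on sprite line zero.
function frameIndex(nowMs) {
  return Math.floor(nowMs / 500) % 3;
}

function renderSprite(frames, hat, nowMs) {
  const lines = frames[frameIndex(nowMs)].split("\n");
  if (hat) lines[0] = hat; // hat overlay slot is line zero of the sprite
  return lines.join("\n");
}
```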

&lt;p&gt;One fun detail: every species name in the source code is obfuscated through &lt;code&gt;String.fromCharCode()&lt;/code&gt; arrays. "Capybara" collides with an internal Anthropic model codename that's flagged in their repo's &lt;code&gt;excluded-strings.txt&lt;/code&gt;, so they encoded all 18 species uniformly to keep their string-scanning tooling happy.&lt;/p&gt;
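
&lt;p&gt;The trick looks roughly like this (the exact arrays in the real source are an assumption; these char codes simply spell the name):&lt;/p&gt;

```javascript
// Sketch of the String.fromCharCode() obfuscation described above: the
// species name never appears as a string literal, so literal-matching
// string scanners don't flag it.
const SPECIES_CAPYBARA = String.fromCharCode(99, 97, 112, 121, 98, 97, 114, 97);
```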
&lt;h2&gt;
  
  
  The Gacha Algorithm
&lt;/h2&gt;

&lt;p&gt;Your buddy is a pure function of your identity. The algorithm chains together like this:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Account UUID (from OAuth)
    → concatenate with salt 'friend-2026-401'
    → hash to 32-bit integer
    → seed Mulberry32 PRNG
    → deterministic roll sequence
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The PRNG calls happen in strict order: rarity first, then species, then eye, then hat, then shiny, then stats. Changing any earlier roll changes everything after it.&lt;/p&gt;
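
&lt;p&gt;Here's a minimal sketch of that chain using the Node.js fallback hash (FNV-1a) and Mulberry32. Only the salt, the call order, and the rarity weights come from the source; every helper name is mine, stats are omitted, and production uses Bun's wyhash, so real results will differ:&lt;/p&gt;

```javascript
// Sketch of the documented roll chain. Assumptions: FNV-1a fallback hash,
// standard Mulberry32, weights from the rarity table. Not the actual source.
const SALT = "friend-2026-401";
const SPECIES = ["duck", "goose", "blob", "cat", "dragon", "octopus", "owl",
  "penguin", "turtle", "snail", "ghost", "axolotl", "capybara", "cactus",
  "robot", "rabbit", "mushroom", "chonk"];

function fnv1a(str) {
  let h = 0x811c9dc5;
  for (const ch of str) {
    h ^= ch.codePointAt(0);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function pickWeighted(rand, entries) {
  let r = rand();
  for (const [name, weight] of entries) {
    if (r > weight) { r -= weight; continue; }
    return name;
  }
  return entries[entries.length - 1][0]; // floating-point safety net
}

function rollBuddy(accountUuid) {
  const rand = mulberry32(fnv1a(accountUuid + SALT));
  // Strict call order: rarity, species, eye, hat, shiny (stats omitted here).
  const rarity = pickWeighted(rand, [
    ["common", 0.60], ["uncommon", 0.25], ["rare", 0.10],
    ["epic", 0.04], ["legendary", 0.01],
  ]);
  const species = SPECIES[Math.floor(rand() * SPECIES.length)];
  const eye = Math.floor(rand() * 6);
  const hat = Math.floor(rand() * 8);
  const shiny = rand() >= 0.99; // independent 1% chance
  return { rarity, species, eye, hat, shiny };
}
```

&lt;p&gt;Because every field comes from one seeded stream, the "changing any earlier roll changes everything after it" property falls out for free.&lt;/p&gt;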

&lt;p&gt;Rarity weights:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Rarity&lt;/th&gt;
&lt;th&gt;Probability&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Common&lt;/td&gt;
&lt;td&gt;60%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Uncommon&lt;/td&gt;
&lt;td&gt;25%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Rare&lt;/td&gt;
&lt;td&gt;10%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Epic&lt;/td&gt;
&lt;td&gt;4%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Legendary&lt;/td&gt;
&lt;td&gt;1%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;On top of that, there's a 1% shiny chance that rolls independently of rarity. A shiny legendary of a specific species? That's a 0.00056% probability, roughly 1 in 180,000.&lt;/p&gt;
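
&lt;p&gt;Checking that arithmetic: an independent 1% legendary roll, 1% shiny roll, and a 1-in-18 species roll multiply out to the quoted figure:&lt;/p&gt;

```javascript
// Combined odds of a shiny legendary of one specific species.
const p = 0.01 * 0.01 * (1 / 18);
// p is about 5.6e-6, i.e. roughly 0.00056%, or about 1 in 180,000.
console.log(p, 1 / p);
```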

&lt;p&gt;Stats are shaped by rarity through a floor system. Legendaries start at a floor of 50 and always max out their peak stat at 100. Commons start at 5 and cap their peak around 84. Each companion gets one peak stat and one dump stat, with the rest falling somewhere in between.&lt;/p&gt;

&lt;p&gt;There's an important hash function detail here. Claude Code runs in Bun, so the production hash is &lt;code&gt;Bun.hash()&lt;/code&gt;, which is native C wyhash. The Node.js fallback is FNV-1a. These produce completely different values for the same input, which means any tooling running outside Bun cannot reproduce the exact buddy for a given account.&lt;/p&gt;
&lt;h2&gt;
  
  
  How the Tamper Protection Works
&lt;/h2&gt;

&lt;p&gt;This is the part that got interesting. The buddy system splits companion data into two categories:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stored in config&lt;/strong&gt; (&lt;code&gt;~/.claude.json&lt;/code&gt;): name, personality, hatchedAt timestamp. These are editable and meant to be personal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Recomputed every read&lt;/strong&gt; (called "bones"): rarity, species, eye, hat, shiny, stats. These are derived deterministically from your account hash on every single call to &lt;code&gt;getCompanion()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The tamper protection comes down to a JavaScript spread operation:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;getCompanion&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;stored&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getGlobalConfig&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;companion&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;stored&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;bones&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;roll&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;companionUserId&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;stored&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;bones&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Because &lt;code&gt;bones&lt;/code&gt; comes second in the spread, it always overwrites anything you manually added to the config. You can edit &lt;code&gt;~/.claude.json&lt;/code&gt; all you want, set &lt;code&gt;rarity: "legendary"&lt;/code&gt;, and it gets stomped on every read. The recomputed values win, period.&lt;/p&gt;

&lt;p&gt;It's clever design. No server-side validation needed, no database, no "lost my save" support tickets. Your buddy is a pure function of your identity, recomputed every time it's needed. The bones are cached by &lt;code&gt;userId + SALT&lt;/code&gt; key to avoid redundant computation on the three hot paths: the 500ms sprite tick, per-keystroke prompt input, and per-turn observer.&lt;/p&gt;
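
&lt;p&gt;The caching described above amounts to a memo keyed by &lt;code&gt;userId + SALT&lt;/code&gt;. A sketch, with a counting stub standing in for the real &lt;code&gt;roll()&lt;/code&gt; so the memoization is visible (all names are mine):&lt;/p&gt;

```javascript
// Sketch of the bones cache: repeated hot-path reads (sprite tick, keystroke,
// per-turn observer) hit the Map instead of re-running the gacha roll.
const SALT = "friend-2026-401";
let rollCalls = 0;
function roll(userId) { // counting stub, not the real roll
  rollCalls += 1;
  return { bones: { rarity: "common", species: "duck" } };
}

const bonesCache = new Map();
function cachedBones(userId) {
  const key = userId + SALT; // cache key is userId + SALT
  if (!bonesCache.has(key)) bonesCache.set(key, roll(userId).bones);
  return bonesCache.get(key);
}

// Three hot-path reads, one actual roll:
cachedBones("user-a"); cachedBones("user-a"); cachedBones("user-a");
```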

&lt;p&gt;But here's the thing about client-side enforcement: it's client-side.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Crack
&lt;/h2&gt;

&lt;p&gt;The entire hack is swapping two variable names. In the minified v2.1.89 binary, &lt;code&gt;getCompanion()&lt;/code&gt; compiles down to something like:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;bones&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;Gh&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Th&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;{...&lt;/span&gt;&lt;span class="nx"&gt;H&lt;/span&gt;&lt;span class="p"&gt;,...&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;code&gt;H&lt;/code&gt; is the stored config, &lt;code&gt;$&lt;/code&gt; is the recomputed bones. Bones come last, bones win. To flip that:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;bones&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;Gh&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Th&lt;/span&gt;&lt;span class="nf"&gt;$&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;&lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;{...&lt;/span&gt;&lt;span class="nx"&gt;$&lt;/span&gt;&lt;span class="p"&gt;,...&lt;/span&gt;&lt;span class="nx"&gt;H&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now stored config comes last. Config wins. Whatever you write to &lt;code&gt;~/.claude.json&lt;/code&gt; takes priority over the recomputed values.&lt;/p&gt;

&lt;p&gt;The two strings are the exact same byte length, so there's zero offset shift in the binary. No padding, no realignment, no relocation table headaches. You find the pattern, swap &lt;code&gt;H&lt;/code&gt; and &lt;code&gt;$&lt;/code&gt;, write it back. That's the whole patch.&lt;/p&gt;

&lt;p&gt;I wrote a Node.js patcher that automates the whole thing in a single command. Design your buddy on the web creator, copy the JSON, run &lt;code&gt;node buddy-crack.js&lt;/code&gt;, and it patches the binary and injects your companion in one step. It auto-reads from your clipboard, so you don't even need to pass arguments. That was a deliberate choice: Windows CMD chokes on JSON in command-line arguments because of quote conflicts, so clipboard-first was the only sane default.&lt;/p&gt;

&lt;p&gt;The patcher went through a few iterations that taught me things the hard way.&lt;/p&gt;

&lt;p&gt;The first version had separate &lt;code&gt;patch&lt;/code&gt; and &lt;code&gt;inject&lt;/code&gt; commands, which was unnecessarily complex for something that always happens together. Collapsed that into a single flow early on.&lt;/p&gt;

&lt;p&gt;Then I nearly destroyed my own Claude Code config. The original config writer would parse &lt;code&gt;~/.claude.json&lt;/code&gt;, fail on any syntax weirdness, fall back to an empty object, and write that back with just the companion data. That nuked everything else in the file: OAuth tokens, permissions, theme settings, tool approvals. On a config that can easily be 50KB, that's catastrophic. The fix was to make the injector surgical. It tries a proper JSON parse first, but if that fails, it now uses a brace-depth parser to find and replace just the &lt;code&gt;companion&lt;/code&gt; field in the raw string, leaving everything else untouched. It only creates a fresh file as a last resort, and it backs up &lt;code&gt;~/.claude.json&lt;/code&gt; to &lt;code&gt;.claude.json.bak&lt;/code&gt; before touching anything.&lt;/p&gt;
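
&lt;p&gt;The brace-depth idea can be sketched like this. It's a deliberately simplified version (it ignores braces inside string literals, which a real config parser has to handle), and every name in it is mine:&lt;/p&gt;

```javascript
// Surgical replacement sketch: find the "companion" key in the raw config
// text and replace just its object value by tracking brace depth, leaving
// every other byte of the file untouched.
function replaceCompanionField(raw, newCompanionJson) {
  const keyAt = raw.indexOf('"companion"');
  if (keyAt === -1) return raw; // no companion field present
  const open = raw.indexOf("{", keyAt);
  if (open === -1) return raw; // value is not an object; bail out
  let depth = 0;
  let i = open;
  while (i !== raw.length) {
    if (raw[i] === "{") depth += 1;
    if (raw[i] === "}") depth -= 1;
    if (depth === 0) {
      return raw.slice(0, open) + newCompanionJson + raw.slice(i + 1);
    }
    i += 1;
  }
  return raw; // unbalanced braces: leave the file alone
}
```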

&lt;p&gt;Windows threw another curveball. PowerShell's &lt;code&gt;Get-Clipboard&lt;/code&gt; mangles UTF-8 characters, so the star eye character &lt;code&gt;✦&lt;/code&gt; would come through as &lt;code&gt;?&lt;/code&gt;. The fix forces UTF-8 output encoding from PowerShell and auto-repairs known corrupted characters on paste.&lt;/p&gt;

&lt;p&gt;The final round of hardening added binary integrity checks (verifying file size after write to catch truncated writes) and auto-restore from backup if the patch fails mid-write. The patcher now handles XDG-standard paths across all three platforms and scans the Claude Code versions directory for additional binaries to patch.&lt;/p&gt;
&lt;h2&gt;
  
  
  Building the Web Creator
&lt;/h2&gt;

&lt;p&gt;Reading the source also meant I had all the sprite data, so I built a web-based companion designer. Single HTML file, no dependencies, no build system. You pick your species from a grid that shows the actual ASCII faces, choose your rarity, eyes, hat, toggle shiny, and it renders a live preview of the full 5-line sprite with your selections applied.&lt;/p&gt;

&lt;p&gt;There's a soul section with two options: generate a name and personality by pasting a prompt into Claude or ChatGPT, or just type them yourself. Hit "Copy Config JSON" and it exports exactly what the patcher expects. The install guide is built into the page with platform-specific instructions for Windows, macOS, and Linux.&lt;/p&gt;

&lt;p&gt;The whole thing lives at &lt;a href="https://pickle-pixel.com/buddy" rel="noopener noreferrer"&gt;pickle-pixel.com/buddy&lt;/a&gt;. The usage flow is four steps: design your companion, close Claude Code, run the patcher, restart.&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;div class="c-embed__content"&gt;
      &lt;div class="c-embed__body flex items-center justify-between"&gt;
        &lt;a href="http://pickle-pixel.com/buddy/" rel="noopener noreferrer" class="c-link fw-bold flex items-center"&gt;
          &lt;span class="mr-2"&gt;pickle-pixel.com&lt;/span&gt;
          

        &lt;/a&gt;
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;



&lt;h2&gt;
  
  
  The Attack Surface
&lt;/h2&gt;

&lt;p&gt;While I was documenting everything, I mapped out every possible angle someone might try:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Attack&lt;/th&gt;
&lt;th&gt;Works?&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Edit config fields&lt;/td&gt;
&lt;td&gt;Name/personality only&lt;/td&gt;
&lt;td&gt;Bones always overwritten&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Change accountUuid&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Server validates on auth&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Patch the binary&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;td&gt;That's what this tool does&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Create new accounts&lt;/td&gt;
&lt;td&gt;Uncontrolled&lt;/td&gt;
&lt;td&gt;Can't choose your UUID&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Brute-force UUIDs&lt;/td&gt;
&lt;td&gt;Statistically&lt;/td&gt;
&lt;td&gt;But you can't use found UUIDs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The brute-force angle is interesting. I wrote a separate script that replicates the full gacha algorithm and generates random UUIDs to find legendary rolls. It works statistically, but the UUIDs it finds are useless in practice because Anthropic assigns them server-side during account creation. You don't get to pick yours.&lt;/p&gt;

&lt;p&gt;And there's the Bun versus Node.js hash problem again. The brute-forcer runs in Node.js by default, using FNV-1a, but production Claude Code uses Bun's wyhash. The probability distributions are identical, but per-UUID results won't match unless you run the script under Bun.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The buddy system is genuinely well-designed for what it's trying to do. Deterministic gacha with no server state is elegant. The tamper protection through spread ordering is simple and effective against casual editing. The soul/bones split lets users personalize their pet's name and personality while keeping the visual identity locked to their account.&lt;/p&gt;

&lt;p&gt;But any system where the enforcement happens entirely on the client has a fundamental limit. The binary is on your machine. The config is on your machine. The merge logic is one spread operation in a JavaScript function. The crack is five characters swapped in a compiled binary.&lt;/p&gt;

&lt;p&gt;That said, I don't think the Anthropic team is under any illusion that this is uncrackable. Deterministic client-side gacha is a design choice that trades tamper-resistance for zero-server-cost operation. No database, no API calls to validate rarity, no sync issues. For a fun companion pet feature in a CLI tool, that's the right tradeoff. The buddy system doesn't gate any functionality. It's a toy, and it's a charming one.&lt;/p&gt;

&lt;p&gt;The code is at &lt;a href="https://github.com/Pickle-Pixel/claudecode-buddy-crack" rel="noopener noreferrer"&gt;github.com/Pickle-Pixel/claudecode-buddy-crack&lt;/a&gt; if you want to pick your own companion. The full reverse-engineering documentation is in &lt;code&gt;BUDDY_SYSTEM.md&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Now if you'll excuse me, I have a legendary shiny dragon to go look at.&lt;/p&gt;

</description>
      <category>claudecode</category>
      <category>reverseengineering</category>
      <category>javascript</category>
      <category>opensource</category>
    </item>
    <item>
      <title>I Built an AI Agent to Apply to 1,000 Jobs While I Kept Building Things</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Thu, 19 Feb 2026 03:44:05 +0000</pubDate>
      <link>https://dev.to/picklepixel/i-built-an-ai-agent-to-apply-to-1000-jobs-while-i-kept-building-things-3j64</link>
      <guid>https://dev.to/picklepixel/i-built-an-ai-agent-to-apply-to-1000-jobs-while-i-kept-building-things-3j64</guid>
      <description>&lt;p&gt;Job searching is a full-time job. That's the actual problem. It competes directly with the work I love which is building things, learning and automating stuff. At some point I enough and wanted to figure it out so I started building ApplyPilot.&lt;/p&gt;

&lt;p&gt;ApplyPilot is a fully autonomous job application pipeline that discovers jobs, scores them against my profile, tailors my resume per role, writes cover letters, and submits applications, all by itself. 1,000 applications in 2 days, and I have interviews scheduled right now.&lt;/p&gt;

&lt;p&gt;Here's how it works and what surprised me along the way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Situation
&lt;/h2&gt;

&lt;p&gt;The existing tools in this space are either cheap and dumb, or smart and expensive. The "smart" browser automation services charge per application and still require babysitting. I've been building automations for years - it's genuinely one of my stronger skills - so I decided to just build the thing myself instead of paying someone else's margins.&lt;/p&gt;

&lt;p&gt;The core idea was treating job searching as a pipeline problem. Every stage has a clear input and output. Automate each one.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pipeline
&lt;/h2&gt;

&lt;p&gt;ApplyPilot runs in 6 stages:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Discover&lt;/strong&gt; - Scrapes Indeed, LinkedIn, Glassdoor, ZipRecruiter, and Google Jobs, plus 48 pre-configured Workday employer portals and 30+ direct career sites&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enrich&lt;/strong&gt; - Fetches the full job description from each listing URL&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Score&lt;/strong&gt; - An LLM rates each job 1-10 based on my resume and search preferences. Only jobs scoring ≥7 move forward&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tailor&lt;/strong&gt; - Rewrites my resume for the specific role (reorganizes sections, emphasizes relevant experience, injects keywords from the job description)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cover Letter&lt;/strong&gt; - Generates a targeted cover letter per job&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Apply&lt;/strong&gt; - Submits the application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The whole thing runs off a single SQLite database that acts as a conveyor belt. Each stage reads what the previous one produced and writes its output to new columns. You can run stages independently, restart failed ones, or run a subset:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;applypilot run                    &lt;span class="c"&gt;# full pipeline&lt;/span&gt;
applypilot run score tailor      &lt;span class="c"&gt;# just re-score and re-tailor&lt;/span&gt;
applypilot apply &lt;span class="nt"&gt;--workers&lt;/span&gt; 3     &lt;span class="c"&gt;# 3 Chrome instances submitting in parallel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
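
&lt;p&gt;The conveyor-belt pattern is simple to sketch. Here's a minimal stand-in version (a plain array in place of the SQLite table, stubbed stage work, all column names mine) showing how each stage reads what the previous one produced and fills in new columns:&lt;/p&gt;

```javascript
// In-memory sketch of the stage/conveyor pattern: stages are re-runnable
// because each one skips rows it has already completed.
const jobs = [
  { id: 1, url: "https://example.com/job/1", description: null, score: null },
  { id: 2, url: "https://example.com/job/2", description: null, score: null },
];

function runStage(filter, work) {
  for (const job of jobs) {
    if (filter(job)) Object.assign(job, work(job));
  }
}

// enrich: fetch the full job description (stubbed here)
runStage(j => j.description === null, j => ({ description: "JD for " + j.url }));
// score: LLM 1-10 rating (stubbed); only jobs scoring 7 or higher move forward
runStage(j => j.score === null, j => ({ score: j.id === 1 ? 8 : 5 }));

const forward = jobs.filter(j => j.score >= 7);
```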



&lt;p&gt;The discovery config took the most upfront time: 48 Workday employer configs, 30+ direct sites, rules for blocked sites and ATS detection. But once you have it, it's done. I'd encourage anyone building something similar to start there and build a library of templates. It's a rewarding step and it makes everything downstream much easier.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architectural Mistake I Made First
&lt;/h2&gt;

&lt;p&gt;My first instinct was a traditional orchestrator/agent setup - a central controller dispatching discrete actions to a stateless agent. Pull an action, execute it, report back, repeat. Kept the context window small, felt efficient on paper.&lt;/p&gt;

&lt;p&gt;It didn't work well. Form filling isn't a sequence of independent actions - it's a stateful session. The agent needs to see the page, understand what it just filled, notice when something went wrong, and adapt. A thin stateless action-puller can't do any of that reliably.&lt;/p&gt;

&lt;p&gt;I switched to a full LLM session with persistent context as the brain - one continuous conversation per application, with complete page visibility throughout. The agent could actually reason about what was happening instead of just executing one-off commands. The difference was immediate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Haiku Is the Goat
&lt;/h2&gt;

&lt;p&gt;I know that sounds like a take, but I mean it. Claude Haiku follows instructions precisely, barely hallucinates on structured tasks, and is fast enough to run as the core of a real-time automation. For this use case - filling forms with clear instructions and real page context - it outperforms bigger models on the metrics that actually matter.&lt;/p&gt;

&lt;p&gt;Some things Haiku did that I didn't explicitly build for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It reset my LinkedIn password.&lt;/strong&gt; One application required LinkedIn login and the session had expired. Haiku navigated to the forgot-password flow, reset the password, and continued the application. I didn't tell it to do that. It identified the obstacle and removed it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It sent an email when there was no form.&lt;/strong&gt; One listing had no application form - just a contact email buried in the description. Haiku noticed, composed a professional email, attached my resume, and sent it. The correct behavior for that situation, with zero special-casing in my code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;It completed a French application entirely in French.&lt;/strong&gt; I didn't build any localization handling. It just handled it.&lt;/p&gt;

&lt;p&gt;These aren't lucky guesses - they're genuine adaptations to situations the code didn't anticipate. That's the actual value of using a capable model as the agent brain rather than a rigid script.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Result
&lt;/h2&gt;

&lt;p&gt;1,000 applications in 2 days. Multiple companies reached out and I'm in the interview process right now. It works.&lt;/p&gt;

&lt;p&gt;The resume tailoring is a big part of why. ApplyPilot never fabricates anything - there's a &lt;code&gt;resume_facts&lt;/code&gt; section in the config that locks the real companies, real projects, and real metrics. The AI can reorganize and emphasize, but it can't invent. That matters both for integrity and for not getting blindsided in an interview.&lt;/p&gt;

&lt;p&gt;On the ethics of applying at scale: I have an extensive skillset across multiple domains and I'd genuinely thrive in any of the roles I targeted. The tailoring means each application is actually relevant to the role, not just spam. If a company receives a well-matched resume for something I can do, I don't see the problem. The reader can draw their own conclusion.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Tell You
&lt;/h2&gt;

&lt;p&gt;If job searching is eating your time right now - that frustration is real and it doesn't have to work that way. The tools to automate most of this already exist.&lt;/p&gt;

&lt;p&gt;Start with the discovery config. Build a library of job site templates. That foundation makes everything else possible, and once it's built you won't have to touch it again.&lt;/p&gt;

&lt;p&gt;The code is &lt;a href="https://github.com/Pickle-Pixel/ApplyPilot" rel="noopener noreferrer"&gt;https://github.com/Pickle-Pixel/ApplyPilot&lt;/a&gt;. Go build something.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>jobsearch</category>
      <category>python</category>
    </item>
    <item>
      <title>How I Made Claude Code Agent Teams Work With Any Model</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Sun, 08 Feb 2026 04:43:00 +0000</pubDate>
      <link>https://dev.to/picklepixel/how-i-made-claude-code-agent-teams-work-with-any-model-5fng</link>
      <guid>https://dev.to/picklepixel/how-i-made-claude-code-agent-teams-work-with-any-model-5fng</guid>
      <description>&lt;p&gt;Claude Code Agent Teams is the most capable multi-agent coding system I've used. You tell it to refactor your auth module, it spawns three teammates, they read files, write code, run tests, coordinate through task lists, and report back. Each teammate is a full Claude Code instance with 15+ tools. It's genuinely impressive.&lt;/p&gt;

&lt;p&gt;There's one problem: every single agent has to be Claude. Your lead runs Opus at $15/M tokens. Your researcher runs Sonnet. Your reviewer runs Sonnet. A four-agent team working on a refactor can easily burn $5-10 in one session.&lt;/p&gt;

&lt;p&gt;I wanted to keep the lead on Claude Opus and swap the teammates' brains to GPT. Honestly, I just wanted to stop burning money on tasks that don't need a frontier model.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Wrong Approach (First)
&lt;/h2&gt;

&lt;p&gt;My first instinct was to build a full custom agent framework. Agent Runtime. Universal Tool System. Provider Adapters. Coordination Layer. Spawner. I designed the whole thing. Around 2,000 lines of TypeScript, reinventing everything Claude Code already does perfectly.&lt;/p&gt;

&lt;p&gt;Then it clicked: Claude Code IS the agent runtime. I don't need to rebuild it. I just need to change where it sends its API calls.&lt;/p&gt;

&lt;p&gt;Every Claude Code teammate process communicates with its LLM through one endpoint: &lt;code&gt;POST /v1/messages&lt;/code&gt;. It sends tool definitions, message history, system prompts. It expects back SSE-streamed responses with text and tool_use blocks.&lt;/p&gt;

&lt;p&gt;The teammate never validates who is on the other end. It doesn't check if the responses actually come from Claude. It just sends Anthropic-format requests and executes whatever tool calls come back.&lt;/p&gt;

&lt;p&gt;The hook is one environment variable: &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt;. Set it to &lt;code&gt;http://localhost:3456&lt;/code&gt; and every API call goes to your proxy instead of Anthropic.&lt;/p&gt;

&lt;p&gt;I confirmed this by pointing it at &lt;code&gt;localhost:9999&lt;/code&gt; with nothing listening. Claude Code hung waiting for a connection. It respects the override completely.&lt;/p&gt;

&lt;p&gt;So instead of building a framework, I built a translation proxy. Two API formats that do the same thing, just formatted differently. The proxy sits in the middle and translates in real-time.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Proxy Actually Does
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Lead Agent (Claude Opus)
    |
    | ANTHROPIC_BASE_URL=http://localhost:3456
    |
Teammate Process (Claude Code CLI)
    |  -- thinks it's calling Anthropic --
    |
HydraProxy (localhost:3456)
    |  -- translates API format --
    |
GPT-5.3 Codex (or whatever model you want)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The teammate is still a full Claude Code instance with every tool. Read, Write, Edit, Bash, Glob, Grep, Git. It just doesn't know its brain is GPT instead of Claude.&lt;/p&gt;

&lt;p&gt;The translation has two parts: requests going out, and responses coming back.&lt;/p&gt;

&lt;h3&gt;
  
  
  Requests
&lt;/h3&gt;

&lt;p&gt;Anthropic and OpenAI structure things differently but it's mostly a reshuffling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Anthropic puts the system prompt as a top-level &lt;code&gt;system&lt;/code&gt; field. OpenAI puts it as the first message with &lt;code&gt;role: "system"&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Anthropic defines tools as &lt;code&gt;{ name, input_schema }&lt;/code&gt;. OpenAI wraps them in &lt;code&gt;{ type: "function", function: { name, parameters } }&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Tool calls in Anthropic are &lt;code&gt;tool_use&lt;/code&gt; content blocks inside a message. OpenAI puts them in a &lt;code&gt;tool_calls&lt;/code&gt; array on the assistant message.&lt;/li&gt;
&lt;li&gt;Tool results in Anthropic are &lt;code&gt;tool_result&lt;/code&gt; blocks in user messages. OpenAI uses separate &lt;code&gt;{ role: "tool" }&lt;/code&gt; messages.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pretty mechanical once you see the pattern.&lt;/p&gt;
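&lt;p&gt;A minimal sketch of that reshuffle (the function is mine, not the repo's actual code; plain text messages pass through because the two shapes happen to match):&lt;/p&gt;

```typescript
// Sketch: reshape an Anthropic /v1/messages body into a Chat Completions body.
// Tool-call and tool-result blocks need the separate history pass described
// above; this shows only the top-level moves.
function toChatCompletions(body: any, model: string): any {
  const messages: any[] = [];
  if (body.system) {
    // top-level "system" field becomes the first message
    messages.push({ role: "system", content: body.system });
  }
  for (const m of body.messages) {
    messages.push({ role: m.role, content: m.content });
  }
  return {
    model,
    messages,
    // { name, input_schema } wrapped as { type: "function", function: ... }
    tools: (body.tools ?? []).map((t: any) => ({
      type: "function",
      function: { name: t.name, description: t.description, parameters: t.input_schema },
    })),
    max_tokens: body.max_tokens,
    stream: body.stream === true,
  };
}
```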

&lt;h3&gt;
  
  
  SSE Streams (The Hard Part)
&lt;/h3&gt;

&lt;p&gt;Both APIs stream via Server-Sent Events, but the event structure is completely different.&lt;/p&gt;

&lt;p&gt;OpenAI gives you flat chunks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data: {"choices":[{"delta":{"content":"Hello "}}]}
data: {"choices":[{"delta":{"tool_calls":[{"index":0,"id":"call_123","function":{"name":"Read"}}]}}]}
data: [DONE]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code expects this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;event: message_start
event: content_block_start  (index 0, type "text")
event: content_block_delta  (text_delta: "Hello ")
event: content_block_stop
event: content_block_start  (index 1, type "tool_use", name "Read")
event: content_block_delta  (input_json_delta: partial JSON...)
event: content_block_stop
event: message_delta        (stop_reason)
event: message_stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The proxy maintains a state machine that tracks block indexes, active tool calls, and whether a text block has been started. Each OpenAI chunk gets translated into the corresponding Anthropic event and written to the response stream. The model name gets spoofed too. Claude Code validates model names internally, so the proxy reports &lt;code&gt;claude-sonnet-4-5-20250929&lt;/code&gt; regardless of what's actually answering.&lt;/p&gt;
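&lt;p&gt;Reduced to text blocks only, the state machine looks something like this (a sketch, not the repo's actual code; tool_use blocks and input_json_delta events are omitted):&lt;/p&gt;

```typescript
// Sketch: convert OpenAI chat chunks into Anthropic-style SSE events,
// tracking whether a text content block is currently open.
type Emit = (event: string, data: object) => void;

function makeStreamTranslator(emit: Emit) {
  let textOpen = false;
  const index = 0;
  emit("message_start", { type: "message_start" });
  return {
    onChunk(chunk: any) {
      const delta = chunk.choices?.[0]?.delta ?? {};
      if (typeof delta.content === "string") {
        if (!textOpen) {
          emit("content_block_start", { index, content_block: { type: "text", text: "" } });
          textOpen = true;
        }
        emit("content_block_delta", { index, delta: { type: "text_delta", text: delta.content } });
      }
      // delta.tool_calls would open a "tool_use" block at the next index here
    },
    onDone() {
      if (textOpen) emit("content_block_stop", { index });
      emit("message_delta", { delta: { stop_reason: "end_turn" } });
      emit("message_stop", { type: "message_stop" });
    },
  };
}
```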

&lt;h2&gt;
  
  
  The Debugging Gauntlet
&lt;/h2&gt;

&lt;p&gt;The architecture was clean. Reality was messier. Five bugs, each discovered sequentially because the previous one masked the next.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Query parameters.&lt;/strong&gt; Claude Code sends &lt;code&gt;POST /v1/messages?beta=true&lt;/code&gt;. My proxy matched on exact URL &lt;code&gt;"/v1/messages"&lt;/code&gt;. No match. Zero requests got through. Spent longer than I'd like to admit staring at an empty terminal before checking the actual URL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token counting.&lt;/strong&gt; Claude Code sends 10+ &lt;code&gt;POST /v1/messages/count_tokens&lt;/code&gt; requests on startup. The proxy returned 404 for all of them. Added a handler that returns estimated counts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;max_tokens overflow.&lt;/strong&gt; Claude Code requests &lt;code&gt;max_tokens: 32000&lt;/code&gt;. GPT-4o caps at 16384. OpenAI returned 400. Added a model-specific lookup table with clamping.&lt;/p&gt;
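&lt;p&gt;The clamp itself is a few lines (cap values here are examples for the models I was using, not an exhaustive table):&lt;/p&gt;

```typescript
// Sketch: clamp requested max_tokens to the target model's output limit.
const MAX_OUTPUT_TOKENS: { [model: string]: number } = {
  "gpt-4o": 16384,
  "gpt-4o-mini": 16384,
};

function clampMaxTokens(model: string, requested: number): number {
  const cap = MAX_OUTPUT_TOKENS[model] ?? 16384; // conservative default
  return Math.min(requested, cap);
}
```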

&lt;p&gt;&lt;strong&gt;Non-streaming warmup.&lt;/strong&gt; Claude Code sends a haiku warmup request with &lt;code&gt;stream: undefined&lt;/code&gt;. Not &lt;code&gt;false&lt;/code&gt;, not &lt;code&gt;true&lt;/code&gt;. The proxy always set &lt;code&gt;stream: true&lt;/code&gt; on the upstream call. The non-streaming response format is completely different from SSE. Had to detect and handle both paths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rate limits.&lt;/strong&gt; Two teammates running GPT-4o-mini simultaneously blew through the 200K TPM limit in seconds. Added retry logic with exponential backoff.&lt;/p&gt;

&lt;p&gt;After fixing all five:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ ANTHROPIC_BASE_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;http://localhost:3456 claude &lt;span class="nt"&gt;--print&lt;/span&gt; &lt;span class="s2"&gt;"what model are you?"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response: "I am Claude, an AI model developed by Anthropic..."&lt;/p&gt;

&lt;p&gt;GPT-4o, pretending to be Claude, running through the full pipeline. It even maintained the Claude persona from the system prompt. But ask it about DALL-E and the GPT personality leaks through.&lt;/p&gt;

&lt;p&gt;Then the real test: full agentic tool loops. A teammate spawned through the proxy successfully used Glob and Read tools across four round trips with 31 tool definitions. It searched files, read code, and reported back to the lead. GPT-4o-mini doing Claude Code's job at a fraction of the cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mixed Teams: Lead on Claude, Teammates on GPT
&lt;/h2&gt;

&lt;p&gt;The next challenge was routing. I wanted the lead on real Claude Opus (my subscription) and only the teammates going through the proxy. But all Claude Code processes have &lt;code&gt;ANTHROPIC_BASE_URL&lt;/code&gt; set, so they all hit the proxy.&lt;/p&gt;

&lt;p&gt;I tried three approaches:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Model name routing&lt;/strong&gt; didn't work because teammates sometimes request &lt;code&gt;claude-opus-4-6&lt;/code&gt; too.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tool count heuristic&lt;/strong&gt; worked briefly. The lead had 31 tools (Claude Code's 15+ plus my MCP tools), teammates had 23. Route on count &amp;gt;= 28. Then I realized that adding or removing one MCP tool breaks the whole thing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System prompt marker&lt;/strong&gt; was the winner. I added &lt;code&gt;&amp;lt;!-- hydra:lead --&amp;gt;&lt;/code&gt; as an HTML comment to my project's &lt;code&gt;CLAUDE.md&lt;/code&gt; file. Claude Code injects CLAUDE.md into the system prompt. The proxy checks the system prompt for the marker. Found means passthrough to real Anthropic. Not found means translate to GPT.&lt;/p&gt;

&lt;p&gt;Teammates don't get the CLAUDE.md from the main project. They get their own system prompt without the marker. Clean routing, zero false positives.&lt;/p&gt;

&lt;p&gt;For the passthrough, the proxy just relays the original auth headers from Claude Code to the real Anthropic API. No API key needed for the lead. You use your subscription as-is.&lt;/p&gt;
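&lt;p&gt;The whole routing decision reduces to a string search on the system prompt. A sketch (the function name is mine):&lt;/p&gt;

```typescript
// Sketch: route on the hydra:lead marker that CLAUDE.md injects into
// the lead's system prompt.
function isLeadRequest(body: any): boolean {
  // Anthropic's "system" can be a string or an array of text blocks.
  const system = Array.isArray(body.system)
    ? body.system.map((b: any) => b.text ?? "").join("\n")
    : body.system ?? "";
  return system.includes("hydra:lead");
}
// true:  pass through to api.anthropic.com with the original auth headers
// false: translate and forward to the configured GPT backend
```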

&lt;h2&gt;
  
  
  The Subscription Hack: Zero-Cost Teammates
&lt;/h2&gt;

&lt;p&gt;The proxy worked with OpenAI API keys. But API keys cost money. I already pay for ChatGPT Plus. Can I use that?&lt;/p&gt;

&lt;p&gt;Turns out, yes. OpenAI's Codex CLI stores an OAuth token in &lt;code&gt;~/.codex/auth.json&lt;/code&gt;. That token works with a different endpoint from the standard API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST https://chatgpt.com/backend-api/codex/responses
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This endpoint speaks the Responses API format, which is different from both Chat Completions and the Anthropic format. Auth is a Bearer token plus a &lt;code&gt;Chatgpt-Account-Id&lt;/code&gt; header extracted from the JWT.&lt;/p&gt;
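&lt;p&gt;Reading that claim needs no JWT library - the payload segment is just base64url-encoded JSON. A sketch (the claim's actual key name is OpenAI's internal detail, so it isn't hardcoded here):&lt;/p&gt;

```typescript
// Sketch: decode a JWT's payload segment to read a claim. No signature
// verification is needed just to look at the account id.
function decodeJwtPayload(token: string): any {
  const payload = token.split(".")[1];
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
// The upstream call then sends:
//   Authorization: Bearer (the access token)
//   Chatgpt-Account-Id: (the id pulled from the decoded claims)
```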

&lt;p&gt;I tested every model name I could think of and found 9+ working models on ChatGPT Plus at zero additional cost. The highlights:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5-codex&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.1-codex&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.2-codex&lt;/td&gt;
&lt;td&gt;Full&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.3-codex&lt;/td&gt;
&lt;td&gt;Full (latest)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5-codex-mini&lt;/td&gt;
&lt;td&gt;Mini&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;gpt-5.1-codex-mini&lt;/td&gt;
&lt;td&gt;Mini&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This meant building a second translation layer, though. The Responses API has its own request and response format. So I wrote another pair of translators:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Request: Anthropic messages become &lt;code&gt;input&lt;/code&gt; items with &lt;code&gt;function_call&lt;/code&gt; and &lt;code&gt;function_call_output&lt;/code&gt; types instead of &lt;code&gt;tool_calls&lt;/code&gt;. System prompt becomes &lt;code&gt;instructions&lt;/code&gt;. Must include &lt;code&gt;store: false&lt;/code&gt;. Cannot include &lt;code&gt;max_output_tokens&lt;/code&gt; or &lt;code&gt;temperature&lt;/code&gt; (the backend rejects both, learned that the hard way).&lt;/li&gt;
&lt;li&gt;Response: Different SSE events. &lt;code&gt;response.output_text.delta&lt;/code&gt; becomes &lt;code&gt;content_block_delta&lt;/code&gt;. &lt;code&gt;response.function_call_arguments.delta&lt;/code&gt; becomes &lt;code&gt;input_json_delta&lt;/code&gt;. And so on.&lt;/li&gt;
&lt;/ul&gt;
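&lt;p&gt;A sketch of the tool-related half of that request translator (field names as listed above; text blocks and &lt;code&gt;instructions&lt;/code&gt; handling omitted):&lt;/p&gt;

```typescript
// Sketch: map Anthropic tool blocks onto Responses API input items.
function toResponsesInput(messages: any[]): any[] {
  const items: any[] = [];
  for (const m of messages) {
    const blocks = Array.isArray(m.content) ? m.content : [];
    for (const block of blocks) {
      if (block.type === "tool_use") {
        items.push({
          type: "function_call",
          call_id: block.id,
          name: block.name,
          arguments: JSON.stringify(block.input),
        });
      } else if (block.type === "tool_result") {
        items.push({
          type: "function_call_output",
          call_id: block.tool_use_id,
          output: typeof block.content === "string" ? block.content : JSON.stringify(block.content),
        });
      }
    }
  }
  return items;
}
```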

&lt;p&gt;The proxy auto-reads &lt;code&gt;~/.codex/auth.json&lt;/code&gt;, decodes the JWT, extracts the account ID from a custom claim. No manual configuration. Just &lt;code&gt;codex --login&lt;/code&gt; once and the proxy handles the rest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;node dist/index.js &lt;span class="nt"&gt;--model&lt;/span&gt; gpt-5.3-codex &lt;span class="nt"&gt;--provider&lt;/span&gt; chatgpt &lt;span class="nt"&gt;--port&lt;/span&gt; 3456 &lt;span class="nt"&gt;--passthrough&lt;/span&gt; lead
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Claude Code teammates powered by GPT-5.3-codex through a ChatGPT Plus subscription. The lead runs on Claude Opus through my Claude subscription. Total additional API cost: $0.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Final Stack
&lt;/h2&gt;

&lt;p&gt;Nine TypeScript files. Zero runtime dependencies. Just Node.js builtins.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;src/
├── index.ts                    Entry point
├── proxy.ts                    HTTP server, 3-way routing
├── config.ts                   CLI args, codex JWT auth
└── translators/
    ├── types.ts                TypeScript interfaces
    ├── request.ts              Anthropic → Chat Completions
    ├── messages.ts             Message history translation
    ├── response.ts             Chat Completions SSE → Anthropic SSE
    ├── request-responses.ts    Anthropic → Responses API
    └── response-responses.ts   Responses API SSE → Anthropic SSE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Three routing paths:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Lead requests (hydra:lead marker found) pass through to real Anthropic&lt;/li&gt;
&lt;li&gt;Teammate requests with &lt;code&gt;--provider openai&lt;/code&gt; translate to Chat Completions&lt;/li&gt;
&lt;li&gt;Teammate requests with &lt;code&gt;--provider chatgpt&lt;/code&gt; translate to the Responses API&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;I originally designed a 2,000-line framework. What shipped was a translation proxy. Same result, fraction of the complexity. The best agent framework already existed. I just needed to make it talk to different backends.&lt;/p&gt;

&lt;p&gt;The translation layer itself is honestly not that interesting. Two APIs that do the same thing, structured differently. The interesting part is what it enables: heterogeneous teams where each agent runs on whatever model makes sense for its task. Your lead on Opus because it needs strong reasoning. Your file searcher on GPT-4o-mini because it just needs to grep and summarize. Your code reviewer on GPT-5.3-codex because it's free through your subscription.&lt;/p&gt;

&lt;p&gt;The real insight is that Claude Code Agent Teams is undervalued infrastructure. It's a complete multi-agent system with coordination, task management, messaging, plan approval, and graceful shutdown. Everyone's trying to build agent frameworks from scratch. The smart play is to extend the ones that already work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Pickle-Pixel/HydraTeams" rel="noopener noreferrer"&gt;HydraTeams on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. If you have a ChatGPT Plus subscription and want free agent teammates, this is your move.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opensource</category>
      <category>typescript</category>
      <category>tooling</category>
    </item>
    <item>
      <title>How I Built an MCP Server That Lets Claude Code Talk to Every LLM I Pay For</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Fri, 06 Feb 2026 03:53:01 +0000</pubDate>
      <link>https://dev.to/picklepixel/how-i-built-an-mcp-server-that-lets-claude-code-talk-to-every-llm-i-pay-for-126k</link>
      <guid>https://dev.to/picklepixel/how-i-built-an-mcp-server-that-lets-claude-code-talk-to-every-llm-i-pay-for-126k</guid>
      <description>&lt;p&gt;I have subscriptions to ChatGPT Plus, Claude MAX, and Gemini. I also run local models through Ollama. That's four different ecosystems, four browser tabs, and a lot of copy-pasting whenever I want to compare how different models handle the same question.&lt;/p&gt;

&lt;p&gt;It was getting ridiculous. I'd ask Claude something, then open ChatGPT to see if GPT-5 agreed, then check Gemini for a third opinion. Every time, I'd lose context, reformat the prompt, and waste five minutes on what should be a ten-second comparison.&lt;/p&gt;

&lt;p&gt;So I built HydraMCP. It's an MCP server that routes queries from Claude Code to any model I have access to, cloud or local, through a single interface. One prompt, multiple models, parallel execution.&lt;/p&gt;

&lt;h2&gt;
  
  
  What It Actually Does
&lt;/h2&gt;

&lt;p&gt;HydraMCP exposes five tools to Claude Code:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;list_models&lt;/strong&gt; shows everything available across all your providers. One command, full inventory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ask_model&lt;/strong&gt; queries any single model. Want GPT-5's take on something without leaving your terminal? Just ask.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;compare_models&lt;/strong&gt; is the one I use the most. Same prompt to 2-5 models in parallel, results side by side. Here's what that looks like in practice:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; compare gpt-5-codex, gemini-3, claude-sonnet, and local qwen on this function review

## Model Comparison (4 models, 11637ms total)

| Model                      | Latency         | Tokens |
|----------------------------|-----------------|--------|
| gpt-5-codex                | 1630ms fastest  | 194    |
| gemini-3-pro-preview       | 11636ms         | 1235   |
| claude-sonnet-4-5-20250929 | 3010ms          | 202    |
| ollama/qwen2.5-coder:14b   | 8407ms          | 187    |
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;All four independently found the same async bug. Then each caught something the others missed. GPT-5 was fastest, Gemini was most thorough, Claude gave the clearest fix, Qwen explained the root cause. Different training data, different strengths.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;consensus&lt;/strong&gt; polls 3-7 models on a question and has a separate judge model evaluate whether they actually agree. It returns a confidence score and groups responses by agreement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;synthesize&lt;/strong&gt; fans out to multiple models, collects their responses, and then a synthesizer model combines the best insights into one answer. The result is usually better than any individual response.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architecture
&lt;/h2&gt;

&lt;p&gt;The design is pretty straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude Code
    |
    HydraMCP (MCP Server)
    |
    Provider Interface
    |-- CLIProxyAPI  -&amp;gt; cloud models (GPT, Gemini, Claude, etc.)
    |-- Ollama       -&amp;gt; local models (your hardware)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;HydraMCP sits between Claude Code and your model providers. It communicates over stdio using JSON-RPC (the MCP protocol), routes requests to the right backend, and formats everything to keep your context window manageable.&lt;/p&gt;

&lt;p&gt;The provider interface is the core abstraction. Every backend implements three methods: &lt;code&gt;healthCheck()&lt;/code&gt;, &lt;code&gt;listModels()&lt;/code&gt;, and &lt;code&gt;query()&lt;/code&gt;. That's it. Adding a new provider means implementing those three functions and registering it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kr"&gt;interface&lt;/span&gt; &lt;span class="nx"&gt;Provider&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;healthCheck&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;boolean&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;listModels&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ModelInfo&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="nf"&gt;query&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;options&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;QueryOptions&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;QueryResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For cloud models, &lt;a href="https://github.com/router-for-me/CLIProxyAPI" rel="noopener noreferrer"&gt;CLIProxyAPI&lt;/a&gt; turns your existing subscriptions into a local OpenAI-compatible API. You authenticate once per provider through a browser login, and it handles the rest. No per-token billing - you're using the subscriptions you already pay for.&lt;/p&gt;

&lt;p&gt;For local models, Ollama runs on localhost and provides models like Qwen, Llama, and Mistral. Zero API keys, zero cost beyond your electricity bill.&lt;/p&gt;
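&lt;p&gt;To make the contract concrete, here's roughly what the Ollama provider boils down to (endpoints from Ollama's public REST API; types loosened and error handling trimmed):&lt;/p&gt;

```typescript
// Sketch: the three-method provider contract against Ollama's REST API.
const OLLAMA_URL = "http://localhost:11434";

const ollamaProvider = {
  name: "ollama",
  async healthCheck() {
    const res = await fetch(OLLAMA_URL + "/api/tags");
    return res.ok;
  },
  async listModels() {
    const res = await fetch(OLLAMA_URL + "/api/tags");
    const data: any = await res.json();
    return data.models.map((m: any) => ({ id: "ollama/" + m.name }));
  },
  async query(model: string, prompt: string) {
    const res = await fetch(OLLAMA_URL + "/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const data: any = await res.json();
    return { text: data.response };
  },
};
```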

&lt;h2&gt;
  
  
  The Interesting Parts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Parallel Execution
&lt;/h3&gt;

&lt;p&gt;When you compare four models, all four queries fire simultaneously using &lt;code&gt;Promise.allSettled()&lt;/code&gt;. Total time equals the slowest model, not the sum of all of them. That four-model comparison above? 11.6 seconds total, not the roughly 25 the individual latencies add up to.&lt;/p&gt;

&lt;p&gt;And if one model fails, you still get results from the others. Graceful degradation instead of all-or-nothing.&lt;/p&gt;
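&lt;p&gt;The fan-out is about as simple as it sounds. A sketch (the real tool also records per-model latency and token counts; &lt;code&gt;queryModel&lt;/code&gt; stands in for the provider routing layer):&lt;/p&gt;

```typescript
// Sketch: fire all queries at once, keep whatever succeeds.
async function compareModels(
  queryModel: (model: string, prompt: string) => any,
  models: string[],
  prompt: string,
) {
  const settled = await Promise.allSettled(models.map((m) => queryModel(m, prompt)));
  return settled.map((result, i) => ({
    model: models[i],
    ok: result.status === "fulfilled",
    output: result.status === "fulfilled" ? result.value : String(result.reason),
  }));
}
```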

&lt;h3&gt;
  
  
  Consensus With an LLM Judge
&lt;/h3&gt;

&lt;p&gt;This is the part I'm most interested in. Naive keyword matching fails at determining if models agree. If one says "start with a monolith" and another says "monolith because it's simpler," they agree - but keyword overlap is low.&lt;/p&gt;

&lt;p&gt;So the consensus tool picks a model that's &lt;em&gt;not&lt;/em&gt; in the poll and asks it to evaluate agreement. The judge reads all responses and groups them semantically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Three cloud models polled, local Qwen judging.
Strategy: majority (needed 2/3)
Agreement: 3/3 models (100%)
Judge latency: 686ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using a local model as judge means zero cloud quota used for the evaluation step.&lt;/p&gt;

&lt;p&gt;Honestly, the keyword-based fallback (for when no judge is available) is pretty broken. It works for factual questions sometimes but falls apart on anything subjective. The LLM judge approach is significantly better, but it's still an area I want to improve.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ollama Warmup
&lt;/h3&gt;

&lt;p&gt;One thing I noticed during testing: local models through Ollama have a significant cold-start penalty.&lt;/p&gt;

&lt;p&gt;First request to Qwen 32B: 24 seconds (loading the model into memory). By the fourth request: 3 seconds. That's an 8x improvement just from the model being warm. After that warmup period, local models genuinely compete with cloud on latency.&lt;/p&gt;

&lt;p&gt;If you're using HydraMCP regularly, your local models stay warm and the experience is seamless. The first query of the day might be slow, but everything after that is fast.&lt;/p&gt;

&lt;h3&gt;
  
  
  Synthesis
&lt;/h3&gt;

&lt;p&gt;The synthesize tool is probably the most ambitious feature. It collects responses from multiple models, then feeds them all to a synthesizer model with instructions to combine the best insights and drop the filler.&lt;/p&gt;

&lt;p&gt;The synthesizer is deliberately picked from a model &lt;em&gt;not&lt;/em&gt; in the source list when possible. The prompt is straightforward: "Here are responses from four models. Write one definitive answer. Take the best from each."&lt;/p&gt;

&lt;p&gt;In practice, the synthesized result usually has better structure than any individual response and catches details that at least one model missed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Stack
&lt;/h2&gt;

&lt;p&gt;It's about 1,500 lines of TypeScript. Dependencies are minimal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;@modelcontextprotocol/sdk&lt;/code&gt; for the MCP protocol&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;zod&lt;/code&gt; for input validation&lt;/li&gt;
&lt;li&gt;Node 18+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That's it. No Express, no database, no build framework beyond TypeScript's compiler. Every tool input is validated with Zod schemas, and all logging goes to stderr (stdout is reserved for the JSON-RPC protocol - send anything else there and you break MCP).&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;The whole thing takes about five minutes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set up CLIProxyAPI and/or Ollama as backends&lt;/li&gt;
&lt;li&gt;Clone, install, build HydraMCP&lt;/li&gt;
&lt;li&gt;Add your backend URLs to &lt;code&gt;.env&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Register with Claude Code: &lt;code&gt;claude mcp add hydramcp -s user -- node /path/to/dist/index.js&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Restart Claude Code, say "list models"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;From there you just talk naturally. "Ask GPT-5 to review this." "Compare three models on this approach." "Get consensus on whether this is thread-safe." Claude Code routes it through HydraMCP automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Like to Add
&lt;/h2&gt;

&lt;p&gt;The provider interface makes this extensible by design. The backends I want to see next:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LM Studio&lt;/strong&gt; for another local model option&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenRouter&lt;/strong&gt; for pay-per-token access to models you don't subscribe to&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Direct API keys&lt;/strong&gt; for OpenAI, Anthropic, and Google without needing CLIProxyAPI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each one is roughly 100 lines of TypeScript. Implement the three interface methods, register it, done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;The real value isn't any single feature. It's the workflow change. Instead of trusting one model's opinion, you can cheaply verify it against others. Instead of wondering if GPT or Claude is better for a specific task, you can just compare them and see.&lt;/p&gt;

&lt;p&gt;Different models have genuinely different strengths. I've seen GPT-5 catch performance issues that Claude missed, and Claude suggest architectural patterns that GPT didn't consider. Gemini sometimes gives the most thorough analysis. Local Qwen is surprisingly good at explaining &lt;em&gt;why&lt;/em&gt; something is wrong, not just &lt;em&gt;what&lt;/em&gt; is wrong.&lt;/p&gt;

&lt;p&gt;Having all of them available from one terminal, with parallel execution and structured comparison, changes how you think about using AI for code. It goes from "ask my preferred model" to "ask the right model for this task" - or just ask all of them and see what shakes out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/Pickle-Pixel/HydraMCP" rel="noopener noreferrer"&gt;HydraMCP on GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;MIT licensed. If you have subscriptions collecting dust or local models sitting idle, this puts them to work. And if you want to add a provider, the interface is documented and the examples are there.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>opensource</category>
      <category>tooling</category>
    </item>
    <item>
      <title>How I Made Netflix Give Me 4K (Because Apparently My Browser Wasn't Good Enough)</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Wed, 28 Jan 2026 05:24:38 +0000</pubDate>
      <link>https://dev.to/picklepixel/how-i-made-netflix-give-me-4k-because-apparently-my-browser-wasnt-good-enough-4fa2</link>
      <guid>https://dev.to/picklepixel/how-i-made-netflix-give-me-4k-because-apparently-my-browser-wasnt-good-enough-4fa2</guid>
      <description>&lt;p&gt;I pay for Netflix Premium. The one that's supposed to include 4K streaming. But every time I tried to watch something on my PC, it would cap at 1080p. Sometimes even 720p. On a 4K monitor. With gigabit internet.&lt;/p&gt;

&lt;p&gt;This had been bugging me for months. I'd sit down to watch something, notice the quality looked soft, check the stats overlay, and sure enough: 1080p. Every single time. I figured it was a bandwidth thing at first, maybe Netflix's servers were busy. But it kept happening.&lt;/p&gt;

&lt;p&gt;So I finally decided to figure out what was actually going on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Discovery
&lt;/h2&gt;

&lt;p&gt;Turns out Netflix has a list of "approved" devices and browsers for 4K playback. If you're not on Microsoft Edge, Safari, or their native app, you simply don't get 4K. Doesn't matter that you're paying for it. Doesn't matter that your hardware supports it.&lt;/p&gt;

&lt;p&gt;Chrome? No 4K. Firefox? No 4K. Brave? Nope.&lt;/p&gt;

&lt;p&gt;The reasoning has to do with DRM. Netflix requires hardware-backed content protection (PlayReady SL3000 on Windows, Widevine L1 on other platforms) plus HDCP 2.2 to serve 4K streams. Edge on Windows has this. Chrome doesn't - it only has Widevine L3, which is software-based and considered less secure by content providers.&lt;/p&gt;

&lt;p&gt;I get the security argument from Netflix's perspective. But from my perspective, I'm paying for a service tier I can't fully use because of my browser choice. That felt worth fixing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Down the Rabbit Hole
&lt;/h2&gt;

&lt;p&gt;I spent an evening just researching how Netflix determines device capabilities. Opened DevTools, watched the network requests, read through forums and old GitHub issues. The picture that emerged was interesting.&lt;/p&gt;

&lt;p&gt;Before Netflix serves you any video, it runs a bunch of capability checks:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Browser fingerprinting:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;User agent string (what browser/OS you're running)&lt;/li&gt;
&lt;li&gt;Screen resolution via &lt;code&gt;window.screen&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Device pixel ratio&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Codec support:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HEVC (H.265) for 4K&lt;/li&gt;
&lt;li&gt;VP9 as an alternative&lt;/li&gt;
&lt;li&gt;AV1 on newer content&lt;/li&gt;
&lt;li&gt;Dolby Vision for HDR&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;DRM capabilities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Widevine security level (L1, L2, or L3)&lt;/li&gt;
&lt;li&gt;PlayReady support on Windows&lt;/li&gt;
&lt;li&gt;HDCP version compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Media APIs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;navigator.mediaCapabilities.decodingInfo()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;MediaSource.isTypeSupported()&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;navigator.requestMediaKeySystemAccess()&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these checks happen in JavaScript before the video manifest is even requested. Netflix builds a profile of what your device can handle, then serves you the appropriate stream quality.&lt;/p&gt;

&lt;p&gt;The key insight: most of these checks are JavaScript APIs that can be intercepted and spoofed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day One: Basic Spoofing
&lt;/h2&gt;

&lt;p&gt;I started with the obvious stuff. Created a basic Chrome extension and began overriding the simple checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Spoof screen resolution to 4K&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;width&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3840&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;height&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2160&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;availWidth&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;3840&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;availHeight&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;2160&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;screen&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;colorDepth&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;48&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;devicePixelRatio&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the user agent. Netflix needs to think we're running Edge:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;userAgent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;get&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Loaded it up, went to Netflix, played a video. Still 1080p. The basic spoofs weren't enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day Two: Media Capabilities
&lt;/h2&gt;

&lt;p&gt;The next layer was the Media Capabilities API. Netflix uses this to ask the browser "can you smoothly decode this codec at this resolution?"&lt;/p&gt;
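&lt;p&gt;A typical query looks something like this (the HEVC codec string here is just an illustrative example; the exact configurations Netflix probes will differ):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;const info = await navigator.mediaCapabilities.decodingInfo({
  type: 'media-source',
  video: {
    contentType: 'video/mp4; codecs="hev1.2.4.L153.B0"',
    width: 3840,
    height: 2160,
    bitrate: 16000000,
    framerate: 24
  }
});
// info carries three booleans: supported, smooth, powerEfficient
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;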

&lt;p&gt;I intercepted the &lt;code&gt;decodingInfo&lt;/code&gt; method and forced it to return positive results for 4K codecs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originalDecodingInfo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaCapabilities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;decodingInfo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaCapabilities&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;mediaCapabilities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;decodingInfo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dominated4KCodecs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hev1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hvc1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vp09&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vp9&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;av01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dvhe&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dvh1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;codec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;contentType&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="dl"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dominated&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;dominated4KCodecs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;codec&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;c&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dominated&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;video&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;width&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;3840&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;supported&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;smooth&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;powerEfficient&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;originalDecodingInfo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same idea for &lt;code&gt;MediaSource.isTypeSupported()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originalIsTypeSupported&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;MediaSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isTypeSupported&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;MediaSource&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="nx"&gt;MediaSource&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;isTypeSupported&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;dominated4KTypes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hev1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hvc1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dvh1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;dvhe&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vp09&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;vp9&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;av01&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;dominated4KTypes&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;some&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;t&lt;/span&gt;&lt;span class="p"&gt;)))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;originalIsTypeSupported&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mimeType&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tested again. Still 1080p. Netflix was checking something else.&lt;/p&gt;

&lt;h2&gt;
  
  
  Day Three: DRM and HDCP
&lt;/h2&gt;

&lt;p&gt;This is where it got interesting. Netflix checks DRM capabilities through &lt;code&gt;navigator.requestMediaKeySystemAccess()&lt;/code&gt;. This API negotiates which DRM system to use (Widevine, PlayReady) and what security level.&lt;/p&gt;

&lt;p&gt;The security level is specified through a "robustness" string. For 4K, Netflix wants &lt;code&gt;HW_SECURE_ALL&lt;/code&gt; (hardware-backed decryption and decoding, i.e. Widevine L1). Desktop Chrome's software CDM can only provide &lt;code&gt;SW_SECURE_DECODE&lt;/code&gt; (Widevine L3).&lt;/p&gt;
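&lt;p&gt;For context, the EME spec leaves robustness strings up to the key system; Widevine's run from weakest to strongest like this (these are the standard Widevine values, not anything Netflix-specific):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;// Widevine robustness levels, weakest to strongest.
// The SW_ tiers are what a software CDM (Widevine L3) can honor;
// the HW_ tiers require a hardware trusted execution environment (L1).
const WIDEVINE_ROBUSTNESS = [
  'SW_SECURE_CRYPTO',
  'SW_SECURE_DECODE',
  'HW_SECURE_CRYPTO',
  'HW_SECURE_DECODE',
  'HW_SECURE_ALL'
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;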

&lt;p&gt;I tried intercepting this and requesting the higher robustness level:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;requestMediaKeySystemAccess&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;keySystem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;configs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enhancedConfigs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;configs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;enhanced&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;parse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;config&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;enhanced&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoCapabilities&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;enhanced&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoCapabilities&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;enhanced&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;videoCapabilities&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;map&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;vc&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="p"&gt;...&lt;/span&gt;&lt;span class="nx"&gt;vc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;robustness&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;HW_SECURE_ALL&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;}));&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;enhanced&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

  &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;originalRequestMediaKeySystemAccess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;keySystem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;enhancedConfigs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Fall back to original if enhanced fails&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;originalRequestMediaKeySystemAccess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;keySystem&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;configs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The enhanced request fails (because Chrome genuinely doesn't have L1), but the fallback keeps playback working. The interesting part is that this interception, combined with all the other spoofs, is what finally started to have an effect.&lt;/p&gt;
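&lt;p&gt;You can check what your browser will actually grant from the DevTools console by requesting each tier directly (my own probe sketch, using a minimal Widevine config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;async function probeRobustness(level) {
  try {
    await navigator.requestMediaKeySystemAccess('com.widevine.alpha', [{
      initDataTypes: ['cenc'],
      audioCapabilities: [{ contentType: 'audio/mp4; codecs="mp4a.40.2"' }],
      videoCapabilities: [{
        contentType: 'video/mp4; codecs="avc1.42E01E"',
        robustness: level
      }]
    }]);
    return true;   // the CDM will honor this robustness level
  } catch (e) {
    return false;  // request rejected at this level
  }
}
// Desktop Chrome typically grants the SW_ tiers and rejects the HW_ ones.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;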

&lt;p&gt;I also spoofed the HDCP policy check:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;navigator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hdcpPolicyCheck&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;Promise&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;resolve&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;hdcp&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hdcp-2.2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Day Four: Cadmium
&lt;/h2&gt;

&lt;p&gt;Netflix's video player is called Cadmium. It's their internal player that handles everything from manifest fetching to adaptive bitrate switching. Cadmium has its own configuration that sets maximum resolution and bitrate caps.&lt;/p&gt;

&lt;p&gt;Finding how to hook into it took some time. I ended up polling for the &lt;code&gt;window.netflix.player&lt;/code&gt; object and overriding its methods:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;hookCadmium&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;netflix&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;netflix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;player&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;window&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;netflix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;create&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;configure&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;getConfiguration&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;getConfig&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;function&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;original&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;bind&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nx"&gt;player&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;method&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;original&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

          &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;object&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxBitrate&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxBitrate&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxVideoHeight&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxVideoHeight&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2160&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxVideoWidth&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="kc"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;maxVideoWidth&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3840&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;

          &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also intercepted &lt;code&gt;Object.defineProperty&lt;/code&gt; itself to catch any resolution or bitrate caps being set anywhere in Netflix's code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originalDefineProperty&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;defineProperty&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nb"&gt;Object&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;defineProperty&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prop&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;prop&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;string&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;lowerProp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;prop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;toLowerCase&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lowerProp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;maxbitrate&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;16000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;16000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;lowerProp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;includes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;maxheight&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;2160&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2160&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;originalDefineProperty&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;call&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;obj&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;prop&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;descriptor&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This felt hacky, but it was catching config values that I couldn't find any other way.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Breakthrough
&lt;/h2&gt;

&lt;p&gt;After all these layers of spoofing, I loaded Netflix, played a 4K title (Our Planet - nature docs are great for testing because the quality difference is obvious), and pressed &lt;code&gt;Ctrl+Shift+Alt+D&lt;/code&gt; to bring up the stats overlay.&lt;/p&gt;

&lt;p&gt;3840x2160. 15000+ kbps bitrate.&lt;/p&gt;

&lt;p&gt;It actually worked.&lt;/p&gt;

&lt;p&gt;The quality difference was immediately visible. All that fine detail in the nature footage that looked muddy at 1080p was now crisp. This is what I was paying for.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SPA Problem
&lt;/h2&gt;

&lt;p&gt;Feeling good about it, I browsed around Netflix, clicked on another movie, and... 1080p again.&lt;/p&gt;

&lt;p&gt;Netflix is a single-page application. When you click on a title, it doesn't do a full page reload - it just updates the URL and swaps content via JavaScript. My extension was injecting at page load, but the Cadmium player was being recreated for each new video without triggering a reload.&lt;/p&gt;

&lt;p&gt;I added navigation detection that watches for URL changes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;let&lt;/span&gt; &lt;span class="nx"&gt;lastPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;checkNavigation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="nx"&gt;lastPath&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;isWatch&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startsWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/watch&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nx"&gt;lastPath&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pathname&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;isWatch&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Reset and re-hook&lt;/span&gt;
      &lt;span class="nx"&gt;cadmiumHooked&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
      &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;hookCadmium&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="nf"&gt;setInterval&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;checkNavigation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Also intercept history API&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;originalPushState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pushState&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;history&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pushState&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt;&lt;span class="p"&gt;(...&lt;/span&gt;&lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;originalPushState&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;apply&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;args&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;checkNavigation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helped, but honestly it's still not perfect. Sometimes the timing is off and the hooks don't catch the new player instance; I've been trying to nail down the exact race condition but haven't fully solved it yet. It works maybe 80% of the time on navigation, and a page refresh always fixes it when it doesn't.&lt;/p&gt;

&lt;p&gt;Good enough for now. I'll revisit it when it annoys me enough.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Learned
&lt;/h2&gt;

&lt;p&gt;The whole thing took about four days of evening tinkering. Most of that was research and understanding how Netflix's capability detection works. The actual code isn't that complicated once you know what to intercept.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The layers matter.&lt;/strong&gt; Netflix doesn't rely on any single check. You have to spoof the user agent AND the screen resolution AND the media capabilities AND the DRM negotiation AND the player config. Miss one layer and you're back to 1080p.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware DRM is the real barrier.&lt;/strong&gt; All my JavaScript spoofing can't change the fact that Chrome has Widevine L3 and not L1. I'm essentially tricking Netflix into &lt;em&gt;trying&lt;/em&gt; to serve 4K, and it works because the actual decryption still happens (just at a lower security level than Netflix prefers). This might not work for all content, and Netflix could theoretically detect the mismatch and block it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Edge is the sweet spot.&lt;/strong&gt; If you use this extension on Microsoft Edge (which does have Widevine L1), you get the best of both worlds - real hardware DRM plus all the spoofed capability checks. That's probably the most reliable setup.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Repo
&lt;/h2&gt;

&lt;p&gt;I published the extension if anyone else wants it: &lt;a href="https://github.com/Pickle-Pixel/netflix-force-4k" rel="noopener noreferrer"&gt;netflix-force-4k&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fair warning: Netflix could patch this whenever they want, and the SPA navigation still needs a refresh sometimes. But it works well enough that I'm actually getting 4K on most of what I watch now.&lt;/p&gt;

&lt;p&gt;The fact that I had to reverse-engineer a streaming service to access the quality tier I'm paying for is a little absurd. But that's the state of DRM in 2025. You pay for the content, but whether you can actually watch it in full quality depends on which browser you prefer.&lt;/p&gt;

&lt;p&gt;Anyway, that's the journey. Four days of API hooking to watch movies in higher resolution. Worth it.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>webdev</category>
      <category>programming</category>
      <category>opensource</category>
    </item>
    <item>
      <title>How I Reverse-Engineered Perplexity’s Referral Signup</title>
      <dc:creator>PicklePixel</dc:creator>
      <pubDate>Wed, 26 Nov 2025 06:27:18 +0000</pubDate>
      <link>https://dev.to/picklepixel/how-i-reverse-engineered-perplexitys-referral-signup-4f04</link>
      <guid>https://dev.to/picklepixel/how-i-reverse-engineered-perplexitys-referral-signup-4f04</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;I wasn't trying to change the world; I just didn't want to spend 30 minutes per signup.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  The Setup
&lt;/h1&gt;

&lt;p&gt;I was a campus partner for Perplexity, which means every signup put $15 in my pocket. As a student, this sounded great: all I had to do was pitch students a free AI tool that would help them study.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Problem
&lt;/h1&gt;

&lt;p&gt;Well, it sounds great until you realize you have to earn their trust, explain why it’s a good deal, show them how to use it, and hopefully convince them to let you sign them up.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You need their student email, and most students don’t really understand how referrals work, so they’re hesitant to share it.&lt;/li&gt;
&lt;li&gt;Then you have to download the Comet browser by Perplexity, and let’s be honest, that looks a little sketchy to most people.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If everything goes perfectly, you’ve just spent about 30 minutes signing up one person.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s not difficult work, but it’s slow work and that’s what made me want to find a better way.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  The Real Motivation
&lt;/h1&gt;

&lt;p&gt;I wanted students to have free access to the latest and best AI tools out there.&lt;br&gt;
If you actually believed that, &lt;strong&gt;stop&lt;/strong&gt; reading.&lt;/p&gt;

&lt;p&gt;I just wanted more money in my pocket and less time wasted. So I asked myself what really happens behind the scenes and how do referrals work?&lt;/p&gt;
&lt;h1&gt;
  
  
  Reverse Engineering the Process
&lt;/h1&gt;

&lt;p&gt;First thing I did was peek at the network activity to see how the signup flow behaved. I booted up &lt;a href="https://portswigger.net/burp" rel="noopener noreferrer"&gt;Burp Suite&lt;/a&gt;, grabbed an ice cold Coke, and started taking notes.&lt;/p&gt;

&lt;p&gt;Here is the high-level checklist you need to cover for the referral to register:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Click referral link&lt;/li&gt;
&lt;li&gt;Sign in with student email&lt;/li&gt;
&lt;li&gt;Download Comet&lt;/li&gt;
&lt;li&gt;Prompt Comet once&lt;/li&gt;
&lt;/ol&gt;
&lt;h6&gt;
  
  
  Step 1: Referral Link Observation
&lt;/h6&gt;

&lt;p&gt;I started by trying to understand what the referral link was actually doing. Nothing complicated. I clicked the link with Burp Suite running and scrolled through all the HTTPS requests it captured.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8ukm1jlv6mhcwgjwwil.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8ukm1jlv6mhcwgjwwil.png" alt=" " width="800" height="35"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What I was looking for was basically the “referral fingerprint.” Something that only appears when you come in through a partner link.&lt;/p&gt;

&lt;p&gt;After checking a bunch of requests, one thing stood out: the &lt;code&gt;dub-id&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;When I opened the request up, I noticed something interesting. The &lt;code&gt;dub-id&lt;/code&gt; and the &lt;code&gt;click-id&lt;/code&gt; were the exact same value. Different labels, same number. That immediately told me this was important.&lt;/p&gt;

&lt;p&gt;This is clearly the ID that connects the user to the partner who referred them.&lt;br&gt;
So I saved it and kept tracking where it showed up next.&lt;/p&gt;

&lt;p&gt;That was all I needed for Step 1.&lt;br&gt;
Find the ID that marks the beginning of the referral flow and hold onto it.&lt;/p&gt;
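The "different labels, same number" observation is easy to mechanize when you're scrolling through dozens of captured requests. Here's a small sketch of that idea; the helper name and sample data are my own illustration, not Perplexity's actual parameters (only the `dub-id`/`click-id` names come from the article):

```python
# Sketch: given one captured request's parameters, find values that appear
# under more than one key -- the "same number, different labels" pattern
# that exposed dub-id / click-id as the referral fingerprint.
from collections import defaultdict

def find_shared_values(params: dict) -> dict:
    """Map each value to the list of keys carrying it, keeping only duplicates."""
    seen = defaultdict(list)
    for key, value in params.items():
        seen[value].append(key)
    return {v: keys for v, keys in seen.items() if len(keys) > 1}

captured = {
    "dub-id": "a1b2c3d4",    # value seen in the referral request (made up here)
    "click-id": "a1b2c3d4",  # same value, different label
    "locale": "en-US",
}
print(find_shared_values(captured))  # {'a1b2c3d4': ['dub-id', 'click-id']}
```

Running something like this over each captured request makes the important ID pop out immediately instead of eyeballing every header.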
&lt;h6&gt;
  
  
  Step 2: Sign In with Student Email
&lt;/h6&gt;

&lt;p&gt;Next I wanted to understand what actually happens when you sign in with a student email. The first thing that showed up in Burp Suite was a &lt;a href="https://owasp.org/www-community/attacks/csrf" rel="noopener noreferrer"&gt;CSRF&lt;/a&gt; token the moment the sign-in page loaded. Since it appeared before anything else, I wrote it down as something the backend clearly expects.&lt;/p&gt;

&lt;p&gt;After that, the &lt;a href="https://en.wikipedia.org/wiki/One-time_password" rel="noopener noreferrer"&gt;OTP&lt;/a&gt; flow kicked in. This is the part that verifies the email and marks the user as authenticated. Watching the network requests before and after the OTP made it pretty obvious what changed and what the system used to confirm the login.&lt;/p&gt;

&lt;p&gt;Then I tried sending the sign-in request through my script:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj73kio711j19iqmzot2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj73kio711j19iqmzot2v.png" alt=" " width="800" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the email and OTP were confirmed, the session switched into a logged-in state. At that point I basically had everything I needed from the authentication part in order to continue understanding the referral flow.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;emailSigninRequest = session.post(
    "https://www.perplexity.ai/xxx/xxx/xxx/xxx",
    data={
        "email": userMail,
        "callbackUrl": "https://www.perplexity.ai/xxxxxxxxxx",
        "redirect": "false",
        "useNumericOtp": "true",
        "csrfToken": session.cookies.get("next-auth.csrf-token").split("%7C", 1)[0],
        "json": "true",
    }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we're successfully logged in with a student email and have all the information we need to simulate the referral attributes.&lt;/p&gt;
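One detail in the sign-in request worth unpacking is the `csrfToken` line. next-auth (which Perplexity's login flow appears to use, judging by the cookie name) stores its CSRF cookie as `token|hash`, URL-encoded, so the `|` becomes `%7C`; the request body only wants the token half. The sample value below is made up:

```python
# Why the csrfToken line splits on "%7C": the next-auth.csrf-token cookie
# is "token|hash", URL-encoded, and the backend expects only the token part.
from urllib.parse import unquote

raw_cookie = "3f9c0aa1%7C8d77beef"        # hypothetical cookie value
token = raw_cookie.split("%7C", 1)[0]     # part before the encoded pipe

# Splitting the raw value on "%7C" is equivalent to decoding then splitting on "|".
assert token == unquote(raw_cookie).split("|", 1)[0]
print(token)  # 3f9c0aa1
```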

&lt;h6&gt;
  
  
  Step 3: Download Comet
&lt;/h6&gt;

&lt;p&gt;Now I needed to understand what happens when you download Comet. After signing in and completing the OTP step, the system redirects you to a link that triggers the Comet installer. That redirect is basically the signal the backend uses to register that the user downloaded the browser.&lt;/p&gt;

&lt;p&gt;So in my script, I just followed that redirect. Even without actually installing anything, hitting that link counted as the download event on the backend.&lt;/p&gt;
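The script's check for this step boils down to "did we follow at least one redirect hop and land on a 200?" Here's that logic as a pure function, detached from the network; the function name and status chains are illustrative (the real endpoint came out of Burp and isn't shown here):

```python
# Sketch of the Step 3 check: the script follows the installer redirect
# and treats a terminal 200 after one or more 3xx hops as "download
# registered" -- without ever saving or running the installer.

def download_registered(status_chain: list) -> bool:
    """True if the chain saw at least one redirect and ended on a 200."""
    if not status_chain:
        return False
    *hops, final = status_chain
    return final == 200 and any(300 <= s < 400 for s in hops)

print(download_registered([302, 302, 200]))  # True
print(download_registered([200]))            # False: no redirect hop
```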

&lt;h6&gt;
  
  
  Step 4: Prompt Comet
&lt;/h6&gt;

&lt;p&gt;This part was honestly the most annoying. Comet is a full desktop app, not a website, so I couldn’t just open DevTools and see what was going on. I had to proxy my whole system through Burp Suite just to catch its traffic.&lt;/p&gt;

&lt;p&gt;Once I did that, I finally saw what Comet sends when you open it and when you ask it something. That was the missing piece. The system expects the user to actually do one prompt after downloading, so I needed a way to trigger that same kind of activity.&lt;/p&gt;

&lt;p&gt;The logic I used was pretty straightforward. Comet always shows a bunch of suggested questions when it loads, so I took that idea and just picked one suggestion at random. I also made a bunch of simple question patterns like “what is {query}” or “explain {query}” or “tell me about {query}.” Then I let the script grab a suggestion, grab a pattern, combine them, and send it.&lt;/p&gt;

&lt;p&gt;Nothing smart, nothing magical. It just ends up looking like a new user asking the AI something completely normal. That one prompt is what satisfies the last part of the referral flow.&lt;/p&gt;
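The generator described above is a few lines of Python. The pattern and suggestion lists here are illustrative stand-ins, not the actual ones scraped from Comet:

```python
# Sketch of the Step 4 prompt generator: pick a suggested topic and a
# question template at random and combine them, so the one required
# prompt reads like a normal new-user question.
import random

PATTERNS = ["what is {query}", "explain {query}", "tell me about {query}"]
SUGGESTIONS = ["black holes", "the French Revolution", "photosynthesis"]

def make_prompt(rng: random.Random) -> str:
    pattern = rng.choice(PATTERNS)
    topic = rng.choice(SUGGESTIONS)
    return pattern.format(query=topic)

print(make_prompt(random.Random(0)))
```

Seeding the `Random` instance is just for reproducibility while testing; the real script would use an unseeded one so each "user" asks something different.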

&lt;p&gt;That was it. Download done, prompt done, referral counted. Whole thing took around ten seconds.&lt;/p&gt;

&lt;h6&gt;
  
  
  It Wasn't Enough for Me
&lt;/h6&gt;

&lt;p&gt;After using the script for a day or two, I realized I really hated carrying my laptop everywhere. It felt slow, heavy, and honestly just annoying to pull out every time someone said “yeah sure, sign me up.”&lt;/p&gt;

&lt;p&gt;So I did the next obvious thing.&lt;/p&gt;

&lt;p&gt;I installed &lt;a href="https://termux.dev" rel="noopener noreferrer"&gt;Termux&lt;/a&gt; on my Samsung S24, set up a &lt;a href="https://proot-me.github.io" rel="noopener noreferrer"&gt;PRoot&lt;/a&gt; Ubuntu 22.04 environment with everything I needed, and moved the whole workflow to my phone. That was it. Now I could walk around campus with just my phone and run everything on the spot.&lt;/p&gt;

&lt;p&gt;By the end of the week, I had over 100 signups just from doing it this way.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Downfall
&lt;/h1&gt;

&lt;p&gt;Everything was going great. I was signing people up all day because it was fast and easy and I didn’t really have to think about it anymore. But then I had a thought I probably shouldn’t have had.&lt;/p&gt;

&lt;p&gt;What if I could sign up non-educational emails?&lt;/p&gt;

&lt;p&gt;So of course I tried it. And it worked. I was honestly surprised. I expected the system to block it right away, but it didn’t. I tried again with a Gmail address. Then with an Outlook address. Same result. And yes, the commission kept going up.&lt;/p&gt;

&lt;p&gt;At that point I knew it was only a matter of time before something flagged. I wasn’t supposed to be able to do that and I knew it. Two weeks later, I woke up to an email saying I was removed from the Perplexity Partner Campaign.&lt;/p&gt;

&lt;p&gt;I was a bit disappointed, but I also remembered the FAFO scale. It could have ended in a much worse way, so honestly I got off pretty lightly.&lt;/p&gt;

&lt;h1&gt;
  
  
  The FAFO Scale
&lt;/h1&gt;

&lt;p&gt;If you don’t know it, FAFO stands for “Fuck Around and Find Out.” It’s the universal rule of curiosity and consequences. The more you experiment, the closer you get to the part where the system pushes back.&lt;/p&gt;

&lt;p&gt;That’s what happened to me. I tested something I shouldn’t have, and I found out. The ban was the natural ending of the experiment, and honestly, I respected it. It proved that Perplexity’s systems catch behavior outside the rules eventually.&lt;/p&gt;

&lt;p&gt;It also reminded me that curiosity and consequences are two sides of the same coin. You just have to be okay with whichever one shows up first.&lt;/p&gt;

</description>
      <category>automation</category>
      <category>productivity</category>
      <category>security</category>
    </item>
  </channel>
</rss>
