The Instagram follower-tool ecosystem has a malware problem. In January 2026, the GhostPoster campaign was found to have spread across 17 extensions with 840,000+ cumulative installs across Chrome, Firefox, and Edge — hiding JavaScript malware inside PNG icon files using steganography. In March, two Chrome extensions were reported to have turned malicious after ownership transfer; in ShotBird's case, researchers documented fake Chrome update prompts used to deliver credential-theft malware. The "privacy policy" is a legal checkbox. The permissions list is where the real policy actually lives.
I built Reciprocity, a Manifest V3 Chrome extension that computes the set difference between who you follow and who follows you back on Instagram, and automates the unfollows. Zero servers, no external dependencies, and only two permissions: tabs and storage. Host permissions locked to www.instagram.com and instagram.com. That's it.
Privacy isn't a policy or a promise. It's an architecture with no server-side collection path and no third-party exfiltration endpoint.
In Chrome MV3, building this safely forces you into a specific, often painful set of constraints.
The MV3 Two-World Problem
Chrome extensions run content scripts in an "isolated world." You share the DOM with the page, but not the JavaScript execution environment (window). Great for security, but fatal if you need to intercept the page's own network requests before they happen.
To parse a user's Instagram following list without forcing them to scroll a modal for an hour, you have to hook into fetch() and XMLHttpRequest. To do that before Instagram's minified React bundle mounts, your code must run at document_start in the MAIN world.
The catch: MAIN world scripts have zero access to chrome.runtime.* APIs. They can't talk to your background service worker. Ergo, they can't read your extension storage.
So you build a two-world bridge:
- content-main.js (MAIN world): hooks fetch, parses GraphQL responses, drives direct API pagination.
- content.js (isolated world): orchestrates the state machine, talks to the background service worker, manages the execution queue.
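To make the MAIN-world half concrete, here is a minimal sketch of a fetch hook. The names (installFetchHook, onBatch) and the URL-matching heuristic are illustrative assumptions, not Reciprocity's actual source; in the extension, the equivalent code runs at document_start so the wrapper is installed before Instagram's bundle captures a reference to window.fetch.

```javascript
// Sketch of a MAIN-world fetch hook. Illustrative names, not the
// extension's real code.
function installFetchHook(target, onBatch) {
  const originalFetch = target.fetch;
  target.fetch = async function (...args) {
    const response = await originalFetch.apply(this, args);
    const url = typeof args[0] === 'string' ? args[0] : args[0].url;

    // Observe only Instagram's follower/following traffic; everything
    // else passes through untouched.
    if (url.includes('/friendships/') || url.includes('/graphql')) {
      // Clone so the page still receives an unread body.
      response.clone().json()
        .then((data) => onBatch(url, data))
        .catch(() => { /* non-JSON body: ignore */ });
    }
    return response; // the page sees a normal fetch
  };
}
```

The MAIN world's entire job fits in that shape: observe and forward. It never gets a code path that does anything else with the data.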
They communicate via window.postMessage. But throwing messages across the window boundary on a public site is fundamentally insecure. Page JS can see them. Page JS can forge them.
Build the Bridge, Then Mistrust It
window.postMessage is a broadcast channel in hostile territory. The W3C WebExtensions working group has had an open proposal since 2021 for a secure replacement — acknowledging that the current approach is fundamentally broken — and no solution has shipped. Grammarly's extension exposed authentication tokens to every website a user visited through an unvalidated postMessage bridge. Twenty-two million users, and JavaScript on any visited page could abuse the bridge to access a session.
So the rule in Reciprocity is simple: the MAIN world is scan-only.
It can contribute observations, but not authority.
When the MAIN world intercepts a GraphQL batch of followers, it sends scan data across the bridge tagged with a __RECIPROCITY__ prefix, a per-scan scanId and a per-pagination-run requestId, plus a rotated 32-char hex bridge token negotiated at session start. The isolated world validates every incoming payload. In the real code, that means checking the bridge source marker, token, scan correlation, phase, and request correlation before accepting scan data. Simplified:
window.addEventListener('message', (event) => {
if (event.source !== window) return;
const msg = event.data;
if (!msg || msg.source !== '__RECIPROCITY__') return;
if (msg.token !== currentBridgeToken) return;
if (msg.scanId !== activeScanId) return;
// validated — process scan data only
});
If the token doesn't match, or if the scanId is stale, the message is silently dropped. The token is not a secret from page JavaScript once the bridge is established; its job is correlation, freshness, and stale-session rejection, not authentication against a page that's already listening. The real security boundary is narrower and stronger: even if a hostile page can forge scan traffic, it still cannot cross the isolated-world/runtime boundary into the unfollow path.
Is this overengineered for a tool that most people will run once a month? Maybe. But the threat model isn't "normal user on a clean page." It's "normal user on a page where any third-party script — Instagram's own ad SDK, a browser toolbar, an injected A/B test — can postMessage into your bridge." That very bridge has exactly one job: make sure that even in that environment, a hostile page can at worst corrupt or spam scan-only data, not cross into the unfollow path.
Destructive actions never originate from the MAIN world. The background service worker accumulates the lists, computes the set difference, and holds the state. When the user clicks "Execute", the background script talks only to the isolated world via chrome.runtime.sendMessage. This is the part of the architecture I'm proudest of — not because it's clever, but because the guarantee is structural. It doesn't depend on discipline, or code review, or "we'd never route unfollows through MAIN." It depends on the fact that the MAIN world physically cannot reach the unfollow path.
The page JavaScript cannot trigger, spoof, or even perceive an unfollow command.
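The split can be sketched as two listeners that never share a code path. Names here are illustrative and performUnfollow is a hypothetical stand-in; the point is structural: the runtime channel, which page JavaScript cannot send on, is the only wiring that reaches a destructive action.

```javascript
// Sketch of the structural guarantee. `performUnfollow` is hypothetical.
function wireChannels(runtime, win, performUnfollow, onScanData) {
  // chrome.runtime channel: the only path that can reach an unfollow.
  runtime.onMessage.addListener((msg) => {
    if (msg && msg.type === 'EXECUTE_UNFOLLOW') performUnfollow(msg.userId);
  });

  // window channel: scan observations only. There is deliberately no
  // branch here that can reach performUnfollow.
  win.addEventListener('message', (event) => {
    if (event.source !== win) return;
    const m = event.data;
    if (m && m.source === '__RECIPROCITY__') onScanData(m);
  });
}
```

In the extension the real arguments would be chrome.runtime and window; a forged window message can only ever land in the scan handler.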
Stop Puppeteering the DOM
The standard approach to scraping a single-page app is DOM puppeteering — scrolling the viewport to trigger lazy loading. I tried this first. It's brittle in ways that compound: (i) if the tab loses focus, Chrome throttles requestAnimationFrame and the scrolling stalls; (ii) if Instagram changes a modal's CSS class, the scroller breaks. You're simulating a human to trick a UI into loading data that already has an API.
That is backwards.
Reciprocity captures the endpoint shape Instagram is already using when available, then takes over with direct cursor-based pagination, falling back to the well-known REST endpoint when needed. content-main.js makes direct fetch() calls to Instagram's endpoints using cursor-based pagination for list extraction — no scroll simulation, no dependency on Instagram's UI rendering pipeline.
Because we aren't relying on UI rendering, the background script spawns a dedicated, unfocused Chrome window for the scan. The user clicks "Scan" and goes back to whatever they were doing. The extension pages through up to 50,000 users per list silently — 800–2000ms between API calls, with a 5-second pause every 50 requests.
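The pagination loop with the pacing above can be sketched like this. The endpoint path and the `users` / `next_max_id` field names are assumptions based on the widely known friendships REST endpoint, not a stable contract, and fetchImpl is injected here purely for testability:

```javascript
// Cursor pagination with built-in pacing. Endpoint shape is an assumption.
async function fetchFullList(fetchImpl, userId, kind, pacing = {
  minDelay: 800, jitter: 1200, restEvery: 50, restDelay: 5000,
}) {
  const users = [];
  let cursor = null;
  let requests = 0;

  while (users.length < 50000) { // hard cap per list
    const params = new URLSearchParams({ count: '200' });
    if (cursor) params.set('max_id', cursor);
    const res = await fetchImpl(
      `https://www.instagram.com/api/v1/friendships/${userId}/${kind}/?${params}`
    );
    const page = await res.json();
    users.push(...page.users);

    if (!page.next_max_id) break; // last page
    cursor = page.next_max_id;

    // Pacing: 800–2000 ms between calls, a longer rest every 50 requests.
    requests += 1;
    const delay = requests % pacing.restEvery === 0
      ? pacing.restDelay
      : pacing.minDelay + Math.random() * pacing.jitter;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return users;
}
```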
Rate Limits Are Part of the Product
Instagram will shadowban you for velocity. Unfollow 300 people in two minutes and your account goes quiet for weeks. The architecture has to absorb that constraint, not defer it to user discipline.
Reciprocity enforces: 20 unfollows per rolling 60-minute window, 100/day hard cap, 3–8 seconds of randomized delay between each request.
Rolling window, not clock-hour buckets. If I execute 20 unfollows at 2:55 PM, I shouldn't get 20 more at 3:00 PM. Both limits derive from a single unfollowSuccessTimestamps array in chrome.storage.local — epoch-millisecond entries, continually pruned to retain only same-day and last-hour entries. One data structure, two constraints, zero drift.
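A sketch of that single-array design (function and field names are illustrative): prune to the union of "same day" and "last hour" entries, then count each cap from the same list.

```javascript
// Two limits, one data structure. Illustrative sketch.
const HOUR_MS = 60 * 60 * 1000;

function canUnfollow(timestamps, now = Date.now()) {
  const dayStart = new Date(now).setHours(0, 0, 0, 0);
  // Keep entries still relevant to either constraint.
  const pruned = timestamps.filter((t) => t >= dayStart || now - t < HOUR_MS);
  const lastHour = pruned.filter((t) => now - t < HOUR_MS).length;
  const today = pruned.filter((t) => t >= dayStart).length;
  return {
    pruned,                                // write back to chrome.storage.local
    allowed: lastHour < 20 && today < 100, // 20/rolling hour, 100/day
  };
}
```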
The execution lock is persistent. If the user closes the browser mid-unfollow, the unfollowExecutionState snapshot in storage prevents concurrent bulk runs when they reopen. There's a 90-second stale-lock reconciliation to handle bad exits. The background state machine (idle → scanning_following → scanning_followers → processing → done, plus error and interrupted recovery paths) is the sole truthbearer.
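The 90-second reconciliation reduces to one predicate. The snapshot's field names here are assumptions about shape, not the extension's exact schema:

```javascript
// Stale-lock check sketch: a live run keeps its lock via heartbeats;
// a bad exit is reclaimed after 90 seconds of silence.
const STALE_LOCK_MS = 90 * 1000;

function mayStartBulkRun(state, now = Date.now()) {
  if (!state || !state.running) return true;        // no active run
  return now - state.lastHeartbeat > STALE_LOCK_MS; // stale: reclaim
}
```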
Testing Without a Build System
337 tests across five files and, once again, no external dependencies.
The tests use Node.js built-in node:test and node:assert. No Vitest, no Jest, no test runner that needs its own config file. But this created a real constraint: the extension's source files are vanilla scripts, i.e., no module.exports or ESM exports. They're designed to run in a browser, not in Node.
The solution is ugly and deliberate. Each test file copies the pure functions it needs to test inline. validateBatchUser(), normalizeTimestamps(), the message normalization logic — they exist as duplicated source in the test files, extracted by hand from the extension code.
This is a maintenance cost I chose to pay. The alternative was introducing a build step — a bundler that could tree-shake exports for the browser while making them available to Node. For a four-file extension with no external dependencies, a bundler is not simplification. It's a new failure mode wearing a productivity costume.
The same zero-dependency principle that keeps the permission surface minimal keeps the toolchain auditable. There's no package-lock.json because there are no packages to lock in the first place. The npm/package-manager supply-chain surface here is zero.
What the Constraints Produced
Every MV3 constraint I fought against turned into a structural property I'd now defend:
- The two-world split forced the MAIN world to be scan-only. Destructive actions are architecturally unreachable from page JS.
- window.postMessage is insecure by design, so every payload goes through token rotation and validation. A compromised page can corrupt scan data, but it cannot cross into the unfollow path.
- With only tabs and storage in the permission set, there's no cookie-store permission, no webRequest, and no broad host access.
None of this is invisible to the user. The manifest.json is under 60 lines. A skeptical developer can read it in a few minutes and verify that the trust model matches the claim. That verifiability is the entire product thesis.
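A reconstruction of what such a manifest looks like under this trust model — a sketch, not the shipped file; the MAIN-world content script uses MV3's `world` key:

```json
{
  "manifest_version": 3,
  "name": "Reciprocity",
  "permissions": ["tabs", "storage"],
  "host_permissions": [
    "https://www.instagram.com/*",
    "https://instagram.com/*"
  ],
  "background": { "service_worker": "background.js" },
  "content_scripts": [
    {
      "matches": ["https://www.instagram.com/*"],
      "js": ["content.js"],
      "run_at": "document_start"
    },
    {
      "matches": ["https://www.instagram.com/*"],
      "js": ["content-main.js"],
      "run_at": "document_start",
      "world": "MAIN"
    }
  ]
}
```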
Chrome disabled Manifest V2 for all users in Chrome 138 on July 24, 2025; Chrome 139 removed the enterprise-policy escape hatch. MV3 is the environment extensions actually live in now. You can spend your time trying to smuggle the old model forward, or you can let the constraints shape the architecture.
The constraints here were the architecture.
Reciprocity is on the Chrome Web Store. Install it, right-click, inspect the source. Don't believe me; verify.