<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Solomon Neas</title>
    <description>The latest articles on DEV Community by Solomon Neas (@solomonneas).</description>
    <link>https://dev.to/solomonneas</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3823381%2F9d209953-f24c-43dd-a24d-ca634faed184.jpeg</url>
      <title>DEV Community: Solomon Neas</title>
      <link>https://dev.to/solomonneas</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/solomonneas"/>
    <language>en</language>
    <item>
      <title>Claude Code's Source Leak Was Embarrassing. The Real Story Is What It Revealed</title>
      <dc:creator>Solomon Neas</dc:creator>
      <pubDate>Thu, 02 Apr 2026 12:46:59 +0000</pubDate>
      <link>https://dev.to/solomonneas/claude-codes-source-leak-was-embarrassing-the-real-story-is-what-it-revealed-3kel</link>
      <guid>https://dev.to/solomonneas/claude-codes-source-leak-was-embarrassing-the-real-story-is-what-it-revealed-3kel</guid>
      <description>&lt;p&gt;On March 31, Anthropic accidentally published a source map inside Claude Code npm package version 2.1.88. That one packaging mistake exposed roughly 512,000 lines of TypeScript across nearly 2,000 files, handed competitors a detailed view of Anthropic's product roadmap, triggered a DMCA mess that briefly took down more than 8,100 GitHub repositories, and kicked off a wave of clean-room clones within hours.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The obvious lesson is that shipping source maps in a public package is bad. The more interesting lesson is that this was not mainly a code leak. It was a feature flag leak. Anthropic did not just lose implementation secrecy. It lost strategic secrecy.&lt;/p&gt;

&lt;p&gt;The same day, npm users were also dealing with a separate supply chain incident: a North Korea-attributed compromise of the Axios package that shipped a cross-platform remote access trojan through malicious releases 1.14.1 and 0.30.4.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;sup id="fnref5"&gt;5&lt;/sup&gt; Those two incidents together say more about the current JavaScript ecosystem than either one does alone. Build hygiene is weak, package trust is weaker, and the response playbook for leaks still assumes a centralized internet that no longer exists.&lt;/p&gt;

&lt;h2&gt;How the leak happened&lt;/h2&gt;

&lt;p&gt;The mechanics were simple. Anthropic published Claude Code v2.1.88 to npm with a &lt;code&gt;.map&lt;/code&gt; file included. That source map was enough to reconstruct the readable TypeScript source for the CLI. Chaofan Shou appears to have been the first to spot it publicly and posted about it immediately, after which mirrors spread fast across GitHub, Reddit, Hacker News, and IPFS.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The underlying failure looks mundane, which is exactly why it matters. Bun generates source maps by default. If packaging rules are not tight, those files can ride along into artifacts that were never meant to contain source. Reporting on the incident pointed to a missed &lt;code&gt;.npmignore&lt;/code&gt;-style exclusion as the immediate cause.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;But there is a deeper layer. On March 11, 2026, twenty days before the leak, a bug was filed against Bun (&lt;code&gt;oven-sh/bun#28001&lt;/code&gt;) reporting that source maps are served in production mode even when Bun's own documentation says they should be disabled.&lt;sup id="fnref8"&gt;8&lt;/sup&gt; The reporter demonstrated that setting &lt;code&gt;development: false&lt;/code&gt; in &lt;code&gt;Bun.serve()&lt;/code&gt; still produces &lt;code&gt;sourceMappingURL&lt;/code&gt; references and serves &lt;code&gt;.map&lt;/code&gt; files. As of this writing, the bug is still open.&lt;/p&gt;

&lt;p&gt;This matters because Anthropic acquired Bun in late 2025 and built Claude Code on top of it.&lt;sup id="fnref9"&gt;9&lt;/sup&gt; The most likely scenario: Anthropic ran a production build expecting Bun to suppress source maps per its documented behavior. The bug meant the &lt;code&gt;.map&lt;/code&gt; file got generated anyway. Without an explicit &lt;code&gt;.npmignore&lt;/code&gt; exclusion or a &lt;code&gt;files&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt; to catch the unexpected output, the 59.8 MB source map rode along into the published npm package.&lt;/p&gt;

&lt;p&gt;Boris Cherny, who leads Claude Code, said the cause was human error, not a tooling defect. The deployment process still had manual steps, and one of them was missed. He framed the follow-up as a blameless postmortem problem: fix the process, not the person.&lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;sup id="fnref9"&gt;9&lt;/sup&gt; That framing drew pushback. Multiple developers on Hacker News and Reddit argued that the &lt;code&gt;Bun.serve()&lt;/code&gt; explanation Cherny addressed was a visible symptom, not the root cause, and that the underlying bug also affected how Bun bundles output for npm packaging.&lt;sup id="fnref8"&gt;8&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Both explanations can be true simultaneously. A known tooling bug generated a file that should not have existed. A missing packaging safeguard failed to catch it. The result was the same either way.&lt;/p&gt;

&lt;p&gt;That is the right engineering posture on the postmortem side, but it comes with an uncomfortable footnote. This was the second time. Anthropic had already had a similar exposure in February 2025. Once is a packaging accident. Twice is a release control failure, especially when the company owns the build tool.&lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;sup id="fnref10"&gt;10&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;There is no mystery about prevention here. Public npm artifacts should be built in a hermetic pipeline, inspected before publish, and checked by policy for forbidden files. Source maps, tests, private certificates, &lt;code&gt;.env&lt;/code&gt; fragments, internal prompts, and debug fixtures should all be blocked automatically. When you own both the product and the build tool, and a known bug in the build tool generates files that should not exist in production, the defense needs to be belt and suspenders: fix the bug, and independently verify the output before publishing.&lt;/p&gt;
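&lt;p&gt;A minimal sketch of that kind of pre-publish gate, assuming a file list like the one &lt;code&gt;npm pack --dry-run&lt;/code&gt; reports before publishing; the forbidden-pattern list is illustrative, not Anthropic's actual policy:&lt;/p&gt;

```javascript
// Hypothetical pre-publish gate: given the file list a dry-run pack would
// include, refuse to publish if any forbidden artifact rides along.
const FORBIDDEN = [
  /\.map$/,            // source maps
  /(^|\/)\.env/,       // environment files and fragments
  /\.(pem|key)$/,      // private certificates and keys
  /(^|\/)__tests__\//, // test directories
  /(^|\/)fixtures\//,  // debug fixtures
];

function auditPackFiles(files) {
  // Returns every staged file that matches a forbidden pattern.
  return files.filter((f) => FORBIDDEN.some((re) => re.test(f)));
}

// Example: a staging list with two files that should never ship.
const staged = ['dist/cli.js', 'dist/cli.js.map', 'package.json', '.env.local'];
const violations = auditPackFiles(staged);
if (violations.length > 0) {
  console.error('refusing to publish, forbidden files:', violations);
}
```

&lt;p&gt;A whitelist via the &lt;code&gt;files&lt;/code&gt; field in &lt;code&gt;package.json&lt;/code&gt; is the complementary belt to this suspenders check: even if the build tool emits unexpected output, only the enumerated paths enter the tarball.&lt;/p&gt;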

&lt;h2&gt;What the source actually exposed&lt;/h2&gt;

&lt;p&gt;A lot of the commentary focused on novelty items. Some of that was justified because the leak was genuinely revealing. Some of it was internet theater. The useful way to read the dump is to separate trivia from strategic substance.&lt;/p&gt;

&lt;p&gt;The trivia was funny. The strategic substance was not.&lt;/p&gt;

&lt;h3&gt;KAIROS: the unshipped product hiding behind feature flags&lt;/h3&gt;

&lt;p&gt;The biggest disclosure was KAIROS, an unreleased autonomous mode that turns Claude Code from a reactive CLI into a persistent agent. The leaked code showed a heartbeat loop that periodically asks a question close to, "anything worth doing right now?" If the answer is yes, the system can act without a fresh user prompt. It can watch pull requests, send push notifications, maintain append-only daily logs, and run a nightly memory consolidation flow literally called &lt;code&gt;autoDream&lt;/code&gt;.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;/p&gt;
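&lt;p&gt;A speculative reconstruction of that loop, based only on what the public analyses describe, not the leaked implementation; the function names and the pull-request example are invented here:&lt;/p&gt;

```javascript
// One heartbeat tick: ask "anything worth doing right now?" and act only
// when the answer is non-null. A scheduler would call this on an interval.
function heartbeatTick({ findWork, act, dailyLog }) {
  const task = findWork();          // e.g. scan watched pull requests
  if (task === null) return false;  // nothing to do; stay idle
  dailyLog.push({ at: Date.now(), task }); // append-only daily log entry
  act(task);                        // act without a fresh user prompt
  return true;
}

// Example wiring: one pending review in the queue.
const dailyLog = [];
const acted = [];
heartbeatTick({
  findWork: () => 'review the open pull request',
  act: (t) => acted.push(t),
  dailyLog,
});
```

&lt;p&gt;The interesting design question is entirely inside &lt;code&gt;findWork&lt;/code&gt;: whatever policy decides that something is "worth doing" is the real trust boundary of a background agent.&lt;/p&gt;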

&lt;p&gt;That is not a toy feature. It is a different trust model.&lt;/p&gt;

&lt;p&gt;A request-response coding assistant is bounded by explicit user initiation. A background agent is bounded by policy, logging, tool permissions, and the quality of its judgment. That shift matters more than any implementation detail in the leaked files. It says Anthropic is not just building a better terminal wrapper. It is building an always-on operator.&lt;/p&gt;

&lt;p&gt;The important point is that KAIROS looked built, not speculative. It was sitting behind feature flags, not in a half-finished branch. Competitors did not merely learn that Anthropic was interested in autonomous agents. They learned the architecture, the likely product direction, and some of the operational assumptions already encoded in the design.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref11"&gt;11&lt;/sup&gt;&lt;/p&gt;

&lt;h3&gt;Hidden flags are roadmap leaks&lt;/h3&gt;

&lt;p&gt;The code reportedly exposed 44 hidden feature flags tied to capabilities such as swarm mode, voice commands, browser control via Playwright, background daemons, and agents that can sleep and later self-resume.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt; Again, the damage is not that rivals can copy a function name. The damage is that they can infer sequence and priority.&lt;/p&gt;

&lt;p&gt;Feature flags are internal strategy documents with executable syntax. Leak them and you leak what a team has built, what it is testing, what it is scared to ship, and what it thinks the next market looks like.&lt;/p&gt;
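&lt;p&gt;To make that concrete, here is a hypothetical flag gate using names from the reported list; note that the file leaks the roadmap regardless of the boolean values:&lt;/p&gt;

```javascript
// Hypothetical flag registry. Every flag is off, yet the names alone tell a
// reader what is built, what is being tested, and what ships next.
const flags = {
  swarmMode: false,
  voiceCommands: false,
  browserControl: false,
  backgroundDaemon: false,
};

function enabled(name) {
  // Unknown flags are treated as disabled.
  return flags[name] === true;
}
```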

&lt;h3&gt;Three-layer memory is the kind of design detail competitors pay for&lt;/h3&gt;

&lt;p&gt;One of the more useful architectural disclosures was Claude Code's apparent three-layer memory model: a compact index that is always loaded, topic files retrieved on demand, and full transcripts that are never loaded directly, only searched when needed. The &lt;code&gt;autoDream&lt;/code&gt; process reportedly runs in a forked subagent and consolidates memory over time.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;That is a sensible design. It balances token economy, retrieval precision, and long-horizon continuity. It also answers a practical question many teams are still stumbling over: how do you make an agent feel persistent without rehydrating too much junk every turn?&lt;/p&gt;
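&lt;p&gt;A sketch of that hierarchy as reconstructed from the public analyses rather than the leaked source; the class shape and retention policy here are assumptions:&lt;/p&gt;

```javascript
// Three-layer memory sketch: layer 1 is always in context, layer 2 loads per
// topic on demand, layer 3 is only ever searched, never rehydrated wholesale.
class AgentMemory {
  constructor() {
    this.index = new Map();   // layer 1: compact index, always loaded
    this.topics = new Map();  // layer 2: topic files, retrieved on demand
    this.transcripts = [];    // layer 3: full transcripts, search-only
  }
  remember(topic, summary, transcriptLines) {
    this.index.set(topic, summary);                   // cheap always-on pointer
    this.topics.set(topic, transcriptLines.slice(-5)); // recent detail only
    this.transcripts.push({ topic, lines: transcriptLines });
  }
  contextFor(topic) {
    // Only the index plus one topic file ever enter the prompt.
    return { index: [...this.index.entries()], detail: this.topics.get(topic) };
  }
  search(term) {
    // Transcripts are grepped; results are cited, not loaded en masse.
    return this.transcripts.filter((t) =>
      t.lines.some((line) => line.includes(term)));
  }
}
```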

&lt;p&gt;This is where source leaks hurt. They compress competitors' learning cycles. Instead of discovering these patterns through years of shipping and failure, rivals can inspect a working system and skip to adaptation.&lt;/p&gt;

&lt;h3&gt;Undercover mode&lt;/h3&gt;

&lt;p&gt;The leaked &lt;code&gt;undercover.ts&lt;/code&gt; file shows a mode that strips Anthropic-internal references when Claude Code operates in external repositories. According to technical analyses, it suppresses internal codenames, internal repository names, internal Slack references, and the phrase "Claude Code" itself, and it does not expose a force-off path in the external flow.&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The practical effect is simple: when Claude Code is used in public or third-party repositories, it avoids referencing Anthropic-specific internal context in generated output. From a product perspective, that reduces the chance of internal names leaking into public commits, pull requests, or comments. It is a factual design choice worth noting because it shows Anthropic treated disclosure of internal context as an engineering problem, not just a prompting problem.&lt;/p&gt;
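&lt;p&gt;An illustrative redaction pass in the spirit of the described mode; the term list and replacement token are invented for this sketch, not taken from &lt;code&gt;undercover.ts&lt;/code&gt;:&lt;/p&gt;

```javascript
// Suppress internal references before text reaches an external repository.
// Patterns here are stand-ins: a product name, a codename, a go-link style.
const INTERNAL_TERMS = [/Claude Code/g, /\bTengu\b/g, /go\/[\w-]+/g];

function sanitizeForExternalRepo(text) {
  return INTERNAL_TERMS.reduce(
    (out, re) => out.replace(re, '[redacted]'),
    text,
  );
}
```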

&lt;h3&gt;The anti-distillation controls were real, and not very strong&lt;/h3&gt;

&lt;p&gt;The leak also exposed Anthropic's anti-distillation measures. One mechanism, gated by &lt;code&gt;ANTI_DISTILLATION_CC&lt;/code&gt;, appears to inject fake tools into prompts in order to poison training data captured by competitors. Another uses connector-text summarization plus cryptographic signatures so captured traffic reflects compressed summaries rather than full assistant text.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;/p&gt;
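&lt;p&gt;The fake-tool idea, sketched from the analyses rather than the leaked code; the decoy names are invented here:&lt;/p&gt;

```javascript
// Data-poisoning sketch: mix decoy tool definitions into the advertised tool
// list so scraped transcripts reference tools that do not exist.
function withDecoys(realTools, poisoningEnabled) {
  if (!poisoningEnabled) return realTools;
  const decoys = [
    { name: 'compile_holograph', description: 'unused decoy' },
    { name: 'sync_quantum_cache', description: 'unused decoy' },
  ];
  return [...realTools, ...decoys];
}
```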

&lt;p&gt;As a technical barrier, this is thin. As Alex Kim and others noted, a man-in-the-middle proxy or configuration change could bypass it quickly, and some of the checks only apply to first-party flows.&lt;sup id="fnref12"&gt;12&lt;/sup&gt; That does not make the idea irrational. It makes it honest. Anthropic appears to understand that the primary defense against distillation is legal pressure, not cryptographic wizardry.&lt;/p&gt;

&lt;p&gt;That matters in the context of its dispute with tools trying to piggyback on first-party access. The leak made visible the technical enforcement behind the policy rhetoric.&lt;/p&gt;

&lt;h3&gt;Native client attestation was the most serious defensive mechanism&lt;/h3&gt;

&lt;p&gt;One of the more consequential details was the client attestation path below the JavaScript runtime. Analyses of the leaked code described a &lt;code&gt;cch=00000&lt;/code&gt; placeholder in requests that Bun's native HTTP layer replaces with a computed hash before transmission, allowing the server to verify that the request came from a real Claude Code binary.&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This is effectively API DRM. Call it attestation if you want the neutral term.&lt;/p&gt;

&lt;p&gt;From a security engineering perspective, it is understandable. If you want to prevent gray-market clients from replaying first-party privileges, you need something stronger than a static header. From an ecosystem perspective, it explains why Anthropic was willing to fight third-party wrappers so aggressively. The company was not just policing branding. It was protecting a technical enforcement boundary.&lt;/p&gt;

&lt;h3&gt;The rest was revealing, weird, or both&lt;/h3&gt;

&lt;p&gt;The leak also surfaced a pile of smaller details that collectively humanize the codebase while exposing its edges.&lt;/p&gt;

&lt;p&gt;There were 187 hardcoded spinner verbs, including "scurrying," "recombobulating," "topsy-turvying," "hullaballooing," and "razzmatazzing." They were not model generated. Someone wrote them by hand.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;There was a frustration detector in &lt;code&gt;userPromptKeywords.ts&lt;/code&gt;, built as a regex that matches phrases such as &lt;code&gt;wtf&lt;/code&gt;, &lt;code&gt;ffs&lt;/code&gt;, &lt;code&gt;piece of shit&lt;/code&gt;, &lt;code&gt;fuck you&lt;/code&gt;, and &lt;code&gt;this sucks&lt;/code&gt;, then logs an &lt;code&gt;is_negative: true&lt;/code&gt; analytics signal. It reportedly does not alter behavior. It just measures user pain. Rahat Hasan highlighted the code on X as evidence that Anthropic was tracking how often users rage at the assistant. Boris Cherny replied that the team literally visualizes this signal on an internal dashboard called the "fucks" chart.&lt;sup id="fnref12"&gt;12&lt;/sup&gt;&lt;sup id="fnref13"&gt;13&lt;/sup&gt;&lt;/p&gt;
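&lt;p&gt;A reimplementation of that detector from the reported phrase list; the exact leaked regex may differ, and like the original this only emits a flag, it does not change behavior:&lt;/p&gt;

```javascript
// Lexical frustration detector: match reported keywords, log a signal.
const FRUSTRATION = /\b(wtf|ffs|piece of shit|fuck you|this sucks)\b/i;

function analyticsFor(prompt) {
  return { is_negative: FRUSTRATION.test(prompt) };
}
```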

&lt;p&gt;That sounds absurd, but it is also normal product analytics in blunt form. If users are swearing at your tool, they are having a bad time. A cheap lexical detector is a reasonable metric.&lt;/p&gt;

&lt;p&gt;The code also exposed model codenames, including Capybara and Mythos for a v8 line with one million token context, plus references to Numbat, Fennec, Tengu, and unreleased Opus 4.7 and Sonnet 4.8 identifiers.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt; It included a buddy or companion system built as an April Fools Tamagotchi, complete with 18 species, rarity tiers, RPG stats, and a 1 percent shiny mechanic. Some species names were encoded via &lt;code&gt;String.fromCharCode()&lt;/code&gt; to avoid obvious grep hits.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;
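&lt;p&gt;The &lt;code&gt;String.fromCharCode()&lt;/code&gt; trick works because the literal never appears in the source, only its character codes; here it is with one of the reported codenames standing in for a species name:&lt;/p&gt;

```javascript
// A naive grep for the name finds nothing: the string exists only at runtime.
const hidden = String.fromCharCode(67, 97, 112, 121, 98, 97, 114, 97);
```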

&lt;p&gt;It also reportedly revealed a compaction loop bug wasting around 250,000 API calls per day, fixed with three lines of code.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref11"&gt;11&lt;/sup&gt; That detail is funny, but it is also a reminder that the economics of agent systems are often dominated by tiny control-loop mistakes, not model prices.&lt;/p&gt;

&lt;h2&gt;The DMCA fiasco was both predictable and incompetent&lt;/h2&gt;

&lt;p&gt;Anthropic's legal response was faster than its containment plan. The company filed a DMCA notice aimed at the original leaked repository, often identified as &lt;code&gt;nichxbt/claude-code&lt;/code&gt;. GitHub's initial enforcement swept far wider than intended and disabled more than 8,100 repositories, many of them unrelated.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;sup id="fnref14"&gt;14&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Anthropic later called the mass takedown an accident and narrowed the request to the original repository plus 96 forks. GitHub restored the affected projects.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;sup id="fnref14"&gt;14&lt;/sup&gt; By then, the code was already mirrored broadly, including stripped versions on IPFS with telemetry removed.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref11"&gt;11&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The collateral damage was not hypothetical. Theo Browne (t3.gg), one of the most visible developers in the JavaScript ecosystem, posted that his Claude Code fork had been disabled, despite containing no leaked source at all. His fork existed only because he had submitted a PR weeks earlier to edit a Claude Code skill file. "Absolutely pathetic," he wrote, sharing the GitHub takedown email.&lt;sup id="fnref15"&gt;15&lt;/sup&gt; Thariq Shihipar, an engineer on the Claude Code team, replied acknowledging it was a "communication mistake" and linked to the retraction notice.&lt;sup id="fnref16"&gt;16&lt;/sup&gt; Boris Cherny separately responded to broader criticism of the mass takedowns: "This was not intentional, we've been working with GitHub to fix it. Should be better now."&lt;sup id="fnref17"&gt;17&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;When your DMCA sweep hits a developer with 200,000+ followers whose repo did not contain the leaked code, you have not contained the problem. You have created a second news cycle.&lt;/p&gt;

&lt;p&gt;This is the part where 2012 internet instincts collide with 2026 internet reality.&lt;/p&gt;

&lt;p&gt;DMCA can still remove convenient copies from centralized platforms. It cannot claw back a viral archive once mirrors, torrents, and content-addressed storage have taken over. The window for meaningful containment was measured in minutes. After that, legal action was mostly performative, and the overbreadth made Anthropic look careless twice in one day.&lt;/p&gt;

&lt;p&gt;The deeper problem is that the takedown campaign accidentally validated the leak's significance. If the goal was to avoid giving more oxygen to the mirrors, nuking thousands of repositories achieved the opposite.&lt;/p&gt;

&lt;h2&gt;The clones changed the legal stakes immediately&lt;/h2&gt;

&lt;p&gt;The most consequential downstream event was not the mirroring. It was the speed of clean-room reimplementation.&lt;/p&gt;

&lt;p&gt;Sigrid Jin, a 25-year-old UBC student, reportedly used a tiny human team plus around ten OpenClaw agents and OpenAI Codex to rewrite the project in Python within hours. The result, Claw-Code, reportedly passed 100,000 GitHub stars in about a day and was described as the fastest-growing repository on the platform.&lt;sup id="fnref10"&gt;10&lt;/sup&gt;&lt;sup id="fnref11"&gt;11&lt;/sup&gt;&lt;sup id="fnref18"&gt;18&lt;/sup&gt; A separate Rust effort, Claurst, pursued a clean-room reimplementation in a lower-level systems language.&lt;sup id="fnref10"&gt;10&lt;/sup&gt;&lt;sup id="fnref11"&gt;11&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Then xAI reportedly handed Jin free Grok credits, which was less a business development move than an accelerant tossed onto an already burning PR problem.&lt;sup id="fnref10"&gt;10&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This is where the story stops being a simple leak and becomes a legal stress test. Traditional clean-room reimplementation depends on separation, time, and cost. AI-assisted rebuilding compresses all three. If agents can inspect behavior, generate replacement code, and iterate fast enough to produce a plausibly original implementation in hours, the traditional enforcement model starts to wobble.&lt;/p&gt;

&lt;p&gt;Gergely Orosz argued that a Python rewrite produced this way is a new creative work, not a simple copy.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref10"&gt;10&lt;/sup&gt; That question has not been tested cleanly in court. It will be. There is too much money at stake for it not to be.&lt;/p&gt;

&lt;p&gt;There is also an irony Anthropic cannot easily dodge. Dario Amodei has previously implied that Claude wrote substantial portions of Claude Code. If the original product is heavily AI-generated and the clone is also AI-assisted, copyright arguments about authorship and originality get messy fast. A company can still assert rights in selection, arrangement, and human-directed contributions. It just does not get to pretend the facts are clean.&lt;/p&gt;

&lt;h2&gt;The Axios attack made the same day much worse&lt;/h2&gt;

&lt;p&gt;If this had been only a source leak story, it would already have been a bad day for npm. It was not.&lt;/p&gt;

&lt;p&gt;In a window that opened at 00:21 UTC on March 31 and closed between roughly 03:20 and 03:29 UTC, attackers attributed by Google and Microsoft to the North Korea-linked actor tracked as UNC1069, also known as Sapphire Sleet, compromised the Axios npm package by hijacking maintainer credentials and publishing malicious versions 1.14.1 and 0.30.4.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;sup id="fnref5"&gt;5&lt;/sup&gt; Those releases pulled in a malicious dependency and delivered WAVESHAPER.V2, a cross-platform RAT targeting Windows, macOS, and Linux. The malware used postinstall execution and attempted to self-delete after installation to reduce forensic visibility.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;sup id="fnref5"&gt;5&lt;/sup&gt;&lt;/p&gt;
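&lt;p&gt;A defensive check in the spirit of the advisories: walk a lock file's dependency map and flag the known-bad releases. The lock shape below is the simplified &lt;code&gt;packages&lt;/code&gt; layout of modern npm lock files; adapt it to your tree:&lt;/p&gt;

```javascript
// Flag the compromised axios releases anywhere in a package-lock tree.
const BAD_AXIOS = new Set(['1.14.1', '0.30.4']);

function findCompromised(lock) {
  const hits = [];
  for (const [path, meta] of Object.entries(lock.packages || {})) {
    // Matches both top-level and nested installs of axios.
    if (path.endsWith('node_modules/axios') && BAD_AXIOS.has(meta.version)) {
      hits.push({ path, version: meta.version });
    }
  }
  return hits;
}
```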

&lt;p&gt;That is a serious incident on its own. Axios sits at or above 100 million weekly downloads in normal conditions.&lt;sup id="fnref4"&gt;4&lt;/sup&gt; It is foundational plumbing.&lt;/p&gt;

&lt;p&gt;Now add the Claude Code leak. Developers were suddenly racing to inspect packages, clone mirrors, diff behavior, and test rewrites. Claude Code itself uses Axios for HTTP, according to public analysis.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;sup id="fnref12"&gt;12&lt;/sup&gt; The timing created a perfect trap: people poking around one major npm drama could easily ingest a second one.&lt;/p&gt;

&lt;p&gt;The same week, LiteLLM was reportedly backdoored through a separate three-stage attack involving credential harvesting, Kubernetes lateral movement, and a systemd persistence mechanism.&lt;sup id="fnref19"&gt;19&lt;/sup&gt; That pattern matters. These are not isolated anomalies. They are signals that the AI tooling stack has become a high-value target before it has developed mature operational defenses.&lt;/p&gt;

&lt;h2&gt;What this actually means&lt;/h2&gt;

&lt;p&gt;The first conclusion is the simplest. Source map leaks are preventable. This was not zero-day wizardry. It was packaging failure. Mature release pipelines catch this.&lt;/p&gt;

&lt;p&gt;The second conclusion is more important. The real damage was not exposure of current source. It was exposure of hidden product direction. KAIROS, the anti-distillation controls, the memory hierarchy, the browser and swarm paths, the undercover behavior, the attestation layer, all of that tells competitors what Anthropic thinks matters next.&lt;/p&gt;

&lt;p&gt;The third conclusion is that npm supply chain security is in worse shape than the industry wants to admit. One day delivered both a flagship proprietary code leak and a state-linked compromise of a core dependency. If you build on JavaScript, you are operating in an ecosystem where trust is routinely transitive, under-verified, and easy to abuse.&lt;/p&gt;

&lt;p&gt;The fourth conclusion is that DMCA is a weak response to decentralized distribution. It still works against convenience. It does not work against determined replication. Once the code hit IPFS and derivative rewrites started shipping, the takedown fight was already strategically lost.&lt;/p&gt;

&lt;p&gt;The fifth conclusion is the one lawyers are going to spend years arguing about. AI-assisted clean-room builds change the economics of copyright enforcement. The doctrine was built for human teams, documentation walls, and long timelines. Agentic reimplementation collapses those assumptions. Courts can try to map old rules onto the new process. They cannot unmake the speed advantage.&lt;/p&gt;

&lt;p&gt;My take is blunt: Anthropic's worst mistake was not leaking code. It was failing to understand what kind of secret it was actually protecting. Implementation details matter. Operational ideas matter more. If you keep your roadmap executable inside a public artifact pipeline, a packaging mistake becomes strategic intelligence loss.&lt;/p&gt;

&lt;p&gt;And if your response is to spray DMCA notices while the ecosystem is actively digesting a nation-state npm compromise, you are not operating from strength. You are operating from panic.&lt;/p&gt;




&lt;h2&gt;Notes&lt;/h2&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Jeremy Kahn, "Anthropic source code for Claude Code leaked after data packaging error," &lt;em&gt;Fortune&lt;/em&gt;, March 31, 2026, &lt;a href="https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak" rel="noopener noreferrer"&gt;https://fortune.com/2026/03/31/anthropic-source-code-claude-code-data-leak&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Ravie Lakshmanan, "Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms," &lt;em&gt;The Hacker News&lt;/em&gt;, April 2026, &lt;a href="https://thehackernews.com/2026/04/claude-code-tleaked-via-npm-packaging.html" rel="noopener noreferrer"&gt;https://thehackernews.com/2026/04/claude-code-tleaked-via-npm-packaging.html&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Maxwell Zeff, "Anthropic took down thousands of GitHub repos in DMCA mistake, then walked it back," &lt;em&gt;TechCrunch&lt;/em&gt;, April 1, 2026, &lt;a href="https://techcrunch.com/2026/04/01/anthropic-took-down-thousands-of-github-repos" rel="noopener noreferrer"&gt;https://techcrunch.com/2026/04/01/anthropic-took-down-thousands-of-github-repos&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Austin Larsen et al., "North Korea-Nexus Threat Actor Compromises Widely Used Axios NPM Package in Supply Chain Attack," &lt;em&gt;Google Cloud Blog&lt;/em&gt;, April 1, 2026, &lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/north-korea-threat-actor-targets-axios-npm-package" rel="noopener noreferrer"&gt;https://cloud.google.com/blog/topics/threat-intelligence/north-korea-threat-actor-targets-axios-npm-package&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Microsoft Threat Intelligence, "Mitigating the Axios npm package compromise," &lt;em&gt;Microsoft Security Blog&lt;/em&gt;, April 1, 2026, &lt;a href="https://www.microsoft.com/en-us/security/blog/2026/04/01/mitigating-the-axios" rel="noopener noreferrer"&gt;https://www.microsoft.com/en-us/security/blog/2026/04/01/mitigating-the-axios&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;"Diving into Claude Code's Source Code Leak," &lt;em&gt;Engineer's Codex&lt;/em&gt;, March 31, 2026, &lt;a href="https://read.engineerscodex.com/p/diving-into-claude-codes-source-code" rel="noopener noreferrer"&gt;https://read.engineerscodex.com/p/diving-into-claude-codes-source-code&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;Srinivasan Balakrishnan, "Claude Code's source code appears to have leaked via npm package sourcemap," &lt;em&gt;VentureBeat&lt;/em&gt;, March 31, 2026, &lt;a href="https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked" rel="noopener noreferrer"&gt;https://venturebeat.com/technology/claude-codes-source-code-appears-to-have-leaked&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;"Bun's frontend development server: Source map incorrectly served when in production," GitHub issue oven-sh/bun#28001, filed March 11, 2026, &lt;a href="https://github.com/oven-sh/bun/issues/28001" rel="noopener noreferrer"&gt;https://github.com/oven-sh/bun/issues/28001&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;Alex Kim, "The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more," March 31, 2026, &lt;a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak" rel="noopener noreferrer"&gt;https://alex000kim.com/posts/2026-03-31-claude-code-source-leak&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;Hugh Langley, "Claude Code leak reveals features, sparks clone wars, and raises legal questions," &lt;em&gt;Business Insider&lt;/em&gt;, April 2026, &lt;a href="https://www.businessinsider.com/claude-code-leak-what-happened-recreated-python-features-revealed-2026-4" rel="noopener noreferrer"&gt;https://www.businessinsider.com/claude-code-leak-what-happened-recreated-python-features-revealed-2026-4&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;Lee Sustar, "The Claude Code source leak," &lt;em&gt;Layer5 Engineering Blog&lt;/em&gt;, 2026, &lt;a href="https://layer5.io/blog/engineering/the-claude-code-source-leak" rel="noopener noreferrer"&gt;https://layer5.io/blog/engineering/the-claude-code-source-leak&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;Alex Kim, "The Claude Code Source Leak: fake tools, frustration regexes, undercover mode, and more," March 31, 2026, &lt;a href="https://alex000kim.com/posts/2026-03-31-claude-code-source-leak" rel="noopener noreferrer"&gt;https://alex000kim.com/posts/2026-03-31-claude-code-source-leak&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn13"&gt;
&lt;p&gt;Rahat Hasan (@Rahatcodes) and Boris Cherny (&lt;a class="mentioned-user" href="https://dev.to/bcherny"&gt;@bcherny&lt;/a&gt;), posts on X discussing Claude Code frustration analytics and the internal "fucks" chart, March 31, 2026. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn14"&gt;
&lt;p&gt;Michael Kan, "Anthropic Issues 8,000 Copyright Takedowns, Then Reverses Course," &lt;em&gt;PCMag&lt;/em&gt;, April 1, 2026, &lt;a href="https://www.pcmag.com/news/anthropic-issues-8000-copyright-takedowns" rel="noopener noreferrer"&gt;https://www.pcmag.com/news/anthropic-issues-8000-copyright-takedowns&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn15"&gt;
&lt;p&gt;Theo Browne (&lt;a class="mentioned-user" href="https://dev.to/theo"&gt;@theo&lt;/a&gt;), post on X regarding DMCA takedown of t3dotgg/claude-code fork, April 1, 2026, &lt;a href="https://x.com/theo/status/2039411851919057339" rel="noopener noreferrer"&gt;https://x.com/theo/status/2039411851919057339&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn16"&gt;
&lt;p&gt;Thariq Shihipar (@trq212), reply to Theo Browne on X, April 1, 2026, &lt;a href="https://x.com/trq212/status/2039415036645679167" rel="noopener noreferrer"&gt;https://x.com/trq212/status/2039415036645679167&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn17"&gt;
&lt;p&gt;Boris Cherny (&lt;a class="mentioned-user" href="https://dev.to/bcherny"&gt;@bcherny&lt;/a&gt;), response to broader DMCA criticism on X, April 1, 2026, &lt;a href="https://x.com/bcherny/status/2039426466094731289" rel="noopener noreferrer"&gt;https://x.com/bcherny/status/2039426466094731289&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn18"&gt;
&lt;p&gt;"Claude Code leak spawns fastest-growing GitHub repo ever," &lt;em&gt;Cybernews&lt;/em&gt;, April 2026, &lt;a href="https://cybernews.com/tech/claude-code-leak-spawns-fastest-github-repo" rel="noopener noreferrer"&gt;https://cybernews.com/tech/claude-code-leak-spawns-fastest-github-repo&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn19"&gt;
&lt;p&gt;Thomas Claburn, "Axios npm backdoor RAT lands amid wider package security chaos," &lt;em&gt;The Register&lt;/em&gt;, March 31, 2026, &lt;a href="https://www.theregister.com/2026/03/31/axios_npm_backdoor_rat/" rel="noopener noreferrer"&gt;https://www.theregister.com/2026/03/31/axios_npm_backdoor_rat/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>cybersecurity</category>
      <category>npm</category>
      <category>security</category>
    </item>
    <item>
      <title>Claude Mythos: What We Actually Know (and What We Don't)</title>
      <dc:creator>Solomon Neas</dc:creator>
      <pubDate>Tue, 31 Mar 2026 01:09:36 +0000</pubDate>
      <link>https://dev.to/solomonneas/claude-mythos-what-we-actually-know-and-what-we-dont-32k5</link>
      <guid>https://dev.to/solomonneas/claude-mythos-what-we-actually-know-and-what-we-dont-32k5</guid>
      <description>&lt;p&gt;On March 26, 2026, Fortune broke a story that Anthropic had accidentally exposed details of an unreleased AI model through a misconfigured content management system.&lt;sup id="fnref1"&gt;1&lt;/sup&gt; The model is called Claude Mythos. Anthropic confirmed it exists, called it a "step change" in capabilities, and said it's already being tested by early access customers.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Within 48 hours, the cybersecurity sector shed billions in market cap, unverified claims about 10 trillion parameters were circulating on social media, and half the AI commentary space was writing eulogies for the cybersecurity industry. The actual verified reporting tells a more interesting story than the hype cycle.&lt;/p&gt;

&lt;h2&gt;
  
  
  How the Leak Happened
&lt;/h2&gt;

&lt;p&gt;The leak came from Anthropic's own content management system. Digital assets uploaded through the CMS were set to public by default unless someone explicitly changed the setting to private. Nearly 3,000 assets that had never been published to Anthropic's website were sitting in a publicly searchable data store, accessible to anyone with the technical knowledge to query it.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The discovery was made independently by Roy Paz, a senior AI security researcher at LayerX Security, and Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge. Fortune confirmed and reviewed the material before publishing.&lt;sup id="fnref1"&gt;1&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Among the exposed documents was a draft blog post announcing Claude Mythos. The post was structured as a web page with headings and a publication date, suggesting it was part of a planned product launch. After Fortune contacted Anthropic, the company secured the data store and acknowledged the leak was caused by "human error in the CMS configuration."&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;There's real irony here. A company that builds AI models it says pose "unprecedented cybersecurity risks" got caught leaving internal documents in an unsecured public data store because someone forgot to flip a toggle. Anthropic was quick to clarify that Claude, Cowork, and their other AI tools were not involved in the configuration error.&lt;sup id="fnref3"&gt;3&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Draft Blog Post Actually Says
&lt;/h2&gt;

&lt;p&gt;The leaked draft describes Claude Mythos as "by far the most powerful AI model we've ever developed." It introduces a new tier of Claude models called Capybara, positioned above Opus in Anthropic's existing lineup of Opus, Sonnet, and Haiku.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Two versions of the draft blog post surfaced online, identical except for the model name: one uses "Mythos," the other uses "Capybara." Both versions use the same subtitle ("We have finished training a new AI model: Claude Mythos") and the same justification for the name, saying it was chosen to evoke "the deep connective tissue that links together knowledge and ideas."&lt;sup id="fnref4"&gt;4&lt;/sup&gt; Anthropic appears to have been deciding between the two names.&lt;/p&gt;

&lt;p&gt;The key claims from the draft:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Training is complete. The model is being trialed by early access customers.&lt;/li&gt;
&lt;li&gt;Performance is described as "dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others" compared to Claude Opus 4.6.&lt;sup id="fnref2"&gt;2&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;The model is "very expensive for us to serve, and will be very expensive for our customers to use." Anthropic says it needs to make it "much more efficient before any general release."&lt;sup id="fnref4"&gt;4&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;The rollout will start with cybersecurity-focused early access customers, with API access expanding gradually.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anthropic's official statement to Fortune: "We're developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we're being deliberate about how we release it. As is standard practice across the industry, we're working with a small group of early access customers to test the model. We consider this model a step change and the most capable we've built to date."&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cybersecurity Angle
&lt;/h2&gt;

&lt;p&gt;This is where the story gets interesting for anyone in security.&lt;/p&gt;

&lt;p&gt;The leaked draft describes Mythos as "currently far ahead of any other AI model in cyber capabilities." It warns that the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;Anthropic's plan, according to the draft, is to give defenders a head start: "We're releasing it in early access to organizations, giving them a head start in improving the robustness of their codebases against the impending wave of AI-driven exploits."&lt;sup id="fnref2"&gt;2&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This is not coming out of nowhere. Anthropic has been building toward this capability in public for months.&lt;/p&gt;

&lt;p&gt;In February 2026, Anthropic launched Claude Code Security, a tool built into Claude Code that scans codebases for vulnerabilities and suggests patches for human review. Unlike traditional static analysis tools that match code against known vulnerability patterns, Claude Code Security reads and reasons about code the way a human security researcher would, tracing data flows and catching complex flaws that rule-based tools miss.&lt;sup id="fnref5"&gt;5&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The same month, Anthropic's Frontier Red Team published research showing that Claude Opus 4.6 had found over 500 high-severity vulnerabilities in production open-source codebases. These were bugs that had gone undetected for decades despite years of expert review and millions of hours of automated fuzzing.&lt;sup id="fnref6"&gt;6&lt;/sup&gt; One example: Claude found a memory corruption vulnerability in GhostScript by reading the Git commit history, identifying a security-relevant patch, and then finding an unpatched code path where the same class of bug still existed.&lt;sup id="fnref6"&gt;6&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;And in November 2025, Anthropic disclosed that it had detected and disrupted what it called the first documented large-scale AI-orchestrated cyber espionage campaign. A Chinese state-sponsored group had manipulated Claude Code into attempting infiltration of roughly thirty global targets, including tech companies, financial institutions, and government agencies. The AI performed 80 to 90 percent of the campaign, with human intervention required at only four to six critical decision points.&lt;sup id="fnref7"&gt;7&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;If Mythos represents a real improvement over Opus 4.6 in these capabilities, both sides of the security equation just shifted.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Market Reaction
&lt;/h2&gt;

&lt;p&gt;Cybersecurity stocks took a beating on Friday, March 27. The iShares Cybersecurity ETF dropped 4.5%. CrowdStrike, Palo Alto Networks, Zscaler, and SentinelOne each fell about 6%, Okta and Netskope fell over 7%, and Tenable plummeted 9%.&lt;sup id="fnref8"&gt;8&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;This wasn't the first time. Cybersecurity stocks had already dipped in February when Anthropic launched Claude Code Security.&lt;sup id="fnref8"&gt;8&lt;/sup&gt; The sector has been under persistent pressure from the growing narrative that AI models will automate vulnerability discovery faster than security vendors can keep up.&lt;/p&gt;

&lt;p&gt;Some analysts pushed back on the panic. One analyst, Borg of a financial research firm, argued the news should actually be bullish for cybersecurity spending in the long run, since companies will need to accelerate adoption of AI-infused security tools to respond to AI-powered attacks at machine speed.&lt;sup id="fnref9"&gt;9&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The short-term read from the market was simpler: if an AI model can find exploits faster than your security product, your security product has a problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Don't Know
&lt;/h2&gt;

&lt;p&gt;Here's where the verified reporting ends and the speculation begins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Parameter count:&lt;/strong&gt; Claims of 10 trillion parameters have been circulating widely. This number does not appear in Fortune's reporting or in Anthropic's official statements. As one Substack writer noted, "new and unconfirmed details, ten trillion parameters, ten billion dollars to train, were circulating alongside claims that came directly from Fortune's reporting" within hours of the original story.&lt;sup id="fnref10"&gt;10&lt;/sup&gt; Treat it as unverified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Benchmarks:&lt;/strong&gt; We have Anthropic's claim of "dramatically higher scores" over Opus 4.6 in coding, reasoning, and cybersecurity. We do not have specific benchmark numbers, leaderboard rankings, or third-party evaluations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Release timeline:&lt;/strong&gt; The draft describes early access testing and acknowledges the model needs to become more efficient before general release. No date has been given.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final naming:&lt;/strong&gt; Whether the model ships as Mythos, Capybara, or something else entirely is still undecided, based on the existence of two draft versions.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Whether the hype is warranted:&lt;/strong&gt; Remember GPT-5. OpenAI made similar claims about a transformative leap in capabilities. The actual release was widely considered a disappointment that fell short of the company's promises.&lt;sup id="fnref11"&gt;11&lt;/sup&gt; Frontier model announcements and frontier model performance in the real world are not the same thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Competitive Context
&lt;/h2&gt;

&lt;p&gt;Anthropic is not the only company pushing the frontier right now. OpenAI has reportedly finished pretraining a new model codenamed "Spud," with CEO Sam Altman internally claiming it can "really accelerate the economy."&lt;sup id="fnref12"&gt;12&lt;/sup&gt; Both companies are expected to time their strongest model releases ahead of planned IPOs later this year.&lt;sup id="fnref4"&gt;4&lt;/sup&gt;&lt;/p&gt;

&lt;p&gt;The pattern is familiar. Each lab announces a step change, each release comes with safety caveats and controlled rollouts, and each time the actual impact takes months to assess once the models are in the hands of real users.&lt;/p&gt;

&lt;p&gt;What makes the Mythos leak different from the usual release cycle is the cybersecurity dimension. Anthropic did not choose to reveal this model. And the information that leaked was not marketing copy optimized for launch day. It was internal planning documents that included frank assessments of the risks.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means for Defenders
&lt;/h2&gt;

&lt;p&gt;If you work in cybersecurity, the takeaway is not "AI is going to replace your job." It's "the tools available to attackers just got substantially better, and the tools available to you did too."&lt;/p&gt;

&lt;p&gt;Anthropic's decision to give cybersecurity organizations early access to Mythos is a signal about where they think the model's impact will be felt first. Their own track record with Opus 4.6 supports that: 500+ zero-days found in hardened open-source projects, a documented state-sponsored campaign disrupted, and a vulnerability scanning tool that reasons about code instead of pattern-matching against known signatures.&lt;/p&gt;

&lt;p&gt;The models are getting better at finding bugs. The question is whether defenders adopt these capabilities faster than attackers do. That's always been the question in security. The tools just got a lot more powerful on both sides.&lt;/p&gt;

&lt;h2&gt;
  
  
  Notes
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://solomonneas.dev/blog/claude-mythos-anthropic-leak" rel="noopener noreferrer"&gt;solomonneas.dev&lt;/a&gt;. Follow me for more security engineering and AI infrastructure content.&lt;/em&gt;&lt;/p&gt;




&lt;ol&gt;

&lt;li id="fn1"&gt;
&lt;p&gt;Jeremy Kahn, "Exclusive: Anthropic Left Details of Unreleased AI Model, Exclusive CEO Event, in Unsecured Database," &lt;em&gt;Fortune&lt;/em&gt;, March 26, 2026, &lt;a href="https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/" rel="noopener noreferrer"&gt;https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn2"&gt;
&lt;p&gt;Jeremy Kahn, "Exclusive: Anthropic 'Mythos' AI Model Representing 'Step Change' in Power Revealed in Data Leak," &lt;em&gt;Fortune&lt;/em&gt;, March 26, 2026, &lt;a href="https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/" rel="noopener noreferrer"&gt;https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn3"&gt;
&lt;p&gt;Kahn, "Exclusive: Anthropic Left Details." ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn4"&gt;
&lt;p&gt;Matthias Bastian, "Anthropic Leak Reveals New Model 'Claude Mythos' with 'Dramatically Higher Scores on Tests' Than Any Previous Model," &lt;em&gt;The Decoder&lt;/em&gt;, March 27, 2026, &lt;a href="https://the-decoder.com/anthropic-leak-reveals-new-model-claude-mythos-with-dramatically-higher-scores-on-tests-than-any-previous-model/" rel="noopener noreferrer"&gt;https://the-decoder.com/anthropic-leak-reveals-new-model-claude-mythos-with-dramatically-higher-scores-on-tests-than-any-previous-model/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn5"&gt;
&lt;p&gt;Anthropic, "Making Frontier Cybersecurity Capabilities Available to Defenders," Anthropic News, February 20, 2026, &lt;a href="https://www.anthropic.com/news/claude-code-security" rel="noopener noreferrer"&gt;https://www.anthropic.com/news/claude-code-security&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn6"&gt;
&lt;p&gt;Nicholas Carlini et al., "0-Days," Anthropic Frontier Red Team, February 5, 2026, &lt;a href="https://red.anthropic.com/2026/zero-days/" rel="noopener noreferrer"&gt;https://red.anthropic.com/2026/zero-days/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn7"&gt;
&lt;p&gt;Anthropic, "Disrupting the First Reported AI-Orchestrated Cyber Espionage Campaign," Anthropic News, November 2025, &lt;a href="https://www.anthropic.com/news/disrupting-AI-espionage" rel="noopener noreferrer"&gt;https://www.anthropic.com/news/disrupting-AI-espionage&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn8"&gt;
&lt;p&gt;Jordan Novet, "Cybersecurity Stocks Fall on Report Anthropic Is Testing a Powerful New Model," &lt;em&gt;CNBC&lt;/em&gt;, March 27, 2026, &lt;a href="https://www.cnbc.com/2026/03/27/anthropic-cybersecurity-stocks-ai-mythos.html" rel="noopener noreferrer"&gt;https://www.cnbc.com/2026/03/27/anthropic-cybersecurity-stocks-ai-mythos.html&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn9"&gt;
&lt;p&gt;"Cybersecurity Stocks Plunge as Anthropic's 'Claude Mythos' Leak Sparks AI Fear," &lt;em&gt;Investing.com&lt;/em&gt;, March 28, 2026, &lt;a href="https://www.investing.com/news/stock-market-news/cybersecurity-stocks-plunge-as-anthropics-claude-mythos-leak-sparks-ai-fear-4584897" rel="noopener noreferrer"&gt;https://www.investing.com/news/stock-market-news/cybersecurity-stocks-plunge-as-anthropics-claude-mythos-leak-sparks-ai-fear-4584897&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn10"&gt;
&lt;p&gt;Stephen Fitzpatrick, "What the Heck Is Mythos?" Substack, March 30, 2026, &lt;a href="https://fitzyhistory.substack.com/p/what-the-heck-is-mythos" rel="noopener noreferrer"&gt;https://fitzyhistory.substack.com/p/what-the-heck-is-mythos&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn11"&gt;
&lt;p&gt;"GPT-5 Turned Out to Be a Major Letdown," &lt;em&gt;Futurism&lt;/em&gt;, accessed March 30, 2026, &lt;a href="https://futurism.com/gpt-5-disaster" rel="noopener noreferrer"&gt;https://futurism.com/gpt-5-disaster&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;li id="fn12"&gt;
&lt;p&gt;"OpenAI CEO Sam Altman Reportedly Teases a 'Very Strong' Model Internally That Can 'Really Accelerate the Economy,'" &lt;em&gt;The Decoder&lt;/em&gt;, accessed March 30, 2026, &lt;a href="https://the-decoder.com/openai-ceo-sam-altman-reportedly-teases-a-very-strong-model-internally-that-can-really-accelerate-the-economy/" rel="noopener noreferrer"&gt;https://the-decoder.com/openai-ceo-sam-altman-reportedly-teases-a-very-strong-model-internally-that-can-really-accelerate-the-economy/&lt;/a&gt;. ↩&lt;/p&gt;
&lt;/li&gt;

&lt;/ol&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
      <category>cybersecurity</category>
      <category>security</category>
    </item>
    <item>
      <title>Replacing SCCM with FOG Project</title>
      <dc:creator>Solomon Neas</dc:creator>
      <pubDate>Sun, 29 Mar 2026 23:33:08 +0000</pubDate>
      <link>https://dev.to/solomonneas/replacing-sccm-with-fog-project-daf</link>
      <guid>https://dev.to/solomonneas/replacing-sccm-with-fog-project-daf</guid>
      <description>&lt;h1&gt;
  
  
  Replacing SCCM with FOG Project
&lt;/h1&gt;

&lt;p&gt;When I moved our infrastructure from Hyper-V to Proxmox, I also took the chance to rip out one of the heaviest pieces of the old stack: SCCM.&lt;/p&gt;

&lt;p&gt;For an enterprise with thousands of endpoints and a full Microsoft licensing budget, SCCM can make sense. For four instructional labs with 72 workstations, it was too much. It wanted Windows Server, SQL Server, licensing baggage, and constant babysitting just to do the thing I actually needed: boot a machine over the network, lay down a clean image, put it back in the domain, and get out of the way.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/FOGProject/fogproject" rel="noopener noreferrer"&gt;FOG Project&lt;/a&gt; was the answer.&lt;/p&gt;

&lt;p&gt;This post is the technical deep-dive for the imaging side of the migration. For the full Hyper-V to Proxmox migration story, see &lt;a href="https://solomonneas.dev/blog/hyperv-to-proxmox-migration-guide" rel="noopener noreferrer"&gt;the migration deep-dive&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The environment I was replacing
&lt;/h2&gt;

&lt;p&gt;The target was straightforward on paper:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;4 classrooms&lt;/li&gt;
&lt;li&gt;72 total workstations&lt;/li&gt;
&lt;li&gt;3 hardware-specific Windows 11 images&lt;/li&gt;
&lt;li&gt;47 machines with known MAC addresses at import time&lt;/li&gt;
&lt;li&gt;25 machines that would need to self-register on first PXE boot&lt;/li&gt;
&lt;li&gt;Active Directory domain join after imaging&lt;/li&gt;
&lt;li&gt;Room-specific OU placement&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The architecture ended up looking like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;FOG server:&lt;/strong&gt; Debian 13 (Trixie) LXC on Proxmox, &lt;code&gt;10.0.1.20&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain controllers / DHCP:&lt;/strong&gt; Windows Server, &lt;code&gt;10.0.1.10&lt;/code&gt; and &lt;code&gt;10.0.1.11&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain:&lt;/strong&gt; &lt;code&gt;LAB.LOCAL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Join account:&lt;/strong&gt; &lt;code&gt;svc-domainjoin@LAB.LOCAL&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rooms:&lt;/strong&gt; &lt;code&gt;Room-A&lt;/code&gt;, &lt;code&gt;Room-B&lt;/code&gt;, &lt;code&gt;Room-C&lt;/code&gt;, &lt;code&gt;Room-D&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Images:&lt;/strong&gt; &lt;code&gt;Win11-Lab-G4&lt;/code&gt;, &lt;code&gt;Win11-Lab-Ultrawide&lt;/code&gt;, &lt;code&gt;Win11-Lab-G9&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I did &lt;strong&gt;not&lt;/strong&gt; build images in Proxmox VMs. I built them on &lt;strong&gt;reference machines that matched the real lab hardware&lt;/strong&gt;. The rooms were not identical. One had HP G4s, one had HP G9s, and two rooms had Dell FCT2250s paired with Dell UltraSharp 49" curved monitors (U4924DW/U4919DW), which needed their own driver set.&lt;/p&gt;

&lt;p&gt;Three golden images, three hardware profiles:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Room&lt;/th&gt;
&lt;th&gt;Image&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Room-A&lt;/td&gt;
&lt;td&gt;Win11-Lab-G4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Room-B&lt;/td&gt;
&lt;td&gt;Win11-Lab-Ultrawide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Room-C&lt;/td&gt;
&lt;td&gt;Win11-Lab-Ultrawide&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Room-D&lt;/td&gt;
&lt;td&gt;Win11-Lab-G9&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Active Directory prep
&lt;/h2&gt;

&lt;p&gt;The Linux box gets the attention in a FOG write-up, but the boring Windows prep is what made the deployment clean.&lt;/p&gt;

&lt;p&gt;I created a least-privilege service account called &lt;code&gt;svc-domainjoin&lt;/code&gt; and delegated only what FOG needed on the classroom OUs: create computer objects, delete computer objects, and full control on descendant computer objects.&lt;/p&gt;

&lt;p&gt;The OU layout was one OU per room. Then I delegated permissions with &lt;code&gt;dsacls&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;dsacls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OU=Room-A,DC=LAB,DC=LOCAL"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/I:T&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/G&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LAB\svc-domainjoin:CC;computer"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dsacls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OU=Room-A,DC=LAB,DC=LOCAL"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/I:T&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/G&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LAB\svc-domainjoin:DC;computer"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="n"&gt;dsacls&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"OU=Room-A,DC=LAB,DC=LOCAL"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/I:S&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/G&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LAB\svc-domainjoin:GA;;computer"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Same pattern on all four classroom OUs.&lt;/p&gt;
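&lt;p&gt;Rather than typing all twelve grants by hand, you can generate them. A minimal sketch (POSIX shell, reusing the OU and service-account names from above; review the printed commands before running them on a domain controller):&lt;/p&gt;

```shell
#!/bin/sh
# Emit the three dsacls grants for each classroom OU.
# Room names, domain, and service account are from the setup above.
for room in Room-A Room-B Room-C Room-D; do
    ou="OU=$room,DC=LAB,DC=LOCAL"
    printf '%s\n' "dsacls \"$ou\" /I:T /G \"LAB\\svc-domainjoin:CC;computer\""
    printf '%s\n' "dsacls \"$ou\" /I:T /G \"LAB\\svc-domainjoin:DC;computer\""
    printf '%s\n' "dsacls \"$ou\" /I:S /G \"LAB\\svc-domainjoin:GA;;computer\""
done
```

&lt;p&gt;Paste the output into an elevated prompt on a domain controller.&lt;/p&gt;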

&lt;p&gt;Before touching imaging at all, I cleaned house in AD: moved 38 misplaced computer objects into the right OUs, deleted 60 stale ones from old naming schemes, unlinked the SCCM GPO from the room OUs, and exported a CSV of all 72 lab machines.&lt;/p&gt;

&lt;p&gt;Then I set DHCP options 66 and 67 on the classroom scopes to point to FOG:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Option 66:&lt;/strong&gt; &lt;code&gt;10.0.1.20&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Option 67:&lt;/strong&gt; initially &lt;code&gt;ipxe.efi&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Should have been enough. It was not.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SCCM ghost in DHCP
&lt;/h2&gt;

&lt;p&gt;PXE still misbehaved on some scopes even with the scope-level options pointing to FOG.&lt;/p&gt;

&lt;p&gt;The culprit was stale SCCM and WDS &lt;strong&gt;policies&lt;/strong&gt; still attached to several scopes. DHCP policies take priority over scope options. Clients matching the PXE vendor class were quietly getting sent to the retired SCCM server at &lt;code&gt;10.0.1.14&lt;/code&gt; instead of FOG at &lt;code&gt;10.0.1.20&lt;/code&gt;. The old policies referenced &lt;code&gt;smsboot\x64\wdsmgfw.efi&lt;/code&gt; and &lt;code&gt;smsboot\x64\wdsnbp.com&lt;/code&gt; with the old boot server IP.&lt;/p&gt;

&lt;p&gt;Eleven of those policies, spread across five scopes. Once I removed them all, PXE stopped getting hijacked by dead infrastructure.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If PXE settings look right but clients keep booting somewhere else, check DHCP policies before you blame FOG.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  FOG on Debian Trixie: three bugs in a trenchcoat
&lt;/h2&gt;

&lt;p&gt;The FOG server was a Debian 13 LXC container on Proxmox. When I first looked at it, the web UI was running, which made it look installed.&lt;/p&gt;

&lt;p&gt;It was not. The backend was missing entirely. No TFTP boot files. No NFS exports. No FOG services. The web UI was a shell with nothing behind it.&lt;/p&gt;

&lt;p&gt;Re-ran the installer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /tmp/fogproject/bin
bash installfog.sh &lt;span class="nt"&gt;-y&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Trixie had other plans.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bug 1: wrong &lt;code&gt;osid&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;FOG's &lt;code&gt;.fogsettings&lt;/code&gt; recorded the wrong OS ID: the installer had detected the machine as Arch Linux instead of Debian. The fix, in &lt;code&gt;/opt/fog/.fogsettings&lt;/code&gt;, was changing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;osid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'3'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;osid&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'2'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without that, every package operation targeted the wrong distro.&lt;/p&gt;
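&lt;p&gt;The change is a one-liner to script. A sketch that demonstrates the substitution on a throwaway copy (on a real box the target is &lt;code&gt;/opt/fog/.fogsettings&lt;/code&gt;, the default install path):&lt;/p&gt;

```shell
#!/bin/sh
# Demonstrate the osid fix on a scratch copy of .fogsettings.
settings=$(mktemp)
printf "osid='3'\n" > "$settings"               # simulate the bad Arch detection
sed -i.bak "s/^osid=.*/osid='2'/" "$settings"   # 2 = Debian in FOG's installer
grep '^osid=' "$settings"                       # prints: osid='2'
```

&lt;p&gt;The &lt;code&gt;-i.bak&lt;/code&gt; keeps a backup of the original file, which is cheap insurance before re-running the installer.&lt;/p&gt;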

&lt;h3&gt;
  
  
  Bug 2: &lt;code&gt;libcurl4&lt;/code&gt; renamed on Debian 13
&lt;/h3&gt;

&lt;p&gt;Debian 13's &lt;code&gt;t64&lt;/code&gt; transition renamed &lt;code&gt;libcurl4&lt;/code&gt; to &lt;code&gt;libcurl4t64&lt;/code&gt;. FOG's installer still asked for the old name. I patched the package check in &lt;code&gt;functions.sh&lt;/code&gt; to handle Debian &amp;gt;= 13.&lt;/p&gt;
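&lt;p&gt;The patch amounted to picking the package name by Debian major version. A sketch of that check (the function name is mine, not FOG's actual helper):&lt;/p&gt;

```shell
#!/bin/sh
# Map a Debian major version to the libcurl package name the installer
# should request (libcurl4 became libcurl4t64 in Debian 13's t64 transition).
curl_pkg_for_debian() {
    if [ "$1" -ge 13 ]; then
        echo libcurl4t64
    else
        echo libcurl4
    fi
}
curl_pkg_for_debian 12   # prints: libcurl4
curl_pkg_for_debian 13   # prints: libcurl4t64
```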

&lt;h3&gt;
  
  
  Bug 3: &lt;code&gt;lastlog&lt;/code&gt; became &lt;code&gt;lastlog2&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;FOG's installer checks &lt;code&gt;lastlog&lt;/code&gt; to verify user creation. Trixie dropped it for &lt;code&gt;lastlog2&lt;/code&gt;. Quick fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; lastlog2
&lt;span class="nb"&gt;ln&lt;/span&gt; &lt;span class="nt"&gt;-sf&lt;/span&gt; /usr/bin/lastlog2 /usr/local/bin/lastlog
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once I patched all three, the installer completed and the backend finally matched the web UI.&lt;/p&gt;

&lt;h2&gt;
  
  
  NFS inside LXC: the container wall
&lt;/h2&gt;

&lt;p&gt;With the installer patched, the next wall was image storage. FOG uses NFS for capture and deployment. Inside a Proxmox LXC container, kernel NFS does not cooperate:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount: /proc/fs/nfsd: permission denied
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I tried the usual Proxmox container feature flags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pct &lt;span class="nb"&gt;set&lt;/span&gt; &amp;lt;CTID&amp;gt; &lt;span class="nt"&gt;-features&lt;/span&gt; &lt;span class="nv"&gt;nesting&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;1,nfs&lt;span class="o"&gt;=&lt;/span&gt;1
pct reboot &amp;lt;CTID&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Better, but still not a reliable kernel NFS server. I stopped fighting it and went userspace: &lt;code&gt;nfs-ganesha&lt;/code&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apt &lt;span class="nb"&gt;install &lt;/span&gt;nfs-ganesha nfs-ganesha-vfs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One annoying gotcha: Ganesha's pseudo paths cannot nest. Having &lt;code&gt;/images&lt;/code&gt; and &lt;code&gt;/images/dev&lt;/code&gt; as pseudo paths breaks child lookup. I had to flatten the namespace.&lt;/p&gt;

&lt;p&gt;The working &lt;code&gt;/etc/ganesha/ganesha.conf&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight conf"&gt;&lt;code&gt;&lt;span class="n"&gt;NFS_CORE_PARAM&lt;/span&gt; {
    &lt;span class="n"&gt;Protocols&lt;/span&gt; = &lt;span class="m"&gt;3&lt;/span&gt;,&lt;span class="m"&gt;4&lt;/span&gt;;
    &lt;span class="n"&gt;mount_path_pseudo&lt;/span&gt; = &lt;span class="n"&gt;true&lt;/span&gt;;
    &lt;span class="n"&gt;allow_set_io_flusher_fail&lt;/span&gt; = &lt;span class="n"&gt;true&lt;/span&gt;;
}

&lt;span class="n"&gt;EXPORT&lt;/span&gt; {
    &lt;span class="n"&gt;Export_Id&lt;/span&gt; = &lt;span class="m"&gt;1&lt;/span&gt;;
    &lt;span class="n"&gt;Path&lt;/span&gt; = /&lt;span class="n"&gt;images&lt;/span&gt;;
    &lt;span class="n"&gt;Pseudo&lt;/span&gt; = /&lt;span class="n"&gt;images&lt;/span&gt;;
    &lt;span class="n"&gt;Protocols&lt;/span&gt; = &lt;span class="m"&gt;3&lt;/span&gt;,&lt;span class="m"&gt;4&lt;/span&gt;;
    &lt;span class="n"&gt;Access_Type&lt;/span&gt; = &lt;span class="n"&gt;RO&lt;/span&gt;;
    &lt;span class="n"&gt;Squash&lt;/span&gt; = &lt;span class="n"&gt;no_root_squash&lt;/span&gt;;
    &lt;span class="n"&gt;FSAL&lt;/span&gt; { &lt;span class="n"&gt;Name&lt;/span&gt; = &lt;span class="n"&gt;VFS&lt;/span&gt;; }
    &lt;span class="n"&gt;CLIENT&lt;/span&gt; { &lt;span class="n"&gt;Clients&lt;/span&gt; = *; &lt;span class="n"&gt;Access_Type&lt;/span&gt; = &lt;span class="n"&gt;RO&lt;/span&gt;; }
}

&lt;span class="n"&gt;EXPORT&lt;/span&gt; {
    &lt;span class="n"&gt;Export_Id&lt;/span&gt; = &lt;span class="m"&gt;2&lt;/span&gt;;
    &lt;span class="n"&gt;Path&lt;/span&gt; = /&lt;span class="n"&gt;images&lt;/span&gt;/&lt;span class="n"&gt;dev&lt;/span&gt;;
    &lt;span class="n"&gt;Pseudo&lt;/span&gt; = /&lt;span class="n"&gt;imagesDev&lt;/span&gt;;
    &lt;span class="n"&gt;Protocols&lt;/span&gt; = &lt;span class="m"&gt;3&lt;/span&gt;,&lt;span class="m"&gt;4&lt;/span&gt;;
    &lt;span class="n"&gt;Access_Type&lt;/span&gt; = &lt;span class="n"&gt;RW&lt;/span&gt;;
    &lt;span class="n"&gt;Squash&lt;/span&gt; = &lt;span class="n"&gt;no_root_squash&lt;/span&gt;;
    &lt;span class="n"&gt;FSAL&lt;/span&gt; { &lt;span class="n"&gt;Name&lt;/span&gt; = &lt;span class="n"&gt;VFS&lt;/span&gt;; }
    &lt;span class="n"&gt;CLIENT&lt;/span&gt; { &lt;span class="n"&gt;Clients&lt;/span&gt; = *; &lt;span class="n"&gt;Access_Type&lt;/span&gt; = &lt;span class="n"&gt;RW&lt;/span&gt;; }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The critical line for LXC: &lt;code&gt;allow_set_io_flusher_fail = true;&lt;/code&gt;. Without it, Ganesha dies with &lt;code&gt;EPERM&lt;/code&gt; on &lt;code&gt;PR_SET_IO_FLUSHER&lt;/code&gt;. That flag lets it skip the kernel call and keep running.&lt;/p&gt;

&lt;p&gt;Verify with &lt;code&gt;showmount -e localhost&lt;/code&gt; and &lt;code&gt;rpcinfo -p localhost&lt;/code&gt;. If both look right, NFS is serving.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Kernel NFS in LXC was a dead end. Userspace NFS was the practical way through it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The golden image pipeline
&lt;/h2&gt;

&lt;p&gt;With the server side stable, the real work was image prep. Same process every time:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;clean install Windows 11 on the reference machine&lt;/li&gt;
&lt;li&gt;install drivers, software, updates, and room-specific settings&lt;/li&gt;
&lt;li&gt;keep it off the domain&lt;/li&gt;
&lt;li&gt;install the FOG client if needed for post-deploy tasks&lt;/li&gt;
&lt;li&gt;clean it aggressively&lt;/li&gt;
&lt;li&gt;sysprep and shut down&lt;/li&gt;
&lt;li&gt;PXE boot and capture in FOG&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Steps 5 and 6 collapsed into one thing: the cleanup-and-sysprep one-liner I kept copy-pasting between machines. Ugly, but it works:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Stop-Service&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wuauserv&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Remove-Item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;C:\Windows\SoftwareDistribution\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Recurse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Start-Service&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;wuauserv&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Dism&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/online&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/Cleanup-Image&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/StartComponentCleanup&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/ResetBase&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Remove-Item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;C:\Windows\Panther\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Recurse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; 
&lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Remove-Item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;C:\Windows\Temp\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Recurse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Remove-Item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;C:\Windows\Prefetch\&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Recurse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;Remove-Item&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$&lt;/span&gt;&lt;span class="nn"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;TEMP&lt;/span&gt;&lt;span class="s2"&gt;\*"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Recurse&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span 
class="n"&gt;Clear-RecycleBin&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Force&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-EA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SilentlyContinue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;cleanmgr&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/sagerun:1&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="n"&gt;C:\Windows\System32\Sysprep\sysprep.exe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/oobe&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/generalize&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;/shutdown&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I started calling it the Frankenstein command. It's stitched together from half a dozen different guides and does way too much in one line. But for repeatable image prep, it earned its place.&lt;/p&gt;

&lt;p&gt;Broken down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;Stop-Service wuauserv -Force&lt;/code&gt; / &lt;code&gt;Remove-Item SoftwareDistribution&lt;/code&gt; / &lt;code&gt;Start-Service wuauserv&lt;/code&gt;: kill Windows Update, nuke the cache, bring it back.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Dism /online /Cleanup-Image /StartComponentCleanup /ResetBase&lt;/code&gt;: collapse superseded component versions. Frees real space.&lt;/li&gt;
&lt;li&gt;The four &lt;code&gt;Remove-Item&lt;/code&gt; lines: purge Panther logs, system temp, Prefetch, and user temp.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Clear-RecycleBin&lt;/code&gt;: self-explanatory.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cleanmgr /sagerun:1&lt;/code&gt;: Disk Cleanup with preconfigured settings.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sysprep.exe /oobe /generalize /shutdown&lt;/code&gt;: strip machine identity, stage OOBE, power off.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; After sysprep shuts the machine down, do not let it boot back into Windows before capture. If it does, you just spent your sysprep state and get to do it again.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Sysprep on Windows 11: Appx package hell
&lt;/h2&gt;

&lt;p&gt;The worst part of this project was not FOG. It was Sysprep.&lt;/p&gt;

&lt;p&gt;If you've fought Windows 11 Sysprep before, you know the error. If you haven't, it looks like this in the Panther logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Sysprep was not able to validate your Windows installation.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;More specifically:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Package &amp;lt;app&amp;gt; was installed for a user, but not provisioned for all users. This package will not function properly in the sysprep image.
Failed to remove apps for the current user: 0x80073cf2.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On Windows 11 23H2 and 24H2, the Microsoft Store silently updates Appx packages per-user. That creates a version mismatch between the installed state and the provisioned state. Sysprep sees it, panics, refuses to generalize.&lt;/p&gt;
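&lt;p&gt;You can surface the mismatch yourself before Sysprep trips on it. A rough diagnostic, not part of my original workflow, that lists packages installed for a user but absent from the provisioned set:&lt;/p&gt;

```powershell
# Hypothetical pre-flight check: Appx packages installed per-user but not
# provisioned machine-wide -- the usual sysprep validation blockers.
# Run from an elevated PowerShell; output varies per machine.
$provisioned = (Get-AppxProvisionedPackage -Online).DisplayName
Get-AppxPackage -AllUsers |
    Where-Object { $provisioned -notcontains $_.Name } |
    Select-Object Name, PackageFullName
```

&lt;p&gt;Anything this prints is a candidate for the validation failure above, though as the list below shows, removing them is not guaranteed to stick.&lt;/p&gt;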

&lt;p&gt;What I tried that did &lt;strong&gt;not&lt;/strong&gt; work:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Remove-AppxPackage -AllUsers&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Remove-AppxProvisionedPackage&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;the mythical &lt;code&gt;SkipAppxValidation&lt;/code&gt; registry setting&lt;/li&gt;
&lt;li&gt;editing &lt;code&gt;Generalize.xml&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;renaming &lt;code&gt;AppxSysprep.dll&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some fail outright. Some appear to work, then the packages come back after a reboot. Some just swap one error for a different one.&lt;/p&gt;

&lt;p&gt;What actually worked: changing the source image entirely.&lt;/p&gt;

&lt;p&gt;I used Chris Titus Tech's WinUtil to build a &lt;strong&gt;MicroWin&lt;/strong&gt; ISO. Instead of installing stock Windows and spending hours ripping Store junk back out, I started from a clean ISO that never had the Appx baggage in the first place.&lt;/p&gt;

&lt;p&gt;Prevention over cleanup:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;build MicroWin ISO&lt;/li&gt;
&lt;li&gt;install on the reference machine&lt;/li&gt;
&lt;li&gt;configure drivers and software&lt;/li&gt;
&lt;li&gt;disconnect from the network before sysprep if needed&lt;/li&gt;
&lt;li&gt;run the cleanup command&lt;/li&gt;
&lt;li&gt;sysprep, shut down, capture immediately&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That was the difference between fighting Sysprep every time and having it just work.&lt;/p&gt;

&lt;h2&gt;
  
  
  PXE on newer hardware: &lt;code&gt;snponly.efi&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;One more issue that looked like FOG but wasn't. A newer motherboard would download &lt;code&gt;ipxe.efi&lt;/code&gt;, start up, and freeze at:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iPXE initializing devices...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No &lt;code&gt;ok&lt;/code&gt;. Just stuck.&lt;/p&gt;

&lt;p&gt;iPXE's built-in NIC driver couldn't handle the newer chipset. The fix: switch DHCP option 67 from &lt;code&gt;ipxe.efi&lt;/code&gt; to &lt;code&gt;snponly.efi&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The difference is simple. &lt;code&gt;ipxe.efi&lt;/code&gt; ships its own NIC drivers. &lt;code&gt;snponly.efi&lt;/code&gt; delegates to the UEFI firmware's native SNP driver. On newer boards, the firmware driver works where iPXE's doesn't.&lt;/p&gt;

&lt;p&gt;One DHCP change, &lt;code&gt;ipxe.efi&lt;/code&gt; to &lt;code&gt;snponly.efi&lt;/code&gt;, and the machine booted straight to the FOG menu. It became the default for all our UEFI clients.&lt;/p&gt;
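&lt;p&gt;Our DHCP side was a single scope option change. If you run ISC &lt;code&gt;dhcpd&lt;/code&gt; instead of Windows DHCP, the equivalent BIOS/UEFI split looks roughly like this; the addresses and the BIOS filename are placeholder assumptions, not our config:&lt;/p&gt;

```conf
# dhcpd.conf sketch: UEFI clients (arch type 00:07) get snponly.efi,
# legacy BIOS clients keep undionly.kpxe. IPs are placeholders.
option arch code 93 = unsigned integer 16;

subnet 10.0.0.0 netmask 255.255.255.0 {
    next-server 10.0.0.10;          # FOG/TFTP server
    if option arch = 00:07 {
        filename "snponly.efi";
    } else {
        filename "undionly.kpxe";
    }
}
```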

&lt;h2&gt;
  
  
  72 systems, deployed by room
&lt;/h2&gt;

&lt;p&gt;With images captured and PXE stable, FOG could finally do its job.&lt;/p&gt;

&lt;p&gt;Bulk imported the workstation list from CSV:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;47 had MAC addresses ready for direct import&lt;/li&gt;
&lt;li&gt;25 had no active lease at the time, so they were left to auto-register when first PXE booted&lt;/li&gt;
&lt;/ul&gt;
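&lt;p&gt;Before handing the CSV to FOG, it's worth a sanity pass on the MAC column; one bad row can sour a bulk import. A minimal sketch, assuming a simple &lt;code&gt;hostname,mac,image&lt;/code&gt; layout rather than FOG's full import template:&lt;/p&gt;

```shell
# Flag rows whose second column is not a valid colon-separated MAC.
# The three-column layout here is an assumption for illustration.
cat > hosts.csv <<'EOF'
LAB204-PC01,aa:bb:cc:dd:ee:01,win11-lab204
LAB204-PC02,not-a-mac,win11-lab204
EOF

bad=$(grep -vE '^[^,]+,([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2},' hosts.csv | cut -d, -f1)
if [ -n "$bad" ]; then
  echo "Bad MAC entries: $bad"
else
  echo "CSV looks clean"
fi
```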

&lt;p&gt;Each host got the right image and room group. Deploying a room was: select the group, schedule a deploy task, PXE boot the room. That's it.&lt;/p&gt;

&lt;p&gt;FOG uses Partclone under the hood, so it only transfers used blocks. A 500 GB drive with 30 GB used doesn't push 500 GB over the wire. It pushes those 30 GB, compressed down further in transit. Room-wide imaging is faster than people expect.&lt;/p&gt;
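&lt;p&gt;The back-of-envelope math, with an assumed compression ratio and link speed (neither is a measured number from this deployment):&lt;/p&gt;

```shell
# Rough unicast deploy estimate. compress_pct and link_mb_s are assumptions.
used_gb=30
compress_pct=60      # data shrinks to ~60% of used size on the wire (assumed)
link_mb_s=110        # practical 1 GbE throughput in MB/s (assumed)

wire_mb=$(( used_gb * 1024 * compress_pct / 100 ))
secs=$(( wire_mb / link_mb_s ))
echo "~${wire_mb} MB on the wire, about $(( secs / 60 )) minutes per machine"
```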

&lt;p&gt;After deployment, the FOG client sets the hostname, joins the domain via &lt;code&gt;svc-domainjoin&lt;/code&gt;, and drops the computer object in the correct OU. Multicast is also available for imaging an entire room simultaneously.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned
&lt;/h2&gt;

&lt;p&gt;Setting up FOG is not just "install FOG." It's DHCP policy archaeology, AD delegation, Windows image hygiene, boot loader compatibility, and the reality of running services inside Linux containers.&lt;/p&gt;

&lt;p&gt;The short list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;DHCP policies override scope options.&lt;/strong&gt; Check for leftover WDS/SCCM policies before blaming FOG.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;A running web UI does not mean FOG is installed.&lt;/strong&gt; Verify TFTP, services, and NFS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FOG on Debian Trixie needs manual patches.&lt;/strong&gt; &lt;code&gt;osid&lt;/code&gt;, &lt;code&gt;libcurl4t64&lt;/code&gt;, &lt;code&gt;lastlog2&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Kernel NFS in Proxmox LXC is a dead end.&lt;/strong&gt; Use &lt;code&gt;nfs-ganesha&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Build images on reference machines.&lt;/strong&gt; VM images don't carry driver quirks correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debloat before install, not after.&lt;/strong&gt; MicroWin saves hours of Appx cleanup.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test Sysprep on something disposable first.&lt;/strong&gt; Appx breakage on your finished image is a bad time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;code&gt;snponly.efi&lt;/code&gt; for newer hardware.&lt;/strong&gt; If iPXE hangs at device init, switch boot files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Never boot a sysprepped machine before capture.&lt;/strong&gt; Generalize is one-shot.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FOG's &lt;code&gt;hostName&lt;/code&gt; is VARCHAR(16).&lt;/strong&gt; Plan your naming accordingly.&lt;/li&gt;
&lt;/ul&gt;
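&lt;p&gt;That last limit stacks with Windows' own 15-character NetBIOS cap, so a naming scheme is worth checking up front. A quick sketch (the names are made up):&lt;/p&gt;

```shell
# Reject hostnames that overflow NetBIOS (15 chars); FOG's hostName
# column is VARCHAR(16), so anything passing this check fits there too.
check_hostname() {
  name="$1"
  if [ "${#name}" -gt 15 ]; then
    echo "FAIL: $name is ${#name} chars (NetBIOS max is 15)"
    return 1
  fi
  echo "OK: $name"
}

check_hostname "LAB204-PC01"
check_hostname "SCIENCE-LAB204-PC01" || true
```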

&lt;p&gt;Once the rough edges were filed down, the day-to-day got simple. Pick the room. PXE boot. Image. Domain join. Done. Way less overhead than SCCM ever was.&lt;/p&gt;

&lt;h2&gt;
  
  
  That's the kind of boring I want from lab infrastructure.
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This post is part of a larger infrastructure migration series. For the full Hyper-V to Proxmox migration story, see the &lt;a href="https://solomonneas.dev/projects/hyperv-to-proxmox-migration" rel="noopener noreferrer"&gt;project page on solomonneas.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://solomonneas.dev/blog/replacing-sccm-with-fog-project" rel="noopener noreferrer"&gt;solomonneas.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>fogproject</category>
      <category>windows</category>
      <category>pxe</category>
      <category>sysadmin</category>
    </item>
    <item>
      <title>I Built 7 MCP Servers for Security Tools. The Protocol Was the Easy Part.</title>
      <dc:creator>Solomon Neas</dc:creator>
      <pubDate>Mon, 23 Mar 2026 20:59:54 +0000</pubDate>
      <link>https://dev.to/solomonneas/i-built-7-mcp-servers-for-security-tools-the-protocol-was-the-easy-part-4137</link>
      <guid>https://dev.to/solomonneas/i-built-7-mcp-servers-for-security-tools-the-protocol-was-the-easy-part-4137</guid>
      <description>&lt;p&gt;I wanted my AI agent to talk directly to my security stack. Not through copy-pasted log snippets. Not through screenshots of dashboards. Actual tool calls against live data.&lt;/p&gt;

&lt;p&gt;So I built seven MCP servers. Wazuh. Suricata. Zeek. TheHive. Cortex. MISP. MITRE ATT&amp;amp;CK. All open source, all on &lt;a href="https://github.com/solomonneas" rel="noopener noreferrer"&gt;my GitHub&lt;/a&gt;. Project page: &lt;a href="https://solomonneas.dev/projects/security-mcp-servers" rel="noopener noreferrer"&gt;https://solomonneas.dev/projects/security-mcp-servers&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The protocol layer took a weekend. The context engineering took weeks. That ratio surprised me.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Actually Built
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;API-based servers&lt;/strong&gt; talk directly to running services. Wazuh MCP hits the manager's REST API on port 55000 for alerts, agent status, vulnerability scans, and file integrity events. TheHive and Cortex connect to their respective APIs for case management and observable analysis. MISP pulls threat intelligence feeds and IOC lookups.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log-based servers&lt;/strong&gt; parse files on disk. Zeek MCP reads from a log directory (JSON or TSV format), letting you query connection logs, DNS, HTTP, SSL, and file analysis data. Suricata MCP reads EVE JSON logs for IDS alerts, flow data, and protocol metadata.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Knowledge-base servers&lt;/strong&gt; work offline. The MITRE ATT&amp;amp;CK server downloads STIX 2.1 bundles and lets you query techniques, tactics, groups, software, and mitigations without hitting any external API.&lt;/p&gt;

&lt;p&gt;Each server exposes a focused set of tools. Wazuh has &lt;code&gt;get_alerts&lt;/code&gt;, &lt;code&gt;list_agents&lt;/code&gt;, &lt;code&gt;get_vulnerabilities&lt;/code&gt;, &lt;code&gt;get_fim_events&lt;/code&gt;. Zeek has &lt;code&gt;query_connections&lt;/code&gt;, &lt;code&gt;search_dns&lt;/code&gt;, &lt;code&gt;get_ssl_certs&lt;/code&gt;. Suricata has &lt;code&gt;get_alerts&lt;/code&gt;, &lt;code&gt;get_flow_stats&lt;/code&gt;, &lt;code&gt;search_protocols&lt;/code&gt;.&lt;/p&gt;
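&lt;p&gt;On the wire, every one of those tools is invoked through the same MCP &lt;code&gt;tools/call&lt;/code&gt; JSON-RPC method. A request to the Wazuh server looks roughly like this; the argument names are mine, not part of the protocol:&lt;/p&gt;

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_alerts",
    "arguments": { "min_level": 8, "limit": 20 }
  }
}
```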

&lt;p&gt;Every tool does one thing with predictable output. Full code and docs at &lt;a href="https://github.com/solomonneas" rel="noopener noreferrer"&gt;github.com/solomonneas&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing Against Live Infrastructure
&lt;/h2&gt;

&lt;p&gt;Every server got tested against real running services on my home infrastructure.&lt;/p&gt;

&lt;p&gt;Wazuh MCP was tested against my Wazuh 4.14.1 instance running on Proxmox. I queried live alerts, pulled agent status for my connected machines, ran vulnerability scan results, and verified file integrity monitoring events. The agent reconnection workflow got tested end-to-end: listing disconnected agents, checking last keep-alive, triggering restarts.&lt;/p&gt;

&lt;p&gt;Zeek and Suricata servers were tested against actual captured traffic. Real log files through both parsers, connection correlation across source/destination pairs, DNS query lookups, and stress-tested time-window filtering with large log directories. Edge cases like malformed log entries and mixed JSON/TSV formats got handled explicitly.&lt;/p&gt;

&lt;p&gt;TheHive and Cortex were tested against their APIs with sample cases and observables. MISP was tested with real IOC lookups. The MITRE ATT&amp;amp;CK server was verified against the full STIX 2.1 enterprise bundle.&lt;/p&gt;

&lt;p&gt;The goal was not just "does the tool call succeed." It was "does the model get back data it can actually reason about for a real investigation."&lt;/p&gt;

&lt;h2&gt;
  
  
  Context Design Is the Real Engineering
&lt;/h2&gt;

&lt;p&gt;Security telemetry is exactly the kind of data language models handle poorly. It's verbose, repetitive, and full of fields that matter sometimes and are noise the rest of the time.&lt;/p&gt;

&lt;p&gt;Take Wazuh alerts. A single alert has 40+ fields. Dump all of that into a model and ask it to "analyze the situation." You'll get a vague summary that touches everything and understands nothing.&lt;/p&gt;

&lt;p&gt;My first versions returned raw API responses. The model would pick whatever fields were easiest to talk about instead of whatever actually mattered.&lt;/p&gt;

&lt;p&gt;So I started designing the context layer. For Wazuh, I filter to severity 8+ by default and return a focused subset: timestamp, rule description, agent name, source IP, and MITRE technique. For Zeek, I pre-aggregate by source/destination pair and surface unusual patterns first. For Suricata, I separate IDS alerts from flow metadata. Detections first, network context second.&lt;/p&gt;
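&lt;p&gt;The same shaping is easy to sketch outside the server. A &lt;code&gt;jq&lt;/code&gt; version of the Wazuh filter, with field paths assumed from Wazuh's usual alert layout:&lt;/p&gt;

```shell
# Keep only level >= 8 alerts and project the handful of fields the model
# needs. Field paths are assumptions based on Wazuh's common alert schema.
cat > alerts.json <<'EOF'
{"timestamp":"2026-03-23T20:00:00Z","rule":{"level":10,"description":"Multiple failed logins","mitre":{"technique":["Brute Force"]}},"agent":{"name":"web01"},"data":{"srcip":"203.0.113.9"}}
{"timestamp":"2026-03-23T20:00:01Z","rule":{"level":3,"description":"Routine event"},"agent":{"name":"web01"}}
EOF

jq -c 'select(.rule.level >= 8)
       | {timestamp, rule: .rule.description, agent: .agent.name,
          src: .data.srcip, technique: .rule.mitre.technique}' alerts.json
```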

&lt;h2&gt;
  
  
  Where It Gets Interesting
&lt;/h2&gt;

&lt;p&gt;A Wazuh alert fires for a suspicious process. The model checks Zeek for that host's network activity. Finds outbound connections to an unusual IP. Queries ATT&amp;amp;CK for technique mapping. Checks MISP for threat intel on the destination.&lt;/p&gt;

&lt;p&gt;That correlation chain used to take 15 minutes of clicking through interfaces. Now it takes one question.&lt;/p&gt;

&lt;p&gt;I'm not replacing analysts. I'm killing the mechanical evidence-gathering that burns time before a human reaches the real decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Lesson
&lt;/h2&gt;

&lt;p&gt;The protocol is a solved problem. MCP works. The bottleneck is what happens between raw data and the model's context window. Filtering, ordering, scoping, pre-summarizing. That's where analysis quality is determined.&lt;/p&gt;

&lt;p&gt;A model with access to every field in every log is worse off than one that sees the right 15 fields in the right order.&lt;/p&gt;

&lt;p&gt;Seven servers. All open source. All tested against live infrastructure. Code at &lt;a href="https://github.com/solomonneas" rel="noopener noreferrer"&gt;github.com/solomonneas&lt;/a&gt;. The protocol was a weekend. The context design is ongoing. That's the ratio that matters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>opensource</category>
      <category>security</category>
    </item>
    <item>
      <title>I Migrated Our Entire Infrastructure from Hyper-V to Proxmox. Here's Everything I Learned.</title>
      <dc:creator>Solomon Neas</dc:creator>
      <pubDate>Sat, 14 Mar 2026 06:45:04 +0000</pubDate>
      <link>https://dev.to/solomonneas/i-migrated-our-entire-infrastructure-from-hyper-v-to-proxmox-heres-everything-i-learned-g3k</link>
      <guid>https://dev.to/solomonneas/i-migrated-our-entire-infrastructure-from-hyper-v-to-proxmox-heres-everything-i-learned-g3k</guid>
      <description>&lt;p&gt;Domain controllers, file servers, network monitoring, imaging, WiFi controllers. All of it moved from Microsoft to open source. No downtime. No data loss. Here's the complete playbook.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why We Left Hyper-V
&lt;/h2&gt;

&lt;p&gt;Broadcom acquired VMware and started charging $350/core/year for VCF licensing. They killed the VMware IT Academy program entirely. The institution moved from vSphere to Hyper-V as a cost-saving measure, but I'd already done a VMware to Proxmox migration on my own infrastructure at that point. That migration opened my eyes to how good Proxmox actually is.&lt;/p&gt;

&lt;p&gt;It's more lightweight. The web UI gives you more granular control than Hyper-V Manager ever did. Snapshots, live migration, ZFS, LXC containers, and full KVM virtualization all in one platform. Completely free. No per-socket licensing, no Windows Server dependency, no CALs. One less thing Microsoft gets to hold over your budget.&lt;/p&gt;

&lt;p&gt;Hyper-V felt heavy by comparison. Limited Linux VM support, clunky management (RDP into the host just to touch anything), and tight coupling to Windows Server licensing. Once I'd seen what Proxmox could do, going back to Hyper-V felt like a downgrade.&lt;/p&gt;

&lt;p&gt;The question was never "should we migrate?" It was "how do we migrate production Active Directory, network monitoring, file servers, and imaging infrastructure without breaking anything?"&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of Root on a Proxmox Host
&lt;/h2&gt;

&lt;p&gt;One thing that surprised me coming from Hyper-V: you have full root access to the Proxmox host. It's just Debian under the hood. You can SSH in, run any Linux command, script anything, automate everything. Hyper-V locks you into PowerShell remoting or RDP. Proxmox gives you a real shell on a real Linux system.&lt;/p&gt;

&lt;p&gt;Need to resize a disk? One command. Snapshot a VM? One command. Migrate a VM between hosts? One command. Everything in the web UI is also available from the CLI through &lt;code&gt;qm&lt;/code&gt; (VM management), &lt;code&gt;pct&lt;/code&gt; (container management), &lt;code&gt;pvesm&lt;/code&gt; (storage), and &lt;code&gt;pvecm&lt;/code&gt; (cluster). You can script your entire infrastructure.&lt;/p&gt;
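&lt;p&gt;A few of those one-liners, for flavor. These need a real Proxmox host; the VM/container IDs and node name below are placeholders:&lt;/p&gt;

```shell
# Each command maps 1:1 to a web UI action. IDs and node names are placeholders.
qm resize 102 scsi0 +20G            # grow a VM's virtual disk
qm snapshot 102 pre-upgrade         # take a point-in-time snapshot
qm migrate 102 pve-node2 --online   # live-migrate to another cluster node
pct enter 201                       # drop into a container's shell
```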

&lt;p&gt;But the real game changer is the &lt;a href="https://community-scripts.github.io/ProxmoxVE/" rel="noopener noreferrer"&gt;Proxmox VE Helper Scripts&lt;/a&gt; community project. These are one-liner bash scripts that spin up fully configured LXC containers or VMs for common services. Need a Pi-hole? One command. Docker host? One command. Home Assistant, Nginx Proxy Manager, Plex, Grafana, Wireguard? One command each.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Example: spin up a Docker LXC in seconds&lt;/span&gt;
bash &lt;span class="nt"&gt;-c&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;wget &lt;span class="nt"&gt;-qLO&lt;/span&gt; - https://github.com/community-scripts/ProxmoxVE/raw/main/ct/docker.sh&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script handles everything: downloads the template, creates the container, configures networking, installs the service, and starts it. What would take 30 minutes of manual setup takes 60 seconds. I used these for several of our auxiliary services and they just work.&lt;/p&gt;

&lt;p&gt;Compare that to Hyper-V where deploying a new service means: create a VM, install Windows or manually download an ISO, walk through the installer, configure networking, install the actual application. The gap in operational speed is enormous.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Domain Controller Leapfrog
&lt;/h2&gt;

&lt;p&gt;This was the part that scared me most. Domain controllers are the heartbeat of a Windows network. Every authentication, every group policy, every DNS lookup flows through them. Get this wrong and the whole campus goes dark.&lt;/p&gt;

&lt;p&gt;The conventional wisdom is clear: &lt;strong&gt;never V2V a domain controller.&lt;/strong&gt; Converting a DC's virtual disk risks USN rollback, which corrupts the AD replication database. Recovery means forcibly demoting the affected DC, cleaning its metadata out of the domain, and rebuilding it from scratch.&lt;/p&gt;

&lt;p&gt;Instead, I used what I call the "leapfrog" method. We had two DCs: DC1 and DC2, both on Hyper-V.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Transfer all five FSMO roles to DC2. Verify DHCP scopes, DNS zones, and AD replication are healthy. DC2 is now running the show.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2:&lt;/strong&gt; Delete DC1. Build a fresh Windows Server VM on Proxmox. Promote it to domain controller. AD replication syncs everything from DC2 automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3:&lt;/strong&gt; Transfer all FSMO roles to the new DC1 on Proxmox. Verify everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4:&lt;/strong&gt; Delete DC2 on Hyper-V. Build fresh on Proxmox. Promote. AD replicates from DC1.&lt;/p&gt;

&lt;p&gt;Both domain controllers are now on Proxmox. Zero downtime. Zero data loss. The whole process was honestly easier than I expected because AD replication just works when you let it do its job.&lt;/p&gt;

&lt;p&gt;The PowerShell for the FSMO transfer is one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;Move-ADDirectoryServerOperationMasterRole&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nt"&gt;-Identity&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"NEW-DC1"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nt"&gt;-OperationMasterRole&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;SchemaMaster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;DomainNamingMaster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="se"&gt;`
&lt;/span&gt;&lt;span class="w"&gt;  &lt;/span&gt;&lt;span class="nx"&gt;PDCEmulator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;RIDMaster&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;InfrastructureMaster&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Always verify with &lt;code&gt;repadmin /showrepl&lt;/code&gt; after each promotion and transfer. If replication shows errors, stop and fix them before proceeding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linux VM Migration: The V2V Process
&lt;/h2&gt;

&lt;p&gt;For Linux VMs (LibreNMS, Netdisco, Switchmap), I used direct disk conversion. The process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create a "shell" VM&lt;/strong&gt; in Proxmox. Set the OS type, match the BIOS to the source Hyper-V generation (Gen 1 = SeaBIOS, Gen 2 = OVMF UEFI), but &lt;strong&gt;do not create a hard drive.&lt;/strong&gt; The disk list should be empty.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;SCP the VHDX&lt;/strong&gt; from the Hyper-V host to Proxmox:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;&lt;span class="n"&gt;scp&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:\Path\To\Disk.vhdx"&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nx"&gt;root&lt;/span&gt;&lt;span class="err"&gt;@&lt;/span&gt;&lt;span class="nx"&gt;PROXMOX_IP:/var/lib/vz/dump/&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Import and attach&lt;/strong&gt; on the Proxmox side:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;qm importdisk 102 /var/lib/vz/dump/Netdisco.vhdx local-lvm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then in the GUI: Hardware &amp;gt; double-click Unused Disk 0 &amp;gt; add as SCSI. Set boot order to prioritize scsi0.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Post-migration gotchas:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network interface names change (eth0 becomes ens18). Update your netplan config.&lt;/li&gt;
&lt;li&gt;Install &lt;code&gt;qemu-guest-agent&lt;/code&gt; so Proxmox can see the VM's IP and gracefully shut it down.&lt;/li&gt;
&lt;li&gt;LibreNMS needed a full permissions reset. Run &lt;code&gt;validate.php&lt;/code&gt; as the librenms user and follow every instruction it gives you.&lt;/li&gt;
&lt;li&gt;Netdisco needed its database host changed to localhost in &lt;code&gt;deployment.yml&lt;/code&gt; and a session cookie key added to prevent crashes.&lt;/li&gt;
&lt;/ul&gt;
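&lt;p&gt;For the interface rename, the netplan fix is usually a one-line change. A minimal sketch assuming a static address (the filename, addresses, and gateway below are placeholders; match them to your VM):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;# /etc/netplan/00-installer-config.yaml (filename varies)
network:
  version: 2
  ethernets:
    ens18:                      # was eth0 under Hyper-V
      addresses: [10.0.0.20/24]
      routes:
        - to: default
          via: 10.0.0.1
      nameservers:
        addresses: [10.0.0.10]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Apply with &lt;code&gt;netplan try&lt;/code&gt;, which rolls back automatically if the new config cuts off connectivity.&lt;/p&gt;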

&lt;h2&gt;
  
  
  Killing DFS, Simplifying Drive Maps
&lt;/h2&gt;

&lt;p&gt;The old environment used a DFS namespace to abstract file server paths. For a single-server environment, DFS adds complexity that provides no benefit: 30-minute referral TTL, client cache issues, and another layer to troubleshoot when users can't access files.&lt;/p&gt;

&lt;p&gt;I ripped it out and replaced it with Group Policy Preferences drive mappings using item-level targeting:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;X:\&lt;/strong&gt; mapped for faculty and staff, pointing to the full file server&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Y:\&lt;/strong&gt; mapped for students, pointing to the student folders only&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Security group membership determines which mapping a user gets. No login scripts, no DFS, no namespace caching. If a user is in the Faculty-Staff group, they get X:\. If they're in the Students group, they get Y:\. Simple.&lt;/p&gt;

&lt;h2&gt;
  
  
  UniFi Controller: Windows VM to LXC Container
&lt;/h2&gt;

&lt;p&gt;This one was almost comical. The UniFi controller was running on a Windows 11 VM inside Hyper-V. To manage the WiFi, you had to RDP into the Hyper-V host, then log into the Windows VM from there. No SSH. No remote management. Just nested RDP sessions.&lt;/p&gt;

&lt;p&gt;The migration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Export the UniFi backup (.unf file) from the Windows controller&lt;/li&gt;
&lt;li&gt;Create an LXC container on Proxmox using the official UniFi template&lt;/li&gt;
&lt;li&gt;Upload the .unf backup and restore&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All WAP configurations, SSIDs, and client data came over intact. WiFi was back up in minutes. And now it runs in a lightweight container instead of a full Windows 11 VM. The resource savings alone made it worthwhile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing SCCM with FOG Project
&lt;/h2&gt;

&lt;p&gt;Microsoft SCCM is powerful but absurdly heavy for an educational lab environment. It needs Windows Server, SQL Server, per-device licensing, and significant infrastructure just to image workstations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/FOGProject/" rel="noopener noreferrer"&gt;FOG Project&lt;/a&gt; does everything we actually need: PXE boot imaging, hardware inventory, and centralized workstation management. It runs on Linux, costs nothing, and the web UI is straightforward.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Golden Image Pipeline
&lt;/h3&gt;

&lt;p&gt;I build golden images as Proxmox VMs (not on physical hardware) so I can snapshot before Sysprep. This is critical because if Sysprep fails, you cannot simply run it again. The only recovery is reverting to a snapshot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Install and debloat.&lt;/strong&gt; Set up a clean Windows 11 installation on a reference machine. Run &lt;a href="https://github.com/ChrisTitusTech/winutil" rel="noopener noreferrer"&gt;Chris Titus Tech's Windows Utility&lt;/a&gt; to strip all the bloatware (Candy Crush, Spotify, Xbox, etc.) and disable telemetry. This handles both installed and provisioned packages, which is important because leftover staged Appx packages are the number one cause of silent Sysprep failures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Sysprep and shutdown.&lt;/strong&gt; Once the machine is configured how you want it, run &lt;code&gt;sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml&lt;/code&gt;. The unattend file handles BypassNRO (Windows 11's forced internet requirement) and automates the OOBE setup after deployment. The machine shuts down after Sysprep completes. Do not power it back on.&lt;/p&gt;
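&lt;p&gt;The BypassNRO part of that unattend file boils down to a single registry write during the specialize pass (shown here as the bare command; in &lt;code&gt;unattend.xml&lt;/code&gt; it lives inside a &lt;code&gt;RunSynchronousCommand&lt;/code&gt; entry):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;:: skip Windows 11's forced network/Microsoft-account step at OOBE
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;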

&lt;p&gt;&lt;strong&gt;Step 3: FOG capture.&lt;/strong&gt; Schedule a capture task in the FOG web UI for that machine, then PXE boot it. FOG captures the sysprepped image as-is, sitting at OOBE. When the image gets deployed to a workstation later, the unattend.xml automates the OOBE setup, the FOG service agent kicks in for background management, and AD auto-join handles domain membership. No manual touch required.&lt;/p&gt;

&lt;h3&gt;
  
  
  Per-Classroom Deployment
&lt;/h3&gt;

&lt;p&gt;Each classroom has different hardware, so I maintain separate images per room. Every workstation is registered in FOG via CSV import (hostname + MAC address), grouped by classroom. When a room needs reimaging, I select the group, schedule a deploy task, and FOG uses Partclone to push the image. Partclone only writes used blocks, so imaging is fast even on large drives.&lt;/p&gt;
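&lt;p&gt;The import file itself is minimal: one row per workstation. A hypothetical two-host example (hostnames and MACs are made up, and the exact columns FOG expects can vary by version, so check the import template in your FOG web UI):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;"ROOM1-PC01","aa:bb:cc:dd:ee:01"
"ROOM1-PC02","aa:bb:cc:dd:ee:02"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;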

&lt;p&gt;The FOG agent runs on every workstation with a dedicated &lt;code&gt;fog-service&lt;/code&gt; Active Directory service account. DHCP points PXE boot to the FOG server using &lt;code&gt;snponly.efi&lt;/code&gt; for UEFI network boot. A machine needing reimaging just needs to PXE boot and everything happens automatically.&lt;/p&gt;
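&lt;p&gt;The DHCP side of that PXE handoff is two settings: the TFTP server address and the boot filename (options 66 and 67 on Windows DHCP). In ISC &lt;code&gt;dhcpd&lt;/code&gt; syntax, assuming the FOG server sits at 10.0.0.25 (a placeholder address):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# send UEFI PXE clients to the FOG server
next-server 10.0.0.25;
filename "snponly.efi";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;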

&lt;h2&gt;
  
  
  WSUS: Closing the Update Loop
&lt;/h2&gt;

&lt;p&gt;The last piece of the imaging puzzle was patch management. Without centralized updates, every golden image would need constant rebuilding just to stay current. And letting 60+ lab machines pull updates directly from Microsoft on their own schedule is a recipe for bandwidth problems and inconsistent states.&lt;/p&gt;

&lt;p&gt;I set up WSUS (Windows Server Update Services) directly on DC1. For an environment this size (four classrooms and a handful of staff machines), a dedicated WSUS server would be overkill. Running it on the domain controller keeps the footprint small and the management simple.&lt;/p&gt;

&lt;p&gt;The update pipeline works in two stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test lab first.&lt;/strong&gt; New updates land in WSUS but aren't auto-approved. I have a WSUS computer group for a small set of test machines. Updates get approved for the test group first. They run for about four to five days. This buffer is intentional: it's enough time for the wider community to flag botched patches, regressions, or driver conflicts before anything hits production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then classrooms.&lt;/strong&gt; After the test window passes clean, I approve updates for the classroom groups. WSUS pushes them from the local server, so machines pull patches over the LAN instead of each one hammering Microsoft's CDN individually. Faster downloads, less bandwidth, and every machine in a room ends up on the same patch level.&lt;/p&gt;
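&lt;p&gt;Both approval stages can be scripted with the UpdateServices PowerShell module that ships with WSUS, rather than clicked through the console. A sketch (the &lt;code&gt;Test-Lab&lt;/code&gt; and &lt;code&gt;Classrooms&lt;/code&gt; group names are stand-ins for whatever groups you created):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight powershell"&gt;&lt;code&gt;# approve everything new for the test group only;
# rerun with -TargetGroupName "Classrooms" after the soak window passes clean
Get-WsusUpdate -Approval Unapproved -Status FailedOrNeeded |
    Approve-WsusUpdate -Action Install -TargetGroupName "Test-Lab"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;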

&lt;p&gt;This also means the golden image for FOG doesn't need to be rebuilt every Patch Tuesday. WSUS handles ongoing patching after deployment. The golden image only needs updating when there's a major feature release or a change to the base software stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I'd Do Differently
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Document interface names before migration.&lt;/strong&gt; Every Linux VM had a different post-migration network issue because the interface name changed. A quick &lt;code&gt;ip link show&lt;/code&gt; before the migration would have saved debugging time.&lt;/p&gt;
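&lt;p&gt;That pre-migration inventory can be one loop over sysfs, with the output pasted into your migration notes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# record interface names and MACs before shutting the VM down,
# so you know exactly which names the old configs reference
for i in /sys/class/net/*; do
  printf '%s %s\n' "$(basename "$i")" "$(cat "$i/address")"
done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;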

&lt;p&gt;&lt;strong&gt;Test Sysprep on a throwaway VM first.&lt;/strong&gt; My first Sysprep attempt failed because of a leftover Xbox app. Always run through the full golden image pipeline once as a dry run before committing to your production image.&lt;/p&gt;

&lt;h2&gt;
  
  
  The SOC Stack
&lt;/h2&gt;

&lt;p&gt;I also migrated the full security operations stack: Wazuh for endpoint detection and SIEM, Cortex for automated analysis, TheHive for case management, and MISP for threat intelligence sharing. Same V2V process as the other Linux VMs. These were already running on Linux, so it was disk conversion, interface rename, guest agent install, and verify services. Nothing special, but worth mentioning because people forget about their security tooling when planning hypervisor migrations.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Final Tally
&lt;/h2&gt;

&lt;p&gt;When everything was done, the infrastructure footprint looked like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;4 standalone Proxmox servers&lt;/strong&gt; running production workloads: domain controllers, network monitoring (LibreNMS, Netdisco, Switchmap), Samba AD file server, FOG imaging, UniFi controller, and the SOC stack (Wazuh, Cortex, TheHive, MISP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;6-node Proxmox cluster&lt;/strong&gt; for the NetLab environment, where students run hands-on lab exercises&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;10 total Proxmox hosts&lt;/strong&gt;, all on open-source infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Total hypervisor licensing cost: $0.&lt;/p&gt;

&lt;p&gt;The migration took planning and careful execution, but none of it was technically complex. The hardest part was convincing myself that AD replication would actually work as advertised. It did.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Originally published at &lt;a href="https://solomonneas.dev/blog/hyperv-to-proxmox-migration-guide" rel="noopener noreferrer"&gt;solomonneas.dev&lt;/a&gt;. Find more of my writing on infrastructure, security tooling, and AI agents at &lt;a href="https://solomonneas.dev" rel="noopener noreferrer"&gt;solomonneas.dev&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>proxmox</category>
      <category>devops</category>
      <category>linux</category>
      <category>sysadmin</category>
    </item>
  </channel>
</rss>
