<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: John Gregoriadis</title>
    <description>The latest articles on DEV Community by John Gregoriadis (@jonnonz).</description>
    <link>https://dev.to/jonnonz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3873175%2F760f143a-a89a-4c96-843f-31960193f587.jpg</url>
      <title>DEV Community: John Gregoriadis</title>
      <link>https://dev.to/jonnonz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jonnonz"/>
    <language>en</language>
    <item>
      <title>What if your browser built the UI for you?</title>
      <dc:creator>John Gregoriadis</dc:creator>
      <pubDate>Sat, 11 Apr 2026 12:47:27 +0000</pubDate>
      <link>https://dev.to/jonnonz/what-if-your-browser-built-the-ui-for-you-3h3d</link>
      <guid>https://dev.to/jonnonz/what-if-your-browser-built-the-ui-for-you-3h3d</guid>
      <description>&lt;p&gt;We're at a genuinely weird inflection point in frontend development. AI can generate entire interfaces now. LLMs can reason about data and layout. And yet — most SaaS products still ship hand-crafted React apps, each building its own UI, its own accessibility layer, its own theme system, its own responsive breakpoints. Not every service, but the vast majority.&lt;/p&gt;

&lt;p&gt;That's a lot of duplicated effort for what's essentially the same job — showing a human some data and letting them do stuff with it.&lt;/p&gt;

&lt;p&gt;I've been thinking about this a lot lately, and I built a proof of concept to test an idea: what if the browser itself generated the UI?&lt;/p&gt;

&lt;h2&gt;Where we are right now&lt;/h2&gt;

&lt;p&gt;The industry is circling this idea from multiple angles, but nobody's quite landed on it yet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.apollographql.com/docs/graphos/schema-design/guides/sdui/basics" rel="noopener noreferrer"&gt;Server-driven UI&lt;/a&gt; has been around for a while — Airbnb and others pioneered it for mobile, where app store review cycles make shipping UI changes painful. The server sends down a JSON tree describing what to render, and the client just follows instructions. It's clever, but the server is still calling the shots.&lt;/p&gt;

&lt;p&gt;Google recently shipped &lt;a href="https://developers.google.com/natively-adaptive-interfaces" rel="noopener noreferrer"&gt;Natively Adaptive Interfaces&lt;/a&gt; — a framework that uses AI agents to make accessibility a default rather than an afterthought. Really cool idea, and the right instinct. But it's still operating within a single app's boundaries. Your accessibility preferences don't carry between Google's products and, say, your project management tool.&lt;/p&gt;

&lt;p&gt;Then there's the &lt;a href="https://www.copilotkit.ai/blog/the-developer-s-guide-to-generative-ui-in-2026" rel="noopener noreferrer"&gt;generative UI&lt;/a&gt; wave — CopilotKit, Vercel's AI SDK, and others building frameworks where LLMs generate components on the fly. These are powerful developer tools, but they're still developer tools. The generation happens at build time or on the server. The service is still in control.&lt;/p&gt;

&lt;p&gt;See the pattern? Every approach keeps the power on the service side.&lt;/p&gt;

&lt;h2&gt;Flip it&lt;/h2&gt;

&lt;p&gt;Here's the idea behind the &lt;a href="https://github.com/jonnonz1/adaptive-browser" rel="noopener noreferrer"&gt;adaptive browser&lt;/a&gt;: what if the generation happened on &lt;em&gt;your&lt;/em&gt; side?&lt;/p&gt;

&lt;p&gt;Instead of a service shipping you a finished frontend, it publishes a manifest — a structured description of what it can do. Its capabilities, endpoints, data shapes, what actions are available. Think of it like an API spec, but semantic. Not just "here's a GET endpoint" but "here's a list of repositories, they're sortable by stars and language, you can create, delete, star, or fork them."&lt;/p&gt;

&lt;p&gt;Your browser takes that manifest, calls the actual APIs, gets real data back, and then generates the UI based on your preferences. Your font size. Your colour scheme. Your preferred layout (tables vs cards vs kanban). Your accessibility needs. All applied universally, across every service.&lt;/p&gt;

&lt;p&gt;The manifest for something like GitHub looks roughly like this — a service describes its capabilities and the browser figures out the rest:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GitHub"&lt;/span&gt;
  &lt;span class="na"&gt;domain&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api.github.com"&lt;/span&gt;

&lt;span class="na"&gt;capabilities&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;repositories"&lt;/span&gt;
    &lt;span class="na"&gt;endpoints&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/user/repos"&lt;/span&gt;
        &lt;span class="na"&gt;semantic&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;
        &lt;span class="na"&gt;entity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;repository"&lt;/span&gt;
        &lt;span class="na"&gt;sortable_fields&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;updated_at&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;stargazers_count&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
        &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;create&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;delete&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;star&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;fork&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The browser takes that, fetches the data, and generates a bespoke interface — using an LLM to reason about the best way to present it given who you are and what you're trying to do.&lt;/p&gt;
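&lt;p&gt;To make that concrete, here is a rough Python sketch of that browser-side step. The function name and manifest shape are hypothetical, not the actual adaptive-browser code:&lt;/p&gt;

```python
# Hypothetical sketch: assemble the context an LLM would need to render
# a bespoke UI from a manifest, live data, and the user's preferences.

def build_generation_prompt(manifest, data, preferences):
    """Flatten manifest + data + preferences into one generation prompt."""
    capability = manifest["capabilities"][0]
    lines = [
        "Service: " + manifest["service"]["name"],
        "Entity: " + capability["entity"] + " (" + capability["semantic"] + ")",
        "Sortable fields: " + ", ".join(capability["sortable_fields"]),
        "Available actions: " + ", ".join(capability["actions"]),
        "Records to display: " + str(len(data)),
        "User preferences: " + ", ".join(
            k + "=" + str(v) for k, v in sorted(preferences.items())
        ),
        "Generate an accessible UI honouring every preference above.",
    ]
    return "\n".join(lines)

manifest = {
    "service": {"name": "GitHub"},
    "capabilities": [{
        "entity": "repository",
        "semantic": "list",
        "sortable_fields": ["name", "updated_at", "stargazers_count"],
        "actions": ["create", "delete", "star", "fork"],
    }],
}
prompt = build_generation_prompt(
    manifest,
    data=[{"name": "adaptive-browser"}],
    preferences={"layout": "cards", "contrast": "high"},
)
print(prompt)
```

&lt;p&gt;The real version would hand this to the model alongside the fetched records; the point is that the service never sees, or ships, a line of UI code.&lt;/p&gt;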

&lt;h2&gt;Why this matters more than it sounds&lt;/h2&gt;

&lt;p&gt;When I was building the app store and integrations platforms at Xero, one of the constant headaches was that every third-party integration had its own UI patterns. Users had to learn a new interface for every app they connected. If the browser were generating the UI from a shared set of preferences, that problem just… goes away.&lt;/p&gt;

&lt;p&gt;Accessibility is the big one though. Right now, accessibility is a feature that gets bolted on — and often badly. When the browser generates the UI, accessibility isn't a feature. It's the default. Your preferences — high contrast, keyboard-first navigation, screen reader optimisation, larger text — apply everywhere. Not because every developer remembered to implement them, but because they're baked into how the UI gets generated in the first place.&lt;/p&gt;

&lt;p&gt;Customisation becomes genuinely personal too. Not "pick from three themes the developer made" but "this is how I interact with software, full stop."&lt;/p&gt;

&lt;h2&gt;The trade-off is real though&lt;/h2&gt;

&lt;p&gt;Frontend complexity drops dramatically, but the complexity doesn't disappear — it moves behind the API. And honestly, it probably increases.&lt;/p&gt;

&lt;p&gt;API design becomes way more important. You can't just throw together some REST endpoints and call it a day. Your manifest needs to be semantic — describing what the data means, not just what shape it is. Data contracts between services matter more. Versioning matters more.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph LR
    A[Service] --&amp;gt;|Publishes manifest + APIs| B[Browser Agent]
    C[User Preferences] --&amp;gt; B
    D[Org Guardrails] --&amp;gt; B
    B --&amp;gt;|Generates| E[Bespoke UI]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But here's the thing — this trade-off pushes us somewhere genuinely interesting. If every service needs to describe itself semantically through APIs and manifests, those APIs become the actual product surface. Not the frontend. The APIs.&lt;/p&gt;

&lt;p&gt;And once APIs are the product surface, sharing context between platforms becomes the interesting problem. Your project management tool knows what you're working on. Your email client knows who you're talking to. Your code editor knows what you're building. Right now, none of these talk to each other in any meaningful way because they're all locked behind their own UIs. In a manifest-driven world, that context flows through the APIs — and your browser can stitch it all together into something coherent.&lt;/p&gt;

&lt;h2&gt;Where this is headed (IMHO)&lt;/h2&gt;

&lt;p&gt;I reckon we're about 3-5 years from this being mainstream. The pieces are all there — LLMs that can reason about UI, &lt;a href="https://www.builder.io/blog/ui-over-apis" rel="noopener noreferrer"&gt;standardisation efforts&lt;/a&gt; around sending UI intent over APIs, and a growing expectation from users that software should adapt to them, not the other way around.&lt;/p&gt;

&lt;p&gt;The services that win in this world won't be the ones with the prettiest hand-crafted UI. They'll be the ones with the best APIs, the richest manifests, and the most useful data. The frontend becomes a generated output, not a hand-crafted input.&lt;/p&gt;

&lt;p&gt;Organisations will set preference guardrails — "our people can use dark or light mode, must have destructive action confirmations, these fields are always visible" — while individuals customise within those bounds. Your browser becomes your agent, not just a renderer.&lt;/p&gt;
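&lt;p&gt;A small sketch of how that resolution might work, assuming a hypothetical policy shape and preference keys:&lt;/p&gt;

```python
# Illustrative sketch of guardrail-constrained preferences. The keys and
# policy structure are made up, not a real adaptive-browser config.

def resolve_preferences(user_prefs, guardrails):
    """Apply user preferences only where org policy allows them."""
    resolved = dict(guardrails.get("forced", {}))  # org-mandated settings win
    allowed = guardrails.get("allowed", {})
    for key, value in user_prefs.items():
        if key in resolved:
            continue  # forced settings can't be overridden by the user
        if key in allowed and value not in allowed[key]:
            continue  # outside the permitted range, drop it
        resolved[key] = value
    return resolved

guardrails = {
    "forced": {"confirm_destructive_actions": True},
    "allowed": {"theme": ["dark", "light"]},
}
user = {"theme": "dark", "confirm_destructive_actions": False, "font_size": 18}
print(resolve_preferences(user, guardrails))
```

&lt;p&gt;Forced settings win outright, allowed lists bound the rest, and anything the org doesn't mention stays purely personal.&lt;/p&gt;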

&lt;p&gt;I built the &lt;a href="https://github.com/jonnonz1/adaptive-browser" rel="noopener noreferrer"&gt;adaptive browser&lt;/a&gt; as a proof of concept to test this thinking — it uses Claude to generate UIs from a GitHub manifest and user preferences defined in YAML. It's rough, but the direction feels right.&lt;/p&gt;

&lt;p&gt;The frontend isn't dying. But what we think of as "frontend development" is about to change. The interesting work moves to API design, semantic data contracts, and building browsers smart enough to be genuine user agents.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://jonno.nz/posts/what-if-your-browser-built-the-ui-for-you/" rel="noopener noreferrer"&gt;jonno.nz&lt;/a&gt;. I write about AI, engineering architecture, and the future of developer tools — more at &lt;a href="https://jonno.nz" rel="noopener noreferrer"&gt;jonno.nz&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>architecture</category>
      <category>webdev</category>
      <category>ux</category>
    </item>
    <item>
      <title>The Dark Forest Needs an Immune System</title>
      <dc:creator>John Gregoriadis</dc:creator>
      <pubDate>Sat, 11 Apr 2026 09:17:16 +0000</pubDate>
      <link>https://dev.to/jonnonz/the-dark-forest-needs-an-immune-system-49co</link>
      <guid>https://dev.to/jonnonz/the-dark-forest-needs-an-immune-system-49co</guid>
      <description>&lt;p&gt;Anthropic just dropped &lt;a href="https://www.anthropic.com/glasswing" rel="noopener noreferrer"&gt;Project Glasswing&lt;/a&gt; — a big collaborative cybersecurity initiative with a shiny new model called Claude Mythos Preview that can find zero-day vulnerabilities at scale. Twelve major tech companies involved. $100M in credits. Found a 27-year-old flaw in OpenBSD. Impressive stuff.&lt;/p&gt;

&lt;p&gt;But let's be real about what's happening here. Anthropic trained a model so capable at breaking into systems that they decided it was too dangerous to release publicly. So they wrapped the release in a collaborative security initiative. The security work is genuinely valuable. But it's also a smart way to keep control of something they know is too powerful to let loose.&lt;/p&gt;

&lt;p&gt;The part that actually matters, though, is who benefits. Glasswing is for the big players. The companies with security teams, budgets, and the kind of infrastructure that gets invited to sit at the table with AWS, Microsoft, and Palo Alto Networks. What about the rest of us? The startups, the small SaaS shops, the indie developers running production systems on a shoestring?&lt;/p&gt;

&lt;p&gt;The internet is a &lt;a href="https://bigthink.com/books/how-the-dark-forest-theory-helps-us-understand-the-internet/" rel="noopener noreferrer"&gt;dark forest&lt;/a&gt;. That's not a metaphor anymore — it's becoming the literal reality. Bots, scrapers, automated exploit chains, credential stuffing, AI-generated phishing. A server goes up and within hours it's being scanned, fingerprinted, and probed by systems that don't sleep. Visibility equals vulnerability. And AI is making the attackers faster, cheaper, and more autonomous every month.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.isc2.org/insights/2026/04/ai-driven-defense-and-autonomous-attacks" rel="noopener noreferrer"&gt;ISC2 put it plainly&lt;/a&gt; — both offence and defence now operate at speeds beyond human intervention. The threats aren't people sitting at keyboards anymore. They're autonomous systems running campaigns end-to-end.&lt;/p&gt;

&lt;p&gt;So what do we do about it?&lt;/p&gt;

&lt;h2&gt;Offensive security — but not the kind you're thinking&lt;/h2&gt;

&lt;p&gt;When I say offensive security, I don't mean red-teaming or penetration testing. I mean giving your systems the ability to fight back.&lt;/p&gt;

&lt;p&gt;Picture an LLM that sits across your centralised logs — network traffic, database queries, user interactions, access patterns — and builds an understanding of what normal looks like for your system over weeks and months. Not just pattern matching against known signatures. Actually understanding the shape of healthy behaviour.&lt;/p&gt;

&lt;p&gt;When something breaks the pattern, it doesn't just alert. It acts.&lt;/p&gt;

&lt;p&gt;Disable a compromised account. Kill a service that's behaving strangely. Block a database connection that shouldn't exist. Create an incident with full context for a human to review. The response is proportional and immediate — not waiting for someone to check their phone at 3am.&lt;/p&gt;
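&lt;p&gt;A minimal sketch of that proportionality, with made-up thresholds on an assumed anomaly score between 0 and 1:&lt;/p&gt;

```python
# Toy tiered-response mapping. The thresholds and action names are
# illustrative; a real system would derive the score from a baseline model.

def assess(score):
    """Map an anomaly score to a response tier and its actions."""
    if score > 0.9:
        return "high", ["disable_account", "isolate_service", "open_incident"]
    if score > 0.6:
        return "medium", ["restrict_access", "escalate", "open_incident"]
    if score > 0.3:
        return "low", ["alert", "log_context"]
    return "none", []

tier, actions = assess(0.95)
print(tier, actions)  # highest tier acts immediately, then a human reviews
```

&lt;p&gt;The specific cut-offs don't matter; what matters is that every tier ends in human review, as the diagram below shows.&lt;/p&gt;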

&lt;p&gt;The architecture is pretty straightforward:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph TD
    A[Application Logs] --&amp;gt; D[Secure Isolated Log Store]
    B[Network Traffic] --&amp;gt; D
    C[Database Queries] --&amp;gt; D
    D --&amp;gt; F[Baseline Health Model]
    E[User Activity] --&amp;gt; D
    F --&amp;gt;|Anomaly Detected| G[LLM Analysis]
    G --&amp;gt;|Analyse &amp;amp; Plan| H{Threat Assessment}
    H --&amp;gt;|Low| I[Alert &amp;amp; Log]
    H --&amp;gt;|Medium| J[Restrict &amp;amp; Escalate]
    H --&amp;gt;|High| K[Disable &amp;amp; Isolate]
    I --&amp;gt; L[Human Review]
    J --&amp;gt; L
    K --&amp;gt; L
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key is that the logging and analysis layer has to be isolated and secured separately from the systems it's watching. If an attacker can compromise the thing that's watching them, the whole model falls apart.&lt;/p&gt;

&lt;p&gt;In practice that means separate infrastructure with its own auth boundary. Ingestion is write-only — your application services push logs in but can never read or modify what's already there. Append-only, immutable. The analysis layer gets scoped service accounts that can read logs, fire alerts, and pull specific emergency levers through a narrow API. Nothing else. If a compromised service tries to reach the log store directly, it hits a wall.&lt;/p&gt;
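&lt;p&gt;As an illustration of that boundary (an in-memory stand-in, not real infrastructure), ingestion exposes append only, and reads require the analysis layer's scoped credential:&lt;/p&gt;

```python
# Sketch of the write-only ingestion boundary: application services can
# append log entries but never read or rewrite them. In production this
# would be separate infrastructure with its own auth boundary.

class AppendOnlyLogStore:
    def __init__(self):
        self._entries = []

    def append(self, entry):
        """The only operation application services are allowed."""
        self._entries.append(dict(entry))
        return len(self._entries) - 1  # index of the immutable entry

    def read_all(self, credential):
        """Reads are scoped to the analysis layer's service account."""
        if credential != "analysis-reader":
            raise PermissionError("read access is scoped to the analysis layer")
        # Hand back copies so the caller can't mutate stored entries.
        return [dict(e) for e in self._entries]

store = AppendOnlyLogStore()
store.append({"event": "login_failed", "ip": "203.0.113.7"})
print(len(store.read_all("analysis-reader")))
```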

&lt;p&gt;None of this is exotic. Centralised logging, immutable storage, scoped IAM — the building blocks exist. The hard part is wiring an LLM into that loop with the right constraints. Enough access to act, not enough to make things worse.&lt;/p&gt;
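&lt;p&gt;The "narrow API" is the crux of those constraints. A sketch with hypothetical action names, not a real framework:&lt;/p&gt;

```python
# The analysis layer can only pull levers on an explicit allowlist.
# Anything the LLM proposes outside it is refused, not executed.

EMERGENCY_LEVERS = {
    "disable_account": lambda target: "disabled account " + target,
    "block_connection": lambda target: "blocked connection " + target,
    "open_incident": lambda target: "incident opened for " + target,
}

def execute_lever(action, target):
    """Execute an allowlisted emergency action; refuse everything else."""
    handler = EMERGENCY_LEVERS.get(action)
    if handler is None:
        raise PermissionError("action " + repr(action) + " is not an emergency lever")
    return handler(target)

print(execute_lever("disable_account", "user-4821"))
```

&lt;p&gt;An LLM-proposed "drop_table" hits a PermissionError instead of running — the constraint lives in code, not in the prompt.&lt;/p&gt;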

&lt;h2&gt;Where biology gets interesting&lt;/h2&gt;

&lt;p&gt;I've been doing &lt;a href="https://jonno.nz/posts/what-if-a-worm-could-make-ai-agents-smarter/" rel="noopener noreferrer"&gt;research with my project C302&lt;/a&gt; — using a simulation of the &lt;em&gt;C. elegans&lt;/em&gt; roundworm's neural network as a behavioural controller for LLM agents. The worm has 302 neurons. That's it. And with those 302 neurons it navigates its environment, finds food, avoids threats, and adapts its behaviour based on what's working.&lt;/p&gt;

&lt;p&gt;In that research, we mapped simple feedback signals to biological synapses and let the neural simulation drive agent behaviour. The live connectome — receiving real-time feedback from the agent's environment — showed a clear improvement over one following a fixed trajectory (0.960 vs 0.867 test pass rate), even when the topology, signals, and rules were identical. The only variable was whether the system adapted to what was actually happening. Early days with a small sample size, but the direction is promising.&lt;/p&gt;

&lt;p&gt;Now apply that thinking to security monitoring.&lt;/p&gt;

&lt;p&gt;Imagine mapping a sudden spike in unusual user activity to the equivalent of a "salt" sensory neuron in the worm's circuit. That fires, and the downstream effect is the security system becomes more aggressive in its investigation — widening its search, correlating more signals, lowering its threshold for action. A pattern of repeated failed authentications from new IPs could map to a "touch" response — the system recoils, tightening access controls automatically.&lt;/p&gt;

&lt;p&gt;This isn't rule-based. It's adaptive. The system develops a behavioural pattern that's learned from running in your specific environment, responding to your specific traffic patterns. That's a fundamentally different thing from a static set of if-then rules.&lt;/p&gt;
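&lt;p&gt;Here is a toy sketch of that excitation-and-decay dynamic; the constants and mapping are illustrative, not taken from the C302 experiments:&lt;/p&gt;

```python
# Neuron-inspired sensitivity: a firing "sensory" signal pushes the
# system's aggression up, and each quiet cycle decays it back toward
# baseline. Illustrative constants only.

class AdaptiveSensitivity:
    def __init__(self, baseline=0.3, decay=0.9):
        self.level = baseline
        self.baseline = baseline
        self.decay = decay

    def stimulate(self, strength):
        """A spike in unusual activity excites the circuit, capped at 1."""
        self.level = min(1.0, self.level + strength)

    def tick(self):
        """Each cycle, sensitivity relaxes back toward the baseline."""
        self.level = self.baseline + (self.level - self.baseline) * self.decay

    def threshold(self):
        """Higher sensitivity means a lower bar for acting on anomalies."""
        return 1.0 - self.level

s = AdaptiveSensitivity()
s.stimulate(0.5)      # the "salt" signal fires: unusual user activity
aggressive = s.threshold()
for _ in range(20):
    s.tick()          # quiet period: the system calms back down
print(aggressive, s.threshold())
```

&lt;p&gt;The behaviour that emerges depends on what the system has actually seen in your environment, which is the whole point.&lt;/p&gt;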

&lt;h2&gt;This has to be open&lt;/h2&gt;

&lt;p&gt;Glasswing is cool. &lt;a href="https://github.com/aliasrobotics/CAI" rel="noopener noreferrer"&gt;Open-source frameworks like CAI&lt;/a&gt; are making progress — but mostly on the offensive side, using LLMs for penetration testing and vulnerability research. On the defensive side, the tooling barely exists. There's no open-source equivalent for the kind of adaptive monitoring and response I'm describing here.&lt;/p&gt;

&lt;p&gt;The building blocks are around. Centralised logging is a solved problem. Open standards for security event formats are maturing. Smaller open models are more than capable of pattern analysis on local infrastructure. What's missing is the glue — a framework that takes logs in, builds a baseline, detects anomalies, and can actually respond. Something a small team can deploy without a six-figure security budget.&lt;/p&gt;

&lt;p&gt;And it can't be proprietary or locked behind enterprise contracts.&lt;/p&gt;

&lt;p&gt;The dark forest doesn't care how big your company is. The bots scanning your infrastructure don't check your headcount before they attack. If the threats are going to be this accessible, the defences need to be too.&lt;/p&gt;

&lt;p&gt;I'm taking the C302 work in this direction next. An open-source security agent — biologically inspired, adapts to your environment, acts when something breaks the pattern. Small enough for a startup to run. The pieces are all there. I just need to wire them together.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This post was originally published on &lt;a href="https://jonno.nz/posts/the-dark-forest-needs-an-immune-system/" rel="noopener noreferrer"&gt;jonno.nz&lt;/a&gt;. I write hands-on reviews of open-source AI tools and deep dives on engineering topics — check out more at &lt;a href="https://jonno.nz" rel="noopener noreferrer"&gt;jonno.nz&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>opensource</category>
      <category>architecture</category>
    </item>
  </channel>
</rss>
