<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Soumia</title>
    <description>The latest articles on DEV Community by Soumia (@soumia_g_9dc322fc4404cecd).</description>
    <link>https://dev.to/soumia_g_9dc322fc4404cecd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3657823%2F0eac0a6f-a93e-4fea-ac58-e10ea2489b44.jpeg</url>
      <title>DEV Community: Soumia</title>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/soumia_g_9dc322fc4404cecd"/>
    <language>en</language>
    <item>
      <title>The Six Pillars of a Good Web App — And How to Enforce All of Them in a Single Lovable Prompt</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Wed, 18 Mar 2026 09:18:47 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-six-pillars-of-a-good-web-app-and-how-to-enforce-all-of-them-in-a-single-lovable-prompt-1m3a</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-six-pillars-of-a-good-web-app-and-how-to-enforce-all-of-them-in-a-single-lovable-prompt-1m3a</guid>
      <description>&lt;p&gt;Most web apps get two or three of these right. The good ones get four. Very few ship all six from day one.&lt;/p&gt;

&lt;p&gt;Design. Security. Performance. Reliability. Privacy. Accessibility.&lt;/p&gt;

&lt;p&gt;These aren't separate concerns you address in separate sprints. They're the same concern: building something that actually works for the people using it. Here's what each pillar means in practice, how to bake all six into a single Lovable prompt, and how to stress test them before you ship.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Six Pillars
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Design
&lt;/h3&gt;

&lt;p&gt;Not aesthetics. Not a color palette. Design is the absence of friction — intuitive navigation, clear hierarchy, interfaces that don't make users think. A well-designed app communicates trust before a single line of copy does.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Security
&lt;/h3&gt;

&lt;p&gt;Auth flows that don't leak. Input validation that doesn't trust anything. Data protection that assumes breach. Security isn't a feature you add at the end — it's a constraint you build within from the start.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Performance
&lt;/h3&gt;

&lt;p&gt;Speed is a feature. Scalability is a promise. Every unnecessary render, every unoptimized query, every blocking resource is a tax on the user. Performance means the app works under load, not just in your local preview.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Reliability
&lt;/h3&gt;

&lt;p&gt;Uptime is table stakes. Error handling is what separates a product from a prototype. A reliable app fails gracefully, recovers silently, and never leaves the user stranded with a blank screen and no explanation.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Privacy
&lt;/h3&gt;

&lt;p&gt;Data minimization: don't collect what you don't need. Compliance: GDPR/RGPD, CCPA, and whatever comes next. But privacy is also a design decision — defaulting to the least invasive option, making consent explicit, making deletion possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Accessibility
&lt;/h3&gt;

&lt;p&gt;Inclusive by default. Screen reader support, keyboard navigation, sufficient contrast ratios, semantic HTML. Accessibility is not a nice-to-have. It's the floor, not the ceiling.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Single Prompt
&lt;/h2&gt;

&lt;p&gt;When you build with Lovable, the quality of your output is a direct function of the specificity of your input. Most prompts describe &lt;em&gt;what&lt;/em&gt; to build. The best prompts describe &lt;em&gt;how it should behave&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Here's the prompt template I use to enforce all six pillars from the first generation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Build a [description of app] with the following non-negotiable constraints:

DESIGN
- Clean, minimal UI with clear visual hierarchy
- Mobile-first, responsive layout
- Consistent spacing, typography, and color system throughout

SECURITY
- All user inputs validated and sanitized
- Authentication using [method] with secure session handling
- No sensitive data exposed in client-side code or URLs
- Environment variables for all secrets

PERFORMANCE
- Lazy load all non-critical components
- Optimize all images and assets
- Minimize blocking resources on initial load
- Debounce all expensive operations

RELIABILITY
- All async operations wrapped in try/catch with user-facing error messages
- Loading states for every async action
- Graceful degradation if an API call fails
- No silent failures

PRIVACY &amp;amp; LEGAL COMPLIANCE
- Collect only the data required for core functionality
- No third-party trackers without explicit user consent
- GDPR/RGPD-compliant cookie consent banner on first load
- Clear and accessible privacy policy link in the footer
- Terms and conditions page linked in footer and at signup
- User data exportable and deletable on request
- If the app uses AI-generated content or AI decision-making, surface that clearly to the user (EU AI Act transparency requirement)

ACCESSIBILITY
- Semantic HTML throughout (nav, main, section, article, button, etc.)
- All images with descriptive alt text
- Full keyboard navigation support
- Color contrast ratio minimum 4.5:1 (WCAG AA)
- ARIA labels on all interactive elements
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
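&lt;p&gt;The RELIABILITY constraints above can be sketched in a few lines. This is a hedged, language-agnostic illustration in Python (a Lovable app would express the same shape in TypeScript); &lt;code&gt;run_with_feedback&lt;/code&gt; and the state dictionary are hypothetical names, not part of any framework:&lt;/p&gt;

```python
# Sketch of the reliability constraints: every operation reports a loading
# state, catches failures, and surfaces a user-facing message instead of
# failing silently. All names here are illustrative.
def run_with_feedback(operation, on_state):
    """Run `operation`, emitting a loading state, then success or a readable error."""
    on_state({"loading": True, "error": None})
    try:
        result = operation()
        on_state({"loading": False, "error": None})
        return result
    except Exception as exc:
        # No silent failures: translate the exception for the user.
        on_state({"loading": False,
                  "error": f"Something went wrong: {exc}. Please try again."})
        return None

states = []
run_with_feedback(lambda: 1 / 0, states.append)  # simulate a failed API call
```

&lt;p&gt;The point is the shape, not the language: loading state first, then exactly one of success or a human-readable error.&lt;/p&gt;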



&lt;p&gt;This prompt doesn't describe a design. It describes a standard. Lovable fills in the implementation — you're setting the bar it has to clear.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Stress Test All Six
&lt;/h2&gt;

&lt;p&gt;Shipping is not the end. Stress testing is how you find out what actually holds.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open the app on a phone you haven't tested on. Does anything break?&lt;/li&gt;
&lt;li&gt;Give it to someone who didn't build it. Watch where they hesitate.&lt;/li&gt;
&lt;li&gt;Resize the browser from mobile to 4K. Does the layout survive?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Try submitting empty forms, SQL fragments, and script tags in every input field.&lt;/li&gt;
&lt;li&gt;Inspect the network tab. Is anything sensitive traveling in plain text?&lt;/li&gt;
&lt;li&gt;Log out and try to access a protected route directly via URL.&lt;/li&gt;
&lt;li&gt;Check your &lt;code&gt;.env&lt;/code&gt; — nothing should be hardcoded in the codebase.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Run Lighthouse in Chrome DevTools. Target 90+ on performance.&lt;/li&gt;
&lt;li&gt;Throttle to "Slow 3G" in the network tab. Is the app still usable?&lt;/li&gt;
&lt;li&gt;Check bundle size. Is anything unexpectedly large?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Reliability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Kill the API mid-request. Does the UI handle it or freeze?&lt;/li&gt;
&lt;li&gt;Simulate a failed login. Does the error message help the user?&lt;/li&gt;
&lt;li&gt;Refresh mid-flow. Does state persist where it should?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Privacy
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Open the network tab and filter for third-party requests. Do you know what each one is doing?&lt;/li&gt;
&lt;li&gt;Check your database schema. Are you storing anything you don't use?&lt;/li&gt;
&lt;li&gt;Try to delete a test account. Does it actually disappear?&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Accessibility
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Navigate the entire app using only the keyboard. Can you reach everything?&lt;/li&gt;
&lt;li&gt;Run axe DevTools or the Accessibility tab in Chrome. Zero critical violations is the target.&lt;/li&gt;
&lt;li&gt;Turn on a screen reader (VoiceOver on Mac, NVDA on Windows). Does the app make sense without a screen?&lt;/li&gt;
&lt;/ul&gt;
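&lt;p&gt;The 4.5:1 contrast target is mechanical enough to check in code. Below is a small sketch of the WCAG relative-luminance and contrast-ratio formulas; the function names are illustrative, and for a real audit a tool like axe is still the better path:&lt;/p&gt;

```python
# WCAG 2.x contrast check: ratio of relative luminances, (L1 + 0.05) / (L2 + 0.05),
# must be at least 4.5 for normal text at AA. Function names are illustrative.
def _linear(channel):
    """Convert one 0-255 sRGB channel to linear light."""
    c = channel / 255
    return ((c + 0.055) / 1.055) ** 2.4 if c > 0.03928 else c / 12.92

def relative_luminance(rgb):
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)),
                             reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0 (black on white)
```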




&lt;h2&gt;
  
  
  The Legal Layer: RGPD, EU AI Act, and Terms &amp;amp; Conditions
&lt;/h2&gt;

&lt;p&gt;This is the part most builders skip until a lawyer or a user complaint forces the issue. Don't.&lt;/p&gt;

&lt;h3&gt;
  
  
  RGPD / GDPR
&lt;/h3&gt;

&lt;p&gt;If any of your users are based in the EU — and if you're on the internet, some of them are — RGPD applies to you. That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cookie consent banner that actually works (not a fake one)&lt;/li&gt;
&lt;li&gt;A privacy policy that says what you collect, why, and for how long&lt;/li&gt;
&lt;li&gt;A process for users to request their data or delete their account&lt;/li&gt;
&lt;li&gt;No data transferred outside the EU without adequate safeguards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The fine for getting this wrong isn't theoretical. Build it in from day one.&lt;/p&gt;

&lt;h3&gt;
  
  
  EU AI Act
&lt;/h3&gt;

&lt;p&gt;If your app uses AI to generate content, make recommendations, or influence decisions, the EU AI Act has something to say about it. At minimum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Be transparent with users when they're interacting with AI-generated output&lt;/li&gt;
&lt;li&gt;Don't use AI for prohibited purposes (social scoring, real-time biometric surveillance, manipulation)&lt;/li&gt;
&lt;li&gt;If your use case falls into a "high-risk" category (hiring, credit, health), you have additional obligations around human oversight and auditability&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Act is being enforced in phases. The transparency requirements are already live. Add a visible disclosure wherever AI is involved in your app.&lt;/p&gt;

&lt;h3&gt;
  
  
  Terms and Conditions
&lt;/h3&gt;

&lt;p&gt;Not a legal formality. A T&amp;amp;C is a contract between you and your users that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Defines what the app does and doesn't do&lt;/li&gt;
&lt;li&gt;Limits your liability when things go wrong&lt;/li&gt;
&lt;li&gt;Sets the rules for acceptable use&lt;/li&gt;
&lt;li&gt;Gives you legal ground to remove users who violate those rules&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Add it to your Lovable prompt: &lt;em&gt;"Include a Terms and Conditions page linked in the footer and shown at signup with a required checkbox before account creation."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;A user who never saw your T&amp;amp;C is a user who can claim they didn't agree to anything.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters More on Lovable
&lt;/h2&gt;

&lt;p&gt;When you build on Lovable, you're not just shipping your app. You're generating code that runs in a shared environment serving 2 million users. The attack surface isn't just yours — it's everyone's.&lt;/p&gt;

&lt;p&gt;That's not a warning. It's an invitation to raise the standard.&lt;/p&gt;

&lt;p&gt;The six pillars aren't a checklist. They're a disposition — a way of thinking about what a good web app owes the people who use it. Design them in from the first prompt. Test them before you ship. Then ship.&lt;/p&gt;




&lt;h2&gt;
  
  
  If You're Building Right Now — What's Your Biggest Security Concern?
&lt;/h2&gt;

&lt;p&gt;I'm genuinely curious.&lt;/p&gt;

&lt;p&gt;Are you thinking about auth and session handling? Worried about what your AI-generated code is exposing? Unsure whether your app is RGPD-compliant? Not sure where to even start with the EU AI Act?&lt;/p&gt;

&lt;p&gt;Drop it in the comments. No wrong answers. The more specific the better — if enough people share the same concern, I'll write a dedicated piece on it.&lt;/p&gt;

&lt;p&gt;Building in public means debugging in public too. Let's do it together.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Soumia is a Creative Technologist and founder of &lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;Humiin&lt;/a&gt;, an AI venture studio. She builds with Lovable daily and writes about security, AI, and the web on Dev.to.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Voice: An Experiment in Acoustic Automata</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Tue, 17 Mar 2026 20:32:24 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-voice-an-experiment-in-acoustic-automata-2721</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-voice-an-experiment-in-acoustic-automata-2721</guid>
      <description>&lt;p&gt;&lt;code&gt;The Prologue: A Scandal in Code&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Before we begin, a confession: I have been experimenting. I wanted to know if a machine could move beyond the "monotone ghost" of modern utility and inhabit the sharp, rhythmic wit of a Regency drawing room. The result was &lt;a href="https://open.spotify.com/show/4NTHd0vy0835AzskFpHz87?si=EHrdKwgsTSOuLTYWojYtHw&amp;amp;nd=1&amp;amp;dlsi=39bde10ed2914f44" rel="noopener noreferrer"&gt;TheHighTechCourt&lt;/a&gt; — a podcast designed as a provocation in "Acoustic Automata" where the giants of AI debate the future of compute.&lt;/p&gt;

&lt;p&gt;What follows is the philosophy behind that experiment. Because to build the future of voice, we must first understand why the voice is the pivot of the human experience. &lt;/p&gt;




&lt;p&gt;Breath. Shaped by the tongue, the teeth, the soft architecture of the throat. Traveling as pressure waves through air. Arriving in another body—through the ear, through the chest, through something below language that recognizes its own kind.&lt;/p&gt;

&lt;p&gt;Voice was the first technology. And for most of human history, it was the only one that mattered.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Living Epic
&lt;/h2&gt;

&lt;p&gt;For centuries before it was a text, &lt;em&gt;The Odyssey&lt;/em&gt; was a performance. The rhapsodes of ancient Greece did not merely recite; they "stitched together" songs from a living tradition. They carried tens of thousands of lines of verse in their bodies—not as static data, but as a fluid, rhythmic architecture that adapted to the torchlight and the tension of the crowd.&lt;/p&gt;

&lt;p&gt;When we read Homer today, we are looking at a fossil. The original "signal" was breath, and it carried everything writing discards: the rhythmic pulse of the meter, the subtle hesitation, the tremor of a voice that knows it is being heard by fourteen thousand people.&lt;/p&gt;

&lt;p&gt;Writing was the first great reduction; voice was always the full signal. Then, across 150 years, everything changed:&lt;/p&gt;

&lt;p&gt;1876 — The Telephone. &lt;strong&gt;Alexander Graham Bell&lt;/strong&gt; finds it necessary &lt;em&gt;"to resort to electrical undulations identical in nature with the air waves."&lt;/em&gt; Voice separates from the body for the first time.&lt;/p&gt;

&lt;p&gt;1902 — The Recording. Enrico Caruso sings into a horn. The voice detaches from time.&lt;/p&gt;

&lt;p&gt;1939 — The Vocoder. The machine built to obscure the voice becomes its instrument.&lt;/p&gt;

&lt;p&gt;1993 — MP3. The voice reduced to data. Quality traded for portability.&lt;/p&gt;

&lt;p&gt;2024 — &lt;strong&gt;Native Multimodal Audio.&lt;/strong&gt; Raw PCM audio travels over a persistent WebSocket connection. The lag disappears. The voice becomes live.&lt;/p&gt;
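&lt;p&gt;The arithmetic behind that last step is worth making concrete. Here is a rough sketch of how much raw PCM travels in one streamed WebSocket frame; the 24 kHz, 16-bit, 20 ms figures are illustrative assumptions, not a claim about any specific vendor's API:&lt;/p&gt;

```python
# Back-of-the-envelope: bytes of raw PCM per streamed frame.
# Sample rate, bit depth, and frame duration are illustrative assumptions.
def pcm_chunk_bytes(sample_rate_hz, bits_per_sample, channels, frame_ms):
    """Bytes of raw PCM carried in one streamed frame."""
    bytes_per_sample = bits_per_sample // 8
    samples_per_frame = sample_rate_hz * frame_ms // 1000
    return samples_per_frame * bytes_per_sample * channels

# 24 kHz, 16-bit mono, 20 ms frames: a plausible low-latency configuration
print(pcm_chunk_bytes(24_000, 16, 1, 20))  # 960
```

&lt;p&gt;Small frames like this are what make the lag disappear: each one crosses the wire in well under its own duration.&lt;/p&gt;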




&lt;h2&gt;
  
  
  From the Monotone Ghost to the Post-Screen Era
&lt;/h2&gt;

&lt;p&gt;To understand where the technology is going, you have to look back at the frustration that built it. In a defining origin story, &lt;em&gt;Mati Staniszewski&lt;/em&gt; shared the memory of growing up in Poland with the Lektor—a single, monotone male voice that read every line for every character in foreign films. The "signal" of the original actor was buried under a flat, rhythmic drone. The performance was deleted.&lt;/p&gt;

&lt;p&gt;That "monotone ghost" is what ElevenLabs is killing. They didn't just want to make a machine speak; they wanted to solve the "Language Tax"—the fact that until now, emotional power stopped at the border of your native tongue.&lt;/p&gt;




&lt;h2&gt;
  
  
  The &lt;em&gt;James Blake&lt;/em&gt; Paradox: Reclaiming the Soul
&lt;/h2&gt;

&lt;p&gt;This mission mirrors a similar evolution in music. In a recent interview with &lt;em&gt;Mehdi Maïzi&lt;/em&gt;, the artist James Blake discusses the "machine as an instrument." For years, digital music tools were like the Lektor: they fixed the pitch but killed the "tremor."&lt;/p&gt;

&lt;p&gt;Blake speaks about using technology not to hide the voice, but to amplify the parts of the human soul that are often too quiet to hear. He describes a world where the machine doesn't just "process" audio; it learns the "affect" of the singer. The WebSocket isn't just a connection; it's a bridge back to the Rhapsode's breath.&lt;/p&gt;




&lt;h2&gt;
  
  
  The State of the Art — March 2026
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Google Gemini 2.5 Flash (Native A2A):&lt;/strong&gt; Bypasses the discrete STT/TTS bottleneck. Reasoning occurs on the waveform itself, allowing the model to interpret emotional prosody natively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI Realtime API (Low-Latency RTT):&lt;/strong&gt; Optimized for a 230ms Round-Trip Time. It prioritizes "Time to First Phoneme" to maintain conversational flow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ElevenLabs (Conversational WebSocket):&lt;/strong&gt; Specialized for high-fidelity PCM streaming. It handles non-verbal vocalizations—specifically the 500ms "breath pause"—as load-bearing data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude (Architectural Intelligence):&lt;/strong&gt; Integrated as the reasoning engine for high-expressivity pipelines.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Voice: An Experiment in Acoustic Automata
&lt;/h2&gt;

&lt;p&gt;To understand the "human tremor," we must move beyond utility. In a recent design provocation titled The High Tech Court, I shifted the goal from efficiency to presence.&lt;/p&gt;

&lt;p&gt;The experiment: Build a "Speech-to-Speech" drama where the heavyweights—the House of &lt;strong&gt;NVIDIA&lt;/strong&gt; and the House of &lt;strong&gt;AMI&lt;/strong&gt;—debate the future of compute in the opulent drawing rooms of Regency society. By &lt;em&gt;orchestrating the reasoning of Claude and Gemini&lt;/em&gt; with specialized vocal synthesis, we created Acoustic Automata.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Design Findings
&lt;/h2&gt;

&lt;p&gt;The Social Interface: When the AI is given a social hierarchy—a "Grand Automaton"—it is no longer a servant; it is a peer. The "affect" of a royal sniff creates deeper immersion than raw accuracy.&lt;/p&gt;

&lt;p&gt;Reasoning in Character: By forcing the models to "think" in the sharp wit of the 19th century, we bypassed the monotone ghost.&lt;/p&gt;

&lt;p&gt;The Open Blueprints: This wasn't a closed experiment. The Git repository for this court—the code that allows frontier models to converse with aristocratic flair—is an open-source contribution to the new sonic architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Manifesto: The Death of the Screen
&lt;/h2&gt;

&lt;p&gt;By March 2026, the mission has moved to a radical declaration of independence from the screen. For fifty years, we have been "screen-slaves," flattening our intent into finger-taps because the machine was deaf.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Voice will be the primary interface." — Mati Staniszewski&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  🏛️ The Artifacts
&lt;/h2&gt;

&lt;p&gt;If the voice is the pivot, these are the traces I am leaving behind for this issue:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Performance:&lt;/strong&gt; Listen to the season premiere of &lt;a href="https://open.spotify.com/show/4NTHd0vy0835AzskFpHz87?si=EHrdKwgsTSOuLTYWojYtHw&amp;amp;nd=1&amp;amp;dlsi=39bde10ed2914f44" rel="noopener noreferrer"&gt;The High Tech Court&lt;/a&gt;, where the frontier of AI is debated through the lens of high society.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Blueprint:&lt;/strong&gt; Explore the Git Repository to see the Python orchestration behind the &lt;a href="https://lnkd.in/e46speeG" rel="noopener noreferrer"&gt;TheCode&lt;/a&gt; pipeline.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Dialogue:&lt;/strong&gt; Find me in the wild on &lt;a href="https://www.linkedin.com/in/soumia-ghalim/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Are you working in AI Voice?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Whether you are building low-latency WebSocket bridges, fine-tuning emotional prosody, or designing the "sonic personality" of a new agent, I want to hear from you.&lt;/p&gt;

&lt;p&gt;How are you tackling the "human tremor" in your code?&lt;/p&gt;

&lt;p&gt;Are you finding that native multimodal models (A2A) are ready for the stage, or are you still relying on the control of a cascaded pipeline?&lt;/p&gt;

&lt;p&gt;Let me know what you think. The future of the voice is not a solo performance; it is a rhapsody we are stitching together. Leave a comment or reach out—let's discuss the architecture of the breath.&lt;/p&gt;

</description>
      <category>voiceai</category>
      <category>elevenlabs</category>
      <category>anthropic</category>
      <category>buildinginpublic</category>
    </item>
    <item>
      <title>The LLM Is Not a Chatbot. It's a New Kind of Operating System.</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Tue, 17 Mar 2026 17:12:14 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-llm-is-not-a-chatbot-its-a-new-kind-of-operating-system-1o3j</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-llm-is-not-a-chatbot-its-a-new-kind-of-operating-system-1o3j</guid>
      <description>&lt;p&gt;I used to think I was building with AI.&lt;/p&gt;

&lt;p&gt;Then I realized I was building on AI—the way you build on an OS.&lt;/p&gt;

&lt;p&gt;Every computing era is defined by its operating system. Windows made the PC era. iOS and Android made mobile. The OS wasn't the app—it was the layer that made all apps possible.&lt;/p&gt;

&lt;p&gt;We're in that moment again. Except the OS is an LLM.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Structural Reality
&lt;/h2&gt;

&lt;p&gt;Andrej Karpathy said it first: LLMs aren't chatbots. They're the kernel process of a new operating system—one that orchestrates tools, memory, browsers, code interpreters, and multimodal I/O. Not through deterministic commands, but through reasoning over intent.&lt;/p&gt;

&lt;p&gt;An OS manages resources and translates intent into action. An LLM with tool access does exactly this—but the "commands" are natural language and the "scheduler" is a reasoning loop. This is already being formalized in research like AIOS (LLM Agent Operating System).&lt;/p&gt;

&lt;p&gt;This paradigm is moving from theory to production at breakneck speed. Just look at what was announced at GTC: NVIDIA CEO Jensen Huang released open-source NemoClaw for secure, always-on OpenClaw agents. By making this declaration on the GTC stage, NVIDIA isn't just dropping another model; they are providing the enterprise-grade infrastructure for autonomous, system-level daemons. In the context of an LLM-as-OS, these NemoClaw agents act exactly like background processes. They run continuously inside secure OpenShell sandboxes, executing complex tasks in the background without ever waiting for a user to type in a chat box.&lt;/p&gt;

&lt;h2&gt;
  
  
  Shift: From Query to Intent
&lt;/h2&gt;

&lt;p&gt;The old stack is built around queries (rigid syntax). The new stack is built around intent (reasoning).&lt;/p&gt;

&lt;p&gt;Instead of: &lt;code&gt;SELECT * FROM market_data WHERE intent LIKE '%competitor%'&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You get: "What's moving in my market right now, and why should I care?"&lt;/p&gt;
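&lt;p&gt;The shift can be sketched as a toy dispatch loop: a "kernel" routes a natural-language request to a tool, the way a scheduler dispatches a process. Everything here is hypothetical—the keyword matcher is a crude stand-in for the LLM's reasoning step, and the tool functions are canned examples:&lt;/p&gt;

```python
# Toy sketch of the query-to-intent shift. The keyword matcher below stands
# in for the LLM's reasoning; real systems let the model choose the tool.
def market_moves():
    return "3 competitors raised prices this week"

def headcount_check():
    return "competitor headcount grew 12%"

TOOLS = {
    "market": market_moves,
    "headcount": headcount_check,
}

def kernel(intent: str) -> str:
    """Dispatch an intent to the first matching tool, like a process scheduler."""
    for keyword, tool in TOOLS.items():
        if keyword in intent.lower():
            return tool()
    return "No tool matched; answering from the model alone."

print(kernel("What's moving in my market right now?"))  # 3 competitors raised prices this week
```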

&lt;h2&gt;
  
  
  Lessons
&lt;/h2&gt;

&lt;p&gt;I’ve been testing this thesis while building Kumiin.io (under the humiin.io umbrella). We aren't building a search engine; we’re building a reasoning engine for market intelligence.&lt;/p&gt;

&lt;p&gt;The LLM kernel spawns sub-processes to scrape job boards, check regulatory filings, and cross-reference headcount. But 2026 engineering has a new friction: "Reasoning Drift." We’ve had to build a secondary Observer Layer—a micro-kernel to fact-check the primary LLM’s tool outputs.&lt;/p&gt;

&lt;p&gt;We’ve traded Schema Migrations for Context Integrity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;LLM-as-OS is a real architectural shift, not hype.&lt;/p&gt;

&lt;p&gt;NVIDIA CEO Jensen Huang's GTC NemoClaw announcement proves that secure, autonomous background processes are becoming the new standard.&lt;/p&gt;

&lt;p&gt;It changes what sits on top of traditional infrastructure.&lt;/p&gt;

&lt;p&gt;The edge belongs to builders who treat the LLM as a processor, not a text box.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What are you building on top of this? Genuinely curious what assumptions people are testing.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Ember That Looks Like Ash</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 16 Mar 2026 00:04:27 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-ember-that-looks-like-ash-4d9j</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-ember-that-looks-like-ash-4d9j</guid>
      <description>&lt;p&gt;&lt;em&gt;Building a time capsule for the thought that returns when you have stopped waiting for it.&lt;/em&gt;&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;I'm on a mission to make sure the most alive thought you've ever had doesn't die in the dark.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Before anything else — what is cited in this article
&lt;/h2&gt;

&lt;p&gt;Everything referenced below that is not direct experience building &lt;a href="https://Cendre.Studio" rel="noopener noreferrer"&gt;Cendre.Studio&lt;/a&gt; is listed here first. If something is not on this list, it is either general knowledge or my own observation. If you dispute a fact, the checklist is where to start.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources used:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] OWASP Password Storage Cheat Sheet (2023) — PBKDF2 iteration count&lt;/li&gt;
&lt;li&gt;[ ] NIST FIPS 203 — ML-KEM (formerly CRYSTALS-Kyber) standardisation&lt;/li&gt;
&lt;li&gt;[ ] Supabase documentation — pgvector extension availability&lt;/li&gt;
&lt;li&gt;[ ] drand.love — tlock time-lock encryption documentation&lt;/li&gt;
&lt;li&gt;[ ] DoD 5220.22-M — National Industrial Security Program Operating Manual, data sanitisation standard&lt;/li&gt;
&lt;li&gt;[ ] Yann LeCun — "A Path Towards Autonomous Machine Intelligence" (2022, Meta AI)&lt;/li&gt;
&lt;li&gt;[ ] Grover's algorithm — quantum search, effect on symmetric key security&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Facts I am less than certain about — flagged inline with ⚑:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] ⚑ Grover's algorithm reduces AES-256 to 128-bit effective security — directionally correct, verify the exact framing&lt;/li&gt;
&lt;li&gt;[ ] ⚑ drand BLS signatures described as quantum-resistant — verify current drand documentation on this claim&lt;/li&gt;
&lt;li&gt;[ ] ⚑ 310,000 as the OWASP 2023 PBKDF2-HMAC-SHA256 minimum — confirm against current cheat sheet, this number moves&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The ember that looks like ash
&lt;/h2&gt;

&lt;p&gt;There is a thought that arrives at 3am. It does not knock. It is simply there — specific, complete, already retreating. You do not write it down. You let it go. This is correct.&lt;/p&gt;

&lt;p&gt;The thought that matters is not lost by letting go. It is only changed by it.&lt;/p&gt;

&lt;p&gt;It comes back not when you call for it — you cannot call it, any more than you can call a particular quality of winter light — but in the middle of something ordinary, on a Tuesday, wearing nothing dramatic. Six months older. Carrying something it did not have the first time it crossed your mind. The forgetting was not failure. The forgetting was the ember going grey on the surface while something stayed warm underneath.&lt;/p&gt;

&lt;p&gt;This is what &lt;strong&gt;&lt;a href="https://Cendre.Studio" rel="noopener noreferrer"&gt;Cendre.Studio&lt;/a&gt;&lt;/strong&gt; is built for. Not capture — return. Not the fear of losing — the moment of finding, from the other side.&lt;/p&gt;

&lt;p&gt;The distance between the moment of the thought and the moment of reading it is where the meaning assembles itself. We do not understand what we thought at 3am. We understand it when we find it waiting, and we have become someone different enough to read it truly.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We should lose our thoughts. We will. And then we will remember. Cendre is for that second moment — looking back in time at the person who thought it first.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The problem with every other tool
&lt;/h2&gt;

&lt;p&gt;The tools we have were built for the things we need to do. GTD. Notion. Obsidian. Roam. They assume the thought is a task, a note, a unit of knowledge to be sorted and retrieved. None of them assume it is a dream.&lt;/p&gt;

&lt;p&gt;None of them ask: &lt;em&gt;what if some things need to be sealed before they can be truly known?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And none of them are built for the rawness of the material. A dream does not arrive in clean sentences. It arrives in slur and fragment, in phonetic approximation, in the half-language of nearly-asleep. It arrives in the voice of someone who said something they should not have said, and you need to keep it somewhere that is not your own chest.&lt;/p&gt;

&lt;p&gt;Most tools correct this. Cendre does not correct anything. Cendre receives the jagged edge and keeps it exactly that sharp.&lt;/p&gt;




&lt;h2&gt;
  
  
  The architecture
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The capsule
&lt;/h3&gt;

&lt;p&gt;The unit of Cendre is not a note. It is a capsule — a sealed container with a lock, a date, and a dark interior that no one reads until the time is right. You make it. You close it. You choose when it opens: a week, a year, five years, or never unless you choose the other thing.&lt;/p&gt;

&lt;p&gt;The burning.&lt;/p&gt;

&lt;p&gt;Once sealed, the capsule disappears from view. It exists in the vault but cannot be read — not by you, not by anyone — until its date. This is not a trick of the interface. The content is encrypted at the moment of sealing. The capsule is genuinely dark until the hour it was always meant for.&lt;/p&gt;

&lt;h3&gt;
  
  
  The vault
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Seal sequence

1. Content encrypted with AES-256-GCM
   // Key derived from password via Argon2id
   // 64MB memory, 3 iterations, parallelism 4

2. Encryption key time-locked via tlock
   // drand threshold encryption
   // Key underivable before lock_timestamp
   // ⚑ drand BLS described as quantum-resistant — verify

3. Sealed capsule stored in Supabase
   // Only ciphertext server-side
   // Server never reads. Ever.

4. Burn token generated separately
   // One-way destruction
   // Held only by you, never by the server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
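&lt;p&gt;Step 1's key derivation can be illustrated with the standard library. The seal sequence specifies Argon2id, which needs a third-party package, so this hedged sketch substitutes stdlib PBKDF2-HMAC-SHA256 at the iteration count the checklist above flags from the OWASP cheat sheet (⚑ verify the current minimum); &lt;code&gt;derive_capsule_key&lt;/code&gt; is an illustrative name, not the real implementation:&lt;/p&gt;

```python
import hashlib
import os

def derive_capsule_key(password: str, salt: bytes) -> bytes:
    # Stand-in for the Argon2id step (assumption): stdlib PBKDF2-HMAC-SHA256
    # at 310,000 iterations, the flagged OWASP figure. A real build would use
    # Argon2id with the memory-hard parameters listed in the seal sequence.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 310_000)

salt = os.urandom(16)  # unique per capsule, stored alongside the ciphertext
key = derive_capsule_key("correct horse battery staple", salt)
print(len(key))  # 32
```

&lt;p&gt;The 32-byte output is the right size for the AES-256-GCM key in step 1.&lt;/p&gt;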



&lt;h3&gt;
  
  
  Against the quantum future
&lt;/h3&gt;

&lt;p&gt;AES-256 is currently secure. Quantum computers running Grover's algorithm reduce its effective key length — ⚑ the standard framing is that 256-bit symmetric keys are reduced to approximately 128-bit effective security under Grover, which remains strong but narrows the margin when sealing something for five years.&lt;/p&gt;
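&lt;p&gt;The framing reduces to one line of arithmetic: Grover's algorithm searches a keyspace of 2^n candidates in on the order of 2^(n/2) queries, so an n-bit symmetric key keeps roughly n/2 bits of effective security:&lt;/p&gt;

```python
# Grover's algorithm finds a key among 2**n candidates in on the order of
# 2**(n / 2) queries, halving the effective bit strength of a symmetric key.
def grover_effective_bits(key_bits: int) -> int:
    return key_bits // 2

print(grover_effective_bits(256))  # 128
```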

&lt;p&gt;If you are sealing a thought until 2031, you are betting on the cryptographic landscape of 2031. Cendre is not willing to make that bet with something this private.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is used instead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;ML-KEM (formerly CRYSTALS-Kyber) — standardised as FIPS 203 by NIST in 2024 — for key encapsulation. A lattice-based scheme designed to resist both classical and quantum attack. The capsule content is encrypted with AES-256-GCM. The AES key is wrapped with ML-KEM. The time-lock uses drand's threshold BLS signatures.&lt;/p&gt;

&lt;p&gt;In practice: a sufficiently powerful quantum computer, if it existed today, is not expected to be able to read a sealed capsule.&lt;/p&gt;




&lt;h2&gt;
  
  
  The honest moment
&lt;/h2&gt;

&lt;p&gt;Here is what most product articles omit because it is embarrassing and essential.&lt;/p&gt;

&lt;p&gt;The manifesto claimed encryption. The &lt;code&gt;capsules&lt;/code&gt; table stored &lt;code&gt;title&lt;/code&gt;, &lt;code&gt;story_text&lt;/code&gt;, &lt;code&gt;echo_reference&lt;/code&gt; as plain &lt;code&gt;text&lt;/code&gt;. No encryption. The map was not the territory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Three options were on the table:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Option&lt;/th&gt;
&lt;th&gt;Strength&lt;/th&gt;
&lt;th&gt;Tradeoff&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;code&gt;pgcrypto&lt;/code&gt; column encryption&lt;/td&gt;
&lt;td&gt;Transparent to app&lt;/td&gt;
&lt;td&gt;DB admin can still decrypt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Client-side encryption&lt;/td&gt;
&lt;td&gt;Server never reads&lt;/td&gt;
&lt;td&gt;Search impossible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Soften the wording&lt;/td&gt;
&lt;td&gt;Ship fast&lt;/td&gt;
&lt;td&gt;Dishonest&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The answer was client-side encryption. The hardest option. And then the idea that turned the constraint into a discovery.&lt;/p&gt;




&lt;h2&gt;
  
  
  The image is the key
&lt;/h2&gt;

&lt;p&gt;The problem with client-side encryption has always been search. If the text is encrypted before the server sees it, the server cannot search it. Homomorphic encryption, searchable symmetric encryption, ORAM — every solution trades one kind of exposure for another, or performs so slowly it is effectively useless at this scale.&lt;/p&gt;

&lt;p&gt;Cendre does not search the text. &lt;strong&gt;Cendre searches the shape of the text.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a capsule is created, before the text is encrypted, an image is generated from it. Not an illustration. An abstract visual fingerprint — the semantic content of the thought translated into geometry that can be searched without being read. Generated in the browser, before anything leaves the device.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The image is the key. Not a key that unlocks — a key that finds. The encrypted text stays dark. The image holds its shape in the light.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The image lives in Supabase unencrypted, alongside the ciphertext it cannot read. When you search your archive, you are not searching language. You are searching geometry. The model compares visual embeddings via pgvector, the Postgres vector-similarity extension that Supabase ships as standard. It finds the capsule whose image is nearest to what you are reaching for. The ciphertext is retrieved. The browser decrypts it. You read what you wrote at 3am, six months ago, in a state you have since forgotten how to reach.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Image-as-key architecture

1. Text captured in browser
   // Raw, unfiltered, exactly as it arrived

2. Image fingerprint generated from text
   // Abstract visual — not literal
   // Semantic content encoded as geometry
   // Generated client-side before encryption

3. Text encrypted with AES-256-GCM
   // Client-side only — server never reads

4. Supabase receives:
   ciphertext      // unreadable — forever
   image_key       // searchable — says nothing
   created_at      // the only plain metadata
   lock_date       // when it opens

5. Search:
   // Filter by date/year — or
   // Submit query → generate query image
   // Compare embeddings via pgvector
   // Return closest → decrypt in browser
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Filtering works on two axes only:&lt;/strong&gt; date and year. Those are the only fields stored as plain text. If you remember the season — &lt;em&gt;that winter, the week before the conversation, the night it rained for six hours&lt;/em&gt; — you can narrow the window. Everything else is geometry.&lt;/p&gt;
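&lt;p&gt;Server-side, pgvector handles the nearest-neighbour query. Conceptually the comparison is cosine similarity over image embeddings, which fits in a few lines (a sketch of the idea, not the production query):&lt;/p&gt;

```javascript
// Conceptual core of the pgvector search: cosine similarity between the
// query image's embedding and each stored capsule's embedding.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the capsule whose image embedding is nearest the query embedding.
function nearestCapsule(queryEmbedding, capsules) {
  let best = null, bestScore = -Infinity;
  for (const c of capsules) {
    const score = cosine(queryEmbedding, c.embedding);
    if (score > bestScore) { bestScore = score; best = c; }
  }
  return best; // its ciphertext is then fetched and decrypted in the browser
}
```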

&lt;h3&gt;
  
  
  Why this matters beyond Cendre
&lt;/h3&gt;

&lt;p&gt;The image-as-key pattern is a general answer to the search problem in any client-side encrypted database. Applicable wherever the content is too intimate for server-side search, too large to download and decrypt wholesale.&lt;/p&gt;

&lt;p&gt;Visual embeddings as search proxies for ciphertext. The shape of meaning without the meaning itself. Search without exposure. Retrieval without reading.&lt;/p&gt;




&lt;h2&gt;
  
  
  PBKDF2 and the backup that survives everything
&lt;/h2&gt;

&lt;p&gt;The key derivation is built on PBKDF2. The password is never stored. Never sent. It is salted and stretched through 600,000 iterations of PBKDF2-HMAC-SHA256, the current OWASP recommendation, into an encryption key that exists only in the browser for the duration of a session. When the tab closes, the key ceases to exist.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;salt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getRandomValues&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Uint8Array&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;16&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;keyMaterial&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subtle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;importKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;raw&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;password&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
  &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PBKDF2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;deriveKey&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;encryptionKey&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;crypto&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;subtle&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;deriveKey&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;PBKDF2&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;salt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;iterations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;310000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;// ⚑ verify against current OWASP&lt;/span&gt;
    &lt;span class="na"&gt;hash&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;SHA-256&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;keyMaterial&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AES-GCM&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;length&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;256&lt;/span&gt; &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;encrypt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;decrypt&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;// The key never leaves the browser.&lt;/span&gt;
&lt;span class="c1"&gt;// The server receives only ciphertext + salt + iv.&lt;/span&gt;
&lt;span class="c1"&gt;// Without the password, the ciphertext is noise.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Client-side encryption has one catastrophic failure: the forgotten password. No reset. No recovery. The ciphertext is noise without the key and the key is derived from what you know and if you no longer know it, the thought is gone.&lt;/p&gt;

&lt;p&gt;This is the correct design. It is also the design that asks something of you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The encrypted backup&lt;/strong&gt; is the insurance. At creation, a portable JSON file is generated containing the ciphertext, salt, IV, and image fingerprint. Sent wherever you choose to keep it. A private email. A USB drive in a box in a drawer. The backup requires the same password. It exists outside the database. It is yours, physically, in the world.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;cendre_backup.json&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;structure&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1.0"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"created_at"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2026-03-15T03:14:00Z"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"salt"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;base64&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"iv"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;base64&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"ciphertext"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;base64&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"image_fingerprint"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"&amp;lt;base64&amp;gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"lock_date"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2027-03-15T00:00:00Z"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Nothing&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;that&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;identifies&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;No&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;plaintext.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Tells&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;a&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;stranger&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;nothing.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Tells&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;everything,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;if&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;you&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;still&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;hold&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;the&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;password.&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  The image that survives everything
&lt;/h3&gt;

&lt;p&gt;If the database is lost — company folds, servers go dark, bill unpaid too long — the ciphertext is gone. The backup may be gone. The text, in the worst case, has returned to silence.&lt;/p&gt;

&lt;p&gt;But the image fingerprints survive.&lt;/p&gt;

&lt;p&gt;They were always stored separately, always treated as search indexes rather than content, always on a different tier with different retention. And an archive of image fingerprints without their ciphertext is not a broken archive.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The data might be gone. The images will stay. And the images were always the truer record — the shape of the thought, not the words it arrived in.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An impressionist archive. You can search it. You can feel what was there. You can see the shape of your own mind across years — the clusters and distances, the warm periods and the cold — without reading a single word that was written. The meaning without the text. The state without the description.&lt;/p&gt;




&lt;h2&gt;
  
  
  From hashtags to image tags to world models
&lt;/h2&gt;

&lt;p&gt;There is a larger argument inside the image-as-key architecture.&lt;/p&gt;

&lt;p&gt;We organised the early internet with words. Then hashtags — words stripped of grammar, reduced to signal. &lt;code&gt;#dream&lt;/code&gt;. &lt;code&gt;#3am&lt;/code&gt;. The hashtag was the admission that language was already failing us at scale. A word had to be made smaller and bolder to carry the weight of a world.&lt;/p&gt;

&lt;p&gt;Then the image. Not illustrating the text — replacing it. A mood board is not a list of words. It is a world you can feel before you can name it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We are moving from a world indexed by words to a world indexed by worlds. The image-as-key is one small proof.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And then: Yann LeCun, in &lt;em&gt;A Path Towards Autonomous Machine Intelligence&lt;/em&gt; (Meta AI, 2022), arguing that language was never sufficient. Words are a lossy compression of reality. They describe the surface. A model that learns only from text learns a shadow of the world, not the world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;World Models predict states, not tokens.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A token is a symbol pointing at a thing. A state is the thing — the position of objects, the temperature of a room, the specific quality of a thought at 3am that is different from the same thought at noon.&lt;/p&gt;

&lt;p&gt;What Cendre does — translating text into a visual embedding before encrypting it — is a small practical instance of this movement. The word says something. The image holds something. They are not the same thing. The image is closer to the state.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Era&lt;/th&gt;
&lt;th&gt;Index type&lt;/th&gt;
&lt;th&gt;What it captures&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Keyword&lt;/td&gt;
&lt;td&gt;Word&lt;/td&gt;
&lt;td&gt;Category&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hashtag&lt;/td&gt;
&lt;td&gt;Compressed word&lt;/td&gt;
&lt;td&gt;Signal&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image tag&lt;/td&gt;
&lt;td&gt;Visual&lt;/td&gt;
&lt;td&gt;Texture, mood, the almost-said&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;World model&lt;/td&gt;
&lt;td&gt;State&lt;/td&gt;
&lt;td&gt;The configuration of experience itself&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Cendre is somewhere between image tag and world model. The visual fingerprint of a thought is not the thought. It is closer than a keyword. It is further than the state LeCun is describing. It is an intermediate form — the best available translation between the word you wrote and the state you were in when you wrote it.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The hashtag said: here is a word for this. The image said: here is a shape for this. The world model will say: here is the state of being in which this happened. We are moving in one direction. Cendre is somewhere on that line.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The burning
&lt;/h2&gt;

&lt;p&gt;Destruction is not deletion. Deletion is a polite fiction — the row is flagged, the data hibernates in backups and logs. Deletion says: gone. It means: hidden.&lt;/p&gt;

&lt;p&gt;Destruction means gone.&lt;/p&gt;

&lt;p&gt;When you burn a capsule: the burn token is submitted, the ciphertext is overwritten with random bytes, the record is deleted, and a cryptographic proof of destruction is generated and returned to you. (The three-pass DoD 5220.22-M pattern dates from magnetic disks; current guidance such as NIST SP 800-88 notes that overwriting is best-effort on solid-state and cloud storage, which is why the capsule is encrypted client-side in the first place: even a surviving copy is noise without the key.) The proof confirms the fact of destruction without revealing what was destroyed.&lt;/p&gt;

&lt;p&gt;You cannot undo it. Neither can we.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Some stories no longer serve you. The right to destroy them is as important as the right to keep them.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;An archive without a burn is a prison with tasteful lighting.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Cendre accepts
&lt;/h2&gt;

&lt;p&gt;Voice transcription. Raw text. The words that come out slurred from half-sleep. Invented words. Phonetic approximations. Code-switching between languages. Sentences that start and do not end. The fragment that is complete in itself and would be ruined by completion.&lt;/p&gt;

&lt;p&gt;The imperfection is the material.&lt;/p&gt;




&lt;h2&gt;
  
  
  What is built
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Time-locked capsules&lt;/td&gt;
&lt;td&gt;Cryptographically enforced — nothing opens early&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ML-KEM encryption&lt;/td&gt;
&lt;td&gt;Quantum-resistant key encapsulation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image-as-key search&lt;/td&gt;
&lt;td&gt;Visual fingerprint search via pgvector&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PBKDF2 key derivation&lt;/td&gt;
&lt;td&gt;Password never stored, never sent&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Encrypted backup&lt;/td&gt;
&lt;td&gt;Portable JSON, yours physically&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Permanent burn&lt;/td&gt;
&lt;td&gt;Ciphertext overwritten and deleted, with cryptographic proof of destruction&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;View-Master theme reel&lt;/td&gt;
&lt;td&gt;Seven moods, rotated before capture&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Voice + raw text&lt;/td&gt;
&lt;td&gt;Everything accepted unfiltered&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PWA&lt;/td&gt;
&lt;td&gt;Installs on home screen, works offline&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;




&lt;h2&gt;
  
  
  The aesthetic
&lt;/h2&gt;

&lt;p&gt;Cendre is built in the visual language of Alexander Calder. Thin black lines — 1px, the weight of wire. Three colors used with the precision of a sculptor deciding where to hang weight: red, deep blue, the warm brown of something that was once fire. Negative space not as absence but as material.&lt;/p&gt;

&lt;p&gt;The interface passes one test: if you removed all the text and showed only the lines, shapes, and colors, it should look like a Calder drawing. If it still looks like a startup, the pass is incomplete.&lt;/p&gt;

&lt;p&gt;A tool for the imagination should look like where the imagination lives — suspended, always slightly in motion, held by something invisible that you have learned to trust.&lt;/p&gt;




&lt;h2&gt;
  
  
  What comes next
&lt;/h2&gt;

&lt;p&gt;Shared capsules — sealed between two people, openable only when both agree. A capsule written for someone else: locked until a date you both choose, readable only when you both decide. An archive that grows across years into something that looks less like a database and more like a life.&lt;/p&gt;

&lt;p&gt;And eventually: the physical object. A QR code printed and placed in an envelope in a drawer. Scanned in ten years. The digital content still present, waiting exactly where it was left.&lt;/p&gt;




&lt;blockquote&gt;
&lt;p&gt;What remains, after the fire.&lt;br&gt;
&lt;em&gt;Ce qui reste, après le feu.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;strong&gt;Built with:&lt;/strong&gt; React · Supabase · PBKDF2 · AES-256-GCM · ML-KEM · pgvector · tlock · Framer Motion · Calder&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;#webdev&lt;/code&gt; &lt;code&gt;#security&lt;/code&gt; &lt;code&gt;#showdev&lt;/code&gt; &lt;code&gt;#imagination&lt;/code&gt; &lt;code&gt;#pwa&lt;/code&gt; &lt;code&gt;#encryption&lt;/code&gt; &lt;code&gt;#ux&lt;/code&gt; &lt;code&gt;#quantumcomputing&lt;/code&gt; &lt;code&gt;#creativity&lt;/code&gt; &lt;code&gt;#opensource&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;a href="https://linkedin.com/in/soumia-ghalim" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; · &lt;br&gt;
&lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;humiin.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>quantumcomputing</category>
      <category>imagination</category>
      <category>webdev</category>
      <category>security</category>
    </item>
    <item>
      <title>The Résumé Is Not Broken. The Search Is.</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Wed, 04 Mar 2026 22:50:13 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-resume-is-not-broken-the-search-is-3ba9</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-resume-is-not-broken-the-search-is-3ba9</guid>
      <description>&lt;h2&gt;
  
  
  Why finding the right job has never been harder — and why the answer might not be a better filter, but a better imagination
&lt;/h2&gt;




&lt;p&gt;There is a particular kind of despair that sets in around the fourth week of a serious job search. You have updated your LinkedIn headline three times. You have tailored your résumé to the point where it no longer feels like yours. You have applied to roles you were overqualified for, underqualified for, and perfectly qualified for — and heard back from almost none of them.&lt;/p&gt;

&lt;p&gt;The frustrating part is not the silence. The frustrating part is the sneaking suspicion that the right job exists. You just can't find it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_This is not a personal failure. It is a structural one_

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;The Matching Problem Is Older Than the Internet&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For decades, the dominant theory of job searching was essentially a logistics problem: get your information in front of the right people. The newspaper classifieds gave way to Monster.com, which gave way to LinkedIn, which gave way to an ecosystem of platforms, aggregators, and ATS systems so complex that entire consultancies now exist to help candidates navigate them.&lt;/p&gt;

&lt;p&gt;But more pipework has not solved the underlying problem. If anything, it has obscured it.&lt;/p&gt;

&lt;p&gt;The core dysfunction is this: job seekers search within the boundaries of what they already know they are looking for. We type in our last job title. We filter by industry. We scan the first two pages of results and, finding nothing that resonates, conclude that the market is bad. What we have actually done is searched a very small corner of a very large space — and called it thorough.&lt;/p&gt;

&lt;p&gt;Hiring, viewed from the other side of the table, suffers from the mirror image of this problem. Recruiters write job descriptions that describe who they had last time, not who they need next. They filter resumes using keyword systems that reward people who know which words to use, not necessarily the people who can do the work. Both sides are searching for each other using maps drawn from memory.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Vocabulary Problem No One Talks About&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is a concept in information retrieval called the vocabulary mismatch problem: the words a user uses to describe what they want are often not the words a database uses to describe what it has. In job search, this mismatch is catastrophic — and deeply personal.&lt;/p&gt;

&lt;p&gt;A solutions architect with six years of enterprise field experience might never think to search for "technical customer success" or "value engineering" or "AI solutions consultant" — roles that would suit them precisely, roles that are actively hiring, roles that simply don't appear in the mental model they carry into a search box.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_The skills transfer. The language doesn't_

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are, in other words, limited not by what we are capable of, but by what we can imagine ourselves doing. And imagination — particularly about one's own professional identity — turns out to be a surprisingly narrow resource when you're under the pressure of an active search.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;If You Want One Good Idea, You Need a Hundred&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;There is an old principle in creative problem-solving, attributed variously to Linus Pauling and Alex Osborn, that the way to have a good idea is to have many ideas.&lt;/em&gt; Quantity, counterintuitively, is how you find quality. You cannot edit your way to an insight you never generated in the first place.&lt;/p&gt;

&lt;p&gt;Job searching has never had a version of this. There has been no mechanism for systematic idea generation at the top of the funnel — no way to ask "what else might fit me?" and get a serious, considered answer back.&lt;/p&gt;

&lt;p&gt;Until now, possibly.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The LLM as Career Mirror&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Large language models are not magic. But they do one thing with unusual power: they hold an enormous, associative map of human work — its titles, its functions, its adjacencies, its history — and they can traverse that map in ways that keyword search cannot.&lt;/p&gt;

&lt;p&gt;Ask a language model to reason about a person's experience, and it will not return the ten most popular jobs with a matching keyword. It will reason about transferable patterns. It will surface roles the candidate never considered, roles that existed before they started searching, roles in adjacent industries where their particular combination of skills would be genuinely rare and valuable.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is not personalization in the shallow sense — showing you more of what you already clicked on. This is expansion. It is the difference between a search engine and a thinking partner.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;An Experiment Worth Watching&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A new platform called &lt;strong&gt;&lt;a href="https://kumiin.io" rel="noopener noreferrer"&gt;kumiin.io&lt;/a&gt;&lt;/strong&gt; is testing exactly this proposition. The premise is deceptively simple: rather than asking candidates to search, it asks them to be understood — and then surfaces jobs they would not have found on their own.&lt;/p&gt;

&lt;p&gt;The design philosophy is rooted in the "hundred ideas" principle. Most of what the platform surfaces won't be right. Some of it will seem strange. But somewhere in the noise is a signal — a role, an industry, a function — that the candidate had genuinely never considered, or had considered years ago and filed away. The platform's bet is that surfacing that possibility, even once, is worth the exercise.&lt;/p&gt;

&lt;p&gt;It is early. But the underlying insight is sound: the bottleneck in job matching is not information volume. It is conceptual range.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_We know more about what we've done than what we could do. 
We search in the past tense when the opportunity is, 
by definition, in the future_
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;What This Means for Talent Strategy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For HR leaders and talent acquisition professionals, the implications extend beyond the candidate experience. If the best hires are the ones who bring capabilities an organization didn't know it needed, then hiring processes optimized entirely around job-description matching select against exactly those people.&lt;/p&gt;

&lt;p&gt;The homogenizing pressure of keyword-based ATS systems, combined with candidates who search within narrow self-defined lanes, creates a market that looks efficient while missing enormous amounts of value on both sides.&lt;/p&gt;

&lt;p&gt;Better matching is not just good for candidates. It is a competitive advantage for organizations willing to hire based on potential rather than precedent.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;The Search Box Was Never the Answer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The job market does not have a data problem. It has a translation problem — between what people can do and how work gets described; between who someone has been and who they might become; between the roles that exist and the imagination needed to find them.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;_Language models, used well, are translation engines. 
They don't just retrieve. 
They interpret, reframe, and expand_

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The résumé is not broken. The search is. And for the first time, there is a tool capable of searching the way a great career advisor would — broadly, associatively, and without the constraint of what you already know to ask for.&lt;/p&gt;

&lt;p&gt;That is not a small thing.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;A new platform called &lt;a href="https://kumiin.io" rel="noopener noreferrer"&gt;kumiin.io&lt;/a&gt; is testing exactly this proposition.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;If you're building something solo, figuring it out as you go, or just want to say hi — I'd love to hear from you. Find me at &lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;humiin.io&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>career</category>
      <category>ai</category>
      <category>llm</category>
      <category>buildinginpublic</category>
    </item>
    <item>
      <title>When 3D Becomes Code: Why World Labs' Architecture Is a Gift for Interpretability Research</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Wed, 04 Mar 2026 11:05:43 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/when-3d-becomes-code-why-world-labs-architecture-is-a-gift-for-interpretability-research-51c9</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/when-3d-becomes-code-why-world-labs-architecture-is-a-gift-for-interpretability-research-51c9</guid>
<description>&lt;p&gt;&lt;em&gt;Cross-posted from &lt;a href="https://oourmind.io" rel="noopener noreferrer"&gt;oourmind.io&lt;/a&gt; — part of an ongoing series on the 3D Interpretability Lab.&lt;/em&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  The Problem With Black Boxes in Space
&lt;/h2&gt;

&lt;p&gt;We've gotten quite good at asking &lt;em&gt;what&lt;/em&gt; neural networks know. Mechanistic interpretability — the field dedicated to reverse-engineering how AI models work internally — has made remarkable progress on language models. We can find &lt;a href="https://distill.pub/2020/circuits/" rel="noopener noreferrer"&gt;circuits&lt;/a&gt; that detect curves, &lt;a href="https://transformer-circuits.pub/" rel="noopener noreferrer"&gt;attention heads&lt;/a&gt; that implement induction, and linear subspaces that encode factual associations.&lt;/p&gt;

&lt;p&gt;But spatial models — models that understand, generate, or reason about 3D environments — remain largely opaque. Not because we lack curiosity, but because we lack a handle: the internal representations of most vision and world models aren't structured in a way that makes them easy to probe, intervene on, or interpret.&lt;/p&gt;

&lt;p&gt;That's what makes &lt;a href="https://www.worldlabs.ai/blog/3d-as-code" rel="noopener noreferrer"&gt;World Labs' recent essay on "3D as Code"&lt;/a&gt; so interesting — and so relevant to 3D interpretability research.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Quick Glossary
&lt;/h2&gt;

&lt;p&gt;Before diving in, here are the key concepts you'll need:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mechanistic Interpretability&lt;/strong&gt; — A subfield of AI safety/alignment research that tries to reverse-engineer neural networks: not just &lt;em&gt;what&lt;/em&gt; they output, but &lt;em&gt;how&lt;/em&gt; they compute it internally. Think of it as neuroscience for AI.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Activation Patching&lt;/strong&gt; — An intervention technique where you replace a model's internal activations at a specific layer with those from a different input, then observe how outputs change. Lets you trace &lt;em&gt;which&lt;/em&gt; internal computations cause &lt;em&gt;which&lt;/em&gt; behaviors.&lt;/p&gt;
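&lt;p&gt;A toy sketch of the idea, using a two-layer stand-in "network" in place of a real transformer (all names below are illustrative, not from any interpretability library):&lt;/p&gt;

```python
# Toy activation patching: cache an activation from one input,
# overwrite it during a run on another input, watch the output change.

def layer1(x):
    # "early layer": encodes the sign of each input feature
    return [1.0 if v > 0 else -1.0 for v in x]

def layer2(h):
    # "late layer": sums the hidden features into a scalar output
    return sum(h)

def run(x, patched_hidden=None):
    h = layer1(x)
    if patched_hidden is not None:
        h = patched_hidden          # the intervention: overwrite the activation
    return layer2(h)

clean   = [0.5, 2.0, -1.0]          # input A
corrupt = [-0.5, -2.0, 1.0]         # input B (signs flipped)

corrupt_hidden = layer1(corrupt)    # cache the activation from the corrupt run

print(run(clean))                                    # clean output: 1.0
print(run(clean, patched_hidden=corrupt_hidden))     # patched output: -1.0
```

&lt;p&gt;The patched run flips the output, which is the causal evidence: the hidden activation is where the sign information lives.&lt;/p&gt;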

&lt;p&gt;&lt;strong&gt;Probing&lt;/strong&gt; — Training a small classifier on a model's internal representations to test whether a specific concept (e.g., "depth", "surface normal", "object identity") is linearly encoded in the activations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;NeRF (Neural Radiance Field)&lt;/strong&gt; — A method for representing 3D scenes implicitly inside a neural network's weights. You query the network with a 3D position + viewing direction, and it returns color + density. Famously opaque: the "scene" lives nowhere you can easily inspect.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Gaussian Splatting (3DGS)&lt;/strong&gt; — A newer, faster alternative to NeRF. Represents a 3D scene as a cloud of 3D Gaussians (think: fuzzy ellipsoids), each with explicit parameters: position, orientation, opacity, color. Crucially, these are &lt;em&gt;inspectable&lt;/em&gt; and &lt;em&gt;editable&lt;/em&gt; artifacts.&lt;/p&gt;
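&lt;p&gt;To make "inspectable and editable" concrete: a splat is nothing more than a record of explicit parameters. This minimal sketch uses field names of our own choosing, not any particular library's on-disk format:&lt;/p&gt;

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Splat:
    position: tuple   # (x, y, z) center of the Gaussian
    scale: tuple      # ellipsoid radii along its principal axes
    rotation: tuple   # orientation quaternion (w, x, y, z)
    opacity: float
    color: tuple      # (r, g, b)

s = Splat(
    position=(0.0, 1.0, 2.0),
    scale=(0.1, 0.1, 0.3),
    rotation=(1.0, 0.0, 0.0, 0.0),
    opacity=0.8,
    color=(0.9, 0.2, 0.2),
)

# Editing the scene is editing data: move the splat up by one unit.
moved = replace(s, position=(0.0, 2.0, 2.0))
```

&lt;p&gt;Nothing here is hidden in weights — every parameter can be read, diffed, versioned, or probed against.&lt;/p&gt;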

&lt;p&gt;&lt;strong&gt;Residual Stream&lt;/strong&gt; — In transformer architectures, the vector that flows through the model and gets additively updated by each layer. Interpretability research often focuses on what information is encoded in this stream at each layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;World Model&lt;/strong&gt; — A model that builds an internal representation of an environment and can simulate how it changes over time. Relevant for robotics, game AI, and spatial reasoning.&lt;/p&gt;




&lt;h2&gt;
  
  
  The World Labs Argument
&lt;/h2&gt;

&lt;p&gt;World Labs' essay makes a bold claim: &lt;strong&gt;3D representations are to spatial AI what code is to software&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The analogy goes like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code&lt;/strong&gt; is an explicit, inspectable, editable artifact that separates &lt;em&gt;reasoning&lt;/em&gt; (writing the algorithm) from &lt;em&gt;execution&lt;/em&gt; (running it). It can be versioned, debugged, shared, and composed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;3D representations&lt;/strong&gt; — meshes, Gaussian splats, scene graphs — can play the same role for spatial systems. They externalize structure in a form that humans and machines can both inspect and manipulate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The alternative — collapsing everything into a single end-to-end model that maps inputs directly to pixels — is like asking a language model to &lt;em&gt;be&lt;/em&gt; the program instead of &lt;em&gt;writing&lt;/em&gt; it. It might work, but you lose all the affordances that make code powerful: inspectability, composability, reusability.&lt;/p&gt;

&lt;p&gt;Their model, &lt;a href="https://www.worldlabs.ai/blog/marble-world-model" rel="noopener noreferrer"&gt;Marble&lt;/a&gt;, is built around this philosophy. It generates structured 3D outputs (Gaussian splats, meshes) rather than raw pixels. Their experimental interface &lt;a href="https://marble.worldlabs.ai/" rel="noopener noreferrer"&gt;Chisel&lt;/a&gt; lets you give coarse 3D layouts as input — walls, volumes, planes — which Marble then renders into rich, detailed scenes.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters for 3D Interpretability
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Gaussian Splats as Ground Truth Geometry
&lt;/h3&gt;

&lt;p&gt;Most vision models give you outputs (pixels, bounding boxes, feature vectors) without any explicit geometric structure to compare against. Marble externalizes Gaussian splat parameters — position, covariance, opacity, color — as concrete artifacts.&lt;/p&gt;

&lt;p&gt;This means you can do something rare in interpretability: &lt;strong&gt;correlate internal activations with explicit geometric ground truth&lt;/strong&gt;. Does the model's residual stream encode splat positions linearly? Do specific attention heads track surface orientation? With exported splats, you have a reference to probe against.&lt;/p&gt;
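&lt;p&gt;Here is what such a probe looks like in miniature, on synthetic data where depth is linearly encoded by construction (everything below is illustrative and stands in for real activations and splat parameters):&lt;/p&gt;

```python
# Fit a one-feature linear probe: is "depth" linearly decodable
# from a (synthetic) activation?
import random
random.seed(0)

depths = [random.uniform(0.0, 10.0) for _ in range(200)]
# Fake activation that encodes depth linearly, plus a little noise:
acts = [2.0 * d + 0.3 + random.gauss(0, 0.1) for d in depths]

# Closed-form simple linear regression of depth on activation.
n = len(acts)
mx = sum(acts) / n
my = sum(depths) / n
slope = (sum((a - mx) * (d - my) for a, d in zip(acts, depths))
         / sum((a - mx) ** 2 for a in acts))
intercept = my - slope * mx

# R^2: how much depth variance the probe explains.
preds = [slope * a + intercept for a in acts]
ss_res = sum((d - p) ** 2 for d, p in zip(depths, preds))
ss_tot = sum((d - my) ** 2 for d in depths)
r2 = 1 - ss_res / ss_tot
# High R^2 means depth is linearly decodable from this activation.
```

&lt;p&gt;In a real experiment, the activations come from the model's residual stream and the targets come from the exported splats — the ground truth the pipeline hands you for free.&lt;/p&gt;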

&lt;h3&gt;
  
  
  2. The Factorized Stack as a Dissection Surface
&lt;/h3&gt;

&lt;p&gt;World Labs argues for a &lt;em&gt;factorized&lt;/em&gt; architecture: separate components for perception, generation, and rendering, connected through 3D interfaces. Each handoff between modules is a natural interpretability seam.&lt;/p&gt;

&lt;p&gt;At every boundary you can ask: what does this module "know" about 3D structure, and how is that knowledge encoded? This is mechanistic interpretability's core question, applied to a spatial pipeline where the module boundaries are explicit by design.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Chisel as a Causal Intervention Tool
&lt;/h3&gt;

&lt;p&gt;Chisel — the coarse layout → rich scene interface — is almost a ready-made intervention setup.&lt;/p&gt;

&lt;p&gt;In standard activation patching, you modify an internal representation and observe how outputs change. With Chisel, you can modify &lt;em&gt;explicit input geometry&lt;/em&gt; (move a wall, resize a volume, add an object) and trace how that propagates through internal representations. It's behavioral interpretability without needing weight access — a spatial version of causal tracing.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. The Scene Graph Hypothesis
&lt;/h3&gt;

&lt;p&gt;The most theoretically interesting question the World Labs framing raises: &lt;strong&gt;does Marble internally maintain something like a scene graph?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A scene graph separates geometric structure (where things are, how they relate spatially) from appearance (materials, lighting, texture). If the model has learned this factorization internally — even without being explicitly trained to — you'd expect to find:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An interpretable subspace encoding layout, orthogonal to one encoding appearance&lt;/li&gt;
&lt;li&gt;View-invariant geometry features that persist across different camera angles&lt;/li&gt;
&lt;li&gt;Causal separation: editing geometry activations changes structure but not style, and vice versa&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Testing this would be a clean, novel contribution at the intersection of the World Labs framing and mechanistic interpretability methodology.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Research Agenda
&lt;/h2&gt;

&lt;p&gt;For a 3D interpretability lab with access to Marble's weights or API, here's what this opens up:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;With weights (mechanistic):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Activation patching across the generation pipeline to locate geometry-encoding layers&lt;/li&gt;
&lt;li&gt;Linear probing for depth ordering, surface normals, occlusion relationships&lt;/li&gt;
&lt;li&gt;Viewpoint-invariance analysis: which features survive camera transformations?&lt;/li&gt;
&lt;li&gt;Searching for a "scene graph circuit" — a set of components that collectively implement layout/appearance factorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;With API only (behavioral):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chisel perturbation experiments as proxy interventions&lt;/li&gt;
&lt;li&gt;Contrastive prompting to isolate geometric vs. semantic knowledge&lt;/li&gt;
&lt;li&gt;Sensitivity mapping: how much does output change per unit of input geometry change?&lt;/li&gt;
&lt;/ul&gt;
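&lt;p&gt;The last item reduces to finite differences against a black box. The &lt;code&gt;render&lt;/code&gt; function below is a hypothetical stand-in for a scene-generating API call, not a real Marble endpoint:&lt;/p&gt;

```python
def render(wall_x):
    # Hypothetical black box: maps an input geometry parameter
    # (e.g. a wall's x position) to a scalar summary of the output scene.
    return 3.0 * wall_x + 1.0

def sensitivity(f, x, eps=1e-3):
    # Central finite-difference estimate of d(output)/d(input geometry).
    return (f(x + eps) - f(x - eps)) / (2 * eps)

print(sensitivity(render, 2.0))
```

&lt;p&gt;Sweeping this estimate over many geometry parameters gives a sensitivity map of the pipeline without ever touching its weights.&lt;/p&gt;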




&lt;h2&gt;
  
  
  Why Now
&lt;/h2&gt;

&lt;p&gt;Three things are converging:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Mechanistic interpretability methodology&lt;/strong&gt; is mature enough to apply to new domains — transformers, circuits, probing, causal tracing all have established tooling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;World models with explicit 3D structure&lt;/strong&gt; (like Marble) are newly available, giving interpretability researchers the handles they've lacked&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The stakes are rising&lt;/strong&gt; — as world models get used in robotics, digital twins, and simulation, understanding what they internally represent becomes a safety-relevant question, not just an academic one&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The World Labs essay frames this as an engineering choice. For interpretability researchers, it's an invitation.&lt;/p&gt;




&lt;h2&gt;
  
  
  Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.worldlabs.ai/blog/3d-as-code" rel="noopener noreferrer"&gt;3D as Code — World Labs Blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.worldlabs.ai/blog/marble-world-model" rel="noopener noreferrer"&gt;Marble World Model — World Labs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://platform.worldlabs.ai/" rel="noopener noreferrer"&gt;World Labs API&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://distill.pub/2020/circuits/" rel="noopener noreferrer"&gt;Circuits — Distill.pub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://transformer-circuits.pub/" rel="noopener noreferrer"&gt;Transformer Circuits Thread&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/" rel="noopener noreferrer"&gt;Gaussian Splatting Paper (Kerbl et al., 2023)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.matthewtancik.com/nerf" rel="noopener noreferrer"&gt;NeRF: Representing Scenes as Neural Radiance Fields&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;This article is part of ongoing research at the 3D Interpretability Lab, developed under &lt;a href="https://oourmind.io" rel="noopener noreferrer"&gt;oourmind.io&lt;/a&gt;. If you're working on spatial interpretability and want to collaborate, reach out.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Tags:&lt;/strong&gt; &lt;code&gt;interpretability&lt;/code&gt; &lt;code&gt;3d&lt;/code&gt; &lt;code&gt;machinelearning&lt;/code&gt; &lt;code&gt;worldmodels&lt;/code&gt; &lt;code&gt;aisafety&lt;/code&gt; &lt;code&gt;neuralnetworks&lt;/code&gt; &lt;code&gt;gaussiansplatting&lt;/code&gt; &lt;code&gt;computervision&lt;/code&gt;&lt;/p&gt;

</description>
      <category>gaussiansplatting</category>
      <category>computervision</category>
      <category>neuralnetworks</category>
      <category>ai</category>
    </item>
    <item>
      <title>6 Hours Left: What Do You Actually Ship?</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 02 Mar 2026 13:34:58 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/6-hours-left-what-do-you-actually-ship-g4b</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/6-hours-left-what-do-you-actually-ship-g4b</guid>
      <description>&lt;h2&gt;
  
  
  &lt;em&gt;A lab note from the edge of a deadline.&lt;/em&gt;
&lt;/h2&gt;




&lt;p&gt;It is 3 PM on submission day.&lt;/p&gt;

&lt;p&gt;The backend is not built. Docker will not run on my machine. npm threw a Homebrew error I have never seen before. The Three.js scene has one zone merged — the Blue Grid — and two that exist only in my head.&lt;/p&gt;

&lt;p&gt;I have six hours left.&lt;/p&gt;

&lt;p&gt;This is not a postmortem. This is a live note, written in the middle of the decision, because I think the decision itself is worth documenting.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Set Out to Build
&lt;/h2&gt;

&lt;p&gt;oourmind.io is a real-time interpretability lab that visualizes the internal reasoning state of a large language model as a navigable 3D environment.&lt;/p&gt;

&lt;p&gt;The idea: three personas live inside every model. The Architect — logical, structured, certain. The Oracle — creative, associative, reaching for the rare. The Shadow — adversarial patterns, edge cases, the thing that activates when someone finds the right sentence to tilt the model.&lt;/p&gt;

&lt;p&gt;The visualization makes these visible. Not as numbers on a dashboard. As space. As movement. As something you feel before you read it.&lt;/p&gt;

&lt;p&gt;That is the vision. Here is what exists six hours before the deadline.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Actually Exists
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A live site at &lt;a href="https://oourmind.io" rel="noopener noreferrer"&gt;oourmind.io&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;A GitHub repo with a full architecture — backend, frontend, shared schemas, Docker config&lt;/li&gt;
&lt;li&gt;One Three.js zone merged into main: the Blue Grid&lt;/li&gt;
&lt;li&gt;Three dev.to articles that explain the philosophy in more depth than most finished projects&lt;/li&gt;
&lt;li&gt;A founder statement&lt;/li&gt;
&lt;li&gt;A clear technical analysis of the two possible implementation paths — self-reported API scoring versus real activation layer extraction from a local model&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What does not exist: running code. A live demo. A backend that calls Mistral and returns persona coordinates.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Dilemma
&lt;/h2&gt;

&lt;p&gt;Do I spend six hours trying to get something running — knowing that Docker is broken on my machine, npm is erroring on Homebrew, and the backend was never completed?&lt;/p&gt;

&lt;p&gt;Or do I spend six hours making what exists as clear and honest as possible, and submit that?&lt;/p&gt;

&lt;p&gt;This is the real hackathon decision. Not which framework to use. Not which model to call. Whether to chase a broken demo or own an honest one.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I Decided
&lt;/h2&gt;

&lt;p&gt;Ship the vision. Make the personas visible without a backend.&lt;/p&gt;

&lt;p&gt;Three static animated personas — CSS animations, no library, no API call — that show the Architect, the Oracle, and the Shadow as felt experiences rather than data points. A Blue Grid that pulses. A Gold Nebula that drifts. A Dark Core that vibrates at the edge.&lt;/p&gt;

&lt;p&gt;It is not mechanistic interpretability. It is the argument for mechanistic interpretability, made visual.&lt;/p&gt;

&lt;p&gt;And then submit everything — the site, the repo, the articles, the founder statement — as a complete artifact of what this project is and where it is going.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Honest Technical State
&lt;/h2&gt;

&lt;p&gt;My backend consultant mapped out the two paths clearly before the deadline:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario A — API scoring:&lt;/strong&gt; Ask the model to score its own response across Oracle, Architect, and Shadow dimensions. Fast, works now, but the coordinates are self-reported. The model tells you what it thinks it is doing. Not what it is actually doing inside.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B — Local activation extraction:&lt;/strong&gt; Run Ministral-3B directly, extract real activation values from three layer groups, use those as x/y/z coordinates in Three.js. The right answer scientifically. But TransformerLens does not support this architecture yet, and the model runs on CPU, taking about three minutes to generate a point cloud.&lt;/p&gt;

&lt;p&gt;Scenario B is the product. Scenario A is the demo. Neither is running in six hours.&lt;/p&gt;

&lt;p&gt;So I am shipping the argument instead.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Is Still Worth Submitting
&lt;/h2&gt;

&lt;p&gt;The gap between Scenario A and Scenario B — between what a model reports about itself and what is actually happening inside — is the entire problem this project exists to solve.&lt;/p&gt;

&lt;p&gt;That gap is not a technical limitation to apologize for. It is the founding insight.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Power is tolerable only on condition that it mask a substantial part of itself. Its success is proportional to its ability to hide its own mechanisms.&lt;/em&gt;&lt;br&gt;
— Michel Foucault&lt;/p&gt;

&lt;p&gt;The Shadow persona works because it is hidden. The model's internal state is invisible to the person depending on it. oourmind is an attempt to step toward making it visible — for anyone, not just researchers.&lt;/p&gt;

&lt;p&gt;That argument does not require running code to be true. It requires honesty and clarity.&lt;/p&gt;

&lt;p&gt;Both of those I have.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Lab Notes
&lt;/h2&gt;

&lt;p&gt;Everything built during this hackathon lives here:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/soumiag/oourmind" rel="noopener noreferrer"&gt;github.com/soumiag/oourmind&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The repo includes the full architecture — even the parts that were not completed in time. Because the architecture is also part of the argument. It shows where this is going, not just where it arrived tonight.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tooling Gap Is Real — Not Just Ours
&lt;/h2&gt;

&lt;p&gt;When we hit the TransformerLens wall with Ministral-3B, I assumed it was a skill gap. It is not. It is a tooling gap that the research community is actively working to close.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/TransformerLensOrg/TransformerLens" rel="noopener noreferrer"&gt;TransformerLens&lt;/a&gt; — built by Neel Nanda, formerly of Anthropic's interpretability team — remains the standard for mechanistic interpretability work. It was designed for GPT-2 style architectures and requires manual adaptation for newer models. Mistral 3's architecture is too recent and too different for it to work out of the box.&lt;/p&gt;

&lt;p&gt;The closest solution to our Scenario B problem is &lt;a href="https://arxiv.org/abs/2511.14465" rel="noopener noreferrer"&gt;nnterp&lt;/a&gt; — published November 2025, a lightweight wrapper around NNsight that provides a unified interface across 50+ transformer architectures while preserving the original HuggingFace implementations. It includes built-in implementations of logit lens, patchscope, and activation steering. This is the tool that makes Scenario B viable — just not in six hours.&lt;/p&gt;

&lt;p&gt;There is also &lt;a href="https://arxiv.org/html/2512.09730" rel="noopener noreferrer"&gt;Interpreto&lt;/a&gt; — an open-source library that integrates attribution methods and concept-based activation analysis into a single package, explicitly designed to lower the barrier to entry for interpretability research.&lt;/p&gt;

&lt;p&gt;The field is young. The tools are catching up. And the gap between what researchers can extract and what an ordinary person can see remains exactly as wide as oourmind is trying to close.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;Scenario B. Real activation layers. The model's internal state extracted, not reported. The visualization honest rather than approximate.&lt;/p&gt;

&lt;p&gt;And a cleaner answer to the question that kept coming up during this hackathon — not just from me, but from every founder who has ever shipped something and wondered what was happening inside it after deployment:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"I vibe coded an app that works great, but I wouldn't dream of putting it into production simply for liability reasons."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That comment appeared on Lovable's LinkedIn page while I was building this. It is the most honest description of the problem I have seen anywhere.&lt;/p&gt;

&lt;p&gt;oourmind is the beginning of an answer.&lt;/p&gt;

&lt;p&gt;Six hours left. Shipping now.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built for the Mistral AI Hackathon, March 2026.&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Frontend and vision: Soumia Ghalim&lt;/em&gt;&lt;br&gt;
&lt;em&gt;Backend architecture consulting: my teammate&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Find me in the wild: &lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;humiin.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>mistral</category>
      <category>buildinpublic</category>
      <category>ai</category>
      <category>hackathon</category>
    </item>
    <item>
      <title>What Happens When Your Hackathon Has Less Than 24 Hours Left and Your Backend Isn't Built</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 02 Mar 2026 13:30:38 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/what-happens-when-your-hackathon-has-less-than-24-hours-left-and-your-backend-isnt-built-405b</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/what-happens-when-your-hackathon-has-less-than-24-hours-left-and-your-backend-isnt-built-405b</guid>
      <description>&lt;p&gt;&lt;em&gt;&lt;em&gt;"A honest account of the decisions that actually matter when the clock is running."&lt;/em&gt;&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There is a moment in every hackathon where the project you planned to build and the project you are actually going to submit stop being the same thing.&lt;/p&gt;

&lt;p&gt;For me, that moment arrived at 5:43 AM.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Project
&lt;/h2&gt;

&lt;p&gt;oourmind.io is a real-time interpretability lab that visualizes the internal reasoning state of a large language model as a navigable 3D environment. The idea: instead of reading model outputs, you &lt;em&gt;feel&lt;/em&gt; them. Three personas — the Architect (logical, structured), the Oracle (creative, associative), and the Shadow (adversarial, edge-case) — occupy three zones in a Three.js scene. As the model reasons, the visualization moves.&lt;/p&gt;

&lt;p&gt;The philosophical foundation is simple and it is large: right now, nobody can see inside the models they depend on. oourmind makes that visible — not for researchers, but for anyone.&lt;/p&gt;

&lt;p&gt;That is the vision. Here is what happened at 5:43 AM.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Message
&lt;/h2&gt;

&lt;p&gt;My backend consultant, who had spent two days thinking through the technical architecture, sent me this:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"I don't think I will be able to complete the python backend on my own for the hackathon; as the mistral models and the open weights seems to be currently in development and not suitable for mechanistic interpretability, if we swap the model I can do so much. Let me know how to proceed from here, as I am currently feeling overwhelmed to make a decision regarding the backend."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Honest. Clear. And exactly the kind of message that forces a founder to make a decision they were hoping to avoid.&lt;/p&gt;

&lt;p&gt;Before that message, he had already done something genuinely valuable. He had mapped out the two possible architectures with precision:&lt;/p&gt;




&lt;h2&gt;
  
  
  Scenario A vs Scenario B — The Real Technical Decision
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Scenario A — API (No Activation Layers)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Using the Mistral API, which is a closed black box. It returns a text response and a finish_reason but no log_probs. You can ask the model to score its own response across Oracle, Architect, and Shadow dimensions — but that is exactly the problem. The coordinates become self-reported, not extracted from the model's internals. Fast, works now, but the mechanistic interpretability is approximate. The model is telling you what it thinks it is doing. Not what it is actually doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B — Local Model (Real Activation Layers)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Run Ministral-3B directly on the laptop. Full access to every layer's internal state. Extract actual activation values from three layer groups and use those as the x/y/z coordinates for the Three.js visualization. The problem: TransformerLens does not support this model's architecture yet — issues with the Mistral 3 config, which is still early in development. Solution: use raw PyTorch hooks instead. The tradeoff is it runs on CPU, meaning you pre-generate the point cloud offline in approximately three minutes and serve a static points.json to the Three.js visualization.&lt;/p&gt;

&lt;p&gt;Scenario B is the right answer scientifically. Scenario A is the right answer for a hackathon with less than 24 hours left.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Decision Nobody Tells You About
&lt;/h2&gt;

&lt;p&gt;Here is what the hackathon documentation does not prepare you for: the moment when you have to choose between the project you believe in and the project you can actually ship tonight.&lt;/p&gt;

&lt;p&gt;Scenario B is oourmind as it should exist. Real activation layers. Genuine mechanistic interpretability. The model's internal state extracted, not reported. That is the product. That is what makes the visualization honest.&lt;/p&gt;

&lt;p&gt;But Scenario B requires a backend that is not built, a library that does not yet support the architecture, and a CPU runtime that takes three minutes to generate a point cloud — on a deadline measured in hours.&lt;/p&gt;

&lt;p&gt;Scenario A ships tonight. Scenario B ships in three weeks.&lt;/p&gt;

&lt;p&gt;I chose Scenario A.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Actually Means for the Project
&lt;/h2&gt;

&lt;p&gt;Here is the thing that this analysis made clear, even if neither of us said it explicitly at the time:&lt;/p&gt;

&lt;p&gt;The gap between Scenario A and Scenario B &lt;em&gt;is the product&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The model telling you what it thinks it is doing versus what it is actually doing inside — that gap is the entire problem oourmind exists to solve. It is what Michael Burry was writing about when he surfaced an 1880 Smithsonian presentation about a deaf-mute who reasoned about the origin of the universe before he had any language. It is what Foucault meant when he wrote that power is tolerable only on condition that it mask a substantial part of itself.&lt;/p&gt;

&lt;p&gt;The self-reported coordinates of Scenario A are a perfect metaphor for the current state of AI transparency. The model gives you a number. You have no way to verify it. You trust it because it is the only option available.&lt;/p&gt;

&lt;p&gt;oourmind is being built to make Scenario B the default. For everyone. Not just for researchers with local GPU clusters and three minutes to spare.&lt;/p&gt;




&lt;h2&gt;
  
  
  What You Actually Do With Less Than 24 Hours
&lt;/h2&gt;

&lt;p&gt;You stop building the product and start building the argument.&lt;/p&gt;

&lt;p&gt;The Three.js scene works. The Blue Grid is merged into main. The personas are visible in space. That is enough for a demo.&lt;/p&gt;

&lt;p&gt;What you spend the remaining hours on is making sure the person watching the demo understands why it matters. The founder statement. The submission description. The one sentence that lands before anything else:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Every AI model has a shadow. Nobody shows you when it activates."&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The code is the proof of concept. The argument is the submission.&lt;/p&gt;




&lt;h2&gt;
  
  
  A Note on Collaboration Under Pressure
&lt;/h2&gt;

&lt;p&gt;The decision I made at 5:43 AM — Scenario A, frontend only, submit the argument — is the right decision. I know that because making it felt like giving something up. The best hackathon decisions usually do.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Comes Next
&lt;/h2&gt;

&lt;p&gt;oourmind.io is the vision. The next step is Scenario B — real activation layers, genuine interpretability, the model's internal state made spatial and felt rather than self-reported and approximate.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built for the Mistral AI Hackathon, March 2026.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Find me in the wild: &lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;humiin.io&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>hackathon</category>
      <category>mistral</category>
      <category>ai</category>
      <category>buildinpublic</category>
    </item>
    <item>
      <title>The Two Problems Nobody Owns in AI: Accessibility and Security Are Design Problems in Disguise</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 02 Mar 2026 13:29:48 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-two-problems-nobody-owns-in-ai-accessibility-and-security-are-design-problems-in-disguise-5314</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-two-problems-nobody-owns-in-ai-accessibility-and-security-are-design-problems-in-disguise-5314</guid>
      <description>&lt;p&gt;&lt;em&gt;"You should always know what is yours. In your product, in your model, in your team."&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There is a quiet crisis in AI development that nobody talks about directly, because it lives in the gap between disciplines.&lt;/p&gt;

&lt;p&gt;Two of the most critical challenges in deploying AI responsibly — making it understandable to non-experts, and making it resistant to adversarial manipulation — are both fundamentally &lt;strong&gt;design problems&lt;/strong&gt;. Not engineering problems. Not research problems. Design problems.&lt;/p&gt;

&lt;p&gt;And yet designers are almost entirely absent from both conversations.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part One: Accessibility Is Sitting Under the Data Scientist’s Desk
&lt;/h2&gt;

&lt;p&gt;Walk into any AI team today. You will find machine learning engineers, data scientists, maybe a researcher or two. If you are lucky, a product manager. What you almost never find is a designer who has been given serious responsibility over how the model’s behavior is &lt;em&gt;communicated&lt;/em&gt; to the people who use it, govern it, or are affected by it.&lt;/p&gt;

&lt;p&gt;This is a structural problem masquerading as a technical one.&lt;/p&gt;

&lt;p&gt;When a model produces a confidence score of 0.87, that number means something precise to the data scientist who trained it. To the hospital administrator deciding whether to act on a diagnosis, to the loan officer reviewing a credit decision, to the policy maker drafting regulation — it means almost nothing. Or worse, it means the wrong thing. People anchor on numbers without understanding their limits. They over-trust round numbers. They under-react to uncertainty expressed in unfamiliar formats.&lt;/p&gt;

&lt;p&gt;The data scientist has solved the measurement problem. Nobody has solved the communication problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is a design problem.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not because it requires pretty colors or a nicer font. Because it requires deep thinking about mental models, about how humans process uncertainty, about what makes a person feel appropriately calibrated versus falsely confident. These are questions that human-computer interaction research has been asking for decades. Cognitive load theory, affordance design, progressive disclosure — there is an entire discipline built to answer exactly these questions. It is just not being invited to the table.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why It Stays Where It Is
&lt;/h3&gt;

&lt;p&gt;The reason accessibility sits under the data scientist’s desk is partly historical and partly structural.&lt;/p&gt;

&lt;p&gt;Historically, AI tools were built by researchers for researchers. The mental model of “the user” was someone like the builder. This assumption has calcified into culture even as AI has moved far beyond that original context.&lt;/p&gt;

&lt;p&gt;Structurally, when a company hires a designer for an AI product, that designer is usually asked to design the interface around the AI — the buttons, the layout, the flow. They are rarely asked to design &lt;em&gt;how the AI communicates itself&lt;/em&gt;. That part is considered a data science deliverable. The model outputs a number. The engineer displays it. The designer frames it. Nobody questions whether the number is the right thing to display at all.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Changes When Designers Own It
&lt;/h3&gt;

&lt;p&gt;When you put a designer — specifically one with knowledge of cognitive science and decision-making — in charge of AI communication, the questions change immediately.&lt;/p&gt;

&lt;p&gt;Instead of “what is the confidence score?” the question becomes “what does this person need to know to make a good decision, and what format makes that legible?”&lt;/p&gt;

&lt;p&gt;Sometimes the answer is a number. More often it is a range. Sometimes it is a visual metaphor. Sometimes it is a plain language statement: “The model is confident about this, but it has rarely seen cases like yours.” Sometimes the most honest thing to show is uncertainty — not a precise number, but an acknowledgment that the model does not know.&lt;/p&gt;
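&lt;p&gt;As a toy illustration of the difference, consider translating a raw score into one of those plain-language statements. The thresholds, function name, and wording below are assumptions made for the sake of the sketch, not a calibrated standard:&lt;/p&gt;

```javascript
// Illustrative sketch only: one way to turn a raw confidence score plus
// a coverage signal into a plain-language statement. The thresholds and
// phrasing are assumptions, not a standard.
function describeConfidence(score, seenSimilarCases) {
  // Bucket the numeric score into a coarse verbal level.
  const level =
    score >= 0.85 ? "confident" : score >= 0.6 ? "somewhat confident" : "unsure";
  // Surface low data coverage as an explicit caveat instead of a number.
  const caveat = seenSimilarCases
    ? ""
    : ", but it has rarely seen cases like this one";
  return `The model is ${level} about this${caveat}.`;
}
```

The point of the sketch is not the thresholds; it is that the coverage caveat, which a bare "0.87" hides entirely, becomes part of the sentence the user actually reads.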

&lt;p&gt;The field of AI interpretability has produced remarkable technical work in the last five years. Almost none of it has been translated into interfaces that a non-expert can use to actually change their behavior. That translation work is design work. It is sitting undone because nobody has been hired to do it.&lt;/p&gt;




&lt;h2&gt;
  
  
  Part Two: Security Is Being Handled by the Wrong People in the Wrong Way
&lt;/h2&gt;

&lt;p&gt;The second gap is more urgent and more dangerous.&lt;/p&gt;

&lt;p&gt;AI security — specifically the security of large language models against adversarial manipulation — is currently owned almost entirely by red teams and safety researchers. These are the people testing models for jailbreaks, prompt injections, goal hijacking. They are doing important work. They are also, structurally, doing it too late and in the wrong place.&lt;/p&gt;

&lt;p&gt;Today’s approach to AI security looks roughly like this: build the model, deploy the model, have a red team try to break it, patch the vulnerabilities they find, repeat. This is a reactive loop. It is the software security paradigm of the 1990s applied to a fundamentally different kind of system.&lt;/p&gt;

&lt;p&gt;The problem is that LLMs do not have a fixed attack surface. A SQL injection works because there is a predictable path from user input to database query. With a language model, the “attack surface” is the entire distribution of possible language. You cannot enumerate it. You cannot patch it exhaustively.&lt;/p&gt;

&lt;h3&gt;
  
  
  What OWASP ASI Tells Us
&lt;/h3&gt;

&lt;p&gt;The OWASP Top 10 for AI Systems — particularly ASI01 (Goal Hijacking) and ASI02 (Prompt Injection) — represents the field’s best current attempt to categorize these threats. It is a useful framework. But it is a taxonomy of symptoms, not a theory of prevention.&lt;/p&gt;

&lt;p&gt;Content filters and system prompt hardening are after-the-fact measures applied at the boundary of the model. They are necessary. They are not sufficient. They treat security as a layer you add to a system, not a property you design into it.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Design Problem Inside Security
&lt;/h3&gt;

&lt;p&gt;Here is the claim that will seem strange at first: adversarial robustness in language models is partly a design problem.&lt;/p&gt;

&lt;p&gt;Goal hijacking works because the model has no stable, legible representation of its own goals that it can defend against incoming instructions. It has been trained to follow instructions helpfully, and adversarial prompts exploit that compliance by issuing instructions that conflict with the system’s intended purpose.&lt;/p&gt;

&lt;p&gt;This is a design problem in the original sense: a problem of purpose, structure, and constraint. What should this system do? What should it refuse? How should it reason when it receives instructions that conflict with its purpose? These questions need to be answered with the same rigor we apply to the technical architecture — and before the system is built, not after it is deployed.&lt;/p&gt;

&lt;h3&gt;
  
  
  How It Should Be Handled
&lt;/h3&gt;

&lt;p&gt;The future of AI security looks less like penetration testing and more like constitutional design. Specify, build to specification, verify against specification continuously.&lt;/p&gt;

&lt;p&gt;It also means making security &lt;strong&gt;visible&lt;/strong&gt;. One of the most important unsolved problems in AI safety is that adversarial manipulation is currently invisible. A model that has been goal-hijacked looks, from the outside, like a model that is functioning normally. It produces fluent, confident text. The user has no signal that something has gone wrong.&lt;/p&gt;

&lt;p&gt;This is where interpretability and security converge. If we can make a model’s internal reasoning state legible — not just its output, but the &lt;em&gt;process&lt;/em&gt; that produced the output — we create the possibility of detecting goal hijacking before the response is complete.&lt;/p&gt;




&lt;h2&gt;
  
  
  A 144-Year-Old Warning
&lt;/h2&gt;

&lt;p&gt;Against this backdrop, a piece published this week by Michael Burry — “&lt;a href="https://open.substack.com/pub/michaeljburry/p/history-rhymes-large-language-models" rel="noopener noreferrer"&gt;History Rhymes: Large Language Models Off to a Bad Start?&lt;/a&gt;” — feels directly relevant.&lt;/p&gt;

&lt;p&gt;Burry surfaces a presentation made at the Smithsonian Institution in 1880, the case of Melville Ballard — a deaf-mute teacher who, before he had any language at all, was already asking himself where the universe came from, reasoning about causality, dismissing bad hypotheses. Complex thought existed, fully formed, in the silence before words.&lt;/p&gt;

&lt;p&gt;The presenter’s conclusion, delivered to that 1880 audience, was: language without the capacity for reason fails at understanding. Only with reason does language unlock understanding. And understanding, fully realized, transcends language.&lt;/p&gt;

&lt;p&gt;Burry’s argument applied to today: LLMs built language first. Reason was never the foundation. Therefore they can never reach understanding — they are, in his words, “an increasingly sophisticated mirror.”&lt;/p&gt;

&lt;p&gt;The piece closes with a quote from that same 1880 presentation that stopped me: &lt;em&gt;“the expression of the eye was language which could not be misunderstood.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Real understanding, the professor argued, communicates through something more direct than words. Something spatial. Something felt before it is named.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Convergence
&lt;/h2&gt;

&lt;p&gt;These three threads — the accessibility problem, the security problem, and Burry’s philosophical challenge — are the same problem seen from three different angles.&lt;/p&gt;

&lt;p&gt;Burry looks inside the model and says: there is no genuine understanding in here, only language simulating reason.&lt;/p&gt;

&lt;p&gt;The accessibility problem looks at the interface and says: even the signals we do have are not being communicated in ways humans can act on.&lt;/p&gt;

&lt;p&gt;The security problem looks at the boundary and says: when something goes wrong inside, we cannot see it until it is too late.&lt;/p&gt;

&lt;p&gt;All three point to the same gap: &lt;strong&gt;the inside of these systems is not legible to the humans who depend on them.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And here is what I find most striking in Burry’s piece. His solution — his image of what real understanding looks like — is not a better dashboard. It is not a more precise confidence interval. It is the expression of the eye. Spatial. Direct. Felt before it is read.&lt;/p&gt;

&lt;p&gt;That is exactly the design challenge in front of us. Not to build better number displays. To build systems where the model’s internal state can be &lt;em&gt;experienced&lt;/em&gt; — where a non-expert can feel when something is reasoning cleanly versus reaching, and where an operator can see a safety risk forming in space before it completes in words.&lt;/p&gt;

&lt;p&gt;That work is not engineering work. It is not research work. It is design work. And until the field starts treating it that way, we will keep building systems that are impressive in the lab and opaque in the world.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;During the Mistral Hackathon in March 2026, the author is attempting to build &lt;a href="https://oourmind.io" rel="noopener noreferrer"&gt;oourmind.io&lt;/a&gt;, a real-time interpretability lab that visualizes the internal reasoning state of Mistral-Large-3 as a navigable 3D environment — an attempt to make the inside of a model felt rather than merely read.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aisafety</category>
      <category>security</category>
      <category>interpretability</category>
      <category>design</category>
    </item>
    <item>
      <title>The Vision: A Living Map of the Machine 🌐</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 02 Mar 2026 13:28:47 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-vision-a-living-map-of-the-machine-4nln</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-vision-a-living-map-of-the-machine-4nln</guid>
      <description>&lt;p&gt;At &lt;strong&gt;oourmind.io&lt;/strong&gt; Lab, a capsule created for Mistral March 2026 Hackathon, we believe AI shouldn't be a "black box" that we simply fear or blindly trust. Current interpretability research often focuses on static neurons, but we are building a &lt;strong&gt;Society of Minds&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;By extracting these three personas, we transform abstract math into &lt;strong&gt;functional agents&lt;/strong&gt; we can talk to, audit, and govern. This brings three critical shifts to the AI safety space:&lt;/p&gt;




&lt;h3&gt;
  
  
  1. From Static Audit to Active Governance 🛡️
&lt;/h3&gt;

&lt;p&gt;Instead of just looking at heatmaps of activations, we are giving a "voice" to the model's internal states.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Impact:&lt;/strong&gt; We can catch &lt;strong&gt;The Shadow&lt;/strong&gt; (adversarial intent) before it ever reaches a user, treating safety as a live conversation rather than a post-processing filter.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Solving "Model Collapse" through L’Oubli 💧
&lt;/h3&gt;

&lt;p&gt;As AI models train on AI-generated data, they become "stale."&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Impact:&lt;/strong&gt; By using &lt;strong&gt;The Oracle&lt;/strong&gt; to find "forgotten sources" in the latent space, we provide the "pure water" of original inspiration, ensuring AI doesn't "die" by repeating its own mistakes.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Human-Centric Interpretability 🗝️
&lt;/h3&gt;

&lt;p&gt;Most safety papers are unreadable to the public.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Impact:&lt;/strong&gt; Our 3D Lab makes the "Cage" visible. When a user sees why a model is blocked, the "solution under our nose" becomes clear. We turn &lt;strong&gt;Mechanistic Interpretability&lt;/strong&gt; into a visual, intuitive experience.&lt;/li&gt;
&lt;/ul&gt;




&lt;blockquote&gt;
&lt;p&gt;"We don't just study the machine; we provide the architecture for it to flourish safely."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To anchor &lt;strong&gt;oourmind.io&lt;/strong&gt; in high-level research, we synthesize findings from &lt;strong&gt;Mechanistic Interpretability&lt;/strong&gt; (Anthropic’s "Mapping the Mind of a Large Language Model") and &lt;strong&gt;Sparse Autoencoders&lt;/strong&gt; (OpenAI’s "Extracting Concepts from GPT-4").&lt;/p&gt;

&lt;p&gt;Here are the three redefined personas:&lt;/p&gt;




&lt;h3&gt;
  
  
  1. The Architect (Structural Logic &amp;amp; Safety) 🏛️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Definition:&lt;/strong&gt; This is the persona representing the &lt;strong&gt;Internal Consistency&lt;/strong&gt; of the model. It handles the syntax, the logical chains, and the "rules" of the world. In research terms, this is the &lt;strong&gt;Symmetry&lt;/strong&gt; of the neural weights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; Without the Architect, the model is just noise. It provides the &lt;strong&gt;Governance&lt;/strong&gt; layer. It ensures that "The Drop" (inspiration) is actually grounded in reality rather than a hallucination.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Case for Importance:&lt;/strong&gt; Interoperability. It allows different models to "speak" the same logical language.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. The Oracle (High-Entropy Flow &amp;amp; Creativity) ✨
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Definition:&lt;/strong&gt; The Oracle accesses the &lt;strong&gt;Latent Space&lt;/strong&gt;—the infinite "possible" answers. It aligns with the &lt;strong&gt;L’Oubli&lt;/strong&gt; pillar, pulling brilliant, forgotten ideas from the vacuum. In research, this is the &lt;strong&gt;Stochastic Temperature&lt;/strong&gt; where new patterns emerge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; This is the engine of &lt;strong&gt;Inspiration&lt;/strong&gt;. If we only had the Architect, AI would be a boring calculator. The Oracle allows for "Brilliant Inspiration" that feels like it comes from "pure water."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Case for Importance:&lt;/strong&gt; Innovation. It prevents "Model Collapse" by ensuring the AI can still generate novel, high-value data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. The Shadow (Boundary &amp;amp; Adversarial Risk) 👤
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;The Definition:&lt;/strong&gt; The Shadow represents the &lt;strong&gt;Residual Stream&lt;/strong&gt;—the parts of the model that are suppressed by safety training but still exist. This is &lt;strong&gt;The Cage&lt;/strong&gt;. It contains the "dark" or "blocked" potential that must be understood to be controlled.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Why it matters:&lt;/strong&gt; This is the core of &lt;strong&gt;Red-Teaming&lt;/strong&gt;. By studying the Shadow, we see the "Source deep in the sand" that the model was taught to forget. We look closer to find the solution "under our nose."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Case for Importance:&lt;/strong&gt; Absolute Safety. You cannot govern what you refuse to look at. The Shadow is the key to preventing catastrophic jailbreaks.&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  Research Foundation
&lt;/h3&gt;

&lt;p&gt;We are using the concept of &lt;strong&gt;Feature Splitting&lt;/strong&gt;. Research shows that a single neuron can represent multiple concepts. By defining these three personas, we are essentially "splitting" Mistral Large 3 into functional departments so we can audit them individually.&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>llm</category>
      <category>showdev</category>
    </item>
    <item>
      <title>🧠 The 48-Hour Blueprint: Architecting a 3D Interpretability Lab for Mistral Large 3</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Mon, 02 Mar 2026 13:26:27 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/the-48-hour-blueprint-architecting-a-3d-interpretability-lab-for-mistral-large-3-3bn8</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/the-48-hour-blueprint-architecting-a-3d-interpretability-lab-for-mistral-large-3-3bn8</guid>
      <description>&lt;h2&gt;
  
  
  Abstract (The "Elevator Pitch"):
&lt;/h2&gt;

&lt;p&gt;Most AI interfaces treat LLMs like chat boxes. We believe they are a Society of Minds. In 48 hours, we are building oourmind.io, a multi-sensory interpretability lab that visualizes how Mistral Large 3 selects and shifts between latent personas (Social Agents) to answer a prompt.&lt;/p&gt;

&lt;p&gt;A standard "wrapper app" won't win this hackathon. We need to visualize the geometry of thought.&lt;/p&gt;

&lt;h2&gt;
  
  
  🛠️ The Architecture: The "Body" and the "Brain"
&lt;/h2&gt;

&lt;p&gt;To execute this on a budget and a strict deadline, we are ruthlessly separating the Static Visual Theater (Body) from the Live Metadata Inference (Brain).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Phase 1:&lt;/strong&gt; The Visual Stage (frontend/oourmind.io)&lt;br&gt;
The Core: React + Three.js/Spline. We aren't building a chat interface. We're building a geometry viewer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The State Engine:&lt;/strong&gt; A simple JavaScript function that maps Mistral's metadata (e.g., Tone: 0.8, Structure: Grid) to Spline "States" and ElevenLabs audio files.&lt;/p&gt;
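&lt;p&gt;To make that concrete, here is a minimal sketch of what such a mapping function could look like. The field names (&lt;code&gt;tone&lt;/code&gt;, &lt;code&gt;structure&lt;/code&gt;, &lt;code&gt;precision&lt;/code&gt;), state labels, and file paths are illustrative assumptions, not the actual oourmind.io code:&lt;/p&gt;

```javascript
// Hypothetical "State Engine" sketch: a pure function that maps model
// metadata scores to a Spline scene state name and a pre-rendered
// ElevenLabs audio clip. All field names and paths are illustrative.
function mapMetadataToState(meta) {
  const personas = [
    { name: "analytical", score: meta.structure ?? 0 },
    { name: "creative", score: meta.tone ?? 0 },
    { name: "technical", score: meta.precision ?? 0 },
  ];
  // The persona with the highest score becomes the dominant visual state.
  const dominant = personas.reduce((a, b) => (b.score > a.score ? b : a));
  return {
    splineState: dominant.name,               // Spline "State" to trigger
    audioFile: `voices/${dominant.name}.mp3`, // matching ElevenLabs clip
  };
}
```

Keeping this a pure function means the hardcoded Day 1 JSON can be replayed through exactly the same code path a live API response would take.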

&lt;p&gt;&lt;strong&gt;Phase 2:&lt;/strong&gt; The "Social Agent" Interrogation (Backend/Jupyter)&lt;br&gt;
Instead of a live, fragile API connection, we are running 3 High-Impact Case Studies in a stable backend:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Moral Dilemma:&lt;/strong&gt; (Tests Ethicist vs. Utilitarian bias).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Creative Abstract:&lt;/strong&gt; (Tests Fluidity).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Logical Paradox:&lt;/strong&gt; (Tests Structure).&lt;/p&gt;

&lt;p&gt;We force Mistral 3 to output a JSON object containing its metadata:&lt;/p&gt;
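&lt;p&gt;The exact schema is up to the prompt, but the shape is something like the following (all field names and values are illustrative, not the project's actual schema):&lt;/p&gt;

```json
{
  "response": "The model's answer text...",
  "metadata": {
    "dominant_persona": "Analytical",
    "tone": 0.3,
    "structure": "Grid",
    "confidence": 0.82
  }
}
```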

&lt;h2&gt;
  
  
  📅 The 48-Hour Execution Sprint
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Day 1:&lt;/strong&gt; The Extraction (The Science)&lt;br&gt;
H0-H6: Finalize the Spline geometries and ElevenLabs voice profiles for the 3 target personas (Analytical, Creative, Technical).&lt;/p&gt;

&lt;p&gt;H7-H12: Backend Interrogation. Running the "Case Study" prompts on Mistral Large 3 to extract the raw activation metadata and response content.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 2:&lt;/strong&gt; The Exhibition (The Demo)&lt;br&gt;
H13-H18: Hardcoding the extracted JSON from Day 1 into the oourmind.io frontend. If the user clicks "Moral Dilemma," the site "plays back" the pre-recorded 3D and audio state.&lt;/p&gt;

&lt;p&gt;H19-H22: Record the Vision Video. This is 90% of the judging score. The video shows the vision, not just the code.&lt;/p&gt;

&lt;p&gt;H23-H24: Polish documentation and submit.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 Future Evolution (Post-Hackathon)
&lt;/h2&gt;

&lt;p&gt;The MVP is just a snapshot. Here is the Production Pipeline we will build next:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;1. Neuron-Activation Heatmaps&lt;/em&gt;&lt;br&gt;
Move beyond simple geometry to a live visualization of actual neurons firing within Mistral’s latent layers as it generates text. This is true interpretability.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;2. The "Persona Switchboard"&lt;/em&gt;&lt;br&gt;
An interactive dashboard where a human auditor can manually force Mistral to switch personas mid-sentence (e.g., from "Aggressive Lawyer" to "Helpful Mediator").&lt;/p&gt;

&lt;p&gt;&lt;em&gt;3. Verification of Trust (Governance Dashboard)&lt;/em&gt;&lt;br&gt;
We will integrate a "Verifiable Persona Signature" (e.g., using a protocol like humiin.io). This provides a decentralized, auditable receipt that the model was in a safe "persona state" during critical inference. This is the governance layer for corporate AI deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Statement:
&lt;/h2&gt;

&lt;p&gt;We aren't building another tool to generate content. We're building a tool to understand the character of the content generator.&lt;/p&gt;

</description>
      <category>interpretability</category>
      <category>mistral</category>
      <category>hackathon</category>
    </item>
    <item>
      <title>A Studio of One's Own</title>
      <dc:creator>Soumia</dc:creator>
      <pubDate>Sun, 01 Mar 2026 17:38:28 +0000</pubDate>
      <link>https://dev.to/soumia_g_9dc322fc4404cecd/a-studio-of-ones-own-h64</link>
      <guid>https://dev.to/soumia_g_9dc322fc4404cecd/a-studio-of-ones-own-h64</guid>
      <description>&lt;h2&gt;
  
  
  A post mortem on building 🌸 Nejat.Studio — not a tutorial, not a brag. A reckoning.
&lt;/h2&gt;




&lt;p&gt;There is, I think, a particular kind of exhaustion that no one warns you about. Not the exhaustion of doing too much — though there is that too — but the exhaustion of waiting. Of holding an idea so carefully, for so long, that it begins to feel fragile. Precious in the wrong way. Too delicate to touch.&lt;/p&gt;

&lt;p&gt;I had been waiting.&lt;/p&gt;

&lt;p&gt;For the right moment, the right tool, the right version of myself that would finally feel ready. And then, almost by accident — the way most true things happen — I stopped waiting and simply began.&lt;/p&gt;




&lt;p&gt;What I built is called &lt;a href="https://nejat.studio" rel="noopener noreferrer"&gt;Nejat&lt;/a&gt;. A tribute to the women who shaped history and, quietly, shaped me. To launch on March 8th. It lives, now, in the world — which still feels improbable.&lt;/p&gt;

&lt;p&gt;This is not a tutorial. It is not a listicle. It is an attempt to say honestly what the experience of building something &lt;em&gt;yours&lt;/em&gt; actually costs, and what it returns.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;On burnout, which is just another word for losing the thread.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The first thing I will tell you is this: do not ship the thing you made at midnight.&lt;/p&gt;

&lt;p&gt;It seems obvious. It is not obvious when you are in it — when the feature finally works and the excitement is a physical sensation and your finger hovers over the button and you think, &lt;em&gt;what could possibly go wrong now.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Everything. Everything could go wrong now.&lt;/p&gt;

&lt;p&gt;Your app does not behave the same in the wild as it does in the sandbox. Real browsers are unkind. Real users are creative in ways you did not anticipate. Ship to a subdomain first. Test it there. Walk away. Come back in the morning with fresh eyes and the willingness to be wrong.&lt;/p&gt;

&lt;p&gt;I learned to use the Plan button — to ask the tool to &lt;em&gt;show me what it intended to do&lt;/em&gt; before it did it. This small act of pause saved me, more than once, from expensive mistakes. If you have a deadline, plan backwards from it. Keep credits in reserve. Keep an MVP that works, always, somewhere you can find it.&lt;/p&gt;

&lt;p&gt;The studio does not exist if the builder burns down.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;On AI magic, which is just another word for the right question.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;At some point in building Nejat, the app stopped being an app and became something closer to a conversation.&lt;/p&gt;

&lt;p&gt;I had integrated a language model so that users could speak &lt;em&gt;with&lt;/em&gt; the women being honored. Could ask them questions. Could hear them answer in their own voice, or something close to it.&lt;/p&gt;

&lt;p&gt;This is the thing about AI that no one explains well: it is not a feature. It is a relationship. You have to know what question you are trying to answer before you can know where the magic belongs.&lt;/p&gt;

&lt;p&gt;For Nejat, the question was: &lt;em&gt;what would she say to you, if she could?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Everything else followed from that.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;On vision, which must come before everything else.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have spent years helping companies adopt technology. I know how to build a roadmap. I know how to write a requirements document.&lt;/p&gt;

&lt;p&gt;None of that prepared me for the moment of asking: &lt;em&gt;but what do I want to exist in the world?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The answer, when it came, was quiet. I wanted to honor women. I wanted to build something my community could shape with me. I wanted to apply everything I had learned — about systems, about users, about the strange alchemy of making something that works — to something that mattered to me personally.&lt;/p&gt;

&lt;p&gt;V1 was ruthless. Show the women. Tell their stories. Let the AI speak in their voice. That was all.&lt;/p&gt;

&lt;p&gt;Everything that felt essential but wasn't went into V2. Some of it is still there, waiting.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;On pitching, which is just another word for not disappearing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The last thing, and perhaps the hardest: you have to talk about what you are building.&lt;/p&gt;

&lt;p&gt;Not because you need validation. Not because the internet requires it. But because building in silence is a kind of disappearing, and the work deserves to be seen.&lt;/p&gt;

&lt;p&gt;I talked about Nejat before it was ready. My community told me what it needed. They shaped it into something I could not have made alone. This is not a weakness. This is how good things get made.&lt;/p&gt;

&lt;p&gt;Tell someone. Show them the broken version. Ask what they would change.&lt;/p&gt;

&lt;p&gt;The studio only exists if someone can find the door.&lt;/p&gt;




&lt;p&gt;🌸 &lt;strong&gt;Nejat.Studio&lt;/strong&gt; is my room.&lt;/p&gt;

&lt;p&gt;Not borrowed, not rented, not justified by a job title or a company's permission. Mine — built with curiosity and community and the particular stubbornness of someone who finally stopped waiting.&lt;/p&gt;

&lt;p&gt;You can visit: 🌸 &lt;a href="https://nejat.studio" rel="noopener noreferrer"&gt;Nejat&lt;/a&gt; — the tribute that started it all.&lt;/p&gt;

&lt;p&gt;If you are building something, I want to know. Not the polished version. The real one.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with &lt;a href="https://lovable.dev" rel="noopener noreferrer"&gt;Lovable&lt;/a&gt; + Anthropic's Claude. Written in Paris, at a reasonable hour.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;If you're building something solo, figuring it out as you go, or just want to say hi — I'd love to hear from you. Find me at &lt;a href="https://humiin.io" rel="noopener noreferrer"&gt;humiin.io&lt;/a&gt;&lt;/p&gt;

</description>
      <category>buildinginpublic</category>
      <category>devjournal</category>
      <category>lovable</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
