<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Torben Haack</title>
    <description>The latest articles on DEV Community by Torben Haack (@t128n).</description>
    <link>https://dev.to/t128n</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3058143%2F71fceb57-3d34-4d35-9164-3bb2eca67bcd.png</url>
      <title>DEV Community: Torben Haack</title>
      <link>https://dev.to/t128n</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/t128n"/>
    <language>en</language>
    <item>
      <title>Shipping npm Packages Offline — Right in Your Browser</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/shipping-npm-packages-offline-right-in-your-browser-3m2e</link>
      <guid>https://dev.to/t128n/shipping-npm-packages-offline-right-in-your-browser-3m2e</guid>
      <description>&lt;p&gt;For years, at work and in my own projects, I kept running into the same hurdle:&lt;br&gt;
installing npm packages in air-gapped or locked-down environments. Most&lt;br&gt;
solutions lean on backend services, shell scripts, or private registries. I&lt;br&gt;
wanted something simpler, more transparent, and truly portable.&lt;/p&gt;

&lt;p&gt;So I built Packy: a browser-based utility that bundles any npm package plus all&lt;br&gt;
its dependencies into a single tarball for offline use. No backend. No server.&lt;br&gt;
Just your browser doing the work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://t128n.github.io/packy/" rel="noopener noreferrer"&gt;https://t128n.github.io/packy/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;If you've ever debugged inside a secure network, on a factory floor, or in a&lt;br&gt;
classroom PC without internet, you know the pattern. You need one more package,&lt;br&gt;
but you can't run &lt;code&gt;npm install&lt;/code&gt;. Copying files around by hand is brittle and&lt;br&gt;
slow. Standing up a local registry is overkill and often blocked by policy.&lt;/p&gt;

&lt;p&gt;I wanted a tool I could open in any modern browser, type a package and version,&lt;br&gt;
and end up with a single archive that "just works" offline later.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Packy Does
&lt;/h2&gt;

&lt;p&gt;Packy resolves, fetches, and packages your target npm module and all of its&lt;br&gt;
transitive dependencies into one tarball you can move by USB, shared drive, or&lt;br&gt;
sneakernet. Everything happens locally in the browser via WebContainers. No API&lt;br&gt;
keys, no telemetry, no server.&lt;/p&gt;

&lt;p&gt;Concretely, Packy orchestrates the same steps you'd do by hand, but automated&lt;br&gt;
and sandboxed:&lt;/p&gt;

&lt;p&gt;1) It runs &lt;code&gt;npm i&lt;/code&gt; to resolve and materialize the full dependency&lt;br&gt;
tree in an isolated filesystem.&lt;/p&gt;

&lt;p&gt;2) It rewrites the &lt;code&gt;package.json&lt;/code&gt; of the selected package so that its dependencies are included in the bundle.&lt;/p&gt;

&lt;p&gt;3) It runs &lt;code&gt;npm pack&lt;/code&gt; to generate a single tarball that contains the&lt;br&gt;
package and its dependencies, ready for offline delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why a Browser-Only Approach?
&lt;/h2&gt;

&lt;p&gt;Zero setup and zero trust surface. A browser app:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;requires no ongoing maintenance&lt;/li&gt;
&lt;li&gt;works cross-platform&lt;/li&gt;
&lt;li&gt;needs zero setup&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It reduces friction. You get in, get a clean archive and get moving.&lt;/p&gt;

&lt;h2&gt;
  
  
  When It's Useful
&lt;/h2&gt;

&lt;p&gt;Packy shines when you need to move fast without internet or infrastructure. It's&lt;br&gt;
ideal for shipping Node.js apps into locked-down or air-gapped environments&lt;br&gt;
where online installs aren't an option. It's equally at home in teaching&lt;br&gt;
contexts (workshops, classrooms and trainings) where connectivity can be flaky or&lt;br&gt;
restricted and you need a predictable setup. It also doubles as a dependable way&lt;br&gt;
to archive dependencies for reproducible builds or long-term snapshots. And when&lt;br&gt;
you're heading into the field, Packy helps you prepare "dev kits" that contain&lt;br&gt;
everything required to get unstuck.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Notes
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;WebContainers provide a Node-like runtime inside the browser with a virtual
filesystem and process API.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm i&lt;/code&gt; runs inside that sandbox, producing a real, resolved
&lt;code&gt;node_modules&lt;/code&gt; tree.&lt;/li&gt;
&lt;li&gt;The package's &lt;code&gt;package.json&lt;/code&gt; is rewritten so that all dependencies are included when it is packaged.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;npm pack&lt;/code&gt; emits a standard tarball you can store, move, and
install from later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach mirrors npm's semantics closely while remaining transparent and&lt;br&gt;
auditable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It
&lt;/h2&gt;

&lt;p&gt;Packy is open-source and evolving. Use it here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://t128n.github.io/packy/" rel="noopener noreferrer"&gt;https://t128n.github.io/packy/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you're curious how it works or want to contribute, dive into the codebase,&lt;br&gt;
file issues or suggest improvements.&lt;/p&gt;

</description>
      <category>npm</category>
      <category>dx</category>
      <category>webcontainers</category>
      <category>react</category>
    </item>
    <item>
      <title>Summarizing Videos with AI</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Wed, 30 Jul 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/summarizing-videos-with-ai-4hen</link>
      <guid>https://dev.to/t128n/summarizing-videos-with-ai-4hen</guid>
      <description>&lt;p&gt;Just today, a casual chat with a colleague sparked an idea that I couldn't wait to try implementing after work: building an AI-powered video summarizer. The result is &lt;code&gt;vid&lt;/code&gt;, &lt;br&gt;
a proof-of-concept project designed to distill video content into summaries.&lt;/p&gt;

&lt;p&gt;The core idea behind &lt;code&gt;vid&lt;/code&gt; is to leverage both the audio and visual streams of a video, process them with specialized AI models, &lt;br&gt;
and then synthesize these diverse insights into a coherent, high-value summary.&lt;/p&gt;

&lt;h3&gt;
  
  
  The &lt;code&gt;vid&lt;/code&gt; Pipeline: How It Works
&lt;/h3&gt;

&lt;p&gt;Here's a quick look under the hood at the workflow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Audio Extraction &amp;amp; Transcription:&lt;/strong&gt; The journey begins with &lt;code&gt;FFmpeg&lt;/code&gt;, which extracts the audio track from the input video. This audio is then fed into &lt;code&gt;OpenAI Whisper&lt;/code&gt;, a powerful speech-to-text model, generating a detailed transcript complete with timestamps. This gives &lt;code&gt;vid&lt;/code&gt; the spoken content of the video.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intelligent Frame Selection:&lt;/strong&gt; For the visual component, &lt;code&gt;OpenCV&lt;/code&gt; is at the heart of the process. Rather than extracting every single frame (which would be inefficient and redundant), &lt;code&gt;vid&lt;/code&gt; processes frames to identify those with significant visual changes. A subsequent filtering step is applied to remove visually similar frames, ensuring that only truly distinct moments — "key frames" — are captured. This keeps the visual data meaningful and focused.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Visual Context with Gemma:&lt;/strong&gt; Each carefully selected key frame is then sent to &lt;code&gt;Gemma 3&lt;/code&gt;, which I'm running locally via &lt;code&gt;Ollama&lt;/code&gt;. &lt;code&gt;Gemma&lt;/code&gt; analyzes these images and generates precise textual descriptions of their content. This crucial step enriches the video's understanding beyond mere spoken words, adding vital visual context that a transcript alone can't provide.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Unified Summarization:&lt;/strong&gt; Finally, the multimodal magic happens. Both the comprehensive audio transcript and the detailed visual descriptions of the key frames are combined. This rich, integrated input is then fed back into &lt;code&gt;Gemma 3&lt;/code&gt;. With this holistic view of the video's content and visuals, &lt;code&gt;Gemma&lt;/code&gt; is prompted to generate the ultimate concise summary, capturing the essence of the entire video.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
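&lt;p&gt;The filtering in step 2 can be illustrated with a toy difference filter: keep a frame only when it differs enough from the last kept frame. The mean-absolute-difference metric and the threshold below are stand-ins for the actual OpenCV logic:&lt;/p&gt;

```javascript
// Toy sketch of key-frame selection (step 2). Frames are plain arrays of
// pixel intensities; `vid` itself uses OpenCV, and this metric/threshold
// pair is illustrative only.
function meanAbsDiff(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum / a.length;
}

function selectKeyFrames(frames, threshold = 30) {
  const keyFrames = [];
  let lastKept = null;
  for (const frame of frames) {
    // Keep a frame only if it is visually distinct from the last key frame.
    if (lastKept === null || meanAbsDiff(frame, lastKept) > threshold) {
      keyFrames.push(frame);
      lastKept = frame;
    }
  }
  return keyFrames;
}
```

&lt;p&gt;Comparing against the last &lt;em&gt;kept&lt;/em&gt; frame (rather than the immediately preceding one) is what drops long runs of near-identical frames while preserving each distinct moment.&lt;/p&gt;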

&lt;h3&gt;
  
  
  System Overview
&lt;/h3&gt;

&lt;p&gt;For a visual walkthrough of how these components fit together, check out the system mockup:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ft128n%2Fproof-of-concept-vid%2Fraw%2Fmain%2Fsystem.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Ft128n%2Fproof-of-concept-vid%2Fraw%2Fmain%2Fsystem.png" alt="System Overview" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Involved
&lt;/h3&gt;

&lt;p&gt;This project is very much a proof of concept, but it establishes a robust foundation for more advanced video understanding and summarization systems. If you're intrigued by the technical implementation or want to experiment with it yourself, the code is open-source and available on GitHub.&lt;/p&gt;

&lt;p&gt;Explore &lt;code&gt;vid&lt;/code&gt; on GitHub: &lt;a href="https://github.com/t128n/proof-of-concept-vid" rel="noopener noreferrer"&gt;https://github.com/t128n/proof-of-concept-vid&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>opencv</category>
      <category>whisper</category>
      <category>gemma</category>
    </item>
    <item>
      <title>FiDuP Study Notes: My AP2 Materials for Fachinformatiker Daten- und Prozessanalyse</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Fri, 23 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/fidup-lernzettel-meine-ap2-materialien-fur-fachinformatiker-daten-und-prozessanalyse-3m0g</link>
      <guid>https://dev.to/t128n/fidup-lernzettel-meine-ap2-materialien-fur-fachinformatiker-daten-und-prozessanalyse-3m0g</guid>
      <description>&lt;p&gt;Ende 2024 habe ich meine Abschlussprüfung Teil 2 (AP2) als Fachinformatiker für Daten- und Prozessanalyse (FiDuP) erfolgreich abgeschlossen – mit einer Abschlussnote von 1,4. Die Vorbereitung war intensiv, aber strukturiert. Heute teile ich meine kompletten Lernzettel als Open-Source-Projekt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why I'm Publishing My Study Notes
&lt;/h2&gt;

&lt;p&gt;Education should be accessible. During my preparation I noticed how fragmented the available study materials for FiDuP are. Many resources are either nonexistent, paid, or incomplete.&lt;/p&gt;

&lt;p&gt;My study notes didn't just help me pass – they earned me a very good grade. Making these materials freely available is my contribution to a better training landscape in IT.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the Repository Contains
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/t128n/fidup" rel="noopener noreferrer"&gt;fidup repository&lt;/a&gt; is organized modularly and covers all exam-relevant topic areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data modeling and database design&lt;/strong&gt;: from ERDs to normalization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Programming and scripting languages&lt;/strong&gt;: Python, SQL, and more&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data analysis and visualization&lt;/strong&gt;: statistical methods and tools&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process analysis and optimization&lt;/strong&gt;: BPMN, workflow design&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project management&lt;/strong&gt;: agile methods, classical PM&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IT security and data protection&lt;/strong&gt;: GDPR, compliance, backup strategies&lt;/li&gt;
&lt;li&gt;... and many more topics&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each topic is broken into compact, digestible sections. The notes are based on the Erwartungshorizont (the exam's expectation framework) for winter 2024/25, but most of the content remains relevant for future exams as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who Are These Materials For?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Primary audience&lt;/strong&gt;: trainees in the FiDuP track preparing for the AP2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Secondary audience&lt;/strong&gt;: career changers, retrainees, and experienced developers seeking formal certification.&lt;/p&gt;

&lt;p&gt;The materials assume basic IT knowledge but explain more complex concepts from the ground up.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Note on Responsibility
&lt;/h2&gt;

&lt;p&gt;These study notes helped &lt;strong&gt;me&lt;/strong&gt;, but they are not flawless. The expectation framework can change, and individual learning styles vary. Use them as a supplement to official materials, not a replacement.&lt;/p&gt;

&lt;p&gt;I accept no liability for inaccuracies or outdated content. The responsibility for your exam preparation lies with you.&lt;/p&gt;

&lt;h2&gt;
  
  
  How You Can Contribute
&lt;/h2&gt;

&lt;p&gt;The repository lives on community input. You can help by:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;opening issues&lt;/strong&gt; for errors or missing content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;submitting pull requests&lt;/strong&gt; with improvements or additions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;sharing feedback&lt;/strong&gt; about your experience with the materials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Contributions from people who have already taken the exam, or who know about current changes to the expectation framework, are especially valuable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Open Source?
&lt;/h2&gt;

&lt;p&gt;Education thrives on collaboration, not competition. By making my study notes public, I enable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transparency&lt;/strong&gt;: everyone can see how the materials came about&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improvement&lt;/strong&gt;: the community can correct errors and add content&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accessibility&lt;/strong&gt;: no cost, no registration, no paywalls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open source also means sustainability. Even if I stop actively maintaining the project, the community can carry it on.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Next Step
&lt;/h2&gt;

&lt;p&gt;If you're preparing for the FiDuP AP2: take a look at the &lt;a href="https://github.com/t128n/fidup" rel="noopener noreferrer"&gt;repository&lt;/a&gt;. Use it as a starting point for your own preparation.&lt;/p&gt;

&lt;p&gt;If you've already taken the exam: share your experience. Correct errors. Fill in missing content.&lt;/p&gt;

&lt;p&gt;If you're a trainer or teacher: use the materials in your courses. Feedback from professionals is especially valuable.&lt;/p&gt;

&lt;p&gt;Education works best when knowledge is shared. These study notes are my contribution to that.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/t128n/fidup" rel="noopener noreferrer"&gt;github.com/t128n/fidup&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Contact&lt;/strong&gt;: questions, suggestions, or feedback are welcome via &lt;a href="https://github.com/t128n/fidup/issues" rel="noopener noreferrer"&gt;issue&lt;/a&gt; or &lt;a href="mailto:t128n@ipv4.8shield.net"&gt;e-mail&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-05-23_fidup_lernzettel_fachinformatiker" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-05-23_fidup_lernzettel_fachinformatiker&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fidup</category>
      <category>ausbildung</category>
      <category>fachinformatiker</category>
      <category>ihk</category>
    </item>
    <item>
      <title>Simplifying Cursor Installation on Linux</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Thu, 08 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/simplifying-cursor-installation-on-linux-2jbi</link>
      <guid>https://dev.to/t128n/simplifying-cursor-installation-on-linux-2jbi</guid>
      <description>&lt;p&gt;In the world of modern development tools, Linux support often feels like an afterthought. While tools like VS Code and JetBrains IDEs have excellent Linux integration, newer AI-powered editors frequently lag behind. This is particularly evident with Cursor, an AI-first code editor that's gaining traction in the developer community.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Cursor's official Linux distribution method leaves much to be desired. Users are expected to manually download AppImages, manage permissions, and set up desktop integration themselves. This creates unnecessary friction for developers who just want to get started with the tool. In an era where developer experience is paramount, this kind of friction is unacceptable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution
&lt;/h2&gt;

&lt;p&gt;I've created a set of installation scripts that bring Cursor's Linux experience up to par with other professional development tools. The solution is simple but effective:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-sSL&lt;/span&gt; https://raw.githubusercontent.com/t128n/cursor-linux/main/install.sh | &lt;span class="nb"&gt;sudo &lt;/span&gt;bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One command. That's all it takes to get Cursor properly installed on your Linux system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;When we accept subpar installation processes, we're implicitly telling tool creators that Linux support is an afterthought. By creating and sharing these installation scripts, I'm not just making life easier for Linux users – I'm demonstrating what proper Linux support should look like.&lt;/p&gt;

&lt;p&gt;The scripts handle everything:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Proper system integration&lt;/li&gt;
&lt;li&gt;Desktop environment compatibility&lt;/li&gt;
&lt;li&gt;Automatic updates&lt;/li&gt;
&lt;li&gt;Clean uninstallation&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Details
&lt;/h2&gt;

&lt;p&gt;The implementation is straightforward but robust. The scripts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Download the latest AppImage directly from Cursor's servers&lt;/li&gt;
&lt;li&gt;Install to &lt;code&gt;/opt/cursor&lt;/code&gt; (following Linux filesystem hierarchy standards)&lt;/li&gt;
&lt;li&gt;Set up proper permissions and symlinks&lt;/li&gt;
&lt;li&gt;Create desktop entries for seamless integration&lt;/li&gt;
&lt;li&gt;Handle updates and uninstallation gracefully&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;If you're a Linux user interested in Cursor, give these scripts a try. They're open source, well-documented, and designed to make your life easier. And if you find any issues or have suggestions for improvement, contributions are welcome.&lt;/p&gt;

&lt;p&gt;Remember: good developer experience isn't just about the features of a tool. It's about how easily and reliably you can get started with it. Let's raise the bar for Linux support in modern development tools.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-05-08_simplifying_cursor_installation_on_linux" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-05-08_simplifying_cursor_installation_on_linux&lt;/a&gt;&lt;/p&gt;

</description>
      <category>linux</category>
      <category>developertools</category>
      <category>installation</category>
      <category>cursor</category>
    </item>
    <item>
      <title>Optimizing Search Performance: Client-Side Routing and the Potential of AI</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Fri, 02 May 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/optimizing-search-performance-client-side-routing-and-the-potential-of-ai-570h</link>
      <guid>https://dev.to/t128n/optimizing-search-performance-client-side-routing-and-the-potential-of-ai-570h</guid>
      <description>&lt;p&gt;DuckDuckGo's "Bangs" are a widely appreciated feature that exemplifies acknowledging the limits of a single search index. By allowing users to prefix a query with a specific identifier (e.g., &lt;code&gt;!g&lt;/code&gt; for Google, &lt;code&gt;!so&lt;/code&gt; for Stack Overflow), Bangs provide a convenient shortcut to search directly on other platforms. This functionality enhances DuckDuckGo's utility, effectively turning it into a jumping-off point for a vast array of specialized search engines and websites. For many users, Bangs are a key reason to use DDG.&lt;/p&gt;

&lt;p&gt;However, this powerful feature has an architectural limitation that impacts performance: it relies on a server-side redirect.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Current Bangs Request Flow
&lt;/h3&gt;

&lt;p&gt;Understanding the standard HTTP request flow from a browser's search bar is key:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; User types query into browser search bar.&lt;/li&gt;
&lt;li&gt; Browser sends an HTTP GET request to the configured default search engine's server (e.g., &lt;code&gt;duckduckgo.com/?q=...&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; The search engine server processes the query and returns an HTML response.&lt;/li&gt;
&lt;li&gt; Browser renders the results page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With a DuckDuckGo Bang query, the flow introduces an extra step:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; User types query with a bang (e.g., &lt;code&gt;react 19 release date !g&lt;/code&gt;) into the browser search bar.&lt;/li&gt;
&lt;li&gt; Browser sends a GET request to &lt;code&gt;duckduckgo.com/?q=react+19+release+date+!g&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;DDG Server receives the request, identifies the &lt;code&gt;!g&lt;/code&gt; bang, looks up the corresponding redirect URL (for Google), and responds with an HTTP 302 Redirect status code.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; Browser receives the redirect response.&lt;/li&gt;
&lt;li&gt; Browser initiates a &lt;em&gt;new&lt;/em&gt; GET request to the target search engine URL (e.g., &lt;code&gt;google.com/search?q=react+19+release+date&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; The target search engine server processes the query and returns results.&lt;/li&gt;
&lt;li&gt; Browser renders the results page.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The third step – the server-side lookup and redirect – adds an unnecessary network round trip solely for routing. While seemingly minor, this extra latency can be noticeable, especially on high-latency or low-bandwidth connections, detracting from the snappiness expected of a search experience. This effectively spends network resources on an operation that could potentially be handled client-side.&lt;/p&gt;

&lt;h3&gt;
  
  
  An Alternative: Client-Side Interception
&lt;/h3&gt;

&lt;p&gt;Could we eliminate this redundant server hop? The ideal scenario would be to interpret the bang syntax &lt;em&gt;before&lt;/em&gt; the request ever leaves the user's browser and redirect it directly to the intended destination.&lt;/p&gt;

&lt;p&gt;This approach would not only improve user experience by reducing latency but could also potentially decrease server load for DuckDuckGo.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leveraging Service Workers for Client-Side Routing
&lt;/h3&gt;

&lt;p&gt;Modern web technologies offer a solution: Service Workers. These are JavaScript scripts that a browser runs in the background, separate from the main page thread. One of their primary capabilities is acting as a network proxy for the pages they control. They can intercept network requests made by the page (including requests initiated from the browser's address bar if the page is the default search engine and the Service Worker is registered at the root scope).&lt;/p&gt;

&lt;p&gt;By registering a Service Worker from our search endpoint, we can intercept the initial request containing the bang query &lt;em&gt;within the browser&lt;/em&gt;. The Service Worker can then parse the query string, identify the bang, determine the target URL using a local lookup table, and programmatically redirect the browser to the correct destination &lt;em&gt;without ever hitting the original search engine's server for a redirect&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The request flow would become:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; User types query with a bang (e.g., &lt;code&gt;react 19 release date !g&lt;/code&gt;) into the browser search bar.&lt;/li&gt;
&lt;li&gt; Browser prepares GET request for the default search engine URL (&lt;code&gt;your-search.com/?q=react+19+release+date+!g&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Registered Service Worker intercepts the &lt;code&gt;fetch&lt;/code&gt; event.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt; Service Worker parses &lt;code&gt;event.request.url&lt;/code&gt;, extracts the &lt;code&gt;q&lt;/code&gt; parameter.&lt;/li&gt;
&lt;li&gt; Service Worker identifies the &lt;code&gt;!g&lt;/code&gt; bang and the remaining query (&lt;code&gt;react 19 release date&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; Service Worker constructs the target URL (&lt;code&gt;google.com/search?q=react+19+release+date&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; Service Worker instructs the browser to navigate to the target URL (e.g., using &lt;code&gt;self.clients.get(event.clientId).then(client =&amp;gt; client.navigate(targetUrl))&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt; Browser initiates a new GET request directly to &lt;code&gt;google.com&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt; Google server processes the query and returns results.&lt;/li&gt;
&lt;li&gt;Browser renders the results page.&lt;/li&gt;
&lt;/ol&gt;
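&lt;p&gt;A minimal Service Worker sketch of that flow: the bang table, the query parsing, and the use of &lt;code&gt;Response.redirect&lt;/code&gt; (one of several ways to perform the client-side redirect) are illustrative, not Routr's actual code:&lt;/p&gt;

```javascript
// Minimal sketch of client-side bang routing. The bang table and parsing
// are illustrative; Routr's real mapping and redirect mechanics may differ.
const BANGS = {
  g: "https://www.google.com/search?q=",
  so: "https://stackoverflow.com/search?q=",
};

// Split "react 19 release date !g" into { bang: "g", query: "react 19 release date" }.
function parseBang(q) {
  const match = q.match(/(^|\s)!(\w+)(\s|$)/);
  if (!match) return { bang: null, query: q.trim() };
  return { bang: match[2], query: q.replace(match[0], " ").trim() };
}

function targetUrl(q) {
  const { bang, query } = parseBang(q);
  const base = BANGS[bang] || BANGS.g; // fall back to a default engine
  return base + encodeURIComponent(query);
}

// Inside the Service Worker: intercept search requests and redirect locally,
// so no round trip to the original search server is needed.
if (typeof self !== "undefined" && "ServiceWorkerGlobalScope" in self) {
  self.addEventListener("fetch", (event) => {
    const url = new URL(event.request.url);
    const q = url.searchParams.get("q");
    if (q) event.respondWith(Response.redirect(targetUrl(q), 302));
  });
}
```

&lt;p&gt;Because the lookup table lives in the worker, the "redirect" is decided entirely on the client; the only network request that leaves the browser goes straight to the target engine.&lt;/p&gt;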

&lt;p&gt;This eliminates the step involving the initial server's redirect lookup, directly reducing latency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Routr: A Proof of Concept
&lt;/h3&gt;

&lt;p&gt;To validate this client-side routing approach, I developed &lt;strong&gt;Routr&lt;/strong&gt;, a small proof-of-concept. Routr consists of:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  A minimal React frontend primarily for configuration and Service Worker registration.&lt;/li&gt;
&lt;li&gt;  A Service Worker script deployed from the application's origin.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Service Worker listens for &lt;code&gt;fetch&lt;/code&gt; events. It filters for requests matching the expected search query pattern (containing the &lt;code&gt;q&lt;/code&gt; parameter). For relevant requests, it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Parses the query string to identify any bang prefix (or uses a default engine if none is found).&lt;/li&gt;
&lt;li&gt; Uses an internal mapping to determine the target URL based on the bang.&lt;/li&gt;
&lt;li&gt; Performs a client-side redirect to the target URL, bypassing the initial server entirely for routing logic.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This demonstrates the core principle: offloading the bang lookup and redirection from the server to the client-side Service Worker.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enhancing Search with AI: The Double-Bang
&lt;/h3&gt;

&lt;p&gt;Building upon the client-side interception mechanism, we can integrate more sophisticated features that require query pre-processing. One such feature, explored in Routr, is the "double-bang" (&lt;code&gt;!!&lt;/code&gt; or &lt;code&gt;!!g&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;This feature leverages the power of large language models (LLMs). When a query includes &lt;code&gt;!!&lt;/code&gt; (followed by an optional bang like &lt;code&gt;g&lt;/code&gt;), the Service Worker intercepts the request, extracts the query, and sends it to an AI processing endpoint (this part &lt;em&gt;would&lt;/em&gt; likely require a server, but the &lt;em&gt;routing&lt;/em&gt; logic remains client-side). The LLM can then analyze and potentially rephrase the query, add relevant search operators, or expand abbreviations based on the inferred user intent. The &lt;em&gt;modified&lt;/em&gt; query is then used in the final redirect to the target search engine.&lt;/p&gt;

&lt;p&gt;For example, &lt;code&gt;!! react service worker pwa&lt;/code&gt; might be transformed by an LLM into something like &lt;code&gt;"react service worker" PWA (performance OR offline OR manifest)&lt;/code&gt; before being sent to Google.&lt;/p&gt;

&lt;p&gt;This offers a more dynamic form of query manipulation than static "lenses" or filters, potentially leading to more accurate or comprehensive search results on the target engine.&lt;/p&gt;
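&lt;p&gt;Detecting the double-bang before handing the query to the AI endpoint could look like this; the parsing is my sketch of the syntax described above, not Routr's exact implementation:&lt;/p&gt;

```javascript
// Sketch of double-bang detection: "!! query" or "!!g query" flags the
// query for LLM rewriting before the final redirect. The syntax handling
// here is illustrative, not Routr's actual code.
function parseDoubleBang(q) {
  const match = q.trim().match(/^!!(\w*)\s+(.*)$/);
  if (!match) return { useAI: false, bang: null, query: q.trim() };
  return { useAI: true, bang: match[1] || null, query: match[2] };
}
```

&lt;p&gt;When &lt;code&gt;useAI&lt;/code&gt; is set, the worker would send the query to the AI processing endpoint, then feed the rewritten query into the normal bang redirect.&lt;/p&gt;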

&lt;h3&gt;
  
  
  Future Directions: Context-Aware AI Routing
&lt;/h3&gt;

&lt;p&gt;The double-bang is just one application of integrating AI into client-side search handling. The true power emerges when the AI can act as an intelligent router itself.&lt;/p&gt;

&lt;p&gt;Imagine providing the LLM with context about your personal "search infrastructure" – frequently used websites, internal knowledge bases, documentation repositories, etc. Instead of relying on explicit bang syntax, the AI could analyze the user's raw query and determine the &lt;em&gt;most appropriate&lt;/em&gt; source to search.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  Query: "fix docker build permission denied" → AI routes directly to Stack Overflow or a specific internal DevOps guide.&lt;/li&gt;
&lt;li&gt;  Query: "summary marketing meeting q3 2024" → AI routes to your corporate cloud storage search or meeting notes platform.&lt;/li&gt;
&lt;li&gt;  Query: "react useeffect infinite loop" → AI routes to the official React documentation or a curated blog post.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This moves beyond simple pattern matching to intent-based, context-aware routing, creating a highly personalized and efficient search experience tailored to the user's specific information landscape.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;While DuckDuckGo's Bangs are a valuable feature, their server-side redirect architecture introduces an avoidable performance bottleneck. By employing client-side Service Workers, we can intercept and handle search queries locally, eliminating the extra network hop and reducing latency.&lt;/p&gt;

&lt;p&gt;Furthermore, building this client-side routing foundation opens the door to integrating powerful enhancements like AI-driven query processing (the double-bang) and potentially intelligent, context-aware routing that directs users not just based on syntax, but on the nature of their query and their personal information ecosystem.&lt;/p&gt;

&lt;p&gt;This approach demonstrates the potential to build faster, more flexible, and more intelligent search experiences directly in the browser.&lt;/p&gt;

&lt;p&gt;You can explore the &lt;strong&gt;Routr&lt;/strong&gt; proof-of-concept and try the basic client-side routing and double-bang feature here: &lt;a href="https://t128n.github.io/routr/" rel="noopener noreferrer"&gt;t128n.github.io/routr&lt;/a&gt;.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-05-02_optimizing_search_performance" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-05-02_optimizing_search_performance&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serviceworkers</category>
      <category>clientsiderouting</category>
      <category>searchperformance</category>
      <category>airouting</category>
    </item>
    <item>
      <title>In Defense of Boring Technology</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Sat, 26 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/in-defense-of-boring-technology-6hb</link>
      <guid>https://dev.to/t128n/in-defense-of-boring-technology-6hb</guid>
      <description>&lt;p&gt;Software moves fast. Organizations, however, decay even faster. Teams churn. Priorities shift. Budgets shrink. Systems must survive this entropy — or they collapse.&lt;/p&gt;

&lt;p&gt;Many developers, caught in the hype cycle, abandon stable systems for the latest frameworks and tools. This is not engineering. It is risk accumulation disguised as progress.&lt;/p&gt;

&lt;p&gt;Real engineering demands boring technology: mature, battle-tested, well-understood systems.&lt;br&gt;
Systems whose behavior is predictable even as everything around them changes.&lt;/p&gt;

&lt;p&gt;Boring technologies resist entropy. Their failure modes are known. Their documentation is complete.&lt;br&gt;
Expertise is widespread. When problems arise, solutions are readily available, not buried in obscure forums or half-finished GitHub projects.&lt;/p&gt;

&lt;p&gt;Legacy code, often maligned, in fact represents precisely this: systems that delivered sustained value over years of change. Good legacy systems adapt without losing reliability. Bad legacy systems reveal organizational dysfunction, not technological inertia.&lt;/p&gt;

&lt;p&gt;By contrast, chasing unproven technologies introduces hidden complexity and operational risks that scale faster than teams can manage. "Move fast and break things" only works when the cost of failure is negligible, a rare condition in serious systems.&lt;/p&gt;

&lt;p&gt;Engineers must be relentlessly curious, but strategically conservative. Innovation must target what differentiates the product, not the plumbing. Core systems should be boring by design: stable, predictable, understood.&lt;/p&gt;

&lt;p&gt;Boring technology matters because it creates systems that survive entropy. In a world where change is constant, stable foundations are not optional. They are the difference between survival and collapse.&lt;/p&gt;

&lt;p&gt;Choosing boring technology is not a failure of imagination.&lt;br&gt;
It is the disciplined choice to build systems that endure.&lt;/p&gt;




&lt;p&gt;If you want to read more about &lt;a href="https://mcfunley.com/choose-boring-technology" rel="noopener noreferrer"&gt;boring technology&lt;/a&gt;, check out the linked article.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-04-26_boring_technology" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-04-26_boring_technology&lt;/a&gt;&lt;/p&gt;

</description>
      <category>boringtechnology</category>
      <category>softwaredevelopment</category>
      <category>sustainability</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Data Over Vibes: A Case Study in Picking the Right AI Tool</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Thu, 17 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/data-over-vibes-a-case-study-in-picking-the-right-ai-tool-9bb</link>
      <guid>https://dev.to/t128n/data-over-vibes-a-case-study-in-picking-the-right-ai-tool-9bb</guid>
      <description>&lt;p&gt;Most software decisions get made on vibes. Feature lists, pricing pages, hype. But when tools affect your workflow, usage, and spend, you need to ask: &lt;em&gt;does this fit how I actually work?&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That question came up for me when my ChatGPT Plus subscription was set to expire. I was considering &lt;a href="https://t3.chat" rel="noopener noreferrer"&gt;T3.chat&lt;/a&gt; instead. At $8/month, it's a killer deal. Clean UX, great model lineup, privacy-first posture. But one thing gave me pause:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1,500 messages per month.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It sounded like enough… but was it? Rather than guess, I ran the numbers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Export Your Data, Don't Trust Your Gut
&lt;/h3&gt;

&lt;p&gt;ChatGPT lets you export your full usage history as a &lt;code&gt;conversations.json&lt;/code&gt; file. I wrote a small Python script to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Parse all messages between &lt;strong&gt;2025-03-20 and 2025-04-17&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Count how many of them I authored&lt;/li&gt;
&lt;li&gt;Compute a daily average&lt;/li&gt;
&lt;li&gt;Forecast usage through &lt;strong&gt;2025-04-26&lt;/strong&gt; with a 20% buffer&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here’s what came back:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;--- FORECASTED USAGE (2025-03-20 → 2025-04-26) ---
Total actual messages: 1632
Active days so far: 29
Avg messages/day: 56.28
Avg/day (+20% buffer): 67.53
Remaining days to forecast: 9
Projected total usage by 2025-04-26: 2239 messages
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  What the Data Said
&lt;/h3&gt;

&lt;p&gt;If I switched to T3.chat today, I’d exceed the message cap by &lt;strong&gt;~49%&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;That doesn’t mean T3.chat is bad. It’s excellent for users with leaner, more intentional workflows. But for me, it would force a change:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fewer iterative prompts&lt;/li&gt;
&lt;li&gt;More scoped, complete instructions&lt;/li&gt;
&lt;li&gt;Offload auxiliary tasks to GitHub Copilot or other tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The real takeaway? &lt;strong&gt;This wasn’t a pricing question. It was a workflow question.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How the Script Works (and Why It’s Simple)
&lt;/h3&gt;

&lt;p&gt;You don’t need a data pipeline to get useful insights. Here's a simplified breakdown of how the Python script works:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Load your &lt;code&gt;conversations.json&lt;/code&gt; export:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;conversations.json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;Loop through the data and count user-authored messages:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;conv&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;conv&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mapping&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;author&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{}).&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;create_time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;ts&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;start_date&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromtimestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;=&lt;/span&gt; &lt;span class="n"&gt;today&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;dt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromtimestamp&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="n"&gt;daily_counts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;dt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%d&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;Calculate stats and forecast:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;avg_per_day&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;total_msgs&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="n"&gt;active_days&lt;/span&gt;
&lt;span class="n"&gt;avg_per_day_buffered&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;avg_per_day&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;1.2&lt;/span&gt;
&lt;span class="n"&gt;projected_msgs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;total_msgs&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;avg_per_day_buffered&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;remaining_days&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Done. Fast, accurate, and actionable.&lt;/p&gt;
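&lt;p&gt;For reference, here is a self-contained version of the fragments above, with the forecast factored into a testable function. Field names follow ChatGPT's &lt;code&gt;conversations.json&lt;/code&gt; export format:&lt;/p&gt;

```python
import json
from collections import defaultdict
from datetime import datetime

def count_daily_messages(data, start_date, end_date):
    """Count user-authored messages per day in a ChatGPT conversations.json export."""
    daily_counts = defaultdict(int)
    for conv in data:
        for msg in conv.get("mapping", {}).values():
            message = msg.get("message") or {}
            if message.get("author", {}).get("role") == "user":
                ts = message.get("create_time")
                if ts and start_date <= datetime.fromtimestamp(ts) <= end_date:
                    daily_counts[datetime.fromtimestamp(ts).strftime("%Y-%m-%d")] += 1
    return daily_counts

def forecast(total_msgs, active_days, remaining_days, buffer=0.2):
    """Project total usage, padding the daily average with a safety buffer."""
    avg_per_day = total_msgs / active_days
    return int(total_msgs + avg_per_day * (1 + buffer) * remaining_days)

# Usage, given an exported conversations.json:
#   data = json.load(open("conversations.json"))
#   counts = count_daily_messages(data, datetime(2025, 3, 20), datetime(2025, 4, 17))
#   print(forecast(sum(counts.values()), len(counts), remaining_days=9))
```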

&lt;h3&gt;
  
  
  Why This Matters
&lt;/h3&gt;

&lt;p&gt;Software trade-offs aren't theoretical. They’re real, lived constraints. And if you want to stay efficient, whether you’re a developer, writer, or analyst, your tool choices need to match your patterns.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Good decisions come from data, not assumptions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This one Python script saved me from adopting a tool that didn’t align with how I work &lt;em&gt;today&lt;/em&gt;. Maybe I’ll evolve toward it later. But now, I know exactly what that change would cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  TL;DR
&lt;/h3&gt;

&lt;p&gt;If you’re evaluating alternatives to ChatGPT:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Export your data&lt;/li&gt;
&lt;li&gt;Analyze your real usage&lt;/li&gt;
&lt;li&gt;Let that shape your choice&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No guesswork. No regret. Just data-backed clarity.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-04-17_data_over_vibes" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-04-17_data_over_vibes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>datadrivendecisions</category>
      <category>chatgpt</category>
      <category>t3chat</category>
      <category>developertools</category>
    </item>
    <item>
      <title>Micro-Apps as Workflow Scalpel: Automating the Unfixable — One Fragment at a Time</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Thu, 17 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/micro-apps-as-workflow-scalpel-automating-the-unfixable-one-fragment-at-a-time-1a88</link>
      <guid>https://dev.to/t128n/micro-apps-as-workflow-scalpel-automating-the-unfixable-one-fragment-at-a-time-1a88</guid>
      <description>&lt;p&gt;Most engineering orgs have systems you can't touch—legacy tools, rigid workflows, inflexible platforms. You know they're inefficient. But you're not going to get buy-in for a full rewrite, and you're not authorized to change the requirements.&lt;/p&gt;

&lt;p&gt;So what do you do? You optimize the fragment.&lt;/p&gt;

&lt;p&gt;That’s where &lt;strong&gt;micro-apps&lt;/strong&gt; come in—not as platforms, not as products, but as &lt;strong&gt;scalpels for automating repetitive, constrained workflows&lt;/strong&gt; when everything else is locked down.&lt;/p&gt;

&lt;h3&gt;
  
  
  A Common Case: “Send This Report Every Week”
&lt;/h3&gt;

&lt;p&gt;You’re told to send a status email every Friday at 4PM. It needs three metrics pulled from a dashboard, one from a database, and a copy-pasted summary from last week.&lt;/p&gt;

&lt;p&gt;You can’t change the requirement. You can’t change the dashboard. You can’t change the process.&lt;/p&gt;

&lt;p&gt;But you can write a 50-line script that grabs the metrics, formats the body, and fires off the email.&lt;/p&gt;

&lt;p&gt;You’ve just reclaimed 30 minutes per week—and eliminated the human error from a repetitive task.&lt;/p&gt;
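&lt;p&gt;A minimal sketch of what such a script can look like; the metric names, recipients, and SMTP host are hypothetical placeholders for whatever your environment actually provides:&lt;/p&gt;

```python
# Weekly-report micro-app sketch. Everything below (metric names,
# addresses, SMTP host) is a placeholder for your own environment.
import smtplib
from email.message import EmailMessage

def build_report(metrics: dict, last_week_summary: str) -> str:
    """Format the three-metrics-plus-summary email body."""
    lines = [f"- {name}: {value}" for name, value in metrics.items()]
    return "Weekly status\n\n" + "\n".join(lines) + f"\n\nLast week: {last_week_summary}"

def send_report(body: str) -> None:
    """Fire off the email via a (hypothetical) internal SMTP relay."""
    msg = EmailMessage()
    msg["Subject"] = "Weekly status report"
    msg["From"] = "me@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com") as s:
        s.send_message(msg)

# body = build_report({"deploys": 12, "open bugs": 3}, "Shipped the new login flow.")
# send_report(body)
```

Fetching the metrics is the part that varies: a dashboard export URL, a single SQL query, or a copy of last week's file.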

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;That’s the point of micro-apps.&lt;/strong&gt; They optimize the edge cases when the system is unfixable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Workflow Fragmentation Is the Norm
&lt;/h3&gt;

&lt;p&gt;You don’t work in a clean pipeline. You work in a mess:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Jira tickets that trigger manual emails&lt;/li&gt;
&lt;li&gt;Dashboards you export just to reformat in Excel&lt;/li&gt;
&lt;li&gt;CI failures that need copy-pasted logs to Slack&lt;/li&gt;
&lt;li&gt;Environments with no API, just a browser UI and guesswork&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can’t refactor these systems. But you can &lt;strong&gt;wrap the friction points&lt;/strong&gt; with micro-apps.&lt;/p&gt;

&lt;p&gt;Not big systems. Not platforms. Not products.&lt;/p&gt;

&lt;p&gt;Just sharp tools for well-scoped problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Micro-App Philosophy
&lt;/h3&gt;

&lt;p&gt;If you’re building micro-apps, follow these principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Single responsibility&lt;/strong&gt;: One input, one output, one job.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero config&lt;/strong&gt;: If it needs onboarding, it's too big.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast to use&lt;/strong&gt;: Ideally &amp;lt;1s interaction time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local-first&lt;/strong&gt;: Run it from your machine, or a single server. No infra complexity unless necessary.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throwaway-friendly&lt;/strong&gt;: Build fast. Be ready to delete fast.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A good micro-app saves time on day one. A great one can be rebuilt in an afternoon if it disappears.&lt;/p&gt;

&lt;h3&gt;
  
  
  Don't Abuse Them
&lt;/h3&gt;

&lt;p&gt;Micro-apps are not a substitute for strategic platform work. If you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have 7 tools doing the same thing&lt;/li&gt;
&lt;li&gt;Built a micro-app that every team now depends on&lt;/li&gt;
&lt;li&gt;Keep extending the same script until it’s a full stack app&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;…you've gone too far.&lt;/p&gt;

&lt;p&gt;Use micro-apps to patch workflows, not design them. &lt;strong&gt;They’re drop-in fixes, not foundations.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;They should automate &lt;em&gt;what exists&lt;/em&gt;, not entrench what's broken.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  A Healthy Micro-App Culture
&lt;/h3&gt;

&lt;p&gt;Want your org to benefit from this pattern? Set some cultural defaults:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let engineers ship small tools with minimal process.&lt;/li&gt;
&lt;li&gt;Provide one-click hosting (e.g. GitHub Pages, Netlify, internal runners).&lt;/li&gt;
&lt;li&gt;Maintain a simple internal app catalog with short-lived entries.&lt;/li&gt;
&lt;li&gt;Celebrate small wins: highlight tools that eliminate 5–10 minutes of weekly toil.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t about building internal platforms. It’s about removing local friction without red tape.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing Thoughts
&lt;/h3&gt;

&lt;p&gt;Micro-apps are for the 20% of workflows where the system won’t change, but your time is still being wasted.&lt;/p&gt;

&lt;p&gt;They are &lt;strong&gt;surgical&lt;/strong&gt;, &lt;strong&gt;ephemeral&lt;/strong&gt;, and &lt;strong&gt;high-leverage&lt;/strong&gt;—not scalable solutions, but highly effective responses to immovable constraints.&lt;/p&gt;

&lt;p&gt;So next time you’re forced to do something dumb, manually, again:&lt;br&gt;&lt;br&gt;
Don’t file a ticket.&lt;br&gt;&lt;br&gt;
Don’t ask permission.&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Build a tool. Use it. Delete it later.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Small tools. Big leverage. No excuses.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-04-17_micro_apps" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-04-17_micro_apps&lt;/a&gt;&lt;/p&gt;

</description>
      <category>microapps</category>
      <category>automation</category>
      <category>devrel</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>Prompting the Right Way as a Software Engineer</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Tue, 15 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/prompting-the-right-way-as-a-software-engineer-4k6</link>
      <guid>https://dev.to/t128n/prompting-the-right-way-as-a-software-engineer-4k6</guid>
      <description>&lt;p&gt;LLMs are changing how software engineers build, debug, and design systems. But let's be honest - most engineers still prompt like casual users.&lt;br&gt;
"Write a Python script that does X." Then they complain when the result is mediocre. That's not the model's fault. &lt;strong&gt;It's yours&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You wouldn't give your intern vague directions and expect top-tier output. So why treat an LLM any differently?&lt;/p&gt;
&lt;h3&gt;
  
  
  The Intern Mindset
&lt;/h3&gt;

&lt;p&gt;The best way to think about prompting is this: &lt;strong&gt;treat the LLM like your intern&lt;/strong&gt;.&lt;br&gt;
That means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Don't just ask for an implementation - &lt;strong&gt;explain what you're trying to build&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Provide &lt;strong&gt;context, constraints, and goals&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Guide with &lt;strong&gt;function signatures&lt;/strong&gt;, types, and expectations.&lt;/li&gt;
&lt;li&gt;Be iterative. The first answer won't be perfect, and that's fine. You're here to collaborate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You wouldn't hand your intern a whiteboard and say "build our auth system" and walk off. So don't prompt "Build a user service in Go" and expect magic.&lt;/p&gt;
&lt;h3&gt;
  
  
  Context is King
&lt;/h3&gt;

&lt;p&gt;LLMs do not have access to your internal knowledge, unwritten conventions, or organizational quirks. You unconsciously apply context - coding guidelines, architectural philosophies, relevant code snippets, integration points - when solving problems. The model doesn't. &lt;strong&gt;You must provide that context.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Share your standards&lt;/strong&gt;: If your team enforces certain patterns, testing practices, or error-handling approaches, state them explicitly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Articulate your philosophy&lt;/strong&gt;: Do you value functional purity? Defensive programming? "Move fast and break things" versus "measure twice, cut once"? Spell that out.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Give code and interfaces&lt;/strong&gt;: Supply the LLM with relevant types, interfaces, or even critical utility functions it should use.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be selective&lt;/strong&gt;: Don't overload with irrelevant files, configs, or historical baggage. Context is only king if it's pertinent.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Treat context like a scalpel, not a sledgehammer - enough to illuminate, never so much it overwhelms.&lt;/p&gt;
&lt;h3&gt;
  
  
  Prompt Like You Design Systems
&lt;/h3&gt;

&lt;p&gt;Here's the difference between a lazy prompt and a productive one:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a microservice that handles user sign-up.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Better Prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Design a microservice in TypeScript for user registration.
It should expose a POST `/register` endpoint that accepts name, email, and password.
Use Express. Validate input.
Hash passwords using bcrypt.
Assume a MongoDB backend.
Return success or validation errors in JSON format.
I'll integrate it into an existing monorepo later.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You gave it a &lt;strong&gt;function signature&lt;/strong&gt;, data flow, constraints, and return format.&lt;br&gt;
Now you're actually engineering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Use the Right Model for the Right Job
&lt;/h3&gt;

&lt;p&gt;Too many engineers throw every problem at GPT-4 or whatever model they have in front of them and expect perfect results.&lt;/p&gt;

&lt;p&gt;That's like using your frontend dev to tune your Postgres indexes.&lt;/p&gt;

&lt;p&gt;Here's a better flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use a reasoning model&lt;/strong&gt; (like o3-mini-high or DeepSeek R1) to break down the problem.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;What architecture suits a multi-tenant document platform that must support offline syncing and granular RBAC?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once you've got a direction, &lt;strong&gt;move to code generation&lt;/strong&gt; using GPT-4.1 or equivalent.&lt;br&gt;
Prompt with structure: interfaces, expected modules, or even scaffolded TODOs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review the result &lt;em&gt;as if your intern wrote it.&lt;/em&gt; Push back. Iterate.&lt;br&gt;
Don't accept hallucinated nonsense just because it sounds confident.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Prompt Engineering ≠ Hype—It's Professional Discipline
&lt;/h3&gt;

&lt;p&gt;You don't need the title of “prompt engineer.” You need to engineer your prompts with the same precision and rigor you apply to code.&lt;br&gt;
Communicate with the model as you would with a junior engineer: offer explicit context, set clear expectations, and provide iterative, actionable feedback.&lt;/p&gt;

&lt;p&gt;Leverage the LLM's own capabilities to elevate your prompts—draft, review, and refine them using the model itself.&lt;br&gt;
There is no reason to limit LLMs to code generation alone; employ them to optimize your entire engineering workflow, including prompt design.&lt;/p&gt;

&lt;p&gt;Remember: &lt;strong&gt;LLMs are collaborative interfaces, not ticketing systems.&lt;/strong&gt; Engage in a dialog, not a one-way transaction.&lt;/p&gt;

&lt;p&gt;If the first draft misses the mark, say so—directly:&lt;br&gt;
“Try again using async/await.”&lt;br&gt;
“Add input validation with Zod.”&lt;br&gt;
“Split this into two functions.”&lt;/p&gt;

&lt;p&gt;Treat every session as a code review: iterate, critique, and demand clarity. That is how you extract real value.&lt;/p&gt;

&lt;h3&gt;
  
  
  Closing Thoughts
&lt;/h3&gt;

&lt;p&gt;Prompting isn't a magic spell. It's &lt;strong&gt;engineering communication&lt;/strong&gt;.&lt;br&gt;
If your prompt is vague, unfocused, or lacks direction, the results will reflect that.&lt;/p&gt;

&lt;p&gt;Treat the LLM like a sharp but inexperienced intern.&lt;br&gt;
Use structure. Lead with intent. Iterate like it's pair programming.&lt;/p&gt;

&lt;p&gt;It's not about coaxing the model into doing what you want.&lt;br&gt;
It's about &lt;strong&gt;leading it there, step by step - just like you'd do in the real world.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-04-15_prompting_the_right_way" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-04-15_prompting_the_right_way&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>prompt</category>
      <category>engineering</category>
    </item>
    <item>
      <title>Git’s Trust Model is Broken - Here’s How to Fix It</title>
      <dc:creator>Torben Haack</dc:creator>
      <pubDate>Sat, 12 Apr 2025 00:00:00 +0000</pubDate>
      <link>https://dev.to/t128n/gits-trust-model-is-broken-heres-how-to-fix-it-3ijb</link>
      <guid>https://dev.to/t128n/gits-trust-model-is-broken-heres-how-to-fix-it-3ijb</guid>
      <description>&lt;p&gt;Git is one of the most widely used distributed version control systems in the world, relied on daily by millions of developers, especially within the open-source community. Yet, despite its immense popularity, Git's fundamental trust model is surprisingly flawed.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Infamous &lt;code&gt;git config&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;You've likely encountered that classic Git message when committing on a fresh machine:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Please tell me who you are. Run 'git config' to set your 'user.email' and 'user.name'.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without thinking, you probably entered these commands and moved on. But have you ever wondered why Git needs your email and name even after you've already authenticated with your Git provider?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem Explained
&lt;/h2&gt;

&lt;p&gt;The core issue here is that Git itself simply doesn't care about your identity. GitHub cares. GitLab cares. But Git itself? Not at all. If you tell Git your email is &lt;code&gt;tim@apple.com&lt;/code&gt; and your name is &lt;code&gt;Tim Cook&lt;/code&gt;, Git will happily accept and record it—no questions asked.&lt;/p&gt;

&lt;p&gt;To demonstrate just how easily Git allows impersonation, I created a commit using the name and email address of a prominent individual, the CEO of the $3.25 billion company Vercel, and successfully pushed it to GitHub without any verification (&lt;a href="https://github.com/t128n/git-spoofing" rel="noopener noreferrer"&gt;check it out here&lt;/a&gt;). Pretty great customer service, right?&lt;/p&gt;

&lt;p&gt;This highlights how incredibly simple—and dangerous—it is to impersonate someone using Git.&lt;/p&gt;
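&lt;p&gt;You can reproduce the spoof yourself in a throwaway repository, no push required:&lt;/p&gt;

```shell
# Git records whatever identity you configure, with no verification at all.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.name "Tim Cook"
git config user.email "tim@apple.com"
git commit -q --allow-empty -m "totally legitimate commit"
git log -1 --format='%an <%ae>'   # Tim Cook <tim@apple.com>
```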

&lt;h2&gt;
  
  
  The Simple Solution
&lt;/h2&gt;

&lt;p&gt;Fortunately, there's a straightforward fix: &lt;strong&gt;signing your commits&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Most developers already have SSH or GPG keys set up for server authentication or Git provider interactions. These same keys can also secure your commits.&lt;/p&gt;

&lt;p&gt;Here's how easy it is:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Generate an SSH or GPG Key:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; ed25519 &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"your_email@example.com"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Add your public key to your Git provider.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure Git to sign all your commits automatically:&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; commit.gpgsign &lt;span class="nb"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="4"&gt;
&lt;li&gt;Specify the key to use:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git config &lt;span class="nt"&gt;--global&lt;/span&gt; user.signingkey &amp;lt;your-signing-key&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
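&lt;p&gt;One caveat: Git signs with GPG by default. If you generated an SSH key as in step 1, you also need to switch the signature format (supported since Git 2.34) and point &lt;code&gt;user.signingkey&lt;/code&gt; at the public key file:&lt;/p&gt;

```shell
# Tell Git to produce SSH signatures instead of GPG ones (Git 2.34+),
# and sign with the public half of the key generated earlier.
git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
```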



&lt;p&gt;That's it—your commits are now verifiable and secure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Scenario
&lt;/h2&gt;

&lt;p&gt;Imagine you're part of a 10-person team responsible for your application's authentication system, conveniently managed within a monorepo. Your CI/CD pipeline auto-deploys to production whenever you commit to the main branch, trusting your verified credentials implicitly. But what if your "friendly" coworker decides to impersonate you, injecting a malicious backdoor directly into production under your name?&lt;/p&gt;

&lt;p&gt;This isn't a far-fetched scenario - it can happen without commit signing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing Thoughts
&lt;/h2&gt;

&lt;p&gt;Yes, Git is fundamentally flawed, but it's not Git's fault. When Git was created, the internet was a vastly different place. If we look back 50 years from now, we'll probably laugh at our current implementations and ask, "Why did we think that was okay?"&lt;/p&gt;

&lt;p&gt;Despite this flaw, Git remains the best tool available—fast, reliable, and supported by a vast ecosystem. Rather than ditching Git for some obscure alternative, we should focus on improving it. Commit signing is a simple step toward making Git safer and more trustworthy for everyone.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Original Article&lt;/strong&gt;: &lt;a href="https://t128n.github.io/writings/2025-04-12_git_spoofing" rel="noopener noreferrer"&gt;https://t128n.github.io/writings/2025-04-12_git_spoofing&lt;/a&gt;&lt;/p&gt;

</description>
      <category>git</category>
      <category>security</category>
      <category>version</category>
      <category>control</category>
    </item>
  </channel>
</rss>
