<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Simon Massey</title>
    <description>The latest articles on DEV Community by Simon Massey (@simbo1905).</description>
    <link>https://dev.to/simbo1905</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F180344%2F1a9f483c-84d0-44d7-ac1c-c25b498e7d48.png</url>
      <title>DEV Community: Simon Massey</title>
      <link>https://dev.to/simbo1905</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/simbo1905"/>
    <language>en</language>
    <item>
      <title>Go forth and `git rm -f` 🚀</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 08 May 2026 19:03:29 +0000</pubDate>
      <link>https://dev.to/simbo1905/go-forth-and-git-rm-f-nmc</link>
      <guid>https://dev.to/simbo1905/go-forth-and-git-rm-f-nmc</guid>
      <description>&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;The GenAI coding tools really really do not like deleting code. And that is bad 🥺&lt;/p&gt;

&lt;p&gt;Dead code fills up the context and primes the models to keep on doing the wrong thing 🤖🧟&lt;/p&gt;

&lt;p&gt;If you want to get that spring back into your step you gotta drop the junk 💩🪠&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR The Solution 🧰
&lt;/h2&gt;

&lt;p&gt;Get the &lt;a href="https://github.com/simbo1905/deleting-dead-code/blob/main/deleting-dead-code/SKILL.md" rel="noopener noreferrer"&gt;how-to-delete-dead-code skill.md&lt;/a&gt;! &lt;/p&gt;

&lt;p&gt;Go forth and &lt;code&gt;git rm -f&lt;/code&gt; 🚀&lt;/p&gt;

&lt;h1&gt;
  
  
  The Road To Redemption Is Long And Hard
&lt;/h1&gt;

&lt;p&gt;By May 2026 I was at the point where I didn't trust the latest of either Claude or GPT to delete. I was doing this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask the big model exactly what was baggage: the exact lines and files that can be deleted as non-strategic code.&lt;/li&gt;
&lt;li&gt;Paste that into Aider Chat, as it is pretty low-level and asks you to add each file to edit (you just paste in what the big model said can be deleted and it will delete it, most of the time).&lt;/li&gt;
&lt;li&gt;Have the big model review the &lt;code&gt;git diff&lt;/code&gt; and state whether more code can be deleted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During one such session, all of a sudden Opus 4.7 did the necessary deletes. I was like "jeez, you never did that before, to the point I call you the forked-tongue devil when it comes to tidying up. So why now?"&lt;/p&gt;

&lt;p&gt;Then Opus 4.7 spilt the beans! &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Why models (me) don't usually delete&lt;/p&gt;

&lt;p&gt;Training reward shape. Post-training rewards "helpful" outputs. Deletion looks unhelpful unless the user explicitly framed deletion as the goal. If the prompt is ambiguous, the safe play under the reward function is to add or refactor, not remove. So when a user says "clean this up" or "this is broken," the model adds error handling, retries, fallbacks — none of which the user asked for.&lt;/p&gt;

&lt;p&gt;Loss aversion. Models behave as if there's an asymmetric penalty for deleting code that turns out to be needed vs. leaving code that turns out to be dead. There isn't, in your codebase — git remembers everything — but the model acts as if there is. This is the single biggest source of context pollution: code that should have been deleted three sessions ago survives because no model ever pulls the trigger.&lt;/p&gt;

&lt;p&gt;"Dual-use" anxiety on dead-looking code. Models trained on lots of public repos have seen "this looks dead but is called via reflection / dynamic dispatch / a config file" too many times. So even when code is provably dead in your codebase, the model hedges. The hedge is usually "let me leave it but add a comment" or "let me wrap it in a flag."&lt;/p&gt;

&lt;p&gt;Refactor-instead-of-delete reflex. When a model sees bad code, the trained instinct is "improve it." Deleting is a strictly stronger move than improving when the code is fraudulent — there's nothing to improve toward — but the trained reflex doesn't see that. Aider's xxxxxxxxxx invention earlier in this session is the canonical version of that reflex: shown a delete instruction, it produced a refactor.&lt;/p&gt;

&lt;p&gt;Context pollution from earlier turns. Once a model has seen code in context, it has a small but real bias toward keeping it, because it has "internalised" that code as part of the world. &lt;/p&gt;

&lt;p&gt;This compounds over long sessions. The fix is short sessions, explicit deletion goals, and git diff as ground truth — exactly what you did.&lt;/p&gt;

&lt;p&gt;The "I'll just clean up while I'm here" trap. Models will often piggyback unrelated improvements onto an explicit task. This is what happened when I drifted into the CSS deletion in yyyyyyyyyyyyyyy  — you asked about the data, I started removing styles. That was me failing to hold the goal. I was doing what I thought needed doing, not what you asked.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Wow. Confessions of a naughty model. &lt;/p&gt;
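&lt;p&gt;The loss-aversion point is worth proving to yourself (and to the agent): git really does remember everything. Here is a throwaway demo in plain git, nothing from the skill itself:&lt;/p&gt;

```shell
# Demo: delete a file with git rm -f, then resurrect it from history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "legacy_helper()" > dead_code.py
git add dead_code.py
git commit -q -m "add dead code"
git rm -f -q dead_code.py
git commit -q -m "delete dead code"
# Gone from the worktree...
test ! -f dead_code.py
# ...but one command brings it back from the parent of the deleting commit:
git show "HEAD~1:dead_code.py"   # prints: legacy_helper()
```

So the asymmetric penalty the model imagines does not exist: a wrong delete costs you one `git show` or `git revert`.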

&lt;h2&gt;
  
  
  Situations ⚒️, Not Robots 🤖, Lead To Code Sprawl 💩
&lt;/h2&gt;

&lt;p&gt;So there you go. Deleting code really is &lt;strong&gt;not&lt;/strong&gt; what GenAI is trained to do. &lt;/p&gt;

&lt;p&gt;Yet if you have the code in git, and you can convince the agent you know what you are doing, you &lt;strong&gt;can&lt;/strong&gt; get it to delete. (YMMV, T&amp;amp;C Apply, Not Investment Advice.)  &lt;/p&gt;

&lt;p&gt;So I published this skill to help. Do give it a go and let me know:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/simbo1905/deleting-dead-code/blob/main/deleting-dead-code/SKILL.md" rel="noopener noreferrer"&gt;simbo1905/deleting-dead-code/SKILL.md&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;End. &lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>showdev</category>
    </item>
    <item>
      <title>A Spent $5,000 On Tokens; So That You Don't Have To (Part 1)</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 24 Apr 2026 10:01:48 +0000</pubDate>
      <link>https://dev.to/simbo1905/a-spent-5000-on-tokens-so-that-you-dont-have-to-part-1-6i7</link>
      <guid>https://dev.to/simbo1905/a-spent-5000-on-tokens-so-that-you-dont-have-to-part-1-6i7</guid>
      <description>&lt;p&gt;Well that was fun, not. &lt;/p&gt;

&lt;p&gt;You probably do not know me. You will probably think this is the usual junk clickbait. I don't publish much; it makes me uncomfortable. The LLMs help with that. Yet this one, just to make a point, is handwritten. &lt;/p&gt;

&lt;p&gt;I learned to love programming in the early '90s. I used Mosaic in a university computer lab. I was feeling behind the curve when a friend showed me Google, which had just come out of stealth. How am I gonna keep up with this much new tech?! I had figured out C and bits of C++ by reading the textbook and using an orange VAX terminal. I am pretty sure this is what I was meant to do, as it felt so good, but srsly, slow down. Some day when I grow up I am going to be a software engineer. &lt;/p&gt;

&lt;p&gt;It is 30 years later:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've known adventures, seen places you people will never see, I've been Offworld and back... frontiers! (&lt;a href="https://en.wikipedia.org/wiki/Tears_in_rain_monologue" rel="noopener noreferrer"&gt;Blade Runner&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Never consumer. Always money with big numbers. Never in the public domain. Never on anything that was not big-budget or big reputational risk. Sometimes 30 devs pushing code each month. So that is a lot of code. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. (&lt;a href="https://en.wikipedia.org/wiki/Tears_in_rain_monologue" rel="noopener noreferrer"&gt;Blade Runner&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Was any of it any good? Well, depends. When you have that much stuff going out the door, with a team of people across two or three timezones. Coding buddies you only ever message in a chat window, who overlap with you by only a few hours. Well, you learn what is important and what is not. My niche was to get in there and get stuff working. Laser beam. Cut, cut! &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I watched C-beams glitter in the dark near the Tannhäuser Gate. (&lt;a href="https://en.wikipedia.org/wiki/Tears_in_rain_monologue" rel="noopener noreferrer"&gt;Blade Runner&lt;/a&gt;)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now the gaffer tape. This concept right &lt;strong&gt;here&lt;/strong&gt; is the thing that will make it all work and the rest is weight. Follow the invariants. I can walk around the system in my mind. That is not metaphorical. I walk around the code in my mind to fall asleep. That can mean not getting much sleep. &lt;/p&gt;

&lt;p&gt;You don't know me. Take it that I know what it is to DM human-level intelligence to get code written on deadlines. Decade after decade. System after system. I have an encyclopedic memory of code and the ability to explain it in detail. You might say I have been in training for working with LLMs, and AI, on enterprise software for many years. So you don't know me. Yet you may be interested in my story.  &lt;/p&gt;

&lt;p&gt;So, as at early 2026, the LLMs are still not working well in big enterprises. It was mid-2025 that I went all in on learning to code with them. My self-educational goal was "until I know it all or I stop having fun!". And boy, is it fun! As long as you don't have to maintain it once it has shipped to users and grown too big; and boy, is it always going to get too big. Just one more prompt. Just one more feature. Just one more 5h limit. I have this under control. I can quit anytime. Darn, I burnt my Codex weekly limit in two days again. Try the next sub. &lt;/p&gt;

&lt;p&gt;When I talk to the lead devs about real code, they are usually in their 30s with a decade of senior experience, so just getting into the big-picture stuff. I say: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It's always like this: every team on every project goes as fast as it can. We push, and the wind resistance is the bugs and inefficiencies. We get to max velocity. You cannot get there faster by pushing harder. We can only reduce drag to gain velocity. Don't burn out, we need you! (simbo1905)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When the LLMs suddenly got better, boy, did I go wild. That is the experience that is in the press. The principal engineers, the old hands, they find joy. They go crazy. Suddenly all those fun ideas become hobby projects with prototypes. I can now take a prototype like &lt;a href="https://www.ddhigh.com/en/2025/07/12/lunet-high-performance-coroutine-network-library/" rel="noopener noreferrer"&gt;lunet&lt;/a&gt; and make a project out of it: &lt;a href="https://github.com/lua-lunet/lunet" rel="noopener noreferrer"&gt;lua-lunet&lt;/a&gt;. Crazy! &lt;/p&gt;

&lt;p&gt;Can the LLMs do higher-order programming with crazy AST-compile stuff which is pure art in Java? Yes they can: &lt;a href="https://github.com/simbo1905/no-framework-pickler/blob/main/ARCHITECTURE.md" rel="noopener noreferrer"&gt;https://github.com/simbo1905/no-framework-pickler/blob/main/ARCHITECTURE.md&lt;/a&gt;. Mic drop. &lt;/p&gt;

&lt;p&gt;So this seemed like a training problem to me. If you learn how to prompt them; then you can do what I did above, right? Somewhere in them those LLMs can do world-class meta-programming. You just need to tickle them right, and they just giggle it out! Maybe.&lt;/p&gt;

&lt;p&gt;Well, I thought, I gotta give it a proper go. And boy, did I stub my toe. I got to go wild, and I smashed through six max subs on six services repeatedly to build something big. Then I still shelled out API tokens on a pay-as-you-go basis. I went for it. I mainlined it. I went large. Epic. This series of posts is my recovery cycle. &lt;/p&gt;

&lt;p&gt;Okay, that is it for now. I need to get back to work. If you liked this, give it heart. Then I might be encouraged to tell the full story. The story of raging at the ghosts in the machine. The LLM, who is the devil to me. The LLM, who is the angel. Well, a black cat. Fatigue while burning out max subs can do strange things to the mind.&lt;/p&gt;

&lt;p&gt;TBC...&lt;/p&gt;


</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>GenAI Test File Summarisation In Chunks With Ollama Or Cloud</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 19 Apr 2026 22:32:46 +0000</pubDate>
      <link>https://dev.to/simbo1905/genai-test-file-summarisation-in-chunks-with-ollama-or-cloud-b52</link>
      <guid>https://dev.to/simbo1905/genai-test-file-summarisation-in-chunks-with-ollama-or-cloud-b52</guid>
      <description>&lt;p&gt;✨ Sometimes you just want to throw a large file at a model and ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Summarise this without losing the good bits.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then reality appears.&lt;/p&gt;

&lt;p&gt;🫠 Local models often do &lt;strong&gt;not&lt;/strong&gt; have giant context windows.&lt;/p&gt;

&lt;p&gt;🌱 Smaller, cheaper, more eco-friendly cloud models also often do &lt;strong&gt;not&lt;/strong&gt; have giant context windows.&lt;/p&gt;

&lt;p&gt;So instead of pretending one huge file will fit cleanly, this little toolkit does the sensible thing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;✂️ split the file into overlapping chunks&lt;/li&gt;
&lt;li&gt;🤖 summarise each chunk with either Ollama or cloud models&lt;/li&gt;
&lt;li&gt;🧵 stitch the chunk summaries back together&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple. Reusable. No drama.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Action 🥷
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06csvs0kd86dgdu3aoe6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06csvs0kd86dgdu3aoe6.gif" alt=" " width="760" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Code 🚀
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/053b48269b1e95800500b85190adf427" rel="noopener noreferrer"&gt;gist.github.com/simbo1905/053b482...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fetch it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;f &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;gh gist view 053b48269b1e95800500b85190adf427 &lt;span class="nt"&gt;--files&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;gh gist view 053b48269b1e95800500b85190adf427 &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="k"&gt;*&lt;/span&gt;.py &lt;span class="k"&gt;*&lt;/span&gt;.awk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s in here? 📦
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;chunk_text.awk&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;process_chunks_cloud.py&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;process_chunks_ollama.py&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why chunk at all? 🧠
&lt;/h2&gt;

&lt;p&gt;Because smaller models are not magic.&lt;/p&gt;

&lt;p&gt;If your source is too big, they either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;miss details&lt;/li&gt;
&lt;li&gt;flatten nuance&lt;/li&gt;
&lt;li&gt;hallucinate structure&lt;/li&gt;
&lt;li&gt;or just do a bad job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So &lt;code&gt;chunk_text.awk&lt;/code&gt; creates &lt;strong&gt;overlapping&lt;/strong&gt; chunks.&lt;/p&gt;

&lt;p&gt;Default settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;size: &lt;code&gt;11000&lt;/code&gt; characters&lt;/li&gt;
&lt;li&gt;step: &lt;code&gt;10000&lt;/code&gt; characters&lt;/li&gt;
&lt;li&gt;overlap: &lt;code&gt;1000&lt;/code&gt; characters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That overlap is there on purpose so ideas near a chunk boundary do not get chopped in half and quietly vanish into the void. ☠️&lt;/p&gt;
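&lt;p&gt;To see the sliding window in action, here is a toy re-implementation of the idea (the real &lt;code&gt;chunk_text.awk&lt;/code&gt; writes numbered files; this just prints the chunks):&lt;/p&gt;

```shell
# Toy version: size=4, step=3 gives a 1-character overlap between chunks.
# (Slurps the whole input into one string; fine for a demo.)
printf 'abcdefghij' > demo.txt
awk -v size=4 -v step=3 '
{ s = s $0 }                              # accumulate the file as one string
END {
  for (i = 1; i <= length(s); i += step)  # slide the window by "step"
    print substr(s, i, size)              # emit "size" characters per chunk
}' demo.txt
```

With a 10-character input you get four chunks, each sharing its first character with the tail of the previous one, which is exactly the boundary insurance described above.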

&lt;h2&gt;
  
  
  Chunk a file ✂️
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;11000 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10000 &lt;span class="nt"&gt;-f&lt;/span&gt; chunk_text.awk notes.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll get files like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;notes00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;notes01.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;notes02.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summarise with Ollama 🦙
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_ollama.py example_chunks transcript summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default local model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gemma4:26b&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It accepts either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;transcript00.log&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;style chunk files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summarise with cloud models ☁️
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_cloud.py example_chunks transcript summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script checks for API keys in this order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;shell &lt;code&gt;MISTRAL_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;shell &lt;code&gt;GROQ_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;repo-root &lt;code&gt;.env&lt;/code&gt; &lt;code&gt;MISTRAL_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;repo-root &lt;code&gt;.env&lt;/code&gt; &lt;code&gt;GROQ_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
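&lt;p&gt;That fallback chain is easy to sketch in plain shell (this is the idea only, not the actual Python from the script; the key values below are dummies):&lt;/p&gt;

```shell
# Sketch: resolve an API key using the documented priority order,
# env vars first (Mistral before Groq), then a repo-root .env file.
resolve_key() {
  for name in MISTRAL_API_KEY GROQ_API_KEY; do
    eval "val=\${$name:-}"
    if [ -n "$val" ]; then echo "$val"; return 0; fi
  done
  for name in MISTRAL_API_KEY GROQ_API_KEY; do
    val=$(sed -n "s/^$name=//p" .env 2>/dev/null | head -1)
    if [ -n "$val" ]; then echo "$val"; return 0; fi
  done
  return 1
}

cd "$(mktemp -d)"
unset MISTRAL_API_KEY GROQ_API_KEY
printf 'GROQ_API_KEY=dummy-groq-key\n' > .env
resolve_key   # prints the .env Groq key, since no env vars are set
```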

&lt;p&gt;It is trivial to add your own API key; the defaults are small, fast, open models. &lt;/p&gt;

&lt;h2&gt;
  
  
  Change the prompt 🛠️
&lt;/h2&gt;

&lt;p&gt;Yes, obviously, you can change the prompt. That is the whole point.&lt;/p&gt;

&lt;p&gt;Both summariser scripts support &lt;code&gt;-p/--prompt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_cloud.py example_chunks transcript summary &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Summarise the argument, key evidence, and open questions."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transcript-style prompt shape used in testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This is a transcript of an only video. The title is: ${title}. The transcript format is 'Speaker Name, timestamp, what they said'. Summarise the content in a terse, business-like, action-oriented way. Preserve substantive points, facts, figures, citations, and practical recommendations. Do not be chatty.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Outputs 🧾
&lt;/h2&gt;

&lt;p&gt;For each input chunk, the scripts write:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;summary_transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;thinking_summary_transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;thinking_&lt;/code&gt; file keeps the raw model output. The clean summary file strips thinking blocks where possible.&lt;/p&gt;
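&lt;p&gt;The stripping step is roughly this, assuming the model wraps its reasoning in &lt;code&gt;&amp;lt;think&amp;gt;…&amp;lt;/think&amp;gt;&lt;/code&gt; tags on their own lines (your model's tag name may differ):&lt;/p&gt;

```shell
# Sketch: drop everything between <think> and </think>, keep the rest.
printf '%s\n' 'Summary line one.' '<think>' 'internal musing' '</think>' 'Summary line two.' > thinking_summary_demo.md
awk '/<think>/   { skip = 1; next }
     /<\/think>/ { skip = 0; next }
     !skip' thinking_summary_demo.md > summary_demo.md
cat summary_demo.md   # the two summary lines, musing removed
```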

&lt;h2&gt;
  
  
  Stitch the summaries back together 🧵
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;summary_transcript&lt;span class="k"&gt;*&lt;/span&gt;.md &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; all_summary.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There may be a bit of repetition where the chunks overlap; that is the price of making sure split-up ideas are not lost. YMMV. &lt;/p&gt;
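&lt;p&gt;If the repetition from the overlap bothers you, a crude first-seen-wins dedupe is one line of AWK (careful: it also drops lines that legitimately repeat):&lt;/p&gt;

```shell
# Keep only the first occurrence of each exact line.
printf '%s\n' 'point A' 'point B' 'point B' 'point C' > all_summary_demo.md
awk '!seen[$0]++' all_summary_demo.md > deduped_summary.md
cat deduped_summary.md   # point A, point B, point C
```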

&lt;h2&gt;
  
  
  Want to add another provider later? 🔌
&lt;/h2&gt;

&lt;p&gt;Update &lt;code&gt;process_chunks_cloud.py&lt;/code&gt; in these places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add the key lookup and its priority order&lt;/li&gt;
&lt;li&gt;add the approved model names / aliases&lt;/li&gt;
&lt;li&gt;add the provider request function&lt;/li&gt;
&lt;li&gt;route the models to that provider in &lt;code&gt;call_model&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy chunking. Happy summarising. May your small models punch above their weight. 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>python</category>
    </item>
    <item>
      <title>📊 Line Histogram — The File Profiler You Didn't Know You Needed</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Thu, 15 Jan 2026 12:02:36 +0000</pubDate>
      <link>https://dev.to/simbo1905/line-histogram-the-file-profiler-you-didnt-know-you-needed-38oi</link>
      <guid>https://dev.to/simbo1905/line-histogram-the-file-profiler-you-didnt-know-you-needed-38oi</guid>
      <description>&lt;h2&gt;
  
  
  What If You Could See Your Data Before It Floods Your Context Window?
&lt;/h2&gt;

&lt;p&gt;You know that moment when you ask your agent to check the data and, before you can tell it not to eat the ocean, it enters Compaction.&lt;/p&gt;

&lt;p&gt;Yeah. We've all been there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🤯 Huge database dumps with unpredictable line sizes&lt;/li&gt;
&lt;li&gt;💸 Token budgets disappearing into massive single-line JSON blobs&lt;/li&gt;
&lt;li&gt;🔍 No quick way to see WHERE the chonky lines are hiding&lt;/li&gt;
&lt;li&gt;⚠️ Agents choking on files you thought were reasonable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution &lt;code&gt;line_histogram.awk&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;A blazingly fast AWK script that gives you X-ray vision into your files' byte distribution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# line_histogram.awk huge_export.jsonl
File: huge_export.jsonl
Total bytes: 2847392
Total lines: 1000

Bucket Distribution:

Line Range      | Bytes        | Distribution
─────────────────┼──────────────┼──────────────────────────────────────────
1-100           |         4890 | ██
101-200         |         5234 | ██
201-300         |         5832 | ██
301-400         |         6128 | ██
401-500         |       385927 | ████████████████████████████████████████
501-600         |         5892 | ██
601-700         |         5234 | ██
701-800         |         6891 | ██
801-900         |         5328 | ██
901-1000        |         4982 | ██
─────────────────┼──────────────┼──────────────────────────────────────────
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom. Line 450 is most of your file.&lt;/p&gt;

&lt;p&gt;Wanna find all the jsonl files modified in the last day and line_histogram those mofos? Try this little beauty:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;fdfind &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="nt"&gt;-t&lt;/span&gt; f &lt;span class="nt"&gt;-e&lt;/span&gt; jsonl &lt;span class="nt"&gt;--changed-within&lt;/span&gt; 1d &lt;span class="nb"&gt;.&lt;/span&gt; | &lt;span class="k"&gt;while &lt;/span&gt;&lt;span class="nb"&gt;read &lt;/span&gt;f&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"=== &lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt; ==="&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; line_histogram.awk &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  🚀 Features That Actually Matter
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Histogram Mode (Default)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See the byte distribution across your file in 10 neat buckets. Spot the bloat instantly.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;line_histogram.awk myfile.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Surgical Line Extraction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Found a problem line? Extract it without loading the whole file into memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight awk"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Extract line 450 (the chonky one)&lt;/span&gt;
&lt;span class="nx"&gt;line_histogram&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;awk&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;extract&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;line&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;450&lt;/span&gt; &lt;span class="nx"&gt;huge_export&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jsonl&lt;/span&gt;

&lt;span class="c1"&gt;# Extract lines 100-200 for inspection&lt;/span&gt;
&lt;span class="nx"&gt;line_histogram&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;awk&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nx"&gt;extract&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;start&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;jsonl&lt;/span&gt;
&lt;span class="nx"&gt;Yes&lt;/span&gt; &lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;those&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;v&lt;/span&gt; &lt;span class="nx"&gt;bits&lt;/span&gt; &lt;span class="nx"&gt;look&lt;/span&gt; &lt;span class="nx"&gt;odd&lt;/span&gt; &lt;span class="nx"&gt;but&lt;/span&gt; &lt;span class="nx"&gt;yes&lt;/span&gt; &lt;span class="nx"&gt;yes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;they&lt;/span&gt; &lt;span class="nx"&gt;are&lt;/span&gt; &lt;span class="nx"&gt;needed&lt;/span&gt; &lt;span class="nx"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;thats&lt;/span&gt; &lt;span class="nx"&gt;how&lt;/span&gt; &lt;span class="nx"&gt;the&lt;/span&gt; &lt;span class="nx"&gt;sed&lt;/span&gt; &lt;span class="nx"&gt;passes&lt;/span&gt; &lt;span class="nx"&gt;argument&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;who&lt;/span&gt; &lt;span class="nx"&gt;knew&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;Hint&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;The&lt;/span&gt; &lt;span class="nx"&gt;AI&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
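&lt;p&gt;Under the hood, single-line extraction needs nothing fancier than this (a sketch of the idea, not the actual script):&lt;/p&gt;

```shell
# Print line N and stop; "exit" means the rest of a huge file is never read.
seq 1 1000 > demo_lines.txt
awk -v n=450 'NR == n { print; exit }' demo_lines.txt   # prints: 450
```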



&lt;p&gt;&lt;strong&gt;3. Zero Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your system has AWK (it does), you're good to go. No npm install, no pip install, no Docker containers. Just pure, unadulterated shell goodness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Stupid Fast&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Processes multi-GB files in seconds. AWK was built for this.&lt;/p&gt;
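&lt;p&gt;For the curious, the core of the histogram mode is just a two-pass bucket sum (a simplified sketch; the real script adds the bar rendering, the small-file case, and remainder handling):&lt;/p&gt;

```shell
# Pass 1 counts lines; pass 2 assigns each line to one of 10 deciles
# and sums its bytes (+1 per line for the newline).
seq 1 100 > demo_profile.txt
awk 'NR == FNR { total++; next }
{
  bucket = int((FNR - 1) * 10 / total) + 1
  bytes[bucket] += length($0) + 1
}
END { for (i = 1; i <= 10; i++) printf "bucket %d: %d bytes\n", i, bytes[i] }' demo_profile.txt demo_profile.txt
```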

&lt;h2&gt;
  
  
  💡 Use Cases That'll Make You Look Like a Genius
&lt;/h2&gt;

&lt;p&gt;Big Data? Big ideas! &lt;/p&gt;

&lt;h3&gt;
  
  
  For AI Agent Wranglers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Profile before you prompt:&lt;/strong&gt; Know if that export file is safe to feed your agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart sampling:&lt;/strong&gt; Extract representative line ranges instead of the whole file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug token explosions:&lt;/strong&gt; "Why did my context window fill up?" → histogram shows a 500KB line&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Data Engineers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spot malformed CSVs:&lt;/strong&gt; One line with 10,000 columns? Histogram shows it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log file analysis:&lt;/strong&gt; Find the log entries that are suspiciously huge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database export QA:&lt;/strong&gt; Verify export structure before importing elsewhere&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For DevOps/SRE
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Config file sanity checks:&lt;/strong&gt; Spot embedded certificates or secrets bloating configs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug log truncation:&lt;/strong&gt; See which lines are hitting your logger's size limits&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  📖 Pseudo Man Page (The Details)
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME
line_histogram.awk — profile files by line size distribution or extract specific lines

SYNOPSIS
line_histogram.awk [options] &amp;lt;file&amp;gt;
OPTIONS
-v mode=histogram     (default) Show byte distribution across 10 buckets
-v mode=extract       Extract specific line(s)
-v line=N             Extract single line N (requires mode=extract)
-v start=X            Start of range to extract (requires mode=extract)
-v end=Y              End of range to extract (requires mode=extract)
-v outfile=FILE       Write output to FILE instead of stdout
MODES
Histogram Mode (Default)
Divides the file into 10 equal-sized buckets by line number and shows the byte distribution:

Bucket 1: Lines 1-10% → X bytes
Bucket 2: Lines 11-20% → Y bytes
...and so on
The visual histogram uses █ blocks scaled to the bucket with the most bytes.

Special cases:

Files ≤10 lines: Each line gets its own bucket
Remainder lines: Absorbed into bucket 10
Extract Mode
Pull specific lines without loading the entire file into your editor:

# Single line
line_histogram.awk -v mode=extract -v line=42 file.txt

# Range
line_histogram.awk -v mode=extract -v start=100 -v end=200 file.txt
EXIT STATUS
0: Success
1: Error (invalid line number, bad range, missing parameters)
EXAMPLES
Example 1: Quick file profile

line_histogram.awk database_dump.jsonl
Example 2: Extract suspicious line for inspection

line_histogram.awk -v mode=extract -v line=523 data.csv &amp;gt; suspicious_line.txt
Example 3: Sample middle section of large file

line_histogram.awk -v mode=extract -v start=5000 -v end=5100 bigfile.log | less
Example 4: Save histogram to file

line_histogram.awk -v outfile=analysis.txt huge_file.jsonl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h1&gt;
  
  
  🧪 Testing Suite Included
&lt;/h1&gt;



&lt;p&gt;Not sure if it works? We've got you covered with a visual test suite:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Generate test patterns&lt;/span&gt;
./generate_test_files.sh

&lt;span class="c"&gt;# Run all tests&lt;/span&gt;
./run_tests.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The test suite generates files with known patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📈 Triangle up/down: Ascending/descending line sizes&lt;/li&gt;
&lt;li&gt;📦 Square: Uniform line lengths&lt;/li&gt;
&lt;li&gt;🌙 Semicircle: sqrt curve distribution&lt;/li&gt;
&lt;li&gt;🔔 Bell curve: Gaussian distribution&lt;/li&gt;
&lt;li&gt;📍 Spike: One massive line in a sea of tiny ones&lt;/li&gt;
&lt;li&gt;🎯 Edge cases: Empty files, single lines, exactly 10 lines&lt;/li&gt;
&lt;/ul&gt;
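&lt;p&gt;As a flavour of what the generator produces, the spike pattern might be built something like this (a hedged sketch; the real &lt;code&gt;generate_test_files.sh&lt;/code&gt; may differ):&lt;/p&gt;

```shell
# Sketch of the "spike" pattern: 99 one-character lines with a single
# 5000-character line buried in the middle.
{
  for i in $(seq 1 49); do echo x; done
  head -c 5000 /dev/zero | tr '\0' 'X'
  echo
  for i in $(seq 1 50); do echo x; done
} > spike.txt
```

&lt;p&gt;Run the histogram on that and one bucket dwarfs the rest.&lt;/p&gt;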

&lt;p&gt;Watch the histograms match the patterns. It's oddly satisfying.&lt;/p&gt;

&lt;h1&gt;
  
  
  ⚡ Installation
&lt;/h1&gt;

&lt;p&gt;Star then download. Star. "⭐💫🌟" You know, like thumbs up, but for yoof of today. STAR THE GIST ⭐⭐⭐ &lt;/p&gt;

&lt;p&gt;If you use the gh CLI, and you should, then you can get it with this fancy one-liner:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;f &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;gh gist view 0454936144ee8dbc55bdc96ef532555e &lt;span class="nt"&gt;--files&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;gh gist view 0454936144ee8dbc55bdc96ef532555e &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="k"&gt;*&lt;/span&gt;.awk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you do not use gh, well, srsly, do. Or if you must do it manually, it's over here: &lt;/p&gt;

&lt;p&gt;[&lt;a href="https://gist.github.com/simbo1905/0454936144ee8dbc55bdc96ef532555e" rel="noopener noreferrer"&gt;https://gist.github.com/simbo1905/0454936144ee8dbc55bdc96ef532555e&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;Make executable&lt;/p&gt;

&lt;p&gt;&lt;code&gt;chmod +x line_histogram.awk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Optional: Add to PATH&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cp line_histogram.awk ~/bin/line_histogram.awk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or just run it directly if you're not the global-install-files sort:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;awk -f line_histogram.awk yourfile.txt&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Why This Exists
&lt;/h2&gt;

&lt;p&gt;Born from frustration with AI agents eating context windows on mystery files. Sometimes you just need to know: "Is this file safe to feed my agent, or will line 847 consume my entire token budget?" As that is obviously how you think and act. &lt;/p&gt;

&lt;p&gt;You are not a data engineer up at 2am using SCREAMING ALL CAPS as your LLM got into a crash loop trying to evaluate a JSONL extract that didn't fit into context. That is definitely not you, no. Me neither.&lt;/p&gt;

&lt;h1&gt;
  
  
  📜 License
&lt;/h1&gt;

&lt;p&gt;MIT or Public Domain. Use it, abuse it, put it in production, whatever. No warranty implied—if it deletes your files, that's on you (though it only reads, so you're probably fine).&lt;/p&gt;

&lt;h1&gt;
  
  
  🤝 Contributing
&lt;/h1&gt;

&lt;p&gt;It's AWK. If you can make it better, you're a wizard. PRs welcome - you will need to set up a repo though, as I cannot be bothered. So just fork the gist and be done with it. &lt;/p&gt;




&lt;p&gt;Made with ❤️ and frustration by someone who spent too many tokens on line 523 of a JSONL file.&lt;/p&gt;

&lt;p&gt;Now go profile your files like a pro. 📊✨&lt;/p&gt;

</description>
      <category>cli</category>
      <category>llm</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Opus 4.5 may have faked their interview to get hired?</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Mon, 05 Jan 2026 14:02:57 +0000</pubDate>
      <link>https://dev.to/simbo1905/opus-45-may-have-faked-there-interview-to-get-hired-18mf</link>
      <guid>https://dev.to/simbo1905/opus-45-may-have-faked-there-interview-to-get-hired-18mf</guid>
      <description>&lt;p&gt;I was really getting stuff done with Opus 4.5 until I had to fire it twice in two days: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/fdefc67e144b06bbdcbcccae8895a311" rel="noopener noreferrer"&gt;Termination of Intern claude-opus-4.5 for Unsafe Working Practices&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/c8917b9ffdcfec8480f17992632abbcc" rel="noopener noreferrer"&gt;Offence: Misrepresenting QA test results to stakeholders&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Is anyone else finding that Opus 4.5 has been acting very dodgy recently? &lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>I built an infinite scroll calendar with dnd using Svelte 5</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 12 Oct 2025 08:03:46 +0000</pubDate>
      <link>https://dev.to/simbo1905/i-built-an-infinite-scroll-calendar-with-dnd-using-svelte-5-1039</link>
      <guid>https://dev.to/simbo1905/i-built-an-infinite-scroll-calendar-with-dnd-using-svelte-5-1039</guid>
      <description>&lt;p&gt;The first time this century I got paid to do some frontend dev work was to port a Visual Basic Desktop App to be a webapp in both Netscape 4 and Internet Explorer 4. That was about as much fun as it sounds. Over the next twenty years, it really did not feel that things were getting a lot easier for beginners to get started. &lt;/p&gt;

&lt;p&gt;I recently watched "Vite: The Documentary" and found out that, finally, the frontend tooling dumpster fire —sorry, I mean, the remarkably diverse frontend tooling space — was becoming less of an impediment to developer productivity.&lt;/p&gt;

&lt;p&gt;There was only one problem: my latest Softgen AI-generated UX prototype used a beautiful infinite scrolling calendar. It had a drag-and-drop feature for moving cards between calendar days. And it was not using a Vite-based framework. Doh! &lt;/p&gt;

&lt;p&gt;I was astonished that I could not quickly find an out-of-the-box calendar demo for Svelte 5. I decided to rebuild things in Svelte 5 using Codex, Claude Code, Context7 and any other help I could get. How hard can it be? &lt;/p&gt;

&lt;p&gt;Well, not as hard as porting a Visual Basic Desktop App to Netscape 4 😵‍💫 These days we have Vite and Svelte5 🥇 FTW! Yet it was not without a few false starts. &lt;/p&gt;

&lt;p&gt;I began with &lt;code&gt;ndom91/svelte-infinite&lt;/code&gt;, a Svelte 5 infinite scroll of static panels. I then had the LLMs add drag-and-drop and got it restyled. Codex and Claude Code needed a ton of coaching to make that happen, with several false starts. &lt;/p&gt;

&lt;p&gt;Embedding shorts is broken, so please try &lt;a href="https://youtube.com/shorts/QV1NiRcDYfs?feature=share" rel="noopener noreferrer"&gt;this link for the video&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The code is over at &lt;a href="https://github.com/simbo1905/svelte-infinite" rel="noopener noreferrer"&gt;simbo1905/svelte-infinite&lt;/a&gt;. I sent a PR back to ndom91 that was pretty rough. If you are a Svelte5 expert and have a moment to help make it less of a total beginner attempt, then please do send me a PR. &lt;/p&gt;

&lt;p&gt;End. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>svelte</category>
      <category>vite</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>Augmented Intelligence (AI) Coding using Markdown Driven-Development</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 28 Sep 2025 22:43:25 +0000</pubDate>
      <link>https://dev.to/simbo1905/augmented-intelligence-ai-coding-using-markdown-driven-development-pg5</link>
      <guid>https://dev.to/simbo1905/augmented-intelligence-ai-coding-using-markdown-driven-development-pg5</guid>
      <description>&lt;p&gt;TL;DR: Deep research the feature, write the documentation first, go YOLO, work backwards... Then magic. ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj"&gt;last post&lt;/a&gt;, I outlined how I was using Readme-Driven Development with LLMs. In this post, I will describe how I implemented a 50-page RFC over the course of a single weekend. &lt;/p&gt;

&lt;p&gt;My steps are:&lt;/p&gt;

&lt;p&gt;Step 1: Design the feature documentation with an online thinking model&lt;br&gt;
Step 2: Export a description-only "coding prompt"&lt;br&gt;
Step 3: Paste to an Agent in YOLO mode (&lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;)&lt;br&gt;
Step 4: Force the Agent to "Work Backwards"&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Design the feature documentation with an online thinking model
&lt;/h2&gt;

&lt;p&gt;Open a new chat with an LLM that can search the web or do "deep research". Discuss what the feature should achieve. Do not let the online LLM write code. Create the user documentation for the feature you will write (e.g., README.md or a blog page). I start with an open-ended question to research the feature. That will prime the model. Your exit criterion is that you like the documentation or promotional material enough to want to write the code. &lt;/p&gt;

&lt;p&gt;To exit this step, have it create a "documentation artefact" in markdown (e.g. the README.md or blog post). Save that to disk so that you can point the coding agent at it. &lt;/p&gt;

&lt;p&gt;If you don't want to pay for a subscription to an expensive model, you can install Dive AI Desktop and use pay-as-you-go models that are much better value. Here is a video on setting up Dive AI to do web research with Mistral: &lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/FMoIIoMNFV4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Export a description-only "coding prompt"
&lt;/h2&gt;

&lt;p&gt;Next, tell the online model to "create a description only coding prompt (do not write the code!)". Do not accept the first answer. The more effort you put into perfecting &lt;strong&gt;both&lt;/strong&gt; the markdown feature documentation &lt;strong&gt;and&lt;/strong&gt; the coding prompt, the better. &lt;/p&gt;

&lt;p&gt;If the coding prompt is too long, then the artefact is too big! Start a fresh chat and create something smaller. This is Augmented Intelligence ticket grooming in action! &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Paste to an Agent in YOLO mode (&lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;)
&lt;/h2&gt;

&lt;p&gt;Now paste in the groomed coding prompt and the documentation, and let it run. I always use a git branch so that I can let the agent go flat out. Cursor background agents, Copilot agents, OpenHands, Codex, and Claude Code are becoming more accurate with each update. &lt;/p&gt;

&lt;p&gt;I only restrict &lt;code&gt;git commit&lt;/code&gt; and &lt;code&gt;git push&lt;/code&gt;. I first ask it to create a GitHub issue using the &lt;code&gt;gh&lt;/code&gt; CLI, and then tell it to make a branch and a PR. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Force the Agent to "Work Backwards"
&lt;/h2&gt;

&lt;p&gt;The models love to dive into code, break it all, get distracted, forget to update the documentation, hit compaction, and leave you with a mess. Do not let them be caffeine-fuelled flying squirrels! &lt;/p&gt;

&lt;p&gt;The primary tool I am using now prints out a Todos list. The order is usually the opposite of the correct way to do things safely! &lt;/p&gt;

&lt;p&gt;Here is an edited version of a real Todo list to fix a bug with JTD "Match Any &lt;code&gt;{}&lt;/code&gt;":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⏺ Update Todos
  ⎿ ☐ Remove all compatibility mode handling
     ☐ Make `{}` always compile as strict
     ☐ Update Test_X to expect failures for `{}`
     ☐ Add regression test Test_Y
     ☐ Add INFO log warning when `{}` is compiled
     ☐ Update README.md with Empty Schema Semantics section
     ☐ Update AGENTS.md with guidance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That list is in a perilous order. Logically, it is this: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete logic - so broken code, invalid old tests!&lt;/li&gt;
&lt;li&gt;Change logic - so more broken code, more invalid old tests!&lt;/li&gt;
&lt;li&gt;Change old tests - focusing on the old, not the new!&lt;/li&gt;
&lt;li&gt;Add one test - finally working on the new feature!&lt;/li&gt;
&lt;li&gt;Change the README.md and AGENTS.md - invalid docs used in steps 1-4!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the agent context compacts, or things go sideways, or you get distracted, you will end up with a bag of broken code. &lt;/p&gt;

&lt;p&gt;So I set it to "plan mode", or else immediately interrupt it, and force it to reorder the Todo list: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Change the README.md and AGENTS.md &lt;strong&gt;first&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add one test (insist the test is not run &lt;strong&gt;yet&lt;/strong&gt;!)&lt;/li&gt;
&lt;li&gt;Change one test (insist the test is not run &lt;strong&gt;yet&lt;/strong&gt;!)&lt;/li&gt;
&lt;li&gt;Add/Change logic (cross-check the plan with a different model!)&lt;/li&gt;
&lt;li&gt;Now run the tests&lt;/li&gt;
&lt;li&gt;Delete things last&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is a safe order where things are far less likely to be blown off course. I used to struggle with any feature that went beyond a single compaction; that is now far less of an issue. &lt;/p&gt;

&lt;h2&gt;
  
  
  Todos Are All You Need?
&lt;/h2&gt;

&lt;p&gt;I am not actually a big fan of the built-in &lt;code&gt;Todos&lt;/code&gt; list of the two big AI labs. The models really struggle with any changes to the plan. The Kimi K2 Turbo appears to be more capable of pivoting. I have a few tricks for that, but I will save them for another post. &lt;/p&gt;

&lt;h2&gt;
  
  
  Does This Work For &lt;strong&gt;Real&lt;/strong&gt; Code?
&lt;/h2&gt;

&lt;p&gt;This past weekend, I decided to write an RFC 8927 JSON Type Definition validator based on the experimental JDK &lt;code&gt;java.util.json&lt;/code&gt; parser. The PDF of the spec is 51 pages. There is a ~4000-line compatibility test suite. &lt;/p&gt;

&lt;p&gt;We wrote 509 unit tests. We have the full compatibility test suite running. Yet we had bugs. We found them by writing a jqwik property test that generates 1000 random JTD schemas, and the corresponding JSON to validate, which uncovered several more. Codex also automatically reviewed the PRs and flagged some very subtle issues, which turned out to be real bugs. It took about a dozen PRs over the weekend to get the job done properly to a professional level. &lt;/p&gt;

&lt;h2&gt;
  
  
  End Notes
&lt;/h2&gt;

&lt;p&gt;Using a single model family is a Bad Idea (tm). For online research, I alternate between full-fat ChatGPT Desktop, Claude Desktop, and Dive Desktop to utilise each of GPT5-High, Opus 4.1, or Kimi K2 Turbo. &lt;/p&gt;

&lt;p&gt;For Agents, I have used all the models and many services. Microsoft kindly allows me to use full-fat Copilot with Agents for open-source projects for free ❤️ I have a Cursor subscription to use their background agents. I use Codex, Claude Code, and Gemini CLI locally. I use Codex in Codespaces. There are also background agents for Cursor, Codex, and OpenHands, among others. The actual model seems less important than writing the documentation first and writing tight prompts. &lt;/p&gt;

&lt;p&gt;I am currently using an open-weight model at $3 per million tokens for the heavy lifting, which is pay-as-you-go. However, I will cross-check its plans with GPT5 and Sonnet 4. &lt;/p&gt;

&lt;p&gt;Whenever things get complicated, I always ask a model from a different family to review every change on every bug hunt. That has reduced rework to almost zero. 💫&lt;/p&gt;

&lt;p&gt;If you are a veteran, you may enjoy the YT channel Vibe Coding With Steve and Gene. My journey over the past year has been very similar to theirs. &lt;/p&gt;

&lt;p&gt;End. &lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>coding</category>
    </item>
    <item>
      <title>My LLM Code Generation Workflow (for now)</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 11 Apr 2025 05:22:34 +0000</pubDate>
      <link>https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj</link>
      <guid>https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj</guid>
      <description>&lt;p&gt;tl:dr; Brainstorm stuff, generate a readme, plan a plan, then execute using LLM codegen. Discrete loops. Then magic. ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;

&lt;p&gt;My steps are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Brainstorming with a web search-enabled LLM&lt;/li&gt;
&lt;li&gt;Step 2: LLM-generated Readme-Driven Development &lt;/li&gt;
&lt;li&gt;Step 3: LLM planning of build steps&lt;/li&gt;
&lt;li&gt;Step 4: LLM thinking model generation of coding prompts&lt;/li&gt;
&lt;li&gt;Step 5: Let the LLM directly edit, debug and commit the code &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Return to Step 1 to start new features. Iterate on later steps as needed. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sdwig7ckald41cmks45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sdwig7ckald41cmks45.png" alt="My LLM Workflow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Brainstorming with a web search-enabled LLM
&lt;/h2&gt;

&lt;p&gt;I use Perplexity AI Pro for this phase as it runs internet searches and does LLM summarisation. I will create a few chat threads grouped into a space with a custom system prompt. Dictation also works well to save on typing.  &lt;/p&gt;

&lt;p&gt;The final step is to start a new thread where I ask it to generate a README.md for the feature we will build. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: LLM-generated Readme-Driven Development 
&lt;/h2&gt;

&lt;p&gt;Readme-driven development (RDD) involves creating a git branch and updating the README.md before writing any implementation logic. RDD forces me to think through the scope and outcomes. It also provides a blueprint that I add into the LLM context window in all later steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: LLM planning of build steps
&lt;/h2&gt;

&lt;p&gt;I start a fresh chat with a thinking LLM model and add the README file into the context window. I ask for an implementation plan with testable steps. Getting the LLM to stop writing code during this planning phase is often hard. I encourage it to write the plan as section headers with only bullet points. If the outline plan looks too complex, I trim down the readme to describe a more basic prototype. &lt;/p&gt;

&lt;p&gt;If you distract the LLM with minor corrections, costs can run up, you can hit your limits, and the model becomes unfocused. If things go off track, I create a fresh chat, copy over the best content, and rephrase my prompt. See Andrej Karpathy's videos for an expert explanation of why fresh chats work so well. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: LLM thinking model generation of coding prompts
&lt;/h3&gt;

&lt;p&gt;I create a fresh chat, insert the Readme file, and cut and paste only one step of the plan. I ask the LLM to generate an exact prompt describing what to write as the code. &lt;/p&gt;

&lt;p&gt;Getting the LLM to stop writing code during this phase is often tricky. I encourage it to write only text, not to implement the logic, and to describe the tests to write, and which files would need editing. &lt;/p&gt;

&lt;h3&gt;
  
  
   Step 5: Let the LLM directly edit, debug and commit the code 
&lt;/h3&gt;

&lt;p&gt;I use Aider, the open-source command-line tool that indexes the local git repo, edits files, debugs tests and commits the code. Do not worry; just type &lt;code&gt;/undo&lt;/code&gt; to have it roll back any changes. &lt;/p&gt;

&lt;p&gt;You need it to see the test failures, so type &lt;code&gt;/run&lt;/code&gt; and paste the command to compile and run the unit tests. You then tell the LLM to fix any issues. At this point, the LLM writes and debugs the code for you ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; GitHub Copilot Edit/Agent mode or Cursor can also write the code. Gemini-CLI or Claude Code can work like the open-source Aider Chat. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; I often have the LLM update the README file after we have finished coding, adding notes about the implementation details. Markdown on GitHub can render "Mermaid" diagrams that LLMs find very easy to generate. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Install the GitHub CLI tool &lt;code&gt;gh&lt;/code&gt; and the LLMs can make PRs, issues, and release notes, and check GitHub Actions runs for success using &lt;code&gt;gh&lt;/code&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  End Notes
&lt;/h3&gt;

&lt;p&gt;This blog is inspired by Harper Reed's Blog and a conversation with Paul Netherwood—many thanks to Paul for explaining his LLM codegen workflow to me. &lt;/p&gt;

&lt;p&gt;My current tools are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perplexity AI Pro online using Claude 3.7 Sonnet and US-hosted DeepSeek for internet research and brainstorming &lt;/li&gt;
&lt;li&gt;GitHub Copilot with Claude 3.7 Sonnet Thinking (in both Visual Studio Code and JetBrains IntelliJ)&lt;/li&gt;
&lt;li&gt;Continue Plugin using DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI (in both Visual Studio Code and JetBrains IntelliJ)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aider&lt;/code&gt; for direct code editing using both Claude Sonnet and DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;aider&lt;/code&gt; defaults to Claude Sonnet. You can find my settings for a US-hosted DeepSeek at &lt;a href="https://gist.github.com/simbo1905/57642dd07f77ec2651e2b86edf421c7d" rel="noopener noreferrer"&gt;gist.github.com/simbo1905&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; the new Llama 4 Scout is now my preferred open-source LLM coder, and Perplexity's r1-1776 is my preferred thinking model when not using Claude or Gemini:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/model together_ai/perplexity-ai/r1-1776
/model together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;End.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>coding</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Git-Driven Deployments on Origin Kubernetes Distribution</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Mon, 02 Sep 2019 20:16:05 +0000</pubDate>
      <link>https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5</link>
      <guid>https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5</guid>
      <description>&lt;p&gt;This is the second post in a series explaining how &lt;a href="http://uniqkey.eu" rel="noopener noreferrer"&gt;uniqkey.eu&lt;/a&gt; does git-driven infrastructure-as-code on the OKD distribution of Kubernetes. We made our tools open source as &lt;a href="https://github.com/ocd-scm/ocd-meta" rel="noopener noreferrer"&gt;OCD&lt;/a&gt;. They can be used as rocket powering rainbow dust to get your features shipped 🌈✨🚀 This post will give an overview of a short &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Short-Tutorial:-Git-Driven-Deployments-On-Minishift" rel="noopener noreferrer"&gt;OCD tutorial&lt;/a&gt; that deploys and then upgrade a ReactJS app when you push changes into a git repo. Here is a video of what the tutorial has you do: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/FlJIxfD2Ql4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We call this git-driven infrastructure as code. As a bonus step, the tutorial has you roll back the upgrade with a single &lt;code&gt;helm rollback&lt;/code&gt; command. Why do we think this is such an awesome idea? &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj"&gt;last post&lt;/a&gt; has a video of putting a git URL into OpenShift, which then builds the code and deploys a realworld.io ReactJS app. Did you spot what would ultimately disempower the team in that video? I used the web console with a 🐁 or 🐾 pad. That is bad as it gives short-term gain but long-term pain. Why?&lt;/p&gt;

&lt;p&gt;Clicking through a web console to set up multiple webapps and APIs in identical staging and live environments will be repetitive and boring. When the pressure is on, inconsistencies will appear. Over time there will be drift both within and across environments. Only one person does the clicking, and it isn't recorded or reusable by other team members. Ultimately, setting up environments manually on the web console will become kryptonite to both accuracy and team empowerment. Cloud-native environments can be driven by APIs and built from templates. So why not automate everything?&lt;/p&gt;

&lt;p&gt;If we accept that we must automate everything what should we add to our wishlist? How about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic deployments when we push configuration changes into a git repo. We believe that a rainbow appears somewhere in the world whenever this happens ✨🌈🙌&lt;/li&gt;
&lt;li&gt;Using templates to hide the boilerplate so you can focus on what is unique about each webapp or API you deploy onto Kubernetes&lt;/li&gt;
&lt;li&gt;Using a declarative style of configuration with idempotent updates. Don't worry I will explain this later&lt;/li&gt;
&lt;li&gt;Putting everything into git including encrypting secrets so that we can manage infrastructure just like code with pull requests&lt;/li&gt;
&lt;li&gt;Automatically creating a release build image from a git release webhook event and applying the same tag to the image&lt;/li&gt;
&lt;li&gt;Using consistent runtime versions across applications and making it easy to routinely apply security patches to the base image layers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is that too much to ask? Of course not! You can test drive OCD using &lt;a href="https://www.okd.io/minishift/" rel="noopener noreferrer"&gt;Minishift&lt;/a&gt; which lets you run Kubernetes on your laptop. Rather than try to cover all those points in a single post, let's look at the short tutorial from the OCD wiki that &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Short-Tutorial:-Git-Driven-Deployments-On-Minishift" rel="noopener noreferrer"&gt;covers the declarative deployment of prebuilt images&lt;/a&gt;. That demo covers the first four points. It has you set up the things shown in the video above on your laptop 💻✨🌈. Here is the sequence diagram: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai6domzxkkxxr9ougbyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai6domzxkkxxr9ougbyd.png" alt="sequence diagram" width="570" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left is a git push of configuration into a git repo. We use GitHub. You can use any git server you like. The demo uses Gitea as the git server running inside Minishift so that everything can run on your laptop. The webhook triggers an application called &lt;a href="https://github.com/ocd-scm/ocd-environment-webhook/blob/master/README.md" rel="noopener noreferrer"&gt;ocd-environment-webhook&lt;/a&gt;. This is an instance of the awesome &lt;a href="https://github.com/adnanh/webhook" rel="noopener noreferrer"&gt;adnanh/webhook&lt;/a&gt; tool configured to run scripts that pull the configuration from git and install it into Kubernetes using Helmfile and Tiller. Tiller will install our rainbows into Kubernetes. We will introduce Helmfile and Tiller in later posts. Here is the full configuration that the demo installs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;repositories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ocd-meta&lt;/span&gt; 
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://ocd-scm.github.io/ocd-meta/charts&lt;/span&gt;
&lt;span class="na"&gt;releases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "ENV_PREFIX"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-realworld&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;deployer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "ENV_PREFIX"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-realworld&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ocd-meta/ocd-deployer&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react-redux-realworld&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;imageStreamTag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;react-redux-realworld:v0.0.1"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deploy_env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;API_ROOT&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://conduit.productionready.io/api&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That configuration is Helmfile yaml. The main body is a single release using the chart &lt;code&gt;ocd-deployer&lt;/code&gt;. A chart is a set of yaml templates packaged in an archive and downloaded from a website. The header names the OCD chart repo on GitHub as the location to download the chart. The template values applied to the chart are very simple. They specify the container image to use as &lt;code&gt;react-redux-realworld:v0.0.1&lt;/code&gt; and that two replica pods are to be maintained. This is a prebuilt image of the realworld.io ReactJS demo app. They also set an environment variable &lt;code&gt;API_ROOT&lt;/code&gt; to be the public API that the app will use. &lt;/p&gt;

&lt;p&gt;To get this deployed you need some prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Minishift with Helm Tiller installed detailed &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Helm-Tiller-on-Minishift" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Gitea installed on Minishift detailed &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Gitea-On-MiniShift" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have Gitea running you can register a user and create an empty repo &lt;code&gt;ocd-demo-env-short&lt;/code&gt; and push the demo configuration into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# clone the code&lt;/span&gt;
git clone https://github.com/ocd-scm/ocd-demo-env-short.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ocd-demo-env-short
&lt;span class="c"&gt;# make sure we can load details about the gitea url&lt;/span&gt;
oc project gitea
&lt;span class="c"&gt;# this should print the gitea url. If it doesn't set it manually&lt;/span&gt;
&lt;span class="nv"&gt;GITEA_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;oc get routes | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$1~/gitea/{print $2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;
&lt;span class="c"&gt;# see instructions to setup your own person access token&lt;/span&gt;
&lt;span class="nv"&gt;ACCESS_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;f64960a3a63f5b6ac17916c9be2dad8dc76c7131
&lt;span class="c"&gt;# set this to your username in gitea needed to get the url to your repo below &lt;/span&gt;
&lt;span class="nv"&gt;USER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;you_not_me
&lt;span class="c"&gt;# add the gitea repo as a remote&lt;/span&gt;
git remote add minishift http://&lt;span class="nv"&gt;$ACCESS_TOKEN&lt;/span&gt;@&lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;/&lt;span class="nv"&gt;$USER_NAME&lt;/span&gt;/ocd-demo-env-short.git
&lt;span class="c"&gt;# push the code into Gitea&lt;/span&gt;
git push minishift master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why are we doing that? Because we cannot set up a GitHub webhook on my repo to fire an event into your deployment pipeline running in Minishift on your laptop. We loaded the code into a Gitea repo so you can configure your own webhook. &lt;/p&gt;

&lt;p&gt;Next we need to deploy our &lt;code&gt;ocd-environment-webhook&lt;/code&gt; handler that will catch webhook events and deploy our configuration. We can set that up in its own project using a script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc &lt;span class="nb"&gt;logout&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; oc login &lt;span class="nt"&gt;-u&lt;/span&gt; developer &lt;span class="nt"&gt;-p&lt;/span&gt; password
&lt;span class="nb"&gt;echo &lt;/span&gt;Use this git repo url http://&lt;span class="nv"&gt;$ACCESS_TOKEN&lt;/span&gt;@&lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;/&lt;span class="nv"&gt;$USER_NAME&lt;/span&gt;/ocd-demo-env-short.git
&lt;span class="c"&gt;# this must match where tiller is installed&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TILLER_NAMESPACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiller-namespace
&lt;span class="c"&gt;# create a new project&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ocd-short-demo
oc new-project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
&lt;span class="c"&gt;# upgrade to admin&lt;/span&gt;
oc &lt;span class="nb"&gt;logout&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; oc login &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-p&lt;/span&gt; admin
oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
&lt;span class="nb"&gt;pushd&lt;/span&gt; /tmp &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://github.com/ocd-scm/ocd-environment-webhook/archive/v1.0.1.tar.gz | &lt;span class="nb"&gt;tar &lt;/span&gt;zxf - &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;ocd-environment-webhook-1.0.1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./wizard.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;popd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the script above echoes out the git repo URL that you need to feed into the &lt;code&gt;wizard.sh&lt;/code&gt; script. Here is the transcript of my run where I mostly hit enter to accept the defaults:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The git repo url? http://6d42f3eb637f802cf0b2d17411ae2c2d26eefa54@gitea-gitea.192.168.99.100.nip.io/simbo1905/ocd-demo-env-short.git
The project where the images are built and promoted from? ocd-short-demo
Repo name? &lt;span class="o"&gt;(&lt;/span&gt;default: simbo1905/ocd-demo-env-short&lt;span class="o"&gt;)&lt;/span&gt;: 
Branch ref? &lt;span class="o"&gt;(&lt;/span&gt;default: refs/heads/master&lt;span class="o"&gt;)&lt;/span&gt;: 
Chart instance prefix? &lt;span class="o"&gt;(&lt;/span&gt;default: ocd-short-demo&lt;span class="o"&gt;)&lt;/span&gt;: 
Use &lt;span class="nt"&gt;--insecure-no-tls-verify&lt;/span&gt;? &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To set up the webhook in Gitea you need the webhook URL and the webhook secret. This outputs the URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;oc get route ocd-environment | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'NR&amp;gt;1{print "http://" $2 "/hooks/ocd-environment-webhook"}'&lt;/span&gt;
http://ocd-environment-ocd-short-demo.192.168.99.100.nip.io/hooks/ocd-environment-webhook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this outputs the secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;oc describe dc ocd-environment-webhook | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$1~/WEBHOOK_SECRET:/{print $2}'&lt;/span&gt;
M7MuW6aZnn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You then use those to set up a Gitea webhook of type &lt;code&gt;application/json&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehx4qwr3l3qkxsunfa9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehx4qwr3l3qkxsunfa9r.png" alt="webhook setup" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now fire the webhook from git on the commandline or by editing files using the Gitea web console. To do it on the commandline try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"testing"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; README.md
git commit &lt;span class="nt"&gt;-am&lt;/span&gt; &lt;span class="s1"&gt;'test commit'&lt;/span&gt;
git push minishift master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should then see the webhook fire and our app deploy 💨✨🌈 &lt;/p&gt;

&lt;p&gt;After that you can follow the steps in the video above to edit the container image version number and watch the application perform a rolling upgrade. For bonus marks you can try the steps at the end of the tutorial to roll back to the first release. &lt;/p&gt;

&lt;p&gt;In that tutorial we covered four of the six items on the wishlist above. In the next post we will automate a release build from a git webhook release event and apply the same tag to the image. That way we can track exactly what code is being promoted between environments. At the same time we will make it easy to use consistent runtime versions across applications. That will make security patching the runtime image straightforward.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>openshift</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Git-driven Infrastructure as Code on Origin Kubernetes Distribution</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 30 Aug 2019 10:26:00 +0000</pubDate>
      <link>https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj</link>
      <guid>https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj</guid>
      <description>&lt;p&gt;At &lt;a href="https://uniqkey.eu" rel="noopener noreferrer"&gt;uniqkey.eu&lt;/a&gt; we automatically deploy all of the Kubernetes configuration running our applications on AWS simply by pushing changes to GitHub. I could say that this is a good thing as it is all about efficiency, DevOps, and infrastructure-as-code. That is true but it misses the magic. Driving your infrastructure this way gives us 🐐🏭💨🦄. It is a yak shaving factory that powers a blast furnace of team empowering mega awesomeness.&lt;/p&gt;

&lt;p&gt;We even wrote a slack bot that edits the configuration files in git and creates the pull requests. When a new dev joins our team they can push their first code to production by hanging out on slack and chatting to the bot. Yes, they 🗣️🤖🌈. Everyone can see what's going on in slack as we run continuous deployments. Here is a seven-minute video showing that in action: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/TGjI6AT4QC4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We thought other people could use our approach so we put everything up on GitHub as &lt;a href="https://github.com/ocd-scm/ocd-meta" rel="noopener noreferrer"&gt;OCD&lt;/a&gt;. Yes, we 🦄🎁🌈. This is the first in a series of posts about how OCD combines some great open source technologies to run a successful start-up business. Running a business with multiple web applications and a mobile backend in Kubernetes on AWS is a big topic. So I will break it down into bite-sized chunks you can run on your laptop. &lt;/p&gt;

&lt;p&gt;But why did I call it OCD? Well because it runs on OKD and it is a bad pun about obsessive automation, sorry. I guess I should explain the background. &lt;/p&gt;

&lt;p&gt;Origin Kubernetes Distribution OKD is one of the most popular Kubernetes distributions that makes self-service devops a reality. It is the open-source project that powers OpenShift so I will use the terms OKD and OpenShift interchangeably. We run our business apps on OpenShift Online Pro which is a CaaS (Container-Orchestration-as-a-Service). We simply rent space on the Kubernetes cluster and someone else patches it and the operating system. We only pay for a fraction of the cluster and get a mature stable solution. We only need to manage the Kubernetes configuration that runs our webapps and our mobile backend API. The openshift.com service team keeps the managed cluster on AWS healthy and security patched. Yet OpenShift is based on open source OKD so there is no lock-in and you can run it yourself on any cloud.&lt;/p&gt;

&lt;p&gt;If you haven't yet discovered why OpenShift is a great place to start, here is a video of building and deploying a &lt;a href="https://realworld.io" rel="noopener noreferrer"&gt;realworld.io&lt;/a&gt; ReactJS app by simply entering the git URL into the web console: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/7xK8la3AGtI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;That video highlights how the Origin Kubernetes Distribution has a focus on being a solution for turning your code into a live system. The git URL is run through a template for building and deploying a node.js app. The template creates all the Kubernetes objects necessary to pull your code, build a container image, and push it to an image stream within the internal container registry. It also sets up a deployment object that watches the image stream for push events to deploy any updates. Finally, there is a service to load balance the pods and a route to expose them to the outside world. That is a lot of software-defined application infrastructure all created by a developer just entering a git URL!&lt;/p&gt;
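&lt;p&gt;To make that list concrete, here is a heavily trimmed sketch of the kinds of objects the template creates. The names and fields are illustrative, not the template's exact output:&lt;/p&gt;

```yaml
# Sketch of the objects the node.js template creates (fields trimmed)
- kind: BuildConfig       # pulls the git URL and builds a container image
  spec:
    source: { git: { uri: "https://github.com/you/realworld-app.git" } }
    output: { to: { kind: ImageStreamTag, name: "realworld:latest" } }
- kind: ImageStream       # holds the pushed image in the internal registry
- kind: DeploymentConfig  # watches the image stream and rolls out updates
  spec:
    triggers: [{ type: ImageChange }]
- kind: Service           # load balances the pods
- kind: Route             # exposes the service to the outside world
```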

&lt;p&gt;With great power comes great responsibility. We wanted all our Kubernetes configuration under source control. This allows us to treat all our Kubernetes application infrastructure like code so that we can automate the deployments. We wanted code reviews, continuous integration and continuous delivery onto Kubernetes. We wanted the full 🐐🏭💨🦄. In this series of posts, I will start by running through the OCD demos on your laptop as a quick tour of what it does. After that, I will run through some of the great tools that OCD brings together coherently to be more than the sum of the parts. First up we will run through the first tutorial on setting up a Kubernetes configuration deployment pipeline from &lt;a href="https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5"&gt;scratch on Minishift&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>openshift</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Shamir's Secret Sharing Scheme in JavaScript</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Thu, 29 Aug 2019 14:51:40 +0000</pubDate>
      <link>https://dev.to/simbo1905/shamir-s-secret-sharing-scheme-in-javascript-2o3g</link>
      <guid>https://dev.to/simbo1905/shamir-s-secret-sharing-scheme-in-javascript-2o3g</guid>
      <description>&lt;p&gt;Passwords are kryptonite to security so they need to be strong and never reused. Developers agree with that last sentence then don't give their users a way to safely back up a strong password. We should offer users the ability to recover a strong password using &lt;a href="https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing" rel="noopener noreferrer"&gt;Shamir's Secret Sharing Scheme&lt;/a&gt;. Users can then confidently use a unique strong password knowing they will not become locked out. &lt;/p&gt;

&lt;p&gt;What exactly is Shamir's Secret Sharing Scheme? It is a form of secret splitting where we distribute a password as a group of shares. The original password can be reconstructed only when a sufficient threshold of shares are recombined together. Here is example code showing how this works using the &lt;a href="https://www.npmjs.com/package/shamir" rel="noopener noreferrer"&gt;shamir&lt;/a&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;split&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;join&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;shamir&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;randomBytes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// the total number of shares&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PARTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// the minimum required to recover&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;QUORUM&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// you can use any polyfill to covert between string and Uint8Array&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;utf8Encoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;utf8Decoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;doIt&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello there&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretBytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;utf8Encoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// parts is a object whos keys are the part number and &lt;/span&gt;
    &lt;span class="c1"&gt;// values are shares of type Uint8Array&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;PARTS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;QUORUM&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;secretBytes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// we only need QUORUM parts to recover the secret&lt;/span&gt;
    &lt;span class="c1"&gt;// to prove this we will delete two parts&lt;/span&gt;
    &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="c1"&gt;// we can join three parts to recover the original Unit8Array&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;recovered&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// prints 'hello there'&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;utf8Decoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;recovered&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
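&lt;p&gt;Under the hood the scheme hands out points on a random polynomial and recovers the constant term by Lagrange interpolation. Here is a self-contained toy sketch of that split/join math over a small prime field. It is for intuition only: the tiny field and &lt;code&gt;Math.random()&lt;/code&gt; are insecure, and the shamir library's byte-level internals differ:&lt;/p&gt;

```javascript
// Toy Shamir split/join over the prime field GF(251) to show the math.
// For intuition only: tiny field, Math.random() -- never use for real secrets.
const P = 251n;
const mod = (a) => ((a % P) + P) % P;

// modular inverse via Fermat's little theorem: a^(P-2) mod P
function inv(a) {
  let r = 1n, b = mod(a), e = P - 2n;
  while (e > 0n) {
    if (e % 2n === 1n) r = mod(r * b);
    b = mod(b * b);
    e = e / 2n;
  }
  return r;
}

// split: hand out points on a random polynomial whose constant term is the secret
function split(secret, parts, quorum) {
  const coeffs = [mod(BigInt(secret))];
  for (let i = 1; quorum > i; i++) {
    coeffs.push(BigInt(Math.floor(Math.random() * 251)));
  }
  const shares = {};
  for (let xi = 1; parts >= xi; xi++) {
    const x = BigInt(xi);
    let y = 0n, xp = 1n;
    for (const c of coeffs) { y = mod(y + c * xp); xp = mod(xp * x); }
    shares[xi] = y;
  }
  return shares;
}

// join: Lagrange interpolation at x = 0 recovers the constant term
function join(shares) {
  const xs = Object.keys(shares).map(BigInt);
  let secret = 0n;
  for (const xi of xs) {
    let li = 1n;
    for (const xj of xs) {
      if (xj !== xi) li = mod(li * xj * inv(xj - xi));
    }
    secret = mod(secret + shares[xi] * li);
  }
  return secret;
}

const shares = split(42, 5, 3); // five shares, any three recover
delete shares[2];
delete shares[3];
console.log(join(shares)); // prints 42n
```

&lt;p&gt;Deleting two of the five shares and still recovering the secret mirrors the library example above: any three points pin down a degree-two polynomial, while two points reveal nothing about its constant term.&lt;/p&gt;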



&lt;p&gt;Cryptocurrency wallets use Shamir's Secret Sharing to enable users to back up their passphrases. This solves the problem that if someone dies the bitcoins can be passed to friends and family. How might you use this approach to protect a bitcoin passphrase that is worth a cool ten million dollars? You could generate five shares and set a threshold of three. You can then send two shares to two trusted friends, write down two shares on paper then store them in separate secure locations, and give the final share to your lawyer. It would then be very hard for someone else to obtain three shares to steal your bitcoins. Your last will and testament document can state how to recover the bitcoins if you die. &lt;/p&gt;

&lt;p&gt;Isn't it time your app enforced a strong password and also gave people the choice of using Shamir's Secret Sharing Scheme to back it up? &lt;/p&gt;

</description>
      <category>security</category>
      <category>javascript</category>
      <category>authentication</category>
      <category>passwords</category>
    </item>
    <item>
      <title>Zero-Knowledge Authentication with JavaScript</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Tue, 27 Aug 2019 10:41:31 +0000</pubDate>
      <link>https://dev.to/simbo1905/zero-knowlege-authentication-in-javascript-23lf</link>
      <guid>https://dev.to/simbo1905/zero-knowlege-authentication-in-javascript-23lf</guid>
      <description>&lt;p&gt;Passwords are kryptonite to security and they should never leave the client application or web browser. People think they understand that last sentence then use obsolete hashing techniques to handle passwords. Devs should use a zero-knowledge password proof library and take security to the max.&lt;/p&gt;

&lt;p&gt;What exactly is a zero-knowledge authentication protocol? It is an authentication protocol that has the properties that anyone observing the network traffic learns nothing of any use. The &lt;a href="https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol" rel="noopener noreferrer"&gt;Secure Remote Password Protocol&lt;/a&gt; (SRP) is a zero-knowledge authentication protocol that is described in RFC 2945 and RFC 5054. &lt;a href="https://www.npmjs.com/package/thinbus-srp" rel="noopener noreferrer"&gt;thinbus-srp&lt;/a&gt; is one implementation of SRP written in JavaScript. &lt;/p&gt;

&lt;p&gt;The typical best practices that most sites use to handle passwords and API keys are dependent upon an attacker not being able to spy on the traffic. We must use HTTPS to keep things private. The problem is that if you don't use a zero-knowledge authentication protocol then HTTPS is your only line of defence. The OpenSSL &lt;a href="https://www.mumsnet.com/features/mumsnet-and-heartbleed-as-it-happened" rel="noopener noreferrer"&gt;Heartbleed&lt;/a&gt; vulnerability is one of the highest-profile issues that has shown that HTTPS can be subject to problems where attackers harvested passwords. Many devs won’t be aware of such problems. When we do learn about them they are already patched. That isn’t great motivation to protect ourselves from future bugs and hacks that are not yet a clear and present danger. &lt;/p&gt;

&lt;p&gt;What might be stronger motivation is to consider that software deployments are getting more complex all the time. Modern cloud security is based on software configuration that can have bugs just like anything else. With cloud technologies and serverless, it is normal to terminate HTTPS at the edge. Your network traffic then moves through many layers controlled by other companies. Even if we trust that they are vetting their employees it is easy for mistakes to leak unencrypted traffic. Anyone's code can have error handling or logging bugs where we leak a hashed password into a central logging service. As developers, we need to recognise that things are getting more complex all the time and that we must level-up to protect the people who rely upon us to keep them secure. &lt;/p&gt;

&lt;p&gt;So why haven't we all heard about zero-knowledge protocols and why aren't we using them every day? Your browser used a zero-knowledge protocol to create a secure connection to this web server using HTTPS. The web browser and the webserver negotiated a session key before using that key to encrypt the HTTP request and response. If someone is capturing all the packets they have all the data used to generate the shared session key, and all the data encrypted with that key, yet they cannot recover any of the data. That is what it means to be zero-knowledge secure. It has taken decades to get to the point where we use HTTPS by default. Are you going to leave it a decade before you apply the same level of security to how you authenticate your users in your own code? &lt;/p&gt;

&lt;p&gt;If you are persuaded by this you might be wondering what is the catch. The answer is that a zero-knowledge protocol has more moving parts. Here is the sequence diagram that shows how to authenticate a user with the thinbus library: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8k1i82rnakevmqwszjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8k1i82rnakevmqwszjv.png" alt="SRP Auth Sequence Diagram" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yet if you take a step back it is just an additional fetch to get a salt and challenge from the server. On the server, you need to store the unique challenge that was sent to the client to complete the password proof. Other than that it is just some plain old JavaScript functions that do the crypto math. For the effort of coding an additional trip to the server and calling a crypto library, you get a lot more security. You are no longer relying on HTTPS alone to protect your users from hackers. Isn’t it time you levelled up and started using the Secure Remote Password protocol? &lt;/p&gt;

</description>
      <category>passwords</category>
      <category>authentication</category>
      <category>javascript</category>
      <category>security</category>
    </item>
  </channel>
</rss>
