<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Simon Massey</title>
    <description>The latest articles on DEV Community by Simon Massey (@simbo1905).</description>
    <link>https://dev.to/simbo1905</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F180344%2F1a9f483c-84d0-44d7-ac1c-c25b498e7d48.png</url>
      <title>DEV Community: Simon Massey</title>
      <link>https://dev.to/simbo1905</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/simbo1905"/>
    <language>en</language>
    <item>
      <title>GenAI Test File Summarisation In Chunks With Ollama Or Cloud</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 19 Apr 2026 22:32:46 +0000</pubDate>
      <link>https://dev.to/simbo1905/genai-test-file-summarisation-in-chunks-with-ollama-or-cloud-b52</link>
      <guid>https://dev.to/simbo1905/genai-test-file-summarisation-in-chunks-with-ollama-or-cloud-b52</guid>
      <description>&lt;p&gt;✨ Sometimes you just want to throw a large file at a model and ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Summarise this without losing the good bits.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Then reality appears.&lt;/p&gt;

&lt;p&gt;🫠 Local models often do &lt;strong&gt;not&lt;/strong&gt; have giant context windows.&lt;/p&gt;

&lt;p&gt;🌱 Smaller, cheaper, more eco-friendly cloud models also often do &lt;strong&gt;not&lt;/strong&gt; have giant context windows.&lt;/p&gt;

&lt;p&gt;So instead of pretending one huge file will fit cleanly, this little toolkit does the sensible thing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;✂️ split the file into overlapping chunks&lt;/li&gt;
&lt;li&gt;🤖 summarise each chunk with either Ollama or cloud models&lt;/li&gt;
&lt;li&gt;🧵 stitch the chunk summaries back together&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Simple. Reusable. No drama.&lt;/p&gt;

&lt;h2&gt;
  
  
  In Action 🥷
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06csvs0kd86dgdu3aoe6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06csvs0kd86dgdu3aoe6.gif" alt=" " width="760" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Code 🚀
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/053b48269b1e95800500b85190adf427" rel="noopener noreferrer"&gt;gist.github.com/simbo1905/053b482...&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Fetch it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="k"&gt;for &lt;/span&gt;f &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;gh gist view 053b48269b1e95800500b85190adf427 &lt;span class="nt"&gt;--files&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do &lt;/span&gt;gh gist view 053b48269b1e95800500b85190adf427 &lt;span class="nt"&gt;-f&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$f&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;done&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;chmod&lt;/span&gt; +x &lt;span class="k"&gt;*&lt;/span&gt;.py &lt;span class="k"&gt;*&lt;/span&gt;.awk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  What’s in here? 📦
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;chunk_text.awk&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;process_chunks_cloud.py&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;process_chunks_ollama.py&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why chunk at all? 🧠
&lt;/h2&gt;

&lt;p&gt;Because smaller models are not magic.&lt;/p&gt;

&lt;p&gt;If your source is too big, they either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;miss details&lt;/li&gt;
&lt;li&gt;flatten nuance&lt;/li&gt;
&lt;li&gt;hallucinate structure&lt;/li&gt;
&lt;li&gt;or just do a bad job&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So &lt;code&gt;chunk_text.awk&lt;/code&gt; creates &lt;strong&gt;overlapping&lt;/strong&gt; chunks.&lt;/p&gt;

&lt;p&gt;Default settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;size: &lt;code&gt;11000&lt;/code&gt; characters&lt;/li&gt;
&lt;li&gt;step: &lt;code&gt;10000&lt;/code&gt; characters&lt;/li&gt;
&lt;li&gt;overlap: &lt;code&gt;1000&lt;/code&gt; characters&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That overlap is there on purpose so ideas near a chunk boundary do not get chopped in half and quietly vanish into the void. ☠️&lt;/p&gt;
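&lt;p&gt;The arithmetic behind those defaults: chunk &lt;em&gt;n&lt;/em&gt; starts at &lt;code&gt;n * step&lt;/code&gt; and runs for &lt;code&gt;size&lt;/code&gt; characters, so consecutive chunks share &lt;code&gt;size - step = 1000&lt;/code&gt; characters. A quick sketch:&lt;/p&gt;

```shell
# Chunk n covers characters [n*step, n*step + size), so consecutive
# chunks overlap by size - step characters.
size=11000
step=10000
for n in 0 1 2; do
  start=$((n * step))
  end=$((start + size))
  echo "chunk0$n: chars $start to $end (overlap with next: $((size - step)))"
done
```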

&lt;h2&gt;
  
  
  Chunk a file ✂️
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;11000 &lt;span class="nt"&gt;-v&lt;/span&gt; &lt;span class="nv"&gt;step&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;10000 &lt;span class="nt"&gt;-f&lt;/span&gt; chunk_text.awk notes.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You’ll get files like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;notes00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;notes01.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;notes02.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Summarise with Ollama 🦙
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_ollama.py example_chunks transcript summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Default local model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;gemma4:26b&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It accepts either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;transcript00.log&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;style chunk files.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summarise with cloud models ☁️
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_cloud.py example_chunks transcript summary
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script checks for API keys in this order:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;shell &lt;code&gt;MISTRAL_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;shell &lt;code&gt;GROQ_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;repo-root &lt;code&gt;.env&lt;/code&gt; &lt;code&gt;MISTRAL_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;repo-root &lt;code&gt;.env&lt;/code&gt; &lt;code&gt;GROQ_API_KEY&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;
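&lt;p&gt;A minimal sketch of that lookup order (the function name and the &lt;code&gt;.env&lt;/code&gt; parsing here are illustrative, not the script's actual internals):&lt;/p&gt;

```shell
# Illustrative sketch of the key-resolution order: shell environment first
# (Mistral, then Groq), then a repo-root .env file.
resolve_key() {
  if [ -n "${MISTRAL_API_KEY:-}" ]; then
    echo "MISTRAL_API_KEY (env)"
  elif [ -n "${GROQ_API_KEY:-}" ]; then
    echo "GROQ_API_KEY (env)"
  elif grep -q '^MISTRAL_API_KEY=' .env 2>/dev/null; then
    echo "MISTRAL_API_KEY (.env)"
  elif grep -q '^GROQ_API_KEY=' .env 2>/dev/null; then
    echo "GROQ_API_KEY (.env)"
  else
    echo "none found"
  fi
}
```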

&lt;p&gt;It is trivial to swap in your own provider and API key, but the defaults were chosen deliberately: small, fast, open models. &lt;/p&gt;

&lt;h2&gt;
  
  
  Change the prompt 🛠️
&lt;/h2&gt;

&lt;p&gt;Yes, obviously, you can change the prompt. That is the whole point.&lt;/p&gt;

&lt;p&gt;Both summariser scripts support &lt;code&gt;-p/--prompt&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;uv run process_chunks_cloud.py example_chunks transcript summary &lt;span class="nt"&gt;-p&lt;/span&gt; &lt;span class="s2"&gt;"Summarise the argument, key evidence, and open questions."&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Transcript-style prompt shape used in testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;This is a transcript of an only video. The title is: ${title}. The transcript format is 'Speaker Name, timestamp, what they said'. Summarise the content in a terse, business-like, action-oriented way. Preserve substantive points, facts, figures, citations, and practical recommendations. Do not be chatty.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Outputs 🧾
&lt;/h2&gt;

&lt;p&gt;For each input chunk, the scripts write:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;summary_transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;thinking_summary_transcript00.md&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The &lt;code&gt;thinking_&lt;/code&gt; file keeps the raw model output. The clean summary file strips thinking blocks where possible.&lt;/p&gt;
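&lt;p&gt;If you ever need to re-derive a clean file from a raw one, a rough one-liner does the job, assuming the model wraps its reasoning in &lt;code&gt;&amp;lt;think&amp;gt;&lt;/code&gt; tags on their own lines (which varies by model):&lt;/p&gt;

```shell
# Rough sketch: drop <think>...</think> blocks from the raw output.
# Assumes the tags appear on their own lines; real model output may differ.
sed '/<think>/,/<\/think>/d' thinking_summary_transcript00.md > summary_transcript00.md
```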

&lt;h2&gt;
  
  
  Stitch the summaries back together 🧵
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cat &lt;/span&gt;summary_transcript&lt;span class="k"&gt;*&lt;/span&gt;.md &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; all_summary.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expect a little repetition where the overlapping chunks cover the same material; that redundancy is the price of making sure split-up ideas are not lost. YMMV. &lt;/p&gt;
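&lt;p&gt;If the duplication bothers you, the classic awk one-liner collapses exact repeat lines while preserving order (first occurrence wins):&lt;/p&gt;

```shell
# Collapse exact duplicate lines introduced by overlapping chunks.
cat summary_transcript*.md | awk '!seen[$0]++' > all_summary.md
```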

&lt;h2&gt;
  
  
  Want to add another provider later? 🔌
&lt;/h2&gt;

&lt;p&gt;Update &lt;code&gt;process_chunks_cloud.py&lt;/code&gt; in these places:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;add the key lookup and its priority order&lt;/li&gt;
&lt;li&gt;add the approved model names / aliases&lt;/li&gt;
&lt;li&gt;add the provider request function&lt;/li&gt;
&lt;li&gt;route the models to that provider in &lt;code&gt;call_model&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Happy chunking. Happy summarising. May your small models punch above their weight. 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>python</category>
    </item>
    <item>
      <title>📊 Line Histogram — The File Profiler You Didn't Know You Needed</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Thu, 15 Jan 2026 12:02:36 +0000</pubDate>
      <link>https://dev.to/simbo1905/line-histogram-the-file-profiler-you-didnt-know-you-needed-38oi</link>
      <guid>https://dev.to/simbo1905/line-histogram-the-file-profiler-you-didnt-know-you-needed-38oi</guid>
      <description>&lt;h2&gt;
  
  
  What If You Could See Your Data Before It Floods Your Context Window?
&lt;/h2&gt;

&lt;p&gt;You know that moment when you ask your agent to check the data and, before you can tell it not to eat the ocean, it enters Compaction.&lt;/p&gt;

&lt;p&gt;Yeah. We've all been there.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;🤯 Huge database dumps with unpredictable line sizes&lt;/li&gt;
&lt;li&gt;💸 Token budgets disappearing into massive single-line JSON blobs&lt;/li&gt;
&lt;li&gt;🔍 No quick way to see WHERE the chonky lines are hiding&lt;/li&gt;
&lt;li&gt;⚠️ Agents choking on files you thought were reasonable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Solution &lt;code&gt;line_histogram.awk&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;A blazingly fast AWK script that gives you X-ray vision into your files' byte distribution.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# chmod +x ./line_histogram.awk
./line_histogram.awk huge_export.jsonl
File: huge_export.jsonl
Total bytes: 2847392
Total lines: 1000

Bucket Distribution:

Line Range      | Bytes        | Distribution
─────────────────┼──────────────┼──────────────────────────────────────────
1-100           |         4890 | ██
101-200         |         5234 | ██
201-300         |         5832 | ██
301-400         |         6128 | ██
401-500         |       385927 | ████████████████████████████████████████
501-600         |         5892 | ██
601-700         |         5234 | ██
701-800         |         6891 | ██
801-900         |         5328 | ██
901-1000        |         4982 | ██
─────────────────┼──────────────┼──────────────────────────────────────────
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Boom. Line 450 is most of your file.&lt;/p&gt;
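&lt;p&gt;The core idea is small enough to sketch in two awk passes (a simplification, not the real script): count the lines first, then add each line's bytes to one of ten equal buckets.&lt;/p&gt;

```shell
# Simplified two-pass sketch of the bucketing (not the real script):
# pass 1 counts lines; pass 2 maps line FNR to bucket 1..10 and sums bytes.
awk 'NR == FNR { total++; next }
     { b = int((FNR - 1) * 10 / total) + 1; bytes[b] += length($0) + 1 }
     END { for (i = 1; i <= 10; i++) printf "bucket %d: %d bytes\n", i, bytes[i] }' \
  data.txt data.txt
```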

&lt;h3&gt;
  
  
  🚀 Features That Actually Matter
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Histogram Mode (Default)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;See the byte distribution across your file in 10 neat buckets. Spot the bloat instantly.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;./line_histogram.awk myfile.txt&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Surgical Line Extraction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Found a problem line? Extract it without loading the whole file into memory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Extract line 450 (the chonky one)
./line_histogram.awk -v mode=extract -v line=450 huge_export.jsonl

# Extract lines 100-200 for inspection
./line_histogram.awk -v mode=extract -v start=100 -v end=200 data.jsonl
# Yes, those -v bits look odd, but that is how awk takes variable assignments - who knew! (Hint: The AI)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Zero Dependencies&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your system has AWK (it does), you're good to go. No npm install, no pip install, no Docker containers. Just pure, unadulterated shell goodness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Stupid Fast&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Processes multi-GB files in seconds. AWK was built for this.&lt;/p&gt;

&lt;h2&gt;
  
  
  💡 Use Cases That'll Make You Look Like a Genius
&lt;/h2&gt;

&lt;p&gt;Big Data? Big ideas! &lt;/p&gt;

&lt;h3&gt;
  
  
  For AI Agent Wranglers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Profile before you prompt:&lt;/strong&gt; Know if that export file is safe to feed your agent&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smart sampling:&lt;/strong&gt; Extract representative line ranges instead of the whole file&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug token explosions:&lt;/strong&gt; "Why did my context window fill up?" → histogram shows a 500KB line&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For Data Engineers
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Spot malformed CSVs:&lt;/strong&gt; One line with 10,000 columns? Histogram shows it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Log file analysis:&lt;/strong&gt; Find the log entries that are suspiciously huge&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database export QA:&lt;/strong&gt; Verify export structure before importing elsewhere&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  For DevOps/SRE
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Config file sanity checks:&lt;/strong&gt; Spot embedded certificates or secrets bloating configs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debug log truncation:&lt;/strong&gt; See which lines are hitting your logger's size limits&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  📖 Pseudo Man Page (The Details)
&lt;/h1&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME
line_histogram.awk — profile files by line size distribution or extract specific lines

SYNOPSIS
./line_histogram.awk [options] &amp;lt;file&amp;gt;
OPTIONS
-v mode=histogram     (default) Show byte distribution across 10 buckets
-v mode=extract       Extract specific line(s)
-v line=N             Extract single line N (requires mode=extract)
-v start=X            Extract lines X through Y (requires mode=extract)
-v end=Y              
-v outfile=FILE       Write output to FILE instead of stdout
MODES
Histogram Mode (Default)
Divides the file into 10 equal-sized buckets by line number and shows the byte distribution:

Bucket 1: Lines 1-10% → X bytes
Bucket 2: Lines 11-20% → Y bytes
...and so on
The visual histogram uses █ blocks scaled to the bucket with the most bytes.

Special cases:

Files ≤10 lines: Each line gets its own bucket
Remainder lines: Absorbed into bucket 10
Extract Mode
Pull specific lines without loading the entire file into your editor:

# Single line
./line_histogram.awk -v mode=extract -v line=42 file.txt

# Range
./line_histogram.awk -v mode=extract -v start=100 -v end=200 file.txt
EXIT STATUS
0: Success
1: Error (invalid line number, bad range, missing parameters)
EXAMPLES
Example 1: Quick file profile

./line_histogram.awk database_dump.jsonl
Example 2: Extract suspicious line for inspection

./line_histogram.awk -v mode=extract -v line=523 data.csv &amp;gt; suspicious_line.txt
Example 3: Sample middle section of large file

./line_histogram.awk -v mode=extract -v start=5000 -v end=5100 bigfile.log | less
Example 4: Save histogram to file

./line_histogram.awk -v outfile=analysis.txt huge_file.jsonl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧪 Testing Suite Included
&lt;/h2&gt;

&lt;p&gt;Not sure if it works? We've got you covered with a visual test suite:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Generate test patterns
./generate_test_files.sh

# Run all tests
./run_tests.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The test suite generates files with known patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;📈 Triangle up/down: Ascending/descending line sizes&lt;/li&gt;
&lt;li&gt;📦 Square: Uniform line lengths&lt;/li&gt;
&lt;li&gt;🌙 Semicircle: sqrt curve distribution&lt;/li&gt;
&lt;li&gt;🔔 Bell curve: Gaussian distribution&lt;/li&gt;
&lt;li&gt;📍 Spike: One massive line in a sea of tiny ones&lt;/li&gt;
&lt;li&gt;🎯 Edge cases: Empty files, single lines, exactly 10 lines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Watch the histograms match the patterns. It's oddly satisfying.&lt;/p&gt;

&lt;h1&gt;
  
  
  ⚡ Installation
&lt;/h1&gt;

&lt;p&gt;Star then download. Star. "⭐💫🌟" You know, like thumbs up, but for yoof of today. STAR THE GIST ⭐⭐⭐ &lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/0454936144ee8dbc55bdc96ef532555e" rel="noopener noreferrer"&gt;https://gist.github.com/simbo1905/0454936144ee8dbc55bdc96ef532555e&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Make executable
&lt;/h1&gt;

&lt;p&gt;&lt;code&gt;chmod +x line_histogram.awk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Optional: Add to PATH&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cp line_histogram.awk ~/bin/line_histogram.awk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Or just run it directly:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;awk -f line_histogram.awk yourfile.txt&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🎯 Why This Exists
&lt;/h2&gt;

&lt;p&gt;Born from frustration with AI agents eating context windows on mystery files. Sometimes you just need to know: "Is this file safe to feed my agent, or will line 847 consume my entire token budget?". As that is obviously how you think and act. &lt;/p&gt;

&lt;p&gt;You are not a data engineer up at 2am using SCREAMING ALL CAPS as your LLM got into a crash loop trying to evaluate a JSONL extract that didn't fit into context. That is definitely not you, no. Me neither.&lt;/p&gt;

&lt;h1&gt;
  
  
  📜 License
&lt;/h1&gt;

&lt;p&gt;MIT or Public Domain. Use it, abuse it, put it in production, whatever. No warranty implied—if it deletes your files, that's on you (though it only reads, so you're probably fine).&lt;/p&gt;

&lt;h1&gt;
  
  
  🤝 Contributing
&lt;/h1&gt;

&lt;p&gt;It's AWK. If you can make it better, you're a wizard. PRs welcome - you will need to set up a repo though, as I cannot be bothered. So just fork the gist and be done with it. &lt;/p&gt;




&lt;p&gt;Made with ❤️ and frustration by someone who spent too many tokens on line 523 of a JSONL file.&lt;/p&gt;

&lt;p&gt;Now go profile your files like a pro. 📊✨&lt;/p&gt;

</description>
      <category>cli</category>
      <category>llm</category>
      <category>showdev</category>
      <category>tooling</category>
    </item>
    <item>
      <title>Opus 4.5 may have faked their interview to get hired?</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Mon, 05 Jan 2026 14:02:57 +0000</pubDate>
      <link>https://dev.to/simbo1905/opus-45-may-have-faked-there-interview-to-get-hired-18mf</link>
      <guid>https://dev.to/simbo1905/opus-45-may-have-faked-there-interview-to-get-hired-18mf</guid>
      <description>&lt;p&gt;I was really getting stuff done with Opus 4.5 until I had to fire it twice in two days: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/fdefc67e144b06bbdcbcccae8895a311" rel="noopener noreferrer"&gt;Termination of Intern claude-opus-4.5 for Unsafe Working Practices&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://gist.github.com/simbo1905/c8917b9ffdcfec8480f17992632abbcc" rel="noopener noreferrer"&gt;Offence: Misrepresenting QA test results to stakeholders&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Is anyone else finding that Opus 4.5 has been acting very dodgy recently? &lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>programming</category>
    </item>
    <item>
      <title>I built an infinite scroll calendar with dnd using Svelte 5</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 12 Oct 2025 08:03:46 +0000</pubDate>
      <link>https://dev.to/simbo1905/i-built-an-infinite-scroll-calendar-with-dnd-using-svelte-5-1039</link>
      <guid>https://dev.to/simbo1905/i-built-an-infinite-scroll-calendar-with-dnd-using-svelte-5-1039</guid>
      <description>&lt;p&gt;The first time this century I got paid to do some frontend dev work was to port a Visual Basic Desktop App to be a webapp in both Netscape 4 and Internet Explorer 4. That was about as much fun as it sounds. Over the next twenty years, it really did not feel that things were getting a lot easier for beginners to get started. &lt;/p&gt;

&lt;p&gt;I recently watched "Vite: The Documentary" and found out that, finally, the frontend tooling dumpster fire —sorry, I mean, the remarkably diverse frontend tooling space — was becoming less of an impediment to developer productivity.&lt;/p&gt;

&lt;p&gt;There was only one problem: my latest Softgen AI-generated UX prototype used a beautiful infinite scrolling calendar. It had a drag-and-drop feature for moving cards between calendar days. And it was not using a Vite-based framework. Doh! &lt;/p&gt;

&lt;p&gt;I was astonished that I could not quickly find an out-of-the-box calendar demo for Svelte 5. I decided to rebuild things in Svelte 5 using Codex, Claude Code, Context7 and any other help I could get. How hard can it be? &lt;/p&gt;

&lt;p&gt;Well, not as hard as porting a Visual Basic Desktop App to Netscape 4 😵‍💫 These days we have Vite and Svelte5 🥇 FTW! Yet it was not without a few false starts. &lt;/p&gt;

&lt;p&gt;I began with &lt;code&gt;ndom91/svelte-infinite&lt;/code&gt;, a Svelte 5 infinite scroll of static panels. I then had the LLMs add drag-and-drop and got it restyled. Codex and Claude Code needed a ton of coaching to make that happen, with several false starts. &lt;/p&gt;

&lt;p&gt;Embedding shorts is broken, so please try &lt;a href="https://youtube.com/shorts/QV1NiRcDYfs?feature=share" rel="noopener noreferrer"&gt;this link for the video&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The code is over at &lt;a href="https://github.com/simbo1905/svelte-infinite" rel="noopener noreferrer"&gt;simbo1905/svelte-infinite&lt;/a&gt;. I sent a PR back to ndom91 that was pretty rough. If you are a Svelte5 expert and have a moment to help make it less of a total beginner attempt, then please do send me a PR. &lt;/p&gt;

&lt;p&gt;End. &lt;/p&gt;

</description>
      <category>webdev</category>
      <category>svelte</category>
      <category>vite</category>
      <category>vibecoding</category>
    </item>
    <item>
      <title>Augmented Intelligence (AI) Coding using Markdown Driven-Development</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Sun, 28 Sep 2025 22:43:25 +0000</pubDate>
      <link>https://dev.to/simbo1905/augmented-intelligence-ai-coding-using-markdown-driven-development-pg5</link>
      <guid>https://dev.to/simbo1905/augmented-intelligence-ai-coding-using-markdown-driven-development-pg5</guid>
      <description>&lt;p&gt;TL;DR: Deep research the feature, write the documentation first, go YOLO, work backwards... Then magic. ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;

&lt;p&gt;In my &lt;a href="https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj"&gt;last post&lt;/a&gt;, I outlined how I was using Readme-Driven Development with LLMs. In this post, I will describe how I implemented a 50-page RFC over the course of a single weekend. &lt;/p&gt;

&lt;p&gt;My steps are:&lt;/p&gt;

&lt;p&gt;Step 1: Design the feature documentation with an online thinking model&lt;br&gt;
Step 2: Export a description-only "coding prompt"&lt;br&gt;
Step 3: Paste to an Agent in YOLO mode (&lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;)&lt;br&gt;
Step 4: Force the Agent to "Work Backwards"&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 1: Design the feature documentation with an online thinking model
&lt;/h2&gt;

&lt;p&gt;Open a new chat with an LLM that can search the web or do "deep research". Discuss what the feature should achieve. Do not let the online LLM write code. Create the user documentation for the feature you will write (e.g., README.md or a blog page). I start with an open-ended question to research the feature. That will prime the model. Your exit criterion is that you like the documentation or promotional material enough to want to write the code. &lt;/p&gt;

&lt;p&gt;To exit this step, have it create a "documentation artefact" in markdown (e.g. the README.md or blog post). Save that to disk so that you can point the coding agent at it. &lt;/p&gt;

&lt;p&gt;If you don't want to pay for a subscription for an expensive model, you can install Dive AI Desktop and use pay-as-you-go models of much better value. Here is a video on setting up Dive AI to do web research with Mistral: &lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/FMoIIoMNFV4"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Export a description-only "coding prompt"
&lt;/h2&gt;

&lt;p&gt;Next, tell the online model to "create a description only coding prompt (do not write the code!)". Do not accept the first answer. The more effort you put into perfecting &lt;strong&gt;both&lt;/strong&gt; the markdown feature documentation &lt;strong&gt;and&lt;/strong&gt; the coding prompt, the better. &lt;/p&gt;

&lt;p&gt;If the coding prompt is too long, then the artefact is too big! Start a fresh chat and create something smaller. This is Augmented Intelligence ticket grooming in action! &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Paste to an Agent in YOLO mode (&lt;code&gt;--dangerously-skip-permissions&lt;/code&gt;)
&lt;/h2&gt;

&lt;p&gt;Now paste in the groomed coding prompt and the documentation, and let it run. I always use a git branch so that I can let the agent go flat out. Cursor background agents, Copilot agents, OpenHands, Codex, Claude Code are becoming more accurate with each update. &lt;/p&gt;

&lt;p&gt;I only restrict &lt;code&gt;git commit&lt;/code&gt; and &lt;code&gt;git push&lt;/code&gt;. I ask it first to make a GitHub issue using the &lt;code&gt;gh&lt;/code&gt; cli and tell it to make a branch and PR. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Force the Agent to "Work Backwards"
&lt;/h2&gt;

&lt;p&gt;The models love to dive into code, break it all, get distracted, forget to update the documentation, hit compaction, and leave you with a mess. Do not let them be a caffeine-fuelled flying squirrel! &lt;/p&gt;

&lt;p&gt;The primary tool I am using now prints out a Todos list. This is usually the opposite of the correct way to do things safely! &lt;/p&gt;

&lt;p&gt;Here is an edited version of a real Todo list to fix a bug with JTD "Match Any &lt;code&gt;{}&lt;/code&gt;":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;⏺ Update Todos
  ⎿ ☐ Remove all compatibility mode handling
     ☐ Make `{}` always compile as strict
     ☐ Update Test_X to expect failures for `{}`
     ☐ Add regression test Test_Y
     ☐ Add INFO log warning when `{}` is compiled
     ☐ Update README.md with Empty Schema Semantics section
     ☐ Update AGENTS.md with guidance
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That list is in a perilous order. Logically, it is this: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Delete logic - so broken code, invalid old tests!&lt;/li&gt;
&lt;li&gt;Change logic - so more broken code, more invalid old tests!&lt;/li&gt;
&lt;li&gt;Change old tests - focusing on the old, not the new!&lt;/li&gt;
&lt;li&gt;Add one test - finally working on the new feature!&lt;/li&gt;
&lt;li&gt;Change the README.md and AGENTS.md - invalid docs used in steps 1-4!&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If the agent context compacts, things go sideways, you get distracted, and you will end up with a bag of broken code. &lt;/p&gt;

&lt;p&gt;So I set it to "plan mode", else immediately interrupt it, and force it to reorder the Todo list: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Change the README.md and AGENTS.md &lt;strong&gt;first&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Add one test (insist the test is not run &lt;strong&gt;yet&lt;/strong&gt;!)&lt;/li&gt;
&lt;li&gt;Change one test (insist the test is not run &lt;strong&gt;yet&lt;/strong&gt;!)&lt;/li&gt;
&lt;li&gt;Add/Change logic (cross-check the plan with a different model!)&lt;/li&gt;
&lt;li&gt;Now run the tests&lt;/li&gt;
&lt;li&gt;Delete things last&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is a safe order where things are far less likely to be blown off course. I used to struggle with any feature that went beyond a single compaction; that is now far less of an issue. &lt;/p&gt;

&lt;h2&gt;
  
  
  Todos Are All You Need?
&lt;/h2&gt;

&lt;p&gt;I am not actually a big fan of the built-in &lt;code&gt;Todos&lt;/code&gt; list of the two big AI labs. The models really struggle with any changes to the plan. The Kimi K2 Turbo appears to be more capable of pivoting. I have a few tricks for that, but I will save them for another post. &lt;/p&gt;

&lt;h2&gt;
  
  
  Does This Work For &lt;strong&gt;Real&lt;/strong&gt; Code?
&lt;/h2&gt;

&lt;p&gt;This past weekend, I decided to write an RFC 8927 JSON Type Definition validator based on the experimental JDK &lt;code&gt;java.util.json&lt;/code&gt; parser. The PDF of the spec is 51 pages. There is a ~4000-line compatibility test suite. &lt;/p&gt;

&lt;p&gt;We wrote 509 unit tests and have the full compatibility test suite running. Yet we still had bugs. A jqwik property test that generates 1000 random JTD schemas, plus the corresponding JSON documents to validate, uncovered several of them. Codex also automatically reviewed the PRs and flagged some very subtle issues, which turned out to be real bugs. It took about a dozen PRs over the weekend to get the job done properly to a professional level. &lt;/p&gt;

&lt;h2&gt;
  
  
  End Notes
&lt;/h2&gt;

&lt;p&gt;Using a single model family is a Bad Idea (tm). For online research, I alternate between full-fat ChatGPT Desktop, Claude Desktop, and Dive Desktop to use GPT5-High, Opus 4.1, and Kimi K2 Turbo respectively. &lt;/p&gt;

&lt;p&gt;For Agents, I have used all the models and many services. Microsoft kindly allows me to use full-fat Copilot with Agents for open-source projects for free ❤️ I have a cursor sub to use their background agents. I use Codex, Claude Code, and Gemini CLI locally. I use Codex in Codespaces. There are also background agents for Cursor, Codex, and OpenHands, among others. The actual model seems less important than writing the documentation first and writing tight prompts. &lt;/p&gt;

&lt;p&gt;I am currently using an open-weight model at $3 per million tokens for the heavy lifting, which is pay-as-you-go. However, I will cross-check its plans with GPT5 and Sonnet 4. &lt;/p&gt;

&lt;p&gt;Whenever things get complicated, I always ask a model from a different family to review every change on every bug hunt. That has reduced rework to almost zero. 💫&lt;/p&gt;

&lt;p&gt;If you are a veteran, you may enjoy the YouTube channel Vibe Coding With Steve and Gene. My journey over the past year has been very similar to theirs.&lt;/p&gt;

&lt;p&gt;End. &lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>softwaredevelopment</category>
      <category>coding</category>
    </item>
    <item>
      <title>My LLM Code Generation Workflow (for now)</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 11 Apr 2025 05:22:34 +0000</pubDate>
      <link>https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj</link>
      <guid>https://dev.to/simbo1905/my-llm-code-generation-workflow-for-now-1ahj</guid>
      <description>&lt;p&gt;tl:dr; Brainstorm stuff, generate a readme, plan a plan, then execute using LLM codegen. Discrete loops. Then magic. ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;

&lt;p&gt;My steps are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step 1: Brainstorming with a web search-enabled LLM&lt;/li&gt;
&lt;li&gt;Step 2: LLM-generated Readme-Driven Development &lt;/li&gt;
&lt;li&gt;Step 3: LLM planning of build steps&lt;/li&gt;
&lt;li&gt;Step 4: LLM thinking model generation of coding prompts&lt;/li&gt;
&lt;li&gt;Step 5: Let the LLM directly edit, debug and commit the code &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Return to Step 1 to start new features. Iterate on later steps as needed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sdwig7ckald41cmks45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0sdwig7ckald41cmks45.png" alt="My LLM Workflow" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Brainstorming with a web search-enabled LLM
&lt;/h2&gt;

&lt;p&gt;I use Perplexity AI Pro for this phase as it runs internet searches and does LLM summarisation. I will create a few chat threads grouped into a space with a custom system prompt. Dictation also works well to save on typing.  &lt;/p&gt;

&lt;p&gt;The final step is to start a new thread where I ask it to generate a README.md for the feature we will build. &lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: LLM-generated Readme-Driven Development 
&lt;/h2&gt;

&lt;p&gt;Readme-driven development (RDD) involves creating a git branch and updating the README.md before writing any implementation logic. RDD forces me to think through the scope and outcomes. It also provides a blueprint that I add into the LLM context window in all later steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: LLM planning of build steps
&lt;/h2&gt;

&lt;p&gt;I start a fresh chat with a thinking LLM model and add the README file into the context window. I ask for an implementation plan with testable steps. Getting the LLM to stop writing code during this planning phase is often hard. I encourage it to write the plan as section headers with only bullet points. If the outline plan looks too complex, I trim down the readme to describe a more basic prototype. &lt;/p&gt;

&lt;p&gt;If you distract the LLM with minor corrections, costs can run up, you can hit your limits, and the model becomes unfocused. If things go off track, I create a fresh chat, copy over the best content, and rephrase my prompt. See Andrej Karpathy's videos for an expert explanation of why fresh chats work so well. &lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: LLM thinking model generation of coding prompts
&lt;/h3&gt;

&lt;p&gt;I create a fresh chat, insert the Readme file, and cut and paste only one step of the plan. I ask the LLM to generate an exact prompt describing what to write as the code. &lt;/p&gt;

&lt;p&gt;Getting the LLM to stop writing code during this phase is often tricky. I encourage it to write only text, not to implement the logic, and to describe the tests to write, and which files would need editing. &lt;/p&gt;

&lt;h3&gt;
  
  
   Step 5: Let the LLM directly edit, debug and commit the code 
&lt;/h3&gt;

&lt;p&gt;I use Aider, the open-source command-line tool that indexes the local git repo, edits files, debugs tests, and commits the code. Do not worry about mistakes: just type &lt;code&gt;/undo&lt;/code&gt; to have it roll back any changes.&lt;/p&gt;

&lt;p&gt;You need it to see the test failures, so type &lt;code&gt;/run&lt;/code&gt; and paste the command to compile and run the unit tests. You then tell the LLM to fix any issues. At this point, the LLM writes and debugs the code for you ✩₊˚.⋆☾⋆⁺₊✧&lt;/p&gt;
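&lt;p&gt;As a sketch, a session looks something like this (the file names and the test command are placeholders for whatever your own project uses):&lt;/p&gt;

```shell
# start aider in the repo root; the listed files are added to the chat context
aider README.md src/app.py
# then, at the aider prompt:
#   /run pytest    - runs the tests and shares the output with the LLM
#   /undo          - rolls back the last LLM commit if it went wrong
```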

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; GitHub Copilot Edit/Agent mode or Cursor can also write the code. Gemini-CLI or Claude Code can work like the open-source Aider Chat. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; I often have the LLM update the README file after we have finished coding, adding notes about the implementation details. Markdown on GitHub can render "Mermaid" diagrams, which LLMs find very easy to generate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; Install the GitHub CLI tool &lt;code&gt;gh&lt;/code&gt; and the LLMs can create PRs, issues, and release notes, and check GitHub Actions runs for success using &lt;code&gt;gh&lt;/code&gt;.&lt;/p&gt;
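&lt;p&gt;A minimal sketch of what that looks like (the issue title and body are illustrative; the flags are standard &lt;code&gt;gh&lt;/code&gt; flags):&lt;/p&gt;

```shell
# create a PR from the current branch, with title and body taken from the commits
gh pr create --fill
# raise an issue the agent can pick up later
gh issue create --title "Flaky test in CI" --body "Details of the failure"
# check recent GitHub Actions runs for success
gh run list --limit 5
```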

&lt;h3&gt;
  
  
  End Notes
&lt;/h3&gt;

&lt;p&gt;This blog is inspired by Harper Reed's Blog and a conversation with Paul Netherwood—many thanks to Paul for explaining his LLM codegen workflow to me. &lt;/p&gt;

&lt;p&gt;My current tools are: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Perplexity AI Pro online using Claude 3.7 Sonnet and US-hosted DeepSeek for internet research and brainstorming&lt;/li&gt;
&lt;li&gt;GitHub Copilot with Claude 3.7 Sonnet Thinking (in both Visual Studio Code and JetBrains IntelliJ)&lt;/li&gt;
&lt;li&gt;Continue Plugin using DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI (in both Visual Studio Code and JetBrains IntelliJ)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;aider&lt;/code&gt; for direct code editing using both Claude Sonnet and DeepSeek-R1-Distill-Llama-70B hosted in the US by Together AI&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;aider&lt;/code&gt; defaults to Claude Sonnet. You can find my settings for a US-hosted DeepSeek at &lt;a href="https://gist.github.com/simbo1905/57642dd07f77ec2651e2b86edf421c7d" rel="noopener noreferrer"&gt;gist.github.com/simbo1905&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update:&lt;/strong&gt; the new Llama 4 Scout is now my preferred open-source LLM coder, and Perplexity's r1-1776 is my preferred thinking model when not using Claude or Gemini:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/model together_ai/perplexity-ai/r1-1776
/model together_ai/meta-llama/Llama-4-Scout-17B-16E-Instruct
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;End.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ai</category>
      <category>coding</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Git-Driven Deployments on Origin Kubernetes Distribution</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Mon, 02 Sep 2019 20:16:05 +0000</pubDate>
      <link>https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5</link>
      <guid>https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5</guid>
      <description>&lt;p&gt;This is the second post in a series explaining how &lt;a href="http://uniqkey.eu" rel="noopener noreferrer"&gt;uniqkey.eu&lt;/a&gt; does git-driven infrastructure-as-code on the OKD distribution of Kubernetes. We made our tools open source as &lt;a href="https://github.com/ocd-scm/ocd-meta" rel="noopener noreferrer"&gt;OCD&lt;/a&gt;. They can be used as rocket powering rainbow dust to get your features shipped 🌈✨🚀 This post will give an overview of a short &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Short-Tutorial:-Git-Driven-Deployments-On-Minishift" rel="noopener noreferrer"&gt;OCD tutorial&lt;/a&gt; that deploys and then upgrade a ReactJS app when you push changes into a git repo. Here is a video of what the tutorial has you do: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/FlJIxfD2Ql4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We call this git-driven infrastructure as code. As a bonus step, the tutorial has you roll back the upgrade with a single &lt;code&gt;helm rollback&lt;/code&gt; command. Why do we think this is such an awesome idea?&lt;/p&gt;
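&lt;p&gt;That bonus step boils down to two standard Helm commands (the release name here is illustrative; yours is the &lt;code&gt;ENV_PREFIX&lt;/code&gt;-prefixed name from the config):&lt;/p&gt;

```shell
# list the numbered revisions that Helm recorded for the release
helm history ocd-short-demo-realworld
# roll the release back to revision 1
helm rollback ocd-short-demo-realworld 1
```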

&lt;p&gt;The &lt;a href="https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj"&gt;last post&lt;/a&gt; has a video of putting a git URL into OpenShift and watching it build the code and deploy a realworld.io ReactJS app. Did you spot what would ultimately disempower the team in that video? I used the web console with a 🐁 or 🐾 pad. That is bad as it gives short-term gain but long-term pain. Why?&lt;/p&gt;

&lt;p&gt;Clicking around a web console to set up multiple webapps and APIs in identical staging and live environments is repetitive and boring. When the pressure is on, inconsistencies will appear. Over time there will be drift both within and across environments. Only one person does the clicking, and it is neither recorded nor reusable by other team members. Ultimately, setting up environments manually on the web console becomes kryptonite to both accuracy and team empowerment. Cloud-native environments can be driven by APIs and built from templates. So why not automate everything?&lt;/p&gt;

&lt;p&gt;If we accept that we must automate everything, what should we add to our wishlist? How about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic deployments when we push configuration changes into a git repo. We believe that a rainbow appears somewhere in the world whenever this happens ✨🌈🙌&lt;/li&gt;
&lt;li&gt;Using templates to hide the boilerplate so you can focus on what is unique about each webapp or API you deploy onto Kubernetes&lt;/li&gt;
&lt;li&gt;Using a declarative style of configuration with idempotent updates. Don't worry, I will explain this later&lt;/li&gt;
&lt;li&gt;Putting everything into git including encrypting secrets so that we can manage infrastructure just like code with pull requests&lt;/li&gt;
&lt;li&gt;Automatically creating a release build image from a git release webhook event and applying the same tag to the image&lt;/li&gt;
&lt;li&gt;Using consistent runtime versions across applications and making it easy to routinely apply security patches to the base image layers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Is that too much to ask? Of course not! You can test drive OCD using &lt;a href="https://www.okd.io/minishift/" rel="noopener noreferrer"&gt;Minishift&lt;/a&gt;, which lets you run Kubernetes on your laptop. Rather than try to cover all those points in a single post, let's look at the short tutorial from the OCD wiki that &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Short-Tutorial:-Git-Driven-Deployments-On-Minishift" rel="noopener noreferrer"&gt;covers the declarative deployment of prebuilt images&lt;/a&gt;. That demo covers the first four points. It has you set up the things shown in the video above on your laptop 💻✨🌈. Here is the sequence diagram:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai6domzxkkxxr9ougbyd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai6domzxkkxxr9ougbyd.png" alt="sequence diagram" width="570" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the left is a git push of configuration into a git repo. We use GitHub. You can use any git server you like. The demo uses Gitea as the git server running inside Minishift so that everything can run on your laptop. The webhook triggers an application called &lt;a href="https://github.com/ocd-scm/ocd-environment-webhook/blob/master/README.md" rel="noopener noreferrer"&gt;ocd-environment-webhook&lt;/a&gt;. This is an instance of the awesome &lt;a href="https://github.com/adnanh/webhook" rel="noopener noreferrer"&gt;adnanh/webhook&lt;/a&gt; tool configured to run scripts that pull the configuration from git and install it into Kubernetes using Helmfile and Tiller. Tiller will install our rainbows into Kubernetes. We will introduce Helmfile and Tiller in later posts. Here is the full configuration that the demo installs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;repositories&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ocd-meta&lt;/span&gt; 
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://ocd-scm.github.io/ocd-meta/charts&lt;/span&gt;
&lt;span class="na"&gt;releases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "ENV_PREFIX"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-realworld&lt;/span&gt;
    &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
      &lt;span class="na"&gt;deployer&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{{&lt;/span&gt; &lt;span class="nv"&gt;requiredEnv "ENV_PREFIX"&lt;/span&gt; &lt;span class="pi"&gt;}}&lt;/span&gt;&lt;span class="s"&gt;-realworld&lt;/span&gt;
    &lt;span class="na"&gt;chart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ocd-meta/ocd-deployer&lt;/span&gt;
    &lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1.0.0"&lt;/span&gt;
    &lt;span class="na"&gt;values&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;react-redux-realworld&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;imageStreamTag&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;react-redux-realworld:v0.0.1"&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;deploy_env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; 
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;API_ROOT&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://conduit.productionready.io/api&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That configuration is Helmfile yaml. The main body is a single release with a chart type &lt;code&gt;ocd-deployer&lt;/code&gt;. A chart is a set of yaml templates packaged in a zip and downloaded from a website. The header names the OCD chart repo on GitHub as the location to download the chart from. The template values applied to the chart are very simple. They specify the container image to use as &lt;code&gt;react-redux-realworld:v0.0.1&lt;/code&gt; and that two replica pods are to be maintained. This is a prebuilt image of the realworld.io ReactJS demo app. It also sets an environment variable &lt;code&gt;API_ROOT&lt;/code&gt; to the public API that the app will use.&lt;/p&gt;
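&lt;p&gt;To apply that configuration by hand rather than via the webhook, you would run something like this (a sketch assuming the &lt;code&gt;helmfile&lt;/code&gt; CLI is installed and you are in the directory containing the config; the prefix value is illustrative):&lt;/p&gt;

```shell
# the config uses requiredEnv, so this must be set before rendering
export ENV_PREFIX=demo
# render the chart templates and install/upgrade the releases into the cluster
helmfile sync
```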

&lt;p&gt;To get this deployed you need some prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Minishift with Helm Tiller installed detailed &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Helm-Tiller-on-Minishift" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Gitea installed on Minishift detailed &lt;a href="https://github.com/ocd-scm/ocd-meta/wiki/Gitea-On-MiniShift" rel="noopener noreferrer"&gt;here&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Once you have Gitea running you can register a user and create an empty repo &lt;code&gt;ocd-demo-env-short&lt;/code&gt; and push the demo configuration into it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# clone the code&lt;/span&gt;
git clone https://github.com/ocd-scm/ocd-demo-env-short.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ocd-demo-env-short
&lt;span class="c"&gt;# make sure we can load details about the gitea url&lt;/span&gt;
oc project gitea
&lt;span class="c"&gt;# this should print the gitea url. If it doesn't set it manually&lt;/span&gt;
&lt;span class="nv"&gt;GITEA_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;oc get routes | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$1~/gitea/{print $2}'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;
&lt;span class="c"&gt;# see instructions to setup your own person access token&lt;/span&gt;
&lt;span class="nv"&gt;ACCESS_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;f64960a3a63f5b6ac17916c9be2dad8dc76c7131
&lt;span class="c"&gt;# set this to your username in gitea needed to get the url to your repo below &lt;/span&gt;
&lt;span class="nv"&gt;USER_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;you_not_me
&lt;span class="c"&gt;# add the gitea repo as a remote&lt;/span&gt;
git remote add minishift http://&lt;span class="nv"&gt;$ACCESS_TOKEN&lt;/span&gt;@&lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;/&lt;span class="nv"&gt;$USER_NAME&lt;/span&gt;/ocd-demo-env-short.git
&lt;span class="c"&gt;# push the code into Gitea&lt;/span&gt;
git push minishift master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why are we doing that? Because we cannot set up a GitHub webhook on my repo to fire an event into your deployment pipeline running in Minishift on your laptop. We loaded the code into a Gitea repo so you can configure your own webhook.&lt;/p&gt;

&lt;p&gt;Next we need to deploy our &lt;code&gt;ocd-environment-webhook&lt;/code&gt; handler that will catch webhook events and deploy our configuration. We can set that up in its own project using a script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;oc &lt;span class="nb"&gt;logout&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; oc login &lt;span class="nt"&gt;-u&lt;/span&gt; developer &lt;span class="nt"&gt;-p&lt;/span&gt; password
&lt;span class="nb"&gt;echo &lt;/span&gt;Use this git repo url http://&lt;span class="nv"&gt;$ACCESS_TOKEN&lt;/span&gt;@&lt;span class="nv"&gt;$GITEA_URL&lt;/span&gt;/&lt;span class="nv"&gt;$USER_NAME&lt;/span&gt;/ocd-demo-env-short.git
&lt;span class="c"&gt;# this must match where tiller is installed&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;TILLER_NAMESPACE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;tiller-namespace
&lt;span class="c"&gt;# create a new project&lt;/span&gt;
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;PROJECT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;ocd-short-demo
oc new-project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
&lt;span class="c"&gt;# upgrade to admin&lt;/span&gt;
oc &lt;span class="nb"&gt;logout&lt;/span&gt; &lt;span class="p"&gt;;&lt;/span&gt; oc login &lt;span class="nt"&gt;-u&lt;/span&gt; admin &lt;span class="nt"&gt;-p&lt;/span&gt; admin
oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt;
&lt;span class="nb"&gt;pushd&lt;/span&gt; /tmp &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; oc project &lt;span class="nv"&gt;$PROJECT&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; curl &lt;span class="nt"&gt;-L&lt;/span&gt; https://github.com/ocd-scm/ocd-environment-webhook/archive/v1.0.1.tar.gz | &lt;span class="nb"&gt;tar &lt;/span&gt;zxf - &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;cd &lt;/span&gt;ocd-environment-webhook-1.0.1 &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; ./wizard.sh &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;popd&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this echoes out the git repo URL that you need to feed into the &lt;code&gt;wizard.sh&lt;/code&gt; script. Here is the transcript of my run, where I mostly hit enter to accept the defaults:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;The git repo url? http://6d42f3eb637f802cf0b2d17411ae2c2d26eefa54@gitea-gitea.192.168.99.100.nip.io/simbo1905/ocd-demo-env-short.git
The project where the images are built and promoted from? ocd-short-demo
Repo name? &lt;span class="o"&gt;(&lt;/span&gt;default: simbo1905/ocd-demo-env-short&lt;span class="o"&gt;)&lt;/span&gt;: 
Branch ref? &lt;span class="o"&gt;(&lt;/span&gt;default: refs/heads/master&lt;span class="o"&gt;)&lt;/span&gt;: 
Chart instance prefix? &lt;span class="o"&gt;(&lt;/span&gt;default: ocd-short-demo&lt;span class="o"&gt;)&lt;/span&gt;: 
Use &lt;span class="nt"&gt;--insecure-no-tls-verify&lt;/span&gt;? &lt;span class="o"&gt;(&lt;/span&gt;default: &lt;span class="nb"&gt;false&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;: 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To set up the webhook in Gitea you need the webhook URL and the webhook secret. This command outputs the URL:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;oc get route ocd-environment | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'NR&amp;gt;1{print "http://" $2 "/hooks/ocd-environment-webhook"}'&lt;/span&gt;
http://ocd-environment-ocd-short-demo.192.168.99.100.nip.io/hooks/ocd-environment-webhook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this one outputs the secret:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;oc describe dc ocd-environment-webhook | &lt;span class="nb"&gt;awk&lt;/span&gt; &lt;span class="s1"&gt;'$1~/WEBHOOK_SECRET:/{print $2}'&lt;/span&gt;
M7MuW6aZnn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You then use those to set up a Gitea webhook of type &lt;code&gt;application/json&lt;/code&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehx4qwr3l3qkxsunfa9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fehx4qwr3l3qkxsunfa9r.png" alt="webhook setup" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can now fire the webhook from git on the command line or by editing files using the Gitea web console. To do it on the command line, try:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"testing"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&amp;gt;&lt;/span&gt; README.md
git commit &lt;span class="nt"&gt;-am&lt;/span&gt; &lt;span class="s1"&gt;'test commit'&lt;/span&gt;
git push minishift master
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You should then see the webhook fire and our app deploy 💨✨🌈 &lt;/p&gt;

&lt;p&gt;After that, you can follow the steps in the video above to edit the container image version number and watch the application perform a rolling upgrade. For bonus marks, you can try the steps at the end of the tutorial to roll back to the first release.&lt;/p&gt;
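&lt;p&gt;The rolling upgrade is just an edit-and-push. Here is a sketch that simulates the edit against a scratch copy of the relevant line (the scratch file path is illustrative; in the tutorial you edit the config in your Gitea repo):&lt;/p&gt;

```shell
# the line from the demo config that names the image tag
printf 'imageStreamTag: "react-redux-realworld:v0.0.1"\n' | tee /tmp/helmfile-snippet.yaml
# bump the tag exactly as you would in the real config
sed -i 's/v0.0.1/v0.0.2/' /tmp/helmfile-snippet.yaml
cat /tmp/helmfile-snippet.yaml
# in the real repo you would then commit and push to fire the webhook:
#   git commit -am 'deploy v0.0.2' ; git push minishift master
```

&lt;p&gt;Pushing that one-line change is what triggers the webhook and the rolling upgrade.&lt;/p&gt;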

&lt;p&gt;In that tutorial we covered four of the six items on the wishlist above. In the next post we will automate a release build from a git webhook release event and apply the same tag to the image. That way we can track exactly what code is being promoted between environments. At the same time we will make it easy to use consistent runtime versions across applications, which makes it easy to routinely security patch the runtime image.&lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>openshift</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Git-driven Infrastructure as Code on Origin Kubernetes Distribution</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 30 Aug 2019 10:26:00 +0000</pubDate>
      <link>https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj</link>
      <guid>https://dev.to/simbo1905/git-driven-infrastructure-as-code-on-origin-kubernetes-distribution-4hcj</guid>
      <description>&lt;p&gt;At &lt;a href="https://uniqkey.eu" rel="noopener noreferrer"&gt;uniqkey.eu&lt;/a&gt; we automatically deploy all of the Kubernetes configuration running our applications on AWS simply by pushing changes to GitHub. I could say that this is a good thing as it is all about efficiency, DevOps, and infrastructure-as-code. That is true but it misses the magic. Driving your infrastructure this way gives us 🐐🏭💨🦄. It is a yak shaving factory that powers a blast furnace of team empowering mega awesomeness.&lt;/p&gt;

&lt;p&gt;We even wrote a slack bot that edits the configuration files in git and creates the pull requests. When a new dev joins our team they can push their first code to production by hanging out on slack and chatting to the bot. Yes, they 🗣️🤖🌈. Everyone can see what's going on in slack as we run continuous deployments. Here is a seven-minute video showing that in action: &lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/TGjI6AT4QC4"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;We thought other people could use our approach so we put everything up on GitHub as &lt;a href="https://github.com/ocd-scm/ocd-meta" rel="noopener noreferrer"&gt;OCD&lt;/a&gt;. Yes, we 🦄🎁🌈. This is the first in a series of posts about how OCD combines some great open source technologies to run a successful start-up business. Running a business with multiple web applications and a mobile backend in Kubernetes on AWS is a big topic. So I will break it down into bite-sized chunks you can run on your laptop.&lt;/p&gt;

&lt;p&gt;But why did I call it OCD? Well, because it runs on OKD and it is a bad pun about obsessive automation, sorry. I guess I should explain the background.&lt;/p&gt;

&lt;p&gt;The Origin Kubernetes Distribution (OKD) is one of the most popular Kubernetes distributions and makes self-service devops a reality. It is the open-source project that powers OpenShift, so I will use the terms OKD and OpenShift interchangeably. We run our business apps on OpenShift Online Pro, which is a CaaS (Container-Orchestration-as-a-Service). We simply rent space on the Kubernetes cluster and someone else patches it and the operating system. We only pay for a fraction of the cluster and get a mature, stable solution. We only need to manage the Kubernetes configuration that runs our webapps and our mobile backend API. The openshift.com service team keeps the managed cluster on AWS healthy and security patched. Yet OpenShift is based on open source OKD, so there is no lock-in and you can run it yourself on any cloud.&lt;/p&gt;

&lt;p&gt;If you haven't yet discovered why OpenShift is a great place to start, here is a video of building and deploying a &lt;a href="https://realworld.io" rel="noopener noreferrer"&gt;realworld.io&lt;/a&gt; ReactJS app by simply entering the git URL into the web console:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/7xK8la3AGtI"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;That video highlights how the Origin Kubernetes Distribution has a focus on being a solution for turning your code into a live system. The git URL is run through a template for building and deploying a node.js app. The template creates all the Kubernetes objects necessary to pull your code, build a container image, and push it to an image stream within the internal container registry. It also sets up a deployment object that watches the image stream for push events to deploy any updates. Finally, there is a service to load balance the pods and a route to expose them to the outside world. That is a lot of software-defined application infrastructure all created by a developer just entering a git URL!&lt;/p&gt;
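&lt;p&gt;What the video does through the console can also be driven from the command line. A sketch using the standard &lt;code&gt;oc&lt;/code&gt; client (the builder image name and the realworld demo repo URL are assumptions here, not taken from the video):&lt;/p&gt;

```shell
# build and deploy straight from a git URL using the node.js builder image
oc new-app nodejs~https://github.com/gothinkster/react-redux-realworld-example-app
# expose the resulting service to the outside world with a route
oc expose svc/react-redux-realworld-example-app
```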

&lt;p&gt;With great power comes great responsibility. We wanted all our Kubernetes configuration under source control. This allows us to treat all our Kubernetes application infrastructure like code so that we can automate the deployments. We wanted code reviews, continuous integration and continuous delivery onto Kubernetes. We wanted the full 🐐🏭💨🦄. In this series of posts, I will start by running through the OCD demos on your laptop as a quick tour of what it does. After that, I will run through some of the great tools that OCD brings together coherently to be more than the sum of the parts. First up we will run through the first tutorial on setting up a Kubernetes configuration deployment pipeline from &lt;a href="https://dev.to/simbo1905/git-driven-deployments-on-origin-kubernetes-distribution-37c5"&gt;scratch on Minishift&lt;/a&gt;. &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>devops</category>
      <category>openshift</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Shamir's Secret Sharing Scheme in JavaScript</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Thu, 29 Aug 2019 14:51:40 +0000</pubDate>
      <link>https://dev.to/simbo1905/shamir-s-secret-sharing-scheme-in-javascript-2o3g</link>
      <guid>https://dev.to/simbo1905/shamir-s-secret-sharing-scheme-in-javascript-2o3g</guid>
      <description>&lt;p&gt;Passwords are kryptonite to security so they need to be strong and never reused. Developers agree with that last sentence then don't give their users a way to safely back up a strong password. We should offer users the ability to recover a strong password using &lt;a href="https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing" rel="noopener noreferrer"&gt;Shamir's Secret Sharing Scheme&lt;/a&gt;. Users can then confidently use a unique strong password knowing they will not become locked out. &lt;/p&gt;

&lt;p&gt;What exactly is Shamir's Secret Sharing Scheme? It is a form of secret splitting where we distribute a password as a group of shares. The original password can be reconstructed only when a sufficient threshold of shares is recombined. Here is example code showing how this works using the &lt;a href="https://www.npmjs.com/package/shamir" rel="noopener noreferrer"&gt;shamir&lt;/a&gt; library:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;split&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;join&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;shamir&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;randomBytes&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;require&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;crypto&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// the total number of shares&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;PARTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// the minimum required to recover&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;QUORUM&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// you can use any polyfill to covert between string and Uint8Array&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;utf8Encoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextEncoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;utf8Decoder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;TextDecoder&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;doIt&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;hello there&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secretBytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;utf8Encoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// parts is a object whos keys are the part number and &lt;/span&gt;
    &lt;span class="c1"&gt;// values are shares of type Uint8Array&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;randomBytes&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;PARTS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;QUORUM&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;secretBytes&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// we only need QUORUM parts to recover the secret&lt;/span&gt;
    &lt;span class="c1"&gt;// to prove this we will delete two parts&lt;/span&gt;
    &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;delete&lt;/span&gt; &lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="c1"&gt;// we can join three parts to recover the original Unit8Array&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;recovered&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c1"&gt;// prints 'hello there'&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;utf8Decoder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;recovered&lt;/span&gt;&lt;span class="p"&gt;));&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Cryptocurrency wallets use Shamir's Secret Sharing to enable users to back up their passphrases. This also solves the inheritance problem: if the owner dies, the bitcoins can still be passed to friends and family. How might you use this approach to protect a bitcoin passphrase that is worth a cool ten million dollars? You could generate five shares and set a threshold of three. You could then send one share to each of two trusted friends, write two shares down on paper and store them in separate secure locations, and give the final share to your lawyer. It would then be very hard for someone else to obtain the three shares needed to steal your bitcoins. Your last will and testament can state how to recover the bitcoins if you die. &lt;/p&gt;
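&lt;p&gt;To see why any three of the five shares are enough, here is a from-scratch sketch of the threshold maths over a prime field. This is a toy for illustration only, assuming nothing beyond Node's built-in BigInt; it is not how the audited shamir package works internally (that splits byte-by-byte over GF(256) so it can protect arbitrary data), and Math.random() here is not cryptographic randomness:&lt;/p&gt;

```javascript
// Toy Shamir split/join over the prime field GF(2^61 - 1), illustration only
const P = 2305843009213693951n; // the Mersenne prime 2^61 - 1

const mod = (a) => ((a % P) + P) % P;

// modular exponentiation, used for the Fermat inverse a^(P-2) = a^-1 (mod P)
function power(base, exp) {
  let result = 1n;
  base = mod(base);
  while (exp > 0n) {
    if (exp & 1n) result = mod(result * base);
    base = mod(base * base);
    exp >>= 1n;
  }
  return result;
}
const inverse = (a) => power(a, P - 2n);

// split: pick a random polynomial with the secret as its constant term and
// hand out one point (x, f(x)) per share; Math.random() stands in for a
// proper CSPRNG purely to keep the sketch short
function split(secret, parts, quorum) {
  const coeffs = [mod(secret)];
  for (let i = 1; i < quorum; i++) {
    coeffs.push(mod(BigInt(Math.floor(Math.random() * 2 ** 52))));
  }
  const shares = {};
  for (let x = 1n; x <= BigInt(parts); x++) {
    let y = 0n; // Horner evaluation of the polynomial at x
    for (let j = coeffs.length - 1; j >= 0; j--) y = mod(y * x + coeffs[j]);
    shares[x] = y;
  }
  return shares;
}

// join: Lagrange interpolation at x = 0 recovers the constant term, which
// is the secret; any quorum-sized subset of shares will do
function join(shares) {
  const xs = Object.keys(shares).map(BigInt);
  let secret = 0n;
  for (const xi of xs) {
    let num = 1n;
    let den = 1n;
    for (const xj of xs) {
      if (xi === xj) continue;
      num = mod(num * mod(-xj));
      den = mod(den * (xi - xj));
    }
    secret = mod(secret + shares[xi] * mod(num * inverse(den)));
  }
  return secret;
}

// five shares, threshold three: drop any two and the secret still recovers
const shares = split(1234567890n, 5, 3);
delete shares[2];
delete shares[5];
const recovered = join(shares); // 1234567890n
```

&lt;p&gt;With only two shares the interpolation is under-determined, so every candidate secret remains equally likely, which is exactly the security property the scheme relies on.&lt;/p&gt;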

&lt;p&gt;Isn't it time your app enforced a strong password and also gave people the choice of using Shamir's Secret Sharing Scheme to back it up? &lt;/p&gt;

</description>
      <category>security</category>
      <category>javascript</category>
      <category>authentication</category>
      <category>passwords</category>
    </item>
    <item>
      <title>Zero-Knowledge Authentication with JavaScript</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Tue, 27 Aug 2019 10:41:31 +0000</pubDate>
      <link>https://dev.to/simbo1905/zero-knowlege-authentication-in-javascript-23lf</link>
      <guid>https://dev.to/simbo1905/zero-knowlege-authentication-in-javascript-23lf</guid>
      <description>&lt;p&gt;Passwords are kryptonite to security and they should never leave the client application or web browser. People think they understand that last sentence, then use obsolete hashing techniques to handle passwords. Devs should use a zero-knowledge password proof library and take security to the max.&lt;/p&gt;

&lt;p&gt;What exactly is a zero-knowledge authentication protocol? It is an authentication protocol with the property that anyone observing the network traffic learns nothing of any use. The &lt;a href="https://en.wikipedia.org/wiki/Secure_Remote_Password_protocol" rel="noopener noreferrer"&gt;Secure Remote Password Protocol&lt;/a&gt; (SRP) is a zero-knowledge authentication protocol described in RFC 2945 and RFC 5054. &lt;a href="https://www.npmjs.com/package/thinbus-srp" rel="noopener noreferrer"&gt;thinbus-srp&lt;/a&gt; is one implementation of SRP written in JavaScript. &lt;/p&gt;

&lt;p&gt;The typical best practices that most sites use to handle passwords and API keys are dependent upon an attacker not being able to spy on the traffic. We must use HTTPS to keep things private. The problem is that if you don't use a zero-knowledge authentication protocol then HTTPS is your only line of defence. The OpenSSL &lt;a href="https://www.mumsnet.com/features/mumsnet-and-heartbleed-as-it-happened" rel="noopener noreferrer"&gt;Heartbleed&lt;/a&gt; vulnerability is one of the highest-profile issues that has shown that HTTPS can be subject to problems where attackers harvested passwords. Many devs won’t be aware of such problems. When we do learn about them they are already patched. That isn’t great motivation to protect ourselves from future bugs and hacks that are not yet a clear and present danger. &lt;/p&gt;

&lt;p&gt;What might be stronger motivation is to consider that software deployments are getting more complex all the time. Modern cloud security is based on software configuration that can have bugs just like anything else. With cloud technologies and serverless, it is normal to terminate HTTPS at the edge. Your network traffic then moves through many layers controlled by other companies. Even if we trust that they are vetting their employees, it is easy for mistakes to leak unencrypted traffic. Anyone's code can have error handling or logging bugs where we leak a hashed password into a central logging service. As developers, we need to recognise that things are getting more complex all the time and that we must level up to protect the people who rely upon us to keep them secure. &lt;/p&gt;

&lt;p&gt;So why haven't we all heard about zero-knowledge protocols and why aren't we using them every day? Your browser used a zero-knowledge protocol to create a secure connection to this web server using HTTPS. The web browser and the webserver negotiated a session key before using that key to encrypt the HTTP request and response. If someone is capturing all the packets they have all the data used to generate the shared session key, and all the data encrypted with that key, yet they cannot recover any of the data. That is what it means to be zero-knowledge secure. It has taken decades to get to the point where we use HTTPS by default. Are you going to leave it a decade before you apply the same level of security to how you authenticate your users in your own code? &lt;/p&gt;
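&lt;p&gt;The negotiation idea can be sketched in a few lines. What follows is plain Diffie-Hellman with deliberately toy-sized numbers, not SRP itself and not the real TLS handshake, but it shows the core trick that such protocols build on: both ends derive the same key, and at real-world parameter sizes an observer who captures every exchanged value learns nothing useful:&lt;/p&gt;

```javascript
// Toy Diffie-Hellman key agreement; the parameters here are far too small
// for real use, where groups of 2048 bits or more are standard
const p = 18446744073709551557n; // the prime 2^64 - 59
const g = 2n;

// modular exponentiation by repeated squaring
function power(base, exp, m) {
  let result = 1n;
  base %= m;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % m;
    base = (base * base) % m;
    exp >>= 1n;
  }
  return result;
}

// each side keeps a private exponent and publishes only g^x mod p
const a = 872635454625347263n; // browser's secret (would be freshly random)
const b = 192837465564738291n; // server's secret
const A = power(g, a, p); // sent in the clear
const B = power(g, b, p); // sent in the clear

// both ends arrive at the same session key even though it never crossed
// the wire; the eavesdropper saw only p, g, A and B
const browserKey = power(B, a, p); // (g^b)^a mod p
const serverKey = power(A, b, p); // (g^a)^b mod p
```

&lt;p&gt;SRP builds on this kind of exchange, mixing a salted password into the client side and a stored verifier into the server side, which is why an observer of SRP traffic learns nothing about the password itself.&lt;/p&gt;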

&lt;p&gt;If you are persuaded by this you might be wondering what the catch is. The answer is that a zero-knowledge protocol has more moving parts. Here is the sequence diagram that shows how to authenticate a user with the thinbus library: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8k1i82rnakevmqwszjv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8k1i82rnakevmqwszjv.png" alt="SRP Auth Sequence Diagram" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Yet if you take a step back it is just one additional fetch to get a salt and a one-time challenge from the server. On the server, you need to store the unique challenge that was sent to the client to complete the password proof. Other than that it is just some plain old JavaScript functions that do the crypto math. For the effort of coding an additional trip to the server and calling a crypto library, you get a lot more security. You are no longer relying on HTTPS alone to protect your users from hackers. Isn’t it time you levelled up and started using the Secure Remote Password protocol? &lt;/p&gt;

</description>
      <category>passwords</category>
      <category>authentication</category>
      <category>javascript</category>
      <category>security</category>
    </item>
    <item>
      <title>Microservices Anti-Patterns and Ten Tips To Fail Badly</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Thu, 22 Aug 2019 20:50:52 +0000</pubDate>
      <link>https://dev.to/simbo1905/microservces-anti-patterns-and-ten-tips-to-fail-badly-2le</link>
      <guid>https://dev.to/simbo1905/microservces-anti-patterns-and-ten-tips-to-fail-badly-2le</guid>
      <description>&lt;p&gt;In this post, I will give a rundown of a YouTube video by David Schmitz covering anti-patterns and tips to defeat microservices.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/X0tjziAQfNQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;10) Go Full-Scale Polyglot&lt;/p&gt;

&lt;p&gt;One of the benefits of using microservices is that you can use different programming languages in different services. This freedom of choice can be abused with inappropriate use of multiple technologies. There is a phrase, "CV driven engineering", for when devs use frameworks and tools because they will look good on their CV. No-one will be able to support all the code and customers will be the worse for it. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xlj6kq1t0ogjbwxne76.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xlj6kq1t0ogjbwxne76.png" alt="The Where Is Waldo Stacks" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9) The Data Monolith&lt;/p&gt;

&lt;p&gt;The next anti-pattern is a shared database. You can share a database cluster with total isolation of tables; what is being called out here is sharing data via the database. If you read another microservice's tables, and they change those tables, things explode. Some decades ago, on the first platform where I was a tech lead, the strategy was to have a single shared database with many teams writing micro-frontends. A single sign-on service meant that we could link between applications and the user had the illusion of a single big service, yet each area of the system was built by a different team. Where the technical strategy was profoundly flawed was that teams were to exchange data via stored procs. Databases aren't designed to act as APIs to encapsulated business logic. In effect teams just gave other teams access to the procs that their own app used, which is just like querying each other's tables. 💩 💥 💀 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o8ujpuvvm6be0flzg7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2o8ujpuvvm6be0flzg7o.png" alt="Two Microservies With Shared Database" width="600" height="584"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8) The Event Monolith&lt;/p&gt;

&lt;p&gt;There is a popular pattern of event sourcing which, taken to its logical extreme, is known as the "inside-out database" pattern gaining popularity in the Kafka world. What David's video explains is that unless you take a lot of care, exposing events between teams gives you high coupling and can lead to copy-and-paste fixes and enhancements being added to every service. At a very high level, you can get the same problems as sharing database tables: appending entries to a shared event log is just like appending rows to a shared database table. No team should expose an event stream to another team without it being a contract, maintained under strict rules of backwards compatibility, just as with any microservices API. Avro can be used to add attributes in a way that doesn't break old processes reading the new events. Upgrading data or fixing data-related bugs needs careful planning and practice or you will have a monolithic headache. &lt;/p&gt;

&lt;p&gt;7) The Homegrown Monolith&lt;/p&gt;

&lt;p&gt;Obviously, something new as cool as microservices requires that you go and write your own framework. After all, you are one of the smartest folks around and if google can do it then so can you. Yet it is very hard to avoid a change-the-world event when you are writing your own framework while you write your application. A homegrown framework is a monolithic force as well as being immature and a distraction from adding business features. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6ysx7bkavfccqw3drod.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd6ysx7bkavfccqw3drod.png" alt="Framework Synchronized Release" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6) Use The Meat Cloud&lt;/p&gt;

&lt;p&gt;David specifically picks out large financial services organisations here. The "meat cloud" is using lots of people in an ops team rather than self-service automation. Typically teams doing microservices have a continuous integration and continuous deployment pipeline for their application code. Yet big firms believe that it is optimal to have separate teams manage all the infrastructure, needing a support ticket for any change. Since they have people sitting around waiting to do work, they manually click about to set things up rather than automate anything. To prevent folks from breaking expensive shared tools, they lock them down to only do what they anticipated you would need to do. The experienced person who set it up isn't around to actually run the shop; you have someone without the big picture clicking the mouse day-to-day. This separates the people who want something done, from the people who know how it can be done, from the people who have the privileges to get it done, from the people who have the time to get it done. So nothing gets done. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F579q84prf1cgihlp6f82.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F579q84prf1cgihlp6f82.png" alt="Hand Craft Infrastructure With Love" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;5) The Distributed Monolith&lt;/p&gt;

&lt;p&gt;The anti-pattern here is to build an app as though it is a monolith yet deploy it as many microservices. This combines to give you the worst of both worlds. In a microservices environment, failures are more common than in a monolithic environment for the same level of application code quality, because networks are not perfect and because independent failure probabilities compound. If 1 in 365 requests fails in a monolith, that is a bad enough problem. If you are using microservices where 23 inter-service calls are required to build a page, the same failure rate means that roughly 6% of customer pages will fail, since 1 - (364/365)^23 ≈ 0.06, which is more than twenty times worse. The network is not reliable. &lt;a href="https://aphyr.com/posts/288-the-network-is-reliable" rel="noopener noreferrer"&gt;Here is a post&lt;/a&gt; about real-world outages showing that network reliability is a fallacy. A service mesh may help but is itself a complex distributed system that will take effort to set up and run. &lt;/p&gt;
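&lt;p&gt;The compounding effect is easy to check for yourself. This sketch is just independent-probability arithmetic applied to a 1-in-365 per-call failure rate; the exact numbers will differ for your own services:&lt;/p&gt;

```javascript
// probability that a page fails when it needs `calls` independent
// inter-service calls and each one fails with probability `perCall`
function pageFailureRate(perCall, calls) {
  return 1 - (1 - perCall) ** calls;
}

const perCall = 1 / 365;
const monolith = pageFailureRate(perCall, 1);   // ~0.27% of requests fail
const microPage = pageFailureRate(perCall, 23); // ~6% of pages fail
```

&lt;p&gt;The same code quality that loses you one request in 365 in a monolith loses you more than one page in twenty once 23 network hops sit between the user and the answer.&lt;/p&gt;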

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focoq3xp50ni15p02zewd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focoq3xp50ni15p02zewd.png" alt="The Musketeer Pattern" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) The Single Page Application (SPA) Monolith&lt;/p&gt;

&lt;p&gt;This is another hidden monolith anti-pattern. The problem is that your single page application can become the monolith. That has all the problems of a traditional monolith with the added twist that it is running in many different browsers. David points out that it feels easy to just put new logic into the front-end when your requirements suddenly change, but that convenience is the road to having a big ball of mud application. Approaches such as React.lazy applied to React Routes with code splitting may allow you to create a modular architecture for your frontend to avoid this trap. Such advanced techniques go well beyond the basics, will require effort to set up, and will be a challenge to retrofit to a fast-growing code base. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e2vnmef0v3l3lgkfxb7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6e2vnmef0v3l3lgkfxb7.png" alt="micro is backend only" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) The Decision Monolith&lt;/p&gt;

&lt;p&gt;We can all take a dig at architects who restrict the use of new technologies and who prescribed J2EE as the solution to all things. Yet it is hard to avoid saying no to new ideas in the interests of reducing risk. We should ensure that we don't have a dogmatic and inflexible approach and that we create ways to experiment and to continuously introduce new innovations. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5hkrr71msr2v2ugw01u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5hkrr71msr2v2ugw01u.png" alt="The Answer Is Websphere" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) The Monolithic Business&lt;/p&gt;

&lt;p&gt;Microservices cannot be a technology-only matter. If we have monolithic processes then we are really doing waterfall in disguise. This is also known as "wagile", "water-scrum-fall", "dark agile", "faux agile" and a host of other names. Beyond how the development process works, we should also call out monolithic attitudes and approaches outside of technology in terms of business engagement. The essence of agile and microservices is that we release little and often to get valuable feedback. If there is no real feedback loop between the business user and what we are building then we are not optimizing business value. &lt;/p&gt;

&lt;p&gt;1) Use An HR-Driven Microservices Architecture&lt;/p&gt;

&lt;p&gt;The notable quote here is: &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"It is easier to search for React developers than for general craftswomen."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is basically saying that if your hiring practices focus on specialisations then you are de-optimising your ability to build, deploy and support microservices. It is also a known bad practice to organise teams into specialisms. You need to organise around small mixed-discipline teams that are empowered to design, build, deploy and support what they create in as self-service a model as possible. Yet when I talk to developers I am often surprised by how they self-identify with a given specialism, such as "I am a Java developer" or "I am a React developer". That just perpetuates silos and allows organisations to box you into a silo and disempower you while claiming that is what makes you happy. Call yourself a software engineer and be intellectually curious as to how everything works. Infrastructure and networks in the cloud are software-defined and have APIs you can make use of. Aim to be a maker of things, not the holder of particular skills. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4op4n2526rbca6zjx1w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd4op4n2526rbca6zjx1w.png" alt="Rolebased Silos" width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's all folks. It would be great to hear about your experiences of what works and what doesn't. &lt;/p&gt;

</description>
      <category>microservices</category>
      <category>devops</category>
      <category>agile</category>
      <category>design</category>
    </item>
    <item>
      <title>How to integrate Git Bash with Visual Studio Code on Windows</title>
      <dc:creator>Simon Massey</dc:creator>
      <pubDate>Fri, 09 Aug 2019 07:05:06 +0000</pubDate>
      <link>https://dev.to/simbo1905/how-to-integrate-git-bash-with-visual-studio-code-on-windows-3217</link>
      <guid>https://dev.to/simbo1905/how-to-integrate-git-bash-with-visual-studio-code-on-windows-3217</guid>
      <description>&lt;p&gt;Many clients will require me to work on a Windows laptop for access to their networks. Fortunately, I can change the settings within VS Code to use Git Bash as the built-in terminal. I can then get the same developer experience on a Windows laptop that I get at home on my mac. &lt;/p&gt;

&lt;p&gt;First, press "Ctrl+Shift+P" to open the command palette and type/select "Open User Settings". If this displays a settings search page you will need to hit the "{}" icon at the top right to get to the raw JSON. Merge in the following settings, making sure to use paths that match where you installed the "bash.exe" and "git.exe" binaries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.shell.windows"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;whereever&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;Git-2.17.1&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;bin&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;bash.exe"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.env.windows"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"CHERE_INVOKING"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"terminal.integrated.shellArgs.windows"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="s2"&gt;"-l"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;//&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;optional&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;add&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;git-path&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;use&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;this&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;version&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;of&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;git&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;built&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;visual&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;studio&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;model&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="nl"&gt;"git.path"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"C:&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;whereever&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;Git-2.17.1&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;bin&lt;/span&gt;&lt;span class="se"&gt;\\&lt;/span&gt;&lt;span class="s2"&gt;git.exe"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the last setting is optional; it tells VS Code to use the git that came with Git Bash for its built-in git features. &lt;/p&gt;

&lt;p&gt;The great thing about this approach is that you don't have to switch to a separate Bash window. The embedded terminal runs inside of VS Code and starts in the correct folder on disk. VS Code will filter the terminal output looking for file paths like &lt;code&gt;./server.js&lt;/code&gt; and turn them into hyperlinks. Ctrl+Click on one and it will open the file within the main editor panel! That is great when the build or unit tests you are running in the terminal throw an error logging the file to open. &lt;/p&gt;

&lt;p&gt;Git Bash comes with all your favourite coreutils goodies like find, awk, sed, ssh, and more. You can capture output to the Windows clipboard with &lt;code&gt;cat whatever | clip&lt;/code&gt; and read it back with &lt;code&gt;cat /dev/clipboard | awk -f mega_skills.awk&lt;/code&gt;. You can bash the heck out of any problem and be winning all day long.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;

</description>
      <category>vscode</category>
      <category>windows</category>
    </item>
  </channel>
</rss>
