<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Janne Sinivirta</title>
    <description>The latest articles on DEV Community by Janne Sinivirta (@v3rtti).</description>
    <link>https://dev.to/v3rtti</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F20353%2Fc87c45dd-f8e0-4061-89a8-33c6177673e2.jpg</url>
      <title>DEV Community: Janne Sinivirta</title>
      <link>https://dev.to/v3rtti</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/v3rtti"/>
    <language>en</language>
    <item>
      <title>Meet Daffy: A Lightweight Guardian for Your DataFrames</title>
      <dc:creator>Janne Sinivirta</dc:creator>
      <pubDate>Mon, 05 Jan 2026 15:39:33 +0000</pubDate>
      <link>https://dev.to/v3rtti/meet-daffy-a-lightweight-guardian-for-your-dataframes-3jih</link>
      <guid>https://dev.to/v3rtti/meet-daffy-a-lightweight-guardian-for-your-dataframes-3jih</guid>
      <description>&lt;h2&gt;
  
  
  The Data Validation Dilemma
&lt;/h2&gt;

&lt;p&gt;Most DataFrame breakages are boring: a column got renamed, a join introduced nulls, a dtype changed, or a value shows up that you didn’t expect.&lt;/p&gt;

&lt;p&gt;In notebooks, we &lt;em&gt;do&lt;/em&gt; validate — just informally. We inspect &lt;code&gt;.head()&lt;/code&gt;, run &lt;code&gt;.info()&lt;/code&gt;, do a quick &lt;code&gt;value_counts()&lt;/code&gt;, and add a couple of ad-hoc &lt;code&gt;assert&lt;/code&gt;s when something looks suspicious. That’s often enough to move forward.&lt;/p&gt;

&lt;p&gt;The problem is what happens when the notebook turns into “real code”. Those checks either:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;stay behind in the exploration phase, or&lt;/li&gt;
&lt;li&gt;get mixed into the transformation logic itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Either way, the assumptions become hard to see later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What columns are required on input?&lt;/li&gt;
&lt;li&gt;What constraints do we assume (non-null, ranges, allowed values)?&lt;/li&gt;
&lt;li&gt;What does the function guarantee on output?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That’s the gap Daffy is trying to close: keep the transformation code clean, while making the DataFrame “contract” explicit at the function boundary.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Daffy does
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Daffy&lt;/strong&gt; is a small library for validating pandas and Polars DataFrames at runtime using decorators.&lt;/p&gt;

&lt;p&gt;You annotate a data-processing function with what you expect to receive (input) and what you promise to return (output). When the function runs, Daffy checks those expectations and raises a clear error if something doesn’t match — close to where the data is transformed, not several steps later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key features:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Column and type checks:&lt;/strong&gt; Ensure required columns exist and have expected dtypes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value constraints:&lt;/strong&gt; Enforce rules like non-null columns, unique keys, allowed categories, or numeric ranges.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Row-level validation:&lt;/strong&gt; For cross-column business rules, validate rows with Pydantic models.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple backends:&lt;/strong&gt; Works with pandas, Polars, Modin, and PyArrow tables.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The main idea is separation of concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Your function stays focused on the transformation.&lt;/li&gt;
&lt;li&gt;The assumptions live next to it, but outside the transformation code.&lt;/li&gt;
&lt;/ul&gt;
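
&lt;p&gt;To sketch the mechanism (a toy illustration of the decorator idea, &lt;em&gt;not&lt;/em&gt; Daffy’s actual implementation), a boundary check can be as small as a decorator that verifies columns before the function body runs. The &lt;code&gt;expect_columns&lt;/code&gt; and &lt;code&gt;FakeFrame&lt;/code&gt; names are made up here, and the stand-in class exists only so the sketch runs without pandas:&lt;/p&gt;

```python
import functools

def expect_columns(*required):
    # Toy boundary check: verify columns before the wrapped function runs.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(df, *args, **kwargs):
            missing = [c for c in required if c not in df.columns]
            if missing:
                raise AssertionError(f"missing columns: {missing}")
            return func(df, *args, **kwargs)
        return wrapper
    return decorator

class FakeFrame:
    # Minimal stand-in with a .columns attribute, so the sketch runs without pandas.
    def __init__(self, columns):
        self.columns = list(columns)

@expect_columns("Brand", "Price")
def apply_discount(df):
    return df

try:
    apply_discount(FakeFrame(["Brand"]))
except AssertionError as err:
    print(err)  # prints: missing columns: ['Price']
```

&lt;p&gt;Daffy’s real decorators do considerably more (dtypes, value constraints, multiple backends), but the failure mode is the same: the error surfaces at the function boundary, not deep inside the transformation.&lt;/p&gt;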

&lt;h2&gt;
  
  
  Before and After: A Validation Story
&lt;/h2&gt;

&lt;p&gt;To make this concrete, here’s a simple example: apply a discount to a products DataFrame.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before Daffy – manual checks mixed into code:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;apply_discount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Manual validation
&lt;/span&gt;    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Missing Price column!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="k"&gt;assert&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Brand&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Missing Brand column!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# We assume Price should be numeric;
&lt;/span&gt;    &lt;span class="c1"&gt;# you might add more checks here
&lt;/span&gt;
    &lt;span class="c1"&gt;# Perform transformation
&lt;/span&gt;    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Discount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works, but the validation and business logic are coupled. If this function grows (or gets copied), the checks tend to drift, get removed, or become inconsistent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After Daffy – validation declared at the boundary:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;daffy&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;df_in&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;df_out&lt;/span&gt;

&lt;span class="nd"&gt;@df_in&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Brand&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="nd"&gt;@df_out&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Brand&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Discount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;apply_discount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;copy&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Discount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mf"&gt;0.1&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;df&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can also add constraints without turning the function into a pile of checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@df_in&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Price&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Brand&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;checks&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;notnull&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;}},&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;apply_discount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;df&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="bp"&gt;...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Choose Daffy over Pandera or Others?
&lt;/h2&gt;

&lt;p&gt;You might ask: why not use &lt;strong&gt;Pandera&lt;/strong&gt; or &lt;strong&gt;Great Expectations&lt;/strong&gt;?&lt;/p&gt;

&lt;p&gt;They’re both good tools — they’re just aimed at slightly different workflows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pandera&lt;/strong&gt; is strong when you want a schema-first approach with rich validation, and you’re okay maintaining schemas/classes alongside the code.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Great Expectations&lt;/strong&gt; is great for broader pipeline / warehouse-style data quality, expectation suites, reporting, and monitoring.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Daffy is intentionally narrower in scope. It’s for cases where you want lightweight checks &lt;em&gt;right where you transform data&lt;/em&gt;, with minimal ceremony:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;define the input/output expectations next to the function&lt;/li&gt;
&lt;li&gt;keep the function body focused on transformations&lt;/li&gt;
&lt;li&gt;fail early with an error that points to the violated assumption&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;If you’re mostly doing DataFrame work in Python (notebooks, scripts, ETL steps, small-to-medium pipelines) and your pain point is “assumptions are scattered and easy to forget”, Daffy is a practical middle ground.&lt;/p&gt;

&lt;p&gt;It won’t replace heavier validation/monitoring frameworks for every scenario — and it shouldn’t try to. But if you want clearer function boundaries and faster feedback when your DataFrame shape changes, it fits nicely.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it out here:&lt;/strong&gt; &lt;a href="https://github.com/vertti/daffy" rel="noopener noreferrer"&gt;https://github.com/vertti/daffy&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>pandas</category>
      <category>polars</category>
      <category>datavalidation</category>
    </item>
    <item>
      <title>Stop Writing Shell Scripts for Container Health Checks</title>
      <dc:creator>Janne Sinivirta</dc:creator>
      <pubDate>Sat, 03 Jan 2026 12:16:02 +0000</pubDate>
      <link>https://dev.to/v3rtti/stop-writing-shell-scripts-for-container-health-checks-4pif</link>
      <guid>https://dev.to/v3rtti/stop-writing-shell-scripts-for-container-health-checks-4pif</guid>
      <description>&lt;p&gt;It started as one of those “this will take 30 seconds” moments.&lt;/p&gt;

&lt;p&gt;We ship a container that includes a tiny helper binary — something we compile in a builder stage and &lt;code&gt;COPY&lt;/code&gt; into the runtime image. Think: &lt;code&gt;config-render&lt;/code&gt;, &lt;code&gt;migrate-db&lt;/code&gt;, &lt;code&gt;probe&lt;/code&gt;, whatever your service depends on at startup.&lt;/p&gt;

&lt;p&gt;I just wanted to make sure the image really contained the helper I thought it contained, so I added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;config-render &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Green build. Ship it. Done.&lt;/p&gt;

&lt;p&gt;…until a pipeline failed later with an error that told me basically nothing useful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Was it missing from &lt;code&gt;PATH&lt;/code&gt;?
&lt;/li&gt;
&lt;li&gt;Did we copy it into the wrong directory?
&lt;/li&gt;
&lt;li&gt;Did it lose the executable bit?
&lt;/li&gt;
&lt;li&gt;Wrong architecture (hello &lt;code&gt;exec format error&lt;/code&gt;)?
&lt;/li&gt;
&lt;li&gt;Or did it run but print something unexpected?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So I did what we all do: I started “hardening” the check.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First: “Is it on PATH?”&lt;/li&gt;
&lt;li&gt;Then: “Can it execute?”&lt;/li&gt;
&lt;li&gt;Then: “What version is it?”&lt;/li&gt;
&lt;li&gt;Then: “Is that version acceptable?”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And suddenly my “simple validation” became a little bash pipeline that tried to be clever:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;command&lt;/span&gt; &lt;span class="nt"&gt;-v&lt;/span&gt; config-render &lt;span class="o"&gt;&amp;gt;&lt;/span&gt;/dev/null 2&amp;gt;&amp;amp;1 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  config-render &lt;span class="nt"&gt;--version&lt;/span&gt; 2&amp;gt;/dev/null | &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nb"&gt;grep&lt;/span&gt; &lt;span class="nt"&gt;-oE&lt;/span&gt; &lt;span class="s1"&gt;'[0-9]+\.[0-9]+\.[0-9]+'&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="c"&gt;# ...some version comparison logic... \&lt;/span&gt;
  &lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"config-render missing or wrong version"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;exit &lt;/span&gt;1&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It worked... for now.&lt;/p&gt;

&lt;p&gt;But this is where validation quietly becomes a maintenance trap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Output parsing&lt;/strong&gt; breaks when formatting changes (“v1.2.3”, “1.2.3 (build abc)”, extra lines, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Redirect chains&lt;/strong&gt; swallow the exact error you needed (permission denied vs exec format error vs missing)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version comparisons&lt;/strong&gt; drift into “good enough”&lt;/li&gt;
&lt;li&gt;Every repo grows its &lt;em&gt;own&lt;/em&gt; flavor of the same fragile scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At some point I realized: I don’t want “a shell script that hopefully detects the problem.”&lt;br&gt;
I want &lt;strong&gt;a single check that tells me exactly why it failed&lt;/strong&gt;.&lt;/p&gt;
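
&lt;p&gt;To make “exactly why it failed” concrete, here is a minimal Python sketch of the idea (my own illustration, not Preflight’s implementation): resolve the binary first, then try to execute it, and report each failure mode separately instead of letting a pipeline swallow it:&lt;/p&gt;

```python
import shutil
import subprocess

def check_cmd(name):
    # Separate the failure modes a shell one-liner lumps together.
    path = shutil.which(name)
    if path is None:
        return f"[FAIL] cmd: {name}: not found in PATH"
    try:
        proc = subprocess.run([path, "--version"], capture_output=True, text=True)
    except OSError as err:
        # OSError distinguishes permission denied, exec format error, etc.
        return f"[FAIL] cmd: {name}: failed to execute: {err}"
    return f"[OK] cmd: {name} ({path}): {proc.stdout.strip()}"

print(check_cmd("definitely-not-installed"))
# prints: [FAIL] cmd: definitely-not-installed: not found in PATH
```

&lt;p&gt;A real tool also needs version parsing and comparison, retries, and consistent output formatting, which is exactly the part that keeps growing in hand-rolled scripts.&lt;/p&gt;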
&lt;h2&gt;
  
  
  One command, clear failure reasons
&lt;/h2&gt;

&lt;p&gt;That’s what &lt;strong&gt;Preflight&lt;/strong&gt; is for.&lt;/p&gt;

&lt;p&gt;Instead of assembling checks out of &lt;code&gt;grep&lt;/code&gt; + &lt;code&gt;awk&lt;/code&gt;, you run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;preflight cmd config-render &lt;span class="nt"&gt;--min&lt;/span&gt; 1.4.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When it passes:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[OK] cmd: config-render
     path: /usr/local/bin/config-render
     version: 1.6.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When it fails, you get the &lt;em&gt;reason&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;not found in PATH&lt;/li&gt;
&lt;li&gt;failed to execute (permission denied / exec format error / etc.)&lt;/li&gt;
&lt;li&gt;version too old (with an explicit comparison)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FAIL] cmd: config-render
       failed to execute: exec format error
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[FAIL] cmd: config-render
       version 1.2.0 &amp;lt; minimum 1.4.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No guessing. No “why did this randomly break today”.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why predictable checks matter (containers, CI, everywhere)
&lt;/h2&gt;

&lt;p&gt;This isn’t just a “containers are minimal” problem. It’s a reliability problem.&lt;/p&gt;

&lt;p&gt;Shell-based checks are notoriously sensitive to environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;bash&lt;/code&gt; vs &lt;code&gt;sh&lt;/code&gt; differences (and whatever &lt;code&gt;/bin/sh&lt;/code&gt; happens to be today)&lt;/li&gt;
&lt;li&gt;GNU vs BSD tool differences (&lt;code&gt;grep&lt;/code&gt;, &lt;code&gt;sed&lt;/code&gt;, &lt;code&gt;awk&lt;/code&gt; behave just differently enough)&lt;/li&gt;
&lt;li&gt;Linux vs macOS quirks in CI runners&lt;/li&gt;
&lt;li&gt;inconsistent error messaging when commands fail inside pipelines&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A small binary with a narrow job tends to behave the same way everywhere.&lt;br&gt;&lt;br&gt;
That consistency is what you actually want from validation: &lt;strong&gt;predictable pass/fail and predictable output&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;
  
  
  Multi-stage builds bonus: executable documentation
&lt;/h2&gt;

&lt;p&gt;There’s another benefit I didn’t appreciate until later: in a multi-stage Dockerfile, these checks become &lt;em&gt;documentation that runs&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When you see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;RUN &lt;/span&gt;preflight cmd config-render &lt;span class="nt"&gt;--min&lt;/span&gt; 1.4.0
&lt;span class="k"&gt;RUN &lt;/span&gt;preflight &lt;span class="nb"&gt;env &lt;/span&gt;DATABASE_URL
&lt;span class="k"&gt;RUN &lt;/span&gt;preflight file /app/config.yaml &lt;span class="nt"&gt;--not-empty&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;you’re not just “testing stuff”. You’re encoding expectations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“This image must contain this helper”&lt;/li&gt;
&lt;li&gt;“This version is the minimum supported”&lt;/li&gt;
&lt;li&gt;“This env var must be present”&lt;/li&gt;
&lt;li&gt;“This config file must exist and be non-empty”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It reads like a contract, and it fails like one too.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Preflight checks
&lt;/h2&gt;

&lt;p&gt;The above command check was just the beginning. Preflight now supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;cmd&lt;/strong&gt; — exists on PATH, executes, extracts version, compares semver&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;env&lt;/strong&gt; — required env vars, allowed values / patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;file&lt;/strong&gt; — existence, permissions, “not empty”, basic content checks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;http / tcp&lt;/strong&gt; — connectivity with retry + timeout&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hash&lt;/strong&gt; — checksum verification&lt;/li&gt;
&lt;/ul&gt;
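
&lt;p&gt;The version constraint behind a message like “version 1.2.0 &amp;lt; minimum 1.4.0” boils down to comparing numeric tuples. A rough Python sketch of that comparison (an illustration of the idea, not Preflight’s actual code):&lt;/p&gt;

```python
def parse_version(text):
    # Reduce "v1.2.3" or "1.2.3 (build abc)" to a comparable tuple of ints.
    core = text.strip().lstrip("v").split()[0].split("-")[0]
    return tuple(int(part) for part in core.split("."))

assert parse_version("v1.6.2") > parse_version("1.4.0")
assert parse_version("1.2.0 (build abc)") == (1, 2, 0)
```

&lt;p&gt;Tuple comparison handles the ordering; the messy part is tolerating the formatting variations mentioned earlier (“v1.2.3”, build metadata, extra lines), which is exactly where shell-based output parsing keeps breaking.&lt;/p&gt;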

&lt;h2&gt;
  
  
  Three places it immediately helps
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1) Build-time validation (fail fast)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; --from=ghcr.io/vertti/preflight:latest /preflight /usr/local/bin/preflight&lt;/span&gt;

&lt;span class="k"&gt;RUN &lt;/span&gt;preflight cmd config-render &lt;span class="nt"&gt;--min&lt;/span&gt; 1.4.0
&lt;span class="k"&gt;RUN &lt;/span&gt;preflight &lt;span class="nb"&gt;env &lt;/span&gt;DATABASE_URL
&lt;span class="k"&gt;RUN &lt;/span&gt;preflight file /app/config.yaml &lt;span class="nt"&gt;--not-empty&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2) CI validation (same checks outside Docker)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;preflight cmd config-render &lt;span class="nt"&gt;--min&lt;/span&gt; 1.4.0
preflight &lt;span class="nb"&gt;env &lt;/span&gt;DATABASE_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3) Healthchecks without &lt;code&gt;curl&lt;/code&gt; (distroless-friendly)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;HEALTHCHECK&lt;/span&gt;&lt;span class="s"&gt; CMD ["/preflight", "http", "http://localhost:8080/health"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  When shell is fine — and when it bites you
&lt;/h2&gt;

&lt;p&gt;Shell is totally fine when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the check is genuinely trivial&lt;/li&gt;
&lt;li&gt;you don’t care about consistent error reporting&lt;/li&gt;
&lt;li&gt;you’re happy maintaining the script&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Preflight pays off when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;you’re copying/downloading helper binaries into images&lt;/li&gt;
&lt;li&gt;you want the same behavior in CI and in containers&lt;/li&gt;
&lt;li&gt;you need real version constraints&lt;/li&gt;
&lt;li&gt;you want consistent output across checks&lt;/li&gt;
&lt;li&gt;you want checks that double as executable documentation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Repo:&lt;/strong&gt; &lt;a href="https://github.com/vertti/preflight" rel="noopener noreferrer"&gt;https://github.com/vertti/preflight&lt;/a&gt;&lt;/p&gt;

</description>
      <category>docker</category>
      <category>containers</category>
      <category>devops</category>
      <category>cli</category>
    </item>
    <item>
      <title>My Must-Read List of Software Process Books</title>
      <dc:creator>Janne Sinivirta</dc:creator>
      <pubDate>Thu, 09 May 2019 06:33:27 +0000</pubDate>
      <link>https://dev.to/v3rtti/my-must-read-list-of-software-process-books-19gh</link>
      <guid>https://dev.to/v3rtti/my-must-read-list-of-software-process-books-19gh</guid>
      <description>&lt;p&gt;Recently I was asked to give a talk on Continuous Deployment. I guess I was expected to talk about the technical challenges involved in building, testing and deploying software. But to be honest, those challenges are a small and relatively trivial part of CD. So I decided to talk about what I call holistic continuous deployment.&lt;/p&gt;

&lt;p&gt;To get the full benefits of continuous deployment I argue that you need to improve the performance of the entire business system. You need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;shorten and amplify feedback loops&lt;/li&gt;
&lt;li&gt;enable the environment for continual experimentation&lt;/li&gt;
&lt;li&gt;encourage taking risks and learning from failures.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the presentation I went through why and how to change the way to plan, communicate, collaborate, learn and lead your whole business.&lt;/p&gt;

&lt;p&gt;I’ve given the presentation in a few places now, and one of the things people always ask is “where can I read more about the topic?” I read a lot and love giving book suggestions. But this is a difficult question because, the way I see it, I’m really being asked “how do I do modern software development well?” Much of it has to do with what could be called DevOps, but that too has become a topic that encompasses everything under the sun, depending on whom you ask.&lt;/p&gt;

&lt;p&gt;Regardless, here’s my attempt to list the books that have had the biggest impact on my thinking about software development methods and processes. I’ll briefly explain why I picked each one. The list is in the order I would read them.&lt;/p&gt;

&lt;p&gt;As with many seemingly technical efforts, successful holistic continuous deployment is 20% technology and 80% people. Picking the wrong technology, tool or cloud provider is rarely the real reason for failure. You are much more likely to fail because of poor communication, lack of empathy, lack of alignment or a cumbersome organisation structure. So my list is probably missing seven books on psychology and human behaviour research, but I don’t feel qualified to give advice on those topics.&lt;/p&gt;

&lt;p&gt;I hope you find new ideas and inspiration in these books, or, as I did, the research to back up your gut feelings. And do let me know your own picks!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F5015%2F5687%2F5595%2FGoldratt_Amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F5015%2F5687%2F5595%2FGoldratt_Amazon.jpg" alt="The Goal" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/Goal-Process-Ongoing-Improvement/dp/0884271951" rel="noopener noreferrer"&gt;THE GOAL: A PROCESS OF ONGOING IMPROVEMENT - ELIYAHU M. GOLDRATT&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Goal is written by the Israeli business management guru Eliyahu M. Goldratt. It is a fictional novel about a manager in charge of a troubled manufacturing operation. The book is used in management colleges to teach students about the importance of strategic capacity planning and constraint management.&lt;/p&gt;

&lt;p&gt;Time Magazine listed the book as one of “The 25 Most Influential Business Management Books”. The book is a quick read but gives you plenty to think about.&lt;/p&gt;

&lt;p&gt;The Theory of Constraints presented in the book is easier to apply in manufacturing than in software development. However, I find the ideas of flow and hunting for bottlenecks important when working in organisations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F9415%2F5687%2F5991%2Fphoenix_Amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F9415%2F5687%2F5991%2Fphoenix_Amazon.jpg" alt="The Phoenix Project" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B078Y98RG8/" rel="noopener noreferrer"&gt;THE PHOENIX PROJECT - GENE KIM, KEVIN BEHR, GEORGE SPAFFORD&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;The Phoenix Project is directly based on Goldratt’s “The Goal”. The fictional story is about an IT organisation and its struggles. For me the story is a bit anxiety-inducing, as it captures so vividly many typical pain points in large IT companies. What has stuck with me ever since I read the book is that most organisations have “key person problems”. A key person is the one and only person who “knows this system” or who “can do this database maintenance”.&lt;/p&gt;

&lt;p&gt;You should do everything you can to distribute information and avoid creating these key persons. When you already have them, make sure they have slack (these people usually work at 120% capacity) and that they focus more on teaching others than on executing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F2215%2F5687%2F6207%2Fbeyond_phoenix_amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F2215%2F5687%2F6207%2Fbeyond_phoenix_amazon.jpg" alt="Beyond the Phoenix Project" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/Beyond-Phoenix-Project-Origins-Evolution/dp/B07B76MQNY/" rel="noopener noreferrer"&gt;BEYOND THE PHOENIX PROJECT: THE ORIGINS AND EVOLUTION OF DEVOPS - GENE KIM, JOHN WILLIS&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;I think Beyond the Phoenix Project is only available as an audiobook. It’s a fairly free-flowing discussion between the authors of The Phoenix Project about how the book was created. They share interesting bits and pieces of how they learned what they did, what motivated their writing, and how they eventually ended up writing the DevOps Handbook.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F3115%2F5687%2F6324%2Fdevopshb_amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F3115%2F5687%2F6324%2Fdevopshb_amazon.jpg" alt="Devops Handbook" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/DevOps-Handbook-World-Class-Reliability-Organizations-ebook/dp/B01M9ASFQ3" rel="noopener noreferrer"&gt;DEVOPS HANDBOOK - GENE KIM, JEZ HUMBLE, PATRICK DEBOIS, JOHN WILLIS, JOHN ALLSPAW&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;By the time I read the DevOps Handbook, it had very little new to offer me. But by then I had been developing software as my day job for more than 20 years. Even so, it is a well-written and extremely well-researched book. It covers all the essential topics on and around DevOps, and I would encourage you to read it even if you feel very familiar with the subject. Also, I would encourage reading “The Goal” and “The Phoenix Project” before this book, not after.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F5315%2F5687%2F6484%2Faccelerate_amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F5315%2F5687%2F6484%2Faccelerate_amazon.jpg" alt="Accelerate" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations-ebook/dp/B07B9F83WM/" rel="noopener noreferrer"&gt;ACCELERATE: THE SCIENCE OF LEAN SOFTWARE AND DEVOPS - NICOLE FORSGREN, JEZ HUMBLE, GENE KIM&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;Many processes and methods applied in software development today are used solely because Martin Fowler or Kent Beck recommended them in their respective blogs. Bringing in a more scientific approach is very welcome, and my last two books do just that. The State of DevOps Report has now been published for 7 years. In their own words, “The State of DevOps Report is the longest standing, most widely referenced and largest body of DevOps research on the planet.”&lt;/p&gt;

&lt;p&gt;The “Accelerate” book takes a deep dive into how the report is researched, how it has evolved over the years, its results, and the statistical methods used along with their justifications. In general, it’s very nice to have research data to back up your arguments on why certain practices should be considered in your organization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F6615%2F5687%2F6571%2Fflow_amazon.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fwww.nitor.com%2Fapplication%2Ffiles%2F6615%2F5687%2F6571%2Fflow_amazon.jpg" alt="The Principles of Product Development Flow" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://www.amazon.com/Principles-Product-Development-Flow-Generation/dp/1935401009/" rel="noopener noreferrer"&gt;THE PRINCIPLES OF PRODUCT DEVELOPMENT FLOW: SECOND GENERATION LEAN PRODUCT DEVELOPMENT - BY DONALD G. REINERTSEN&lt;/a&gt;
&lt;/h2&gt;

&lt;p&gt;With this book I’m torn. It is probably the best book on software development processes I’ve ever read, but the format (splitting everything into principles) is a tedious choice and the book is a pretty lengthy read. Despite its shortcomings, I very much encourage you to read it. It contains countless little golden nuggets and probably warrants several rereads.&lt;/p&gt;

&lt;p&gt;The book argues in a very pragmatic way that many of our typical project management and product development practices are broken or plain wrong. It then reaches out to other fields of science and pulls in the necessary research to prove its points. We get to read research from telecommunication networks, queuing theory, the military, and more. The examples show how things we tend to do in projects have been proven unsuccessful in many other fields.&lt;/p&gt;

&lt;p&gt;This was originally published in my company's &lt;a href="https://www.nitor.com/en/news-and-blogs/my-must-read-list-software-process-books" rel="noopener noreferrer"&gt;blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>book</category>
      <category>lean</category>
      <category>agile</category>
      <category>devops</category>
    </item>
    <item>
      <title>Great User Stories for Continuous Deployment</title>
      <dc:creator>Janne Sinivirta</dc:creator>
      <pubDate>Wed, 31 May 2017 09:53:20 +0000</pubDate>
      <link>https://dev.to/v3rtti/great-user-stories-for-continuous-deployment</link>
      <guid>https://dev.to/v3rtti/great-user-stories-for-continuous-deployment</guid>
      <description>

&lt;p&gt;Unity Ads delivers targeted video advertisement to hundreds of millions of mobile phones at staggering rates. We have six teams building the platform at our Helsinki office. All the teams are able to deliver all parts of the product to production several times a day every day.&lt;/p&gt;

&lt;p&gt;We have wholeheartedly embraced the uncertainty in product development. We work in a fast-paced industry where new companies and technologies come and go, and each can drastically alter the whole landscape. We realize we need to innovate and implement accordingly. We admit that we can only make educated guesses on what will work and what will not. So we experiment, measure results, and learn. Everything in our process aims to make experimentation fast, economical, and safe.&lt;/p&gt;

&lt;p&gt;To make this work, one of the most important things we focus on is making the batch size of work as small as possible on all levels. For teams this means small and high quality user stories. These will be the focus of this blog post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Definition of Good User Story
&lt;/h2&gt;

&lt;p&gt;Continuous Deployment means that all changes made to the software are deployed to production as soon as they are ready. We deploy software to production several times a day.&lt;/p&gt;

&lt;p&gt;My definition of a good user story in a continuous deployment environment is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“The smallest increment we can make, from working software to working software, that still brings value or proves a hypothesis.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Breaking down my definition, it contains two challenges: bringing value and being a small increment. Developers often struggle with focusing on or defining the value in each story. Product people struggle with the fact that most of their feature requests are bad ideas. The fastest way to get to a working idea is to make a small hypothesis, get it to production, prove it, and then iterate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Value Hypothesis
&lt;/h2&gt;

&lt;p&gt;When discussing the goals behind features, it’s important to switch from calling them requirements to calling them hypotheses. In an uncertain or fast-changing environment, you can’t really have requirements. This naming change encourages experimentation. We should be allowed (even encouraged) to say “we are unsure if this feature will bring the benefits we expect, so let’s find a way to verify the assumptions with minimal effort”.&lt;/p&gt;

&lt;p&gt;The teams here work towards goals, not features. Getting there usually starts with the teams asking for the goals behind feature requests. You can use a technique like the annoying but sometimes effective “Five Whys”, or just more probing discussions. In the end, you should clearly understand the reasons why a specific feature is beneficial.&lt;/p&gt;

&lt;h2&gt;
  
  
  Safe Increments
&lt;/h2&gt;

&lt;p&gt;When the goal is clear, the next thing we hunt for is “the smallest increment that still brings value or proves a hypothesis”. It’s easy to split stories on a technical level so that they make sense to implement but bring no value on their own. I strongly discourage this.&lt;/p&gt;

&lt;p&gt;It is very easy to have the whole team busy completing technical bits and pieces for weeks, only to realize that the first time the whole thing is usable for the customer is still a few weeks or months of integration work away. The most common culprit I see is splitting a story into a “frontend part” and a “backend part”. Neither brings any value alone, and when implemented apart they often require many iterations, while neither implementer gets the satisfaction of “delivering” anything.&lt;/p&gt;

&lt;p&gt;Generally, the aim is to figure out “how can I throw away 80% of the story and still deploy working software that we can learn from?”. You should elaborate the original story and make all its steps and constraints explicit. If the feature turns out to describe a workflow or a “wizard” style, consider whether it’s possible to deliver only one or a few of the workflow steps. Are there variations in the feature? Consider implementing just one of the variations first. Richard Lawrence describes similar patterns in his blog with more detailed examples.&lt;/p&gt;

&lt;p&gt;A great way to encourage developers to shift their focus from delivering code to delivering value was to make a team decision to hold a product demo every two weeks. Each demo only shows working software from the staging environment. No excuses, no showing stories or burndown charts. This makes developers consider “what will I have to show for this in the next demo?” when planning each story.&lt;/p&gt;

&lt;h2&gt;
  
  
  More Resources
&lt;/h2&gt;

&lt;p&gt;There are relatively few good resources on the topic of good user stories and methods of arriving at them. This is surprising, considering the importance of the topic for the success of your whole organization and your ability to do meaningful continuous deployment. Agile Alliance lists the following resources:&lt;/p&gt;

&lt;p&gt;&lt;a href="http://xp123.com/articles/twenty-ways-to-split-stories/"&gt;Twenty Ways to Split Stories&lt;/a&gt;, 2005&lt;br&gt;
&lt;a href="http://www.jbrains.ca/permalink/how-youll-probably-learn-to-split-features"&gt;How You'll Probably Learn to Split Features&lt;/a&gt;, 2008&lt;br&gt;
&lt;a href="http://www.richardlawrence.info/2009/10/28/patterns-for-splitting-user-stories/"&gt;Patterns for Splitting User Stories&lt;/a&gt;, 2009&lt;br&gt;
&lt;a href="http://www.infoq.com/news/2011/04/how-to-split-user-stories"&gt;InfoQ: How to Split User Stories&lt;/a&gt;, 2011&lt;br&gt;
&lt;a href="http://gojko.net/2012/01/23/splitting-user-stories-the-hamburger-method/"&gt;Splitting User Stories: the Hamburger Method&lt;/a&gt;, 2012&lt;/p&gt;

&lt;p&gt;Let me know if you have others you have found useful!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Big thanks to Heikki Tunkelo and Samuel Husso for all the feedback while writing this article.&lt;/em&gt; This was originally published on my &lt;a href="https://www.linkedin.com/pulse/great-user-stories-continuous-deployment-janne-sinivirta"&gt;LinkedIn page&lt;/a&gt;.&lt;/p&gt;


</description>
      <category>agile</category>
      <category>continuous</category>
      <category>userstory</category>
      <category>patterns</category>
    </item>
  </channel>
</rss>
