<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: spearzy</title>
    <description>The latest articles on DEV Community by spearzy (@spearzy).</description>
    <link>https://dev.to/spearzy</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3913913%2Fdee42507-e787-4e45-86e3-61bccff722b3.jpeg</url>
      <title>DEV Community: spearzy</title>
      <link>https://dev.to/spearzy</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/spearzy"/>
    <language>en</language>
    <item>
      <title>I wrote a .NET assertion library to understand assertion libraries</title>
      <dc:creator>spearzy</dc:creator>
      <pubDate>Tue, 05 May 2026 16:32:35 +0000</pubDate>
      <link>https://dev.to/spearzy/i-wrote-a-net-assertion-library-to-understand-assertion-libraries-1mmg</link>
      <guid>https://dev.to/spearzy/i-wrote-a-net-assertion-library-to-understand-assertion-libraries-1mmg</guid>
      <description>&lt;p&gt;I have been working on a .NET assertion library called Axiom Assertions.&lt;/p&gt;

&lt;p&gt;It started as a way to learn how assertion libraries work, then grew into an experiment around deterministic output, batching, analyzers, and AI-focused test assertions.&lt;/p&gt;

&lt;p&gt;The repo is here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/spearzy/Axiom-Assertions" rel="noopener noreferrer"&gt;https://github.com/spearzy/Axiom-Assertions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This did not start as a plan to overthrow every assertion library in .NET. That would be a bit much. The ecosystem already has good tools, and most teams quite reasonably do not wake up looking for a new way to write &lt;code&gt;Should()&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The real reason was simpler: I wanted to understand how assertion libraries work by writing one myself.&lt;/p&gt;

&lt;p&gt;At some point, reading the docs for existing libraries only gets you so far. You can use an assertion library for years and still not really think about how much work is hiding behind a nice failure message. Formatting values, comparing collections, showing diffs, supporting equivalency, deciding when to fail fast, deciding when to collect failures, making APIs feel natural without turning them into soup. There is a lot going on.&lt;/p&gt;

&lt;p&gt;So I built one. As one does, when curiosity gets out of hand.&lt;/p&gt;
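&lt;p&gt;To make that concrete: the core mechanic most fluent assertion libraries share is small. An extension method wraps the value under test, and assertion methods throw with a formatted message when a check fails. Here is a toy, non-generic sketch of that idea, purely illustrative and not Axiom's actual implementation:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;public static class ShouldExtensions
{
    // Wrap the value under test in an assertion object.
    public static StringAssertions Should(this string subject) =&gt; new(subject);
}

public sealed class StringAssertions(string subject)
{
    // Throw with a readable message when the check fails.
    public void Contain(string expected)
    {
        if (subject is null || !subject.Contains(expected))
            throw new InvalidOperationException(
                $"Expected \"{subject}\" to contain \"{expected}\".");
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Almost everything interesting in a real library lives in that last step: how the failure message gets built.&lt;/p&gt;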

&lt;h2&gt;What I built&lt;/h2&gt;

&lt;p&gt;I built Axiom Assertions, a fluent assertion library for .NET tests.&lt;/p&gt;

&lt;p&gt;The basic usage is intentionally familiar:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Axiom.Assertions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;NotBeNull&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;Contain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That part is not meant to be clever.&lt;/p&gt;

&lt;p&gt;Where I wanted to experiment was in a few specific areas: deterministic failure output, explicit batching, analyzer-backed migration help, and optional assertion packages for things like JSON, HTTP responses, vectors, and retrieval tests.&lt;/p&gt;

&lt;p&gt;I also wanted those areas to be split sensibly. If you only need the core assertion library, you should not have to pull in every extra package just because it exists.&lt;/p&gt;

&lt;h2&gt;Why deterministic output mattered to me&lt;/h2&gt;

&lt;p&gt;One thing I wanted to explore was stable failure output.&lt;/p&gt;

&lt;p&gt;Test failures are already annoying. They become more annoying when the output is noisy, inconsistent, hard to diff, or awkward to read in CI. A failure message should help you understand what broke, not give you a small formatting puzzle just for fun.&lt;/p&gt;

&lt;p&gt;There are already assertion libraries that focus heavily on readable failure messages. Shouldly, for example, has always had that as one of its main strengths.&lt;/p&gt;

&lt;p&gt;Axiom is not trying to say those tools are wrong. It is more of an experiment in making stable, predictable output one of the design constraints from the start.&lt;/p&gt;

&lt;p&gt;That means thinking about questions like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;will this message stay readable in CI?&lt;/li&gt;
&lt;li&gt;will the output be stable enough for code review?&lt;/li&gt;
&lt;li&gt;can repeated failures be compared without noise?&lt;/li&gt;
&lt;li&gt;are collections and structured values shown in a predictable way?&lt;/li&gt;
&lt;li&gt;does the output help, or is it just technically correct?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;“Technically correct” is fine for a compiler. It is, however, less charming in a failed test at 5:10pm.&lt;/p&gt;
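&lt;p&gt;One concrete example of the kind of noise I mean: unordered collections. A &lt;code&gt;HashSet&lt;/code&gt; or &lt;code&gt;Dictionary&lt;/code&gt; makes no ordering guarantees, so a formatter that sorts entries first renders the same string on every run. A hypothetical helper, just to illustrate the principle (not Axiom's code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Linq;

// Sort before formatting: two sets with the same contents then always
// render identically, regardless of insertion order, which keeps
// repeated failures diffable in CI and code review.
static string FormatSet(string[] values) =&gt;
    "{ " + string.Join(", ", values.OrderBy(v =&gt; v, StringComparer.Ordinal)) + " }";
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;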

&lt;h2&gt;Explicit batching&lt;/h2&gt;

&lt;p&gt;Another area I wanted to explore was grouped assertions.&lt;/p&gt;

&lt;p&gt;Sometimes fail-fast is exactly right. If the first condition fails, there is no point continuing.&lt;/p&gt;

&lt;p&gt;Other times, especially when checking a larger object or response, it is useful to collect several failures at once. If a user profile has five fields wrong, I would rather see all five in one run than fix them one at a time like some sort of unit testing advent calendar.&lt;/p&gt;

&lt;p&gt;Axiom has an explicit batching API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Axiom.Assertions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Axiom.Core&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;var&lt;/span&gt; &lt;span class="n"&gt;batch&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Assert&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"profile"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;StartWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"A"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Email&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;Contain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"@"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Roles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;Contain&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"admin"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The goal is to make grouped assertions intentional. Not hidden, not magical, just clear at the call site.&lt;/p&gt;

&lt;h2&gt;The AI testing bit&lt;/h2&gt;

&lt;p&gt;The other reason Axiom grew beyond a learning project was that I wanted to play with assertions for AI-adjacent code.&lt;/p&gt;

&lt;p&gt;A lot of test libraries are very good at normal application assertions: strings, collections, numbers, exceptions, objects, HTTP responses, and so on.&lt;/p&gt;

&lt;p&gt;But newer systems often involve embeddings, similarity, ranking, retrieval, and “close enough” comparisons. Plain equality is usually the wrong tool there. If you are testing retrieval results or vector similarity, you often care about things like top-k ranking, dot products, cosine similarity, or mean reciprocal rank.&lt;/p&gt;
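&lt;p&gt;For reference, those metrics have standard definitions. A quick sketch of cosine similarity and mean reciprocal rank as plain math in C# (general definitions, not Axiom's internals):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System;
using System.Linq;

// Cosine similarity: the dot product of the two vectors divided by the
// product of their magnitudes.
static double CosineSimilarity(float[] a, float[] b)
{
    double dot = 0, normA = 0, normB = 0;
    foreach (var (x, y) in a.Zip(b))
    {
        dot += x * y;
        normA += x * x;
        normB += y * y;
    }
    return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
}

// Mean reciprocal rank: the average of 1/rank of the first relevant
// result per query, using 1-based ranks; a query with no relevant
// result contributes 0.
static double MeanReciprocalRank(double[] firstRelevantRanks) =&gt;
    firstRelevantRanks.Average(r =&gt; r &gt; 0 ? 1.0 / r : 0.0);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;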

&lt;p&gt;So Axiom has an optional vectors package:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Axiom.Assertions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Axiom.Vectors&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;HaveDotProductWith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;expected&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;expectedDotProduct&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1f&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tolerance&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.001f&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;ContainInTopK&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"doc-7"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="n"&gt;queries&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Should&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;HaveMeanReciprocalRank&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;expectedMeanReciprocalRank&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.75&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is probably the most experimental part of the project, but also the part I find most interesting.&lt;/p&gt;

&lt;p&gt;Testing AI-ish code often gets vague very quickly. I wanted to see what a more explicit assertion API might look like for those cases.&lt;/p&gt;

&lt;h2&gt;Optional packages&lt;/h2&gt;

&lt;p&gt;Axiom is split into a main package and a few optional ones.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet add package Axiom.Assertions
dotnet add package Axiom.Json
dotnet add package Axiom.Http
dotnet add package Axiom.Vectors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The idea is that the main assertion library should not need to carry every possible assertion type. If a project needs JSON assertions, add JSON. If it needs HTTP assertions, add HTTP. If it needs vector or retrieval assertions, add those.&lt;/p&gt;

&lt;p&gt;If it does not need them, fine. Not every test project needs to bring a suitcase.&lt;/p&gt;

&lt;h2&gt;What I am looking for&lt;/h2&gt;

&lt;p&gt;The project is usable, but still early in adoption.&lt;/p&gt;

&lt;p&gt;I am mainly looking for feedback from people who write a lot of .NET tests, especially around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;whether the failure output is actually readable&lt;/li&gt;
&lt;li&gt;whether the batching API feels useful&lt;/li&gt;
&lt;li&gt;whether the package split makes sense&lt;/li&gt;
&lt;li&gt;whether the vector and retrieval assertions match real testing needs&lt;/li&gt;
&lt;li&gt;what would stop someone from trying this in a small test project&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I am not expecting anyone to rewrite a whole test suite for fun. That is how people end up in meetings.&lt;/p&gt;

&lt;p&gt;Trying it in a small test project, looking through the docs, or opening an issue with something confusing would all be useful.&lt;/p&gt;

&lt;h2&gt;Links&lt;/h2&gt;

&lt;p&gt;Repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/spearzy/Axiom-Assertions" rel="noopener noreferrer"&gt;https://github.com/spearzy/Axiom-Assertions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docs:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://spearzy.github.io/Axiom-Assertions/" rel="noopener noreferrer"&gt;https://spearzy.github.io/Axiom-Assertions/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;NuGet:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.nuget.org/packages/Axiom.Assertions" rel="noopener noreferrer"&gt;https://www.nuget.org/packages/Axiom.Assertions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you have opinions about assertion libraries, test failure output, or testing vector/retrieval-heavy code in .NET, I would genuinely like to hear them.&lt;/p&gt;

&lt;p&gt;If Axiom looks useful, interesting, or worth keeping an eye on, a star on the repo would be appreciated. It helps other .NET developers find the project and gives me a useful signal that the idea is worth continuing.&lt;/p&gt;

&lt;p&gt;This article was written with AI assistance.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>testing</category>
      <category>csharp</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
