<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Björn Weström</title>
    <description>The latest articles on DEV Community by Björn Weström (@bjowes).</description>
    <link>https://dev.to/bjowes</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F466476%2F978d108a-0e4b-40d6-ba1f-1a97ab52087a.jpg</url>
      <title>DEV Community: Björn Weström</title>
      <link>https://dev.to/bjowes</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bjowes"/>
    <language>en</language>
    <item>
      <title>Nasty exceptions in Timer scope</title>
      <dc:creator>Björn Weström</dc:creator>
      <pubDate>Tue, 16 Mar 2021 22:37:00 +0000</pubDate>
      <link>https://dev.to/bjowes/nasty-exceptions-in-timer-scope-1i84</link>
      <guid>https://dev.to/bjowes/nasty-exceptions-in-timer-scope-1i84</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Unhandled exceptions occurring in Timer scope in a .Net Core application will crash the application without any logs or any graceful shutdown. Make sure to catch any exceptions in Timer scope and handle them without rethrowing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Timer scope
&lt;/h2&gt;

&lt;p&gt;By Timer scope, I mean any callback method that is set to execute when a Timer fires. Typically, the code looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;timer&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;System&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Threading&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Timer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MethodInTimerScope&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;MethodInTimerScope&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;object&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Perform Timer based task&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Exceptions in Timer scope
&lt;/h2&gt;

&lt;p&gt;When an unhandled exception occurs within request scope (anything initiated from an incoming HTTP request through a Controller), it will be caught by a default exception handler. The exception will be logged and the request will receive a 500 response.&lt;br&gt;
Similarly, when an exception occurs during application startup, shutdown, or inside a Background task (such as a &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.backgroundservice"&gt;BackgroundService&lt;/a&gt;), it is nicely logged and handled appropriately.&lt;/p&gt;

&lt;p&gt;However, when an unhandled exception occurs within Timer scope, that scope appears to be out of reach for these regular exception catch-alls. The exception causes the whole application to crash immediately, without leaving any clues behind in the logs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Avoiding hard crashes from Timer scope exceptions
&lt;/h2&gt;

&lt;p&gt;The key to avoiding this situation is to catch any exceptions within the Timer scope to log and handle them. If the exceptions are so severe that the application should shut down, a graceful shutdown operation can be initiated using the &lt;a href="https://docs.microsoft.com/en-us/dotnet/api/microsoft.extensions.hosting.ihostapplicationlifetime.stopapplication?view=dotnet-plat-ext-5.0#Microsoft_Extensions_Hosting_IHostApplicationLifetime_StopApplication"&gt;IHostApplicationLifetime.StopApplication()&lt;/a&gt; method.&lt;/p&gt;
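&lt;p&gt;As a sketch, the Timer callback from above could be wrapped like this. The logging and shutdown calls are placeholders; in a real hosted service you would use an injected logger and &lt;code&gt;IHostApplicationLifetime&lt;/code&gt; instead:&lt;/p&gt;

```csharp
// Sketch of a crash-proof Timer callback. Console logging stands in
// for an injected logger; names besides MethodInTimerScope are assumptions.
using System;
using System.Threading;

public class Worker
{
    private Timer timer;

    public void Start()
    {
        // Fire after 1 s, then every 5 s
        this.timer = new Timer(MethodInTimerScope, null, 1000, 5000);
    }

    private void MethodInTimerScope(object state)
    {
        try
        {
            // Perform Timer based task
        }
        catch (Exception ex)
        {
            // Log and swallow: rethrowing here would crash the process
            Console.Error.WriteLine(ex);
            // If the error is fatal, trigger a graceful shutdown instead,
            // e.g. applicationLifetime.StopApplication();
        }
    }
}
```

&lt;p&gt;The essential point is that the &lt;code&gt;catch&lt;/code&gt; block must not rethrow; anything that escapes the callback terminates the process.&lt;/p&gt;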

&lt;h3&gt;
  
  
  Disclaimer
&lt;/h3&gt;

&lt;p&gt;This applies to .Net Core 3.1 and above. It has been tested on Windows with IIS hosting, and on OS X from Visual Studio.&lt;/p&gt;

</description>
      <category>dotnet</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Keeping System.Text.Json lean</title>
      <dc:creator>Björn Weström</dc:creator>
      <pubDate>Wed, 09 Sep 2020 21:25:20 +0000</pubDate>
      <link>https://dev.to/bjowes/keeping-system-text-json-lean-12jc</link>
      <guid>https://dev.to/bjowes/keeping-system-text-json-lean-12jc</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;p&gt;Serialization with &lt;code&gt;System.Text.Json&lt;/code&gt; has an unexpected performance penalty when used with options, such as setting &lt;code&gt;PropertyNamingPolicy&lt;/code&gt; to CamelCase. For small objects, serialization is &lt;strong&gt;~200x slower!&lt;/strong&gt; To avoid this issue, store the options object in a class member and pass that member to &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  UPDATE 2020-09-15
&lt;/h2&gt;

&lt;p&gt;I posted &lt;a href="https://github.com/dotnet/runtime/issues/42167"&gt;an issue&lt;/a&gt; about this on GitHub, and the dotnet maintainers were quick to respond. This behavior is understood, and the recommendation is to use a static (or shared) options object to avoid it. The root cause is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The serializer undergoes a warm-up phase during the first (de)serialization of every type in the object graph when a new options instance is passed to it. This warm-up includes creating a cache of metadata it needs to perform (de)serialization: funcs to property getters, setters, ctor arguments, specified attributes etc. This metadata caches is stored in the options instance. This process is not cheap, and it is recommended to cache options instances for reuse on subsequent calls to the serializer to avoid unnecessarily undergoing the warm-up repeatedly&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are related issues &lt;a href="https://github.com/dotnet/runtime/issues/40072"&gt;#40072&lt;/a&gt; and &lt;a href="https://github.com/dotnet/runtime/issues/38982"&gt;#38982&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How it all began
&lt;/h2&gt;

&lt;p&gt;I was working on upgrading several of our applications from .Net Core 2.1 to .Net Core 3.1. One significant change was the switch from &lt;code&gt;Newtonsoft.Json&lt;/code&gt; to &lt;code&gt;System.Text.Json&lt;/code&gt; for serialization. Since the &lt;code&gt;System.Text.Json&lt;/code&gt; package has been positively received due to its improved performance, I decided to give it a go.&lt;/p&gt;

&lt;p&gt;Making the transition wasn't particularly difficult, since we didn't use any advanced features of &lt;code&gt;Newtonsoft.Json&lt;/code&gt;. The main snag was handling reference loops, which &lt;code&gt;Newtonsoft.Json&lt;/code&gt; could conveniently sort out for you, but even that was fixed with a few well-placed &lt;code&gt;[JsonIgnore]&lt;/code&gt; attributes.&lt;/p&gt;
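&lt;p&gt;As a minimal sketch of that fix, assuming hypothetical &lt;code&gt;Parent&lt;/code&gt; and &lt;code&gt;Child&lt;/code&gt; types (not from our actual models), the back-reference in the loop is simply excluded from serialization:&lt;/p&gt;

```csharp
// Minimal sketch of breaking a reference loop with [JsonIgnore].
// Parent/Child are hypothetical types, not from our actual models.
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class Parent
{
    public string Name { get; set; }
    public Child Child { get; set; }
}

public class Child
{
    public string Name { get; set; }

    [JsonIgnore] // Exclude the back-reference so serialization terminates
    public Parent Parent { get; set; }
}

public class Program
{
    public static void Main()
    {
        var parent = new Parent { Name = "p" };
        parent.Child = new Child { Name = "c", Parent = parent };

        // Without [JsonIgnore], this would fail due to the object cycle
        Console.WriteLine(JsonSerializer.Serialize(parent));
    }
}
```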

&lt;p&gt;To keep the transition as smooth as possible, I decided to configure System.Text.Json to use camelCase for property names. Many of the APIs have an Angular-based frontend as their main consumer, and it just didn't make sense to me to start sending PascalCase JSON. (This is also the default configuration for the serializer built into .Net Core MVC.)&lt;/p&gt;

&lt;p&gt;I was just finishing up a Web API, and everything looked promising. But one particular operation seemed unusually slow. The operation fetches a list of objects; in my tests I was retrieving 80 of them. Stepping through the code in the debugger, I found that the AutoMapper conversion from entity model to contract model took ~150 ms! The objects are simple enough, with about 10 properties. However, they include a Hash property, which is calculated from a serialization of the entity object.&lt;/p&gt;

&lt;p&gt;In a trial-and-error attempt to find the root cause of the delay, I removed the options parameter from the &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt; call. And sure enough, the same AutoMapper conversion now took only 3 ms! I found it hard to believe that using camelCase would make serialization orders of magnitude slower; looking at available benchmarks online, the performance impact should be hardly noticeable. This piqued my interest, and I decided to do some benchmarks of my own!&lt;/p&gt;

&lt;h2&gt;
  
  
  Different methods of passing the options
&lt;/h2&gt;

&lt;p&gt;Some more trial and error revealed that putting the &lt;code&gt;JsonSerializerOptions&lt;/code&gt; in a static class member and passing that member to &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt; achieved similar performance to not passing any options at all. Hence I put together this benchmark with different methods of passing in options, with and without camelCase, and compared that to using no options.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;BenchmarkDotNet&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;v0.12.1, OS=Windows 10.0.17763.1397 (1809/October2018Update/Redstone5)&lt;/span&gt;
&lt;span class="err"&gt;Intel&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;i7-8700&lt;/span&gt; &lt;span class="err"&gt;CPU&lt;/span&gt; &lt;span class="err"&gt;3.20GHz&lt;/span&gt; &lt;span class="err"&gt;(Coffee&lt;/span&gt; &lt;span class="err"&gt;Lake),&lt;/span&gt; &lt;span class="err"&gt;1&lt;/span&gt; &lt;span class="err"&gt;CPU,&lt;/span&gt; &lt;span class="err"&gt;12&lt;/span&gt; &lt;span class="err"&gt;logical&lt;/span&gt; &lt;span class="err"&gt;and&lt;/span&gt; &lt;span class="err"&gt;6&lt;/span&gt; &lt;span class="err"&gt;physical&lt;/span&gt; &lt;span class="err"&gt;cores&lt;/span&gt;
&lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="py"&gt;SDK&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;3.1.101&lt;/span&gt;
  &lt;span class="nn"&gt;[Host]&lt;/span&gt;     &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;3.1.5&lt;/span&gt; &lt;span class="err"&gt;(CoreCLR&lt;/span&gt; &lt;span class="err"&gt;4.700.20.26901,&lt;/span&gt; &lt;span class="err"&gt;CoreFX&lt;/span&gt; &lt;span class="err"&gt;4.700.20.27001),&lt;/span&gt; &lt;span class="err"&gt;X64&lt;/span&gt; &lt;span class="err"&gt;RyuJIT&lt;/span&gt;
  &lt;span class="err"&gt;DefaultJob&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;3.1.5&lt;/span&gt; &lt;span class="err"&gt;(CoreCLR&lt;/span&gt; &lt;span class="err"&gt;4.700.20.26901,&lt;/span&gt; &lt;span class="err"&gt;CoreFX&lt;/span&gt; &lt;span class="err"&gt;4.700.20.27001),&lt;/span&gt; &lt;span class="err"&gt;X64&lt;/span&gt; &lt;span class="err"&gt;RyuJIT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Mean&lt;/th&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;StdDev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Serialize&lt;/td&gt;
&lt;td&gt;4.479 μs&lt;/td&gt;
&lt;td&gt;0.0123 μs&lt;/td&gt;
&lt;td&gt;0.0103 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;839.179 μs&lt;/td&gt;
&lt;td&gt;7.5675 μs&lt;/td&gt;
&lt;td&gt;7.0786 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;858.304 μs&lt;/td&gt;
&lt;td&gt;11.1388 μs&lt;/td&gt;
&lt;td&gt;9.3014 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_LocalOptions_Default&lt;/td&gt;
&lt;td&gt;842.628 μs&lt;/td&gt;
&lt;td&gt;4.0955 μs&lt;/td&gt;
&lt;td&gt;3.4199 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_LocalOptions_CamelCase&lt;/td&gt;
&lt;td&gt;839.716 μs&lt;/td&gt;
&lt;td&gt;6.8355 μs&lt;/td&gt;
&lt;td&gt;6.3939 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticMemberOptions_Default&lt;/td&gt;
&lt;td&gt;4.464 μs&lt;/td&gt;
&lt;td&gt;0.0144 μs&lt;/td&gt;
&lt;td&gt;0.0120 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticMemberOptions_CamelCase&lt;/td&gt;
&lt;td&gt;4.428 μs&lt;/td&gt;
&lt;td&gt;0.0588 μs&lt;/td&gt;
&lt;td&gt;0.0699 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_MemberOptions_Default&lt;/td&gt;
&lt;td&gt;4.499 μs&lt;/td&gt;
&lt;td&gt;0.0311 μs&lt;/td&gt;
&lt;td&gt;0.0259 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_MemberOptions_CamelCase&lt;/td&gt;
&lt;td&gt;4.578 μs&lt;/td&gt;
&lt;td&gt;0.0228 μs&lt;/td&gt;
&lt;td&gt;0.0202 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The difference between creating the options on the fly (inline or locally) and providing them from a class member is daunting! It adds nearly 1 ms to the serialization time. Since creating the options on the fly means creating them for each call to &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt;, it was expected that these would perform slightly worse. Could the instantiation of the options object explain this difference? Let's find out!&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark code
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Type of function call
&lt;/h4&gt;

&lt;p&gt;The middle part of the name of each benchmark identifies the type of call to &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;InlineOptions - Options are instantiated inline
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;LocalOptions - Options are instantiated as a local variable
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;options&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;StaticMemberOptions - Options are instantiated as a static class member
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;static&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;jsonStaticDefaultOptions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonStaticDefaultOptions&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;MemberOptions - Options are instantiated as a class member
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerOptions&lt;/span&gt; &lt;span class="n"&gt;jsonDefaultOptions&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;jsonDefaultOptions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;...&lt;/span&gt;
&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;jsonDefaultOptions&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h4&gt;
  
  
  Type of options
&lt;/h4&gt;

&lt;p&gt;The suffix of the name of each benchmark identifies the type of options used.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Default - Default options
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;CamelCase - Options set for camelCase
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;PropertyNamingPolicy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;JsonNamingPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CamelCase&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
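&lt;p&gt;For reference, here is a sketch of how these variants could be wired up with BenchmarkDotNet. The class layout and the &lt;code&gt;MyObject&lt;/code&gt; definition are my assumptions, not the exact benchmark source:&lt;/p&gt;

```csharp
// Hedged sketch of the benchmark harness, assuming BenchmarkDotNet.
using System.Text.Json;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class MyObject
{
    // Stand-in for the small test object with ~10 properties
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SerializeBenchmarks
{
    private static readonly JsonSerializerOptions jsonStaticCamelOptions =
        new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    private readonly MyObject myObject = new MyObject { Id = 1, Name = "test" };

    [Benchmark] // Baseline: no options passed at all
    public string Serialize() => JsonSerializer.Serialize(myObject);

    [Benchmark] // Options created for every call: pays the warm-up each time
    public string Serialize_InlineOptions_CamelCase() => JsonSerializer.Serialize(
        myObject, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });

    [Benchmark] // Shared options: warm-up happens once, metadata is cached
    public string Serialize_StaticMemberOptions_CamelCase() =>
        JsonSerializer.Serialize(myObject, jsonStaticCamelOptions);
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run(typeof(SerializeBenchmarks));
}
```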



&lt;h2&gt;
  
  
  Options construction
&lt;/h2&gt;

&lt;p&gt;If creating the options when they are needed slows down the serialization, could it be that the options object requires some heavy lifting to be instantiated? Unlikely but worth a benchmark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark
&lt;/h3&gt;






&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Mean&lt;/th&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;StdDev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;NoOp&lt;/td&gt;
&lt;td&gt;0.0163 ns&lt;/td&gt;
&lt;td&gt;0.0014 ns&lt;/td&gt;
&lt;td&gt;0.0011 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CreateMyObject&lt;/td&gt;
&lt;td&gt;4.7044 ns&lt;/td&gt;
&lt;td&gt;0.0101 ns&lt;/td&gt;
&lt;td&gt;0.0089 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CreateOptions&lt;/td&gt;
&lt;td&gt;337.1717 ns&lt;/td&gt;
&lt;td&gt;3.2470 ns&lt;/td&gt;
&lt;td&gt;2.5351 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CreateOptions_Camel&lt;/td&gt;
&lt;td&gt;341.6551 ns&lt;/td&gt;
&lt;td&gt;3.1801 ns&lt;/td&gt;
&lt;td&gt;2.8190 ns&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The results show that the options object is indeed a rather large one (compared to the test object &lt;code&gt;MyObject&lt;/code&gt; used for the serialization). But it still takes only a fraction of a μs to instantiate, so this cannot explain the large performance gap in the previous benchmark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;NoOp is simply an empty function, added for reference&lt;/li&gt;
&lt;li&gt;CreateMyObject instantiates an object of the simple test class &lt;code&gt;MyObject&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;CreateOptions / CreateOptions_Camel instantiate a &lt;code&gt;JsonSerializerOptions&lt;/code&gt; object in the same ways as in the previous benchmark.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Serialization of a list
&lt;/h2&gt;

&lt;p&gt;In the previous benchmarks a really small object was serialized. How does this performance gap scale if we serialize something larger, like a list of the small objects?&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark
&lt;/h3&gt;






&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;N&lt;/th&gt;
&lt;th&gt;Mean&lt;/th&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;StdDev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;45.92 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.913 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.250 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;944.99 μs&lt;/td&gt;
&lt;td&gt;9.046 μs&lt;/td&gt;
&lt;td&gt;8.019 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;938.90 μs&lt;/td&gt;
&lt;td&gt;17.853 μs&lt;/td&gt;
&lt;td&gt;19.844 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;43.30 μs&lt;/td&gt;
&lt;td&gt;0.106 μs&lt;/td&gt;
&lt;td&gt;0.089 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;44.24 μs&lt;/td&gt;
&lt;td&gt;0.040 μs&lt;/td&gt;
&lt;td&gt;0.037 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;528.36 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.481 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.313 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;1,429.01 μs&lt;/td&gt;
&lt;td&gt;10.067 μs&lt;/td&gt;
&lt;td&gt;8.924 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;1,434.56 μs&lt;/td&gt;
&lt;td&gt;4.027 μs&lt;/td&gt;
&lt;td&gt;3.570 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;510.18 μs&lt;/td&gt;
&lt;td&gt;2.280 μs&lt;/td&gt;
&lt;td&gt;2.133 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;517.13 μs&lt;/td&gt;
&lt;td&gt;2.558 μs&lt;/td&gt;
&lt;td&gt;2.268 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1000&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;4,852.29 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;25.266 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;23.634 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;5,727.67 μs&lt;/td&gt;
&lt;td&gt;81.384 μs&lt;/td&gt;
&lt;td&gt;72.145 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;5,713.51 μs&lt;/td&gt;
&lt;td&gt;84.481 μs&lt;/td&gt;
&lt;td&gt;70.545 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;4,829.04 μs&lt;/td&gt;
&lt;td&gt;25.773 μs&lt;/td&gt;
&lt;td&gt;24.108 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;4,939.82 μs&lt;/td&gt;
&lt;td&gt;22.851 μs&lt;/td&gt;
&lt;td&gt;21.374 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here N is the number of objects in the list. OK, so it seems that the on-the-fly options add about 900 μs to the serialization regardless of object size. That's good news at least.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serialization in a loop
&lt;/h2&gt;

&lt;p&gt;How about serializing those objects one by one?&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark
&lt;/h3&gt;






&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;N&lt;/th&gt;
&lt;th&gt;Mean&lt;/th&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;StdDev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;10&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;44.59 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.038 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.033 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;8,364.56 μs&lt;/td&gt;
&lt;td&gt;80.834 μs&lt;/td&gt;
&lt;td&gt;75.612 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;8,438.08 μs&lt;/td&gt;
&lt;td&gt;57.459 μs&lt;/td&gt;
&lt;td&gt;53.747 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;45.02 μs&lt;/td&gt;
&lt;td&gt;0.142 μs&lt;/td&gt;
&lt;td&gt;0.133 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;44.94 μs&lt;/td&gt;
&lt;td&gt;0.126 μs&lt;/td&gt;
&lt;td&gt;0.112 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;100&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;463.57 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.972 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1.748 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;84,244.24 μs&lt;/td&gt;
&lt;td&gt;594.933 μs&lt;/td&gt;
&lt;td&gt;556.501 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;88,661.04 μs&lt;/td&gt;
&lt;td&gt;1,743.013 μs&lt;/td&gt;
&lt;td&gt;1,711.872 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;503.88 μs&lt;/td&gt;
&lt;td&gt;9.914 μs&lt;/td&gt;
&lt;td&gt;14.532 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;504.59 μs&lt;/td&gt;
&lt;td&gt;10.077 μs&lt;/td&gt;
&lt;td&gt;20.810 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Serialize&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;1000&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5,070.30 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;101.357 μs&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;224.600 μs&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;898,815.78 μs&lt;/td&gt;
&lt;td&gt;16,983.724 μs&lt;/td&gt;
&lt;td&gt;17,441.035 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_CamelCase&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;900,245.68 μs&lt;/td&gt;
&lt;td&gt;17,592.917 μs&lt;/td&gt;
&lt;td&gt;25,231.237 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_Default&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;4,902.03 μs&lt;/td&gt;
&lt;td&gt;74.758 μs&lt;/td&gt;
&lt;td&gt;58.366 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_StaticOptions_CamelCase&lt;/td&gt;
&lt;td&gt;1000&lt;/td&gt;
&lt;td&gt;5,213.93 μs&lt;/td&gt;
&lt;td&gt;103.911 μs&lt;/td&gt;
&lt;td&gt;282.699 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Where N is the number of objects to serialize. All objects are created before the benchmark and then serialized one by one (one call to &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt; per object). The results mirror those from serializing the list: on-the-fly instantiation of the options adds 800-900 μs per call.&lt;/p&gt;
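
&lt;p&gt;The shape of the per-object benchmark can be sketched roughly like this (a sketch only - names such as &lt;code&gt;_items&lt;/code&gt; are illustrative, not taken from the benchmark repository):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Static options: created once, reused for every call
private static readonly JsonSerializerOptions _options = new JsonSerializerOptions();

[Benchmark]
public void Serialize_StaticOptions_Default()
{
    // _items is populated before the benchmark runs
    foreach (var item in _items)
        JsonSerializer.Serialize(item, _options); // one call per object
}

[Benchmark]
public void Serialize_InlineOptions_Default()
{
    foreach (var item in _items)
        JsonSerializer.Serialize(item, new JsonSerializerOptions()); // new options per call
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
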

&lt;h2&gt;
  
  
  Alternate overloads
&lt;/h2&gt;

&lt;p&gt;Finally I decided to check if there are other overloads of &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt; that would perform better with on the fly options.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark
&lt;/h3&gt;



&lt;div class="highlight"&gt;&lt;pre class="highlight ini"&gt;&lt;code&gt;&lt;span class="py"&gt;BenchmarkDotNet&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;v0.12.1, OS=Windows 10.0.17763.1397 (1809/October2018Update/Redstone5)&lt;/span&gt;
&lt;span class="err"&gt;Intel&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;i7-8700&lt;/span&gt; &lt;span class="err"&gt;CPU&lt;/span&gt; &lt;span class="err"&gt;3.20GHz&lt;/span&gt; &lt;span class="err"&gt;(Coffee&lt;/span&gt; &lt;span class="err"&gt;Lake),&lt;/span&gt; &lt;span class="err"&gt;1&lt;/span&gt; &lt;span class="err"&gt;CPU,&lt;/span&gt; &lt;span class="err"&gt;12&lt;/span&gt; &lt;span class="err"&gt;logical&lt;/span&gt; &lt;span class="err"&gt;and&lt;/span&gt; &lt;span class="err"&gt;6&lt;/span&gt; &lt;span class="err"&gt;physical&lt;/span&gt; &lt;span class="err"&gt;cores&lt;/span&gt;
&lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="py"&gt;SDK&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;3.1.101&lt;/span&gt;
  &lt;span class="nn"&gt;[Host]&lt;/span&gt;     &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;3.1.5&lt;/span&gt; &lt;span class="err"&gt;(CoreCLR&lt;/span&gt; &lt;span class="err"&gt;4.700.20.26901,&lt;/span&gt; &lt;span class="err"&gt;CoreFX&lt;/span&gt; &lt;span class="err"&gt;4.700.20.27001),&lt;/span&gt; &lt;span class="err"&gt;X64&lt;/span&gt; &lt;span class="err"&gt;RyuJIT&lt;/span&gt;
  &lt;span class="err"&gt;DefaultJob&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="err"&gt;.NET&lt;/span&gt; &lt;span class="err"&gt;Core&lt;/span&gt; &lt;span class="err"&gt;3.1.5&lt;/span&gt; &lt;span class="err"&gt;(CoreCLR&lt;/span&gt; &lt;span class="err"&gt;4.700.20.26901,&lt;/span&gt; &lt;span class="err"&gt;CoreFX&lt;/span&gt; &lt;span class="err"&gt;4.700.20.27001),&lt;/span&gt; &lt;span class="err"&gt;X64&lt;/span&gt; &lt;span class="err"&gt;RyuJIT&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Method&lt;/th&gt;
&lt;th&gt;Mean&lt;/th&gt;
&lt;th&gt;Error&lt;/th&gt;
&lt;th&gt;StdDev&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Serialize&lt;/td&gt;
&lt;td&gt;4.470 μs&lt;/td&gt;
&lt;td&gt;0.0237 μs&lt;/td&gt;
&lt;td&gt;0.0185 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_InlineOptions_Default&lt;/td&gt;
&lt;td&gt;848.252 μs&lt;/td&gt;
&lt;td&gt;5.4895 μs&lt;/td&gt;
&lt;td&gt;5.1349 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_AltInlineOptions_Default&lt;/td&gt;
&lt;td&gt;836.154 μs&lt;/td&gt;
&lt;td&gt;3.9665 μs&lt;/td&gt;
&lt;td&gt;3.5162 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Serialize_Alt2InlineOptions_Default&lt;/td&gt;
&lt;td&gt;846.886 μs&lt;/td&gt;
&lt;td&gt;8.8031 μs&lt;/td&gt;
&lt;td&gt;8.2345 μs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The results are clear: all overloads of &lt;code&gt;JsonSerializer.Serialize&lt;/code&gt; that accept options share the same weakness.&lt;/p&gt;

&lt;h3&gt;
  
  
  Benchmark code
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;InlineOptions - This is the method used in the previous benchmarks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AltInlineOptions - Overload that takes an explicit &lt;code&gt;Type&lt;/code&gt; argument&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MyObject&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Alt2InlineOptions - Generic overload with an explicit type parameter
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serialize&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;MyObject&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;myObject&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;JsonSerializerOptions&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;



&lt;h2&gt;
  
  
  Discussion
&lt;/h2&gt;

&lt;p&gt;Some will claim that 900 μs of additional delay per serialization isn't a big deal. While this is true in some scenarios, I think anyone striving to keep their request pipeline lean will disagree. In our applications we are approaching 10 ms response times for the less complex requests (not a record for sure, but this includes a lot of enterprise concerns such as logging, audit logging and AD authorization). Adding 900 μs makes our apps roughly 9% slower!&lt;/p&gt;

&lt;p&gt;Secondly, I would presume that there are many others who perform multiple serializations within a single request. In our case, we do this for each item in a list to calculate a hash value for each item. When returning 100 items, you are suddenly accumulating ~100 ms of delay, which is pretty bad - in particular when it can be easily avoided.&lt;/p&gt;

&lt;p&gt;What is worse is that the official documentation is &lt;em&gt;full of examples that instantiate the options on the fly&lt;/em&gt;! &lt;a href="https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-how-to#use-camel-case-for-all-json-property-names"&gt;Example of how to use camelCase&lt;/a&gt;. I fully agree that presenting the examples this way makes them much more compact, but there is no note about the performance impact. A senior developer would probably move the options to a class member if they are used multiple times within a class, but when the options object is used only once, creating it inline looks like the cleaner solution.&lt;/p&gt;
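
&lt;p&gt;For reference, the camelCase example from the documentation can be rewritten to reuse a single options instance (a sketch - the member and method names are mine, and &lt;code&gt;JsonSerializerOptions&lt;/code&gt; is safe to share across calls once it is configured):&lt;/p&gt;

&lt;div class="highlight"&gt;&lt;pre class="highlight csharp"&gt;&lt;code&gt;// Created once, reused for every serialization
private static readonly JsonSerializerOptions CamelCaseOptions = new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase
};

public string ToJson(MyObject obj)
{
    return JsonSerializer.Serialize(obj, CamelCaseOptions);
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
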

&lt;p&gt;The burning question is why this performance gap exists. If instantiating objects inline and passing them to functions were really this terrible, we should avoid it everywhere - but that cannot be true.&lt;br&gt;
My guess is that the options object is used quite heavily when the serialization is initiated. We saw from the object creation benchmark that the options object is quite large, so there are plenty of options to process. If accessing properties of a class member is slightly faster than accessing properties of a locally scoped object, this difference would accumulate.&lt;/p&gt;

&lt;p&gt;Still, I think it is a mystery that the difference is so huge. How can serializing a small object be 200x slower simply because of how the options are passed?&lt;/p&gt;

&lt;h2&gt;
  
  
  Source code
&lt;/h2&gt;

&lt;p&gt;The full source used in the benchmarks can be found &lt;a href="https://github.com/bjowes/SystemTextJson-Benchmark"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;All benchmarks are performed using &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Test data for serialization is generated using &lt;a href="https://github.com/nickdodd79/AutoBogus"&gt;AutoBogus&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>csharp</category>
      <category>dotnet</category>
    </item>
  </channel>
</rss>
