<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Steve Bjorg</title>
    <description>The latest articles on DEV Community by Steve Bjorg (@bjorg).</description>
    <link>https://dev.to/bjorg</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F337267%2F69e7ae55-2763-4ff4-bb8d-9e08a7873507.jpg</url>
      <title>DEV Community: Steve Bjorg</title>
      <link>https://dev.to/bjorg</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bjorg"/>
    <language>en</language>
    <item>
      <title>Lessons Learned on Optimizing .NET on AWS Lambda</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Fri, 14 Oct 2022 17:56:02 +0000</pubDate>
      <link>https://dev.to/lambdasharp/lessons-learned-on-optimizing-net-on-aws-lambda-2n5m</link>
      <guid>https://dev.to/lambdasharp/lessons-learned-on-optimizing-net-on-aws-lambda-2n5m</guid>
      <description>&lt;p&gt;I learned a lot diving into the foundational parts of the AWS Lambda implementation for .NET, but I also have some questions left. Maybe I will pick this up sometime later or maybe someone will feel inspired to venture deeper into this topic.&lt;/p&gt;

&lt;h2&gt;Conclusions&lt;/h2&gt;

&lt;p&gt;Before sharing my conclusions, I want to stress again that you should use them as a starting point and benchmark your own code to see what makes sense for your situation.&lt;/p&gt;

&lt;p&gt;Also, make sure you understand what &lt;a href="https://dev.to/lambdasharp/optimal-strategies-for-net-on-aws-lambda-45kg"&gt;optimal&lt;/a&gt; means for your application. There is no one-size-fits-all. Know what your objectives are ahead of time.&lt;/p&gt;

&lt;h3&gt;Tiered Compilation&lt;/h3&gt;

&lt;p&gt;Based on the data gathered by the benchmarks, &lt;em&gt;Tiered Compilation&lt;/em&gt; should only be enabled if minimizing cold start duration is your top priority. From a cost perspective, it never makes sense to enable it.&lt;/p&gt;
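&lt;p&gt;For reference, &lt;em&gt;Tiered Compilation&lt;/em&gt; can be toggled with a standard MSBuild property in the project file. This is a minimal sketch; your project may control the setting elsewhere (e.g. via &lt;code&gt;runtimeconfig.json&lt;/code&gt;).&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!-- .csproj: disable Tiered Compilation when cold start duration is not the top priority --&amp;gt;
&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;TieredCompilation&amp;gt;false&amp;lt;/TieredCompilation&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;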

&lt;h3&gt;ReadyToRun&lt;/h3&gt;

&lt;p&gt;This option is easy to recommend. If you know what the target CPU architecture is going to be, then &lt;em&gt;ReadyToRun&lt;/em&gt; is an obvious choice. Even with &lt;em&gt;Tiered Compilation&lt;/em&gt; disabled, the Lambda function performs very well. This was quite a surprise, because &lt;em&gt;ReadyToRun&lt;/em&gt; generates unoptimized code (Tier 0) and, without &lt;em&gt;Tiered Compilation&lt;/em&gt; enabled, that code will never be rejitted. However, from the measurements, the jitting overhead is so onerous that 100 warm starts are not enough to make up the difference.&lt;/p&gt;
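&lt;p&gt;As a sketch, &lt;em&gt;ReadyToRun&lt;/em&gt; is enabled with standard MSBuild publish properties; the runtime identifier must match the CPU architecture of the Lambda function, which is why knowing the target architecture ahead of time matters.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;&amp;lt;!-- .csproj: compile IL ahead of time for the deployment target --&amp;gt;
&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;PublishReadyToRun&amp;gt;true&amp;lt;/PublishReadyToRun&amp;gt;
  &amp;lt;!-- use linux-x64 for x86-64 Lambda functions, linux-arm64 for ARM64 --&amp;gt;
  &amp;lt;RuntimeIdentifier&amp;gt;linux-arm64&amp;lt;/RuntimeIdentifier&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;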

&lt;h3&gt;Pre-JIT .NET&lt;/h3&gt;

&lt;p&gt;I would recommend setting the &lt;code&gt;AWS_LAMBDA_DOTNET_PREJIT&lt;/code&gt; environment variable to &lt;code&gt;Always&lt;/code&gt; unless cold start duration is critical. If anything, I would explore how to pre-JIT even more of the code during the INIT phase since it's free and runs faster than the INVOKE phase for lower memory configurations.&lt;/p&gt;
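&lt;p&gt;For illustration, the variable is set like any other Lambda environment variable. Here is a hedged example using an AWS SAM template (the function name is hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# template.yaml (AWS SAM): pre-jit eagerly during the free INIT phase
MyDotNetFunction:
  Type: AWS::Serverless::Function
  Properties:
    Runtime: dotnet6
    Environment:
      Variables:
        AWS_LAMBDA_DOTNET_PREJIT: Always
&lt;/code&gt;&lt;/pre&gt;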

&lt;h3&gt;CPU Architecture&lt;/h3&gt;

&lt;p&gt;The ARM64 architecture is the exciting new kid on the block for .NET and AWS Lambda, but the venerable x86-64 architecture should not be discounted. In these benchmarks, x86-64 often fared better on performance, albeit at increased cost. There are also more open issues affecting the ARM64 architecture. Make sure to check the &lt;a href="https://github.com/aws/aws-lambda-dotnet/issues?q=is%3Aissue+is%3Aopen+ARM64"&gt;issue tracker&lt;/a&gt; to see if any of them might affect your project.&lt;/p&gt;

&lt;h2&gt;Future Work&lt;/h2&gt;

&lt;p&gt;Here are some areas I would like to explore further when time permits.&lt;/p&gt;

&lt;h3&gt;Are all AWS Lambda regions the same?&lt;/h3&gt;

&lt;p&gt;My benchmarks were conducted in &lt;code&gt;us-west-2&lt;/code&gt;. I would assume all regions perform the same, but it would be interesting to confirm that this is indeed the case.&lt;/p&gt;

&lt;h3&gt;Benchmarking with I/O Operations&lt;/h3&gt;

&lt;p&gt;I received some feedback that my benchmarks were not representative because they lacked I/O operations, such as when interacting with other services. That is correct and it was intentional. I/O operations are at least an order of magnitude slower than compute operations. My interest was driven by understanding the interplay of various compiler options, CPU architectures, and memory configurations. Adding I/O into the mix would have prevented establishing a clean baseline. That said, I agree that benchmarking these scenarios with I/O operations is both interesting and valuable.&lt;/p&gt;

&lt;h3&gt;.NET 7 Native&lt;/h3&gt;

&lt;p&gt;One of the features I'm most excited about in .NET 7 is native compilation. Preliminary results shared by others have shown very promising performance improvements. However, because native compilation uses the generic Lambda runtime, the INIT phase is no longer free. Does that also mean the INIT phase no longer runs at full speed? If so, it changes everything about how we need to approach minimizing execution costs. Still, the promise of much faster execution is tantalizing, to say the least.&lt;/p&gt;

&lt;h3&gt;Self-Hosted .NET&lt;/h3&gt;

&lt;p&gt;I thought about benchmarking self-hosted .NET Lambda functions but given that native compilation is a much better alternative, I did not bother to do so. From my experience, self-hosted functions are large and slow. The only time it made sense to consider using them was with .NET 5 to access newer features before .NET 6 was supported by AWS Lambda. For .NET 7, I would focus on native compilation instead and ignore the self-hosted option.&lt;/p&gt;

&lt;h3&gt;More Pre-Jitting during INIT Phase&lt;/h3&gt;

&lt;p&gt;As mentioned, I think there is something to be said for pre-jitting more code during the INIT phase. I don't know what else would make sense to pre-jit, but I would explore this area to shift some billable execution time into the free INIT phase. I feel there is some untapped potential here.&lt;/p&gt;

&lt;h3&gt;Custom Amazon.Lambda.RuntimeSupport Package&lt;/h3&gt;

&lt;p&gt;I can't shake the feeling that there are some opportunities to specialize the &lt;a href="https://www.nuget.org/packages/Amazon.Lambda.RuntimeSupport/"&gt;Amazon.Lambda.RuntimeSupport package&lt;/a&gt; for greater flexibility and better performance.&lt;/p&gt;

&lt;p&gt;To better understand how Lambda functions interact with the AWS Lambda service, I created a &lt;a href="https://github.com/bjorg/aws-lambda-dotnet-benchmark"&gt;mock implementation&lt;/a&gt; of the service and its 4 APIs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/2018-06-01/runtime/invocation/next&lt;/code&gt;: This endpoint returns the payload for the next Lambda invocation request.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/2018-06-01/runtime/invocation/{awsRequestId}/response&lt;/code&gt;: This endpoint receives the response of a successful Lambda invocation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/2018-06-01/runtime/invocation/{awsRequestId}/error&lt;/code&gt;: This endpoint receives the error message of a failed Lambda invocation.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/2018-06-01/runtime/init/error&lt;/code&gt;: This endpoint receives the error message of a failed Lambda initialization.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Lambda invocation is suspended on &lt;code&gt;/2018-06-01/runtime/invocation/next&lt;/code&gt; if no other payload is available. That means it is technically possible to respond to a request using &lt;code&gt;/2018-06-01/runtime/invocation/{awsRequestId}/response&lt;/code&gt; and then do some additional post-response clean-up work that does not impact the responsiveness of the Lambda function. For example, the garbage collector could be explicitly triggered. I don't know if that makes sense in practice, but it's an interesting notion to explore.&lt;/p&gt;
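&lt;p&gt;To illustrate the idea, here is a minimal, hand-rolled runtime loop against the endpoints above (a sketch only; &lt;code&gt;HandleAsync&lt;/code&gt; is a hypothetical handler and error handling is omitted):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;// AWS_LAMBDA_RUNTIME_API is provided by the Lambda execution environment
var api = Environment.GetEnvironmentVariable("AWS_LAMBDA_RUNTIME_API");
using var http = new HttpClient();
while (true) {

    // suspends here until the next invocation payload is available
    var next = await http.GetAsync($"http://{api}/2018-06-01/runtime/invocation/next");
    var requestId = next.Headers.GetValues("Lambda-Runtime-Aws-Request-Id").First();
    var payload = await next.Content.ReadAsStringAsync();

    // invoke the handler and respond immediately
    var response = await HandleAsync(payload);
    await http.PostAsync($"http://{api}/2018-06-01/runtime/invocation/{requestId}/response", new StringContent(response));

    // post-response clean-up: runs after the caller already received its response
    GC.Collect();
}
&lt;/code&gt;&lt;/pre&gt;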

&lt;p&gt;It also bugs me that baseline .NET Core 3.1 runs faster than .NET 6. With all the hard work that went into optimizing performance of .NET 6, it feels wrong. Maybe something could be done at the custom runtime level to improve things further.&lt;/p&gt;

&lt;h3&gt;More C# Compiler Options&lt;/h3&gt;

&lt;p&gt;I only benchmarked the most obvious C# compiler options, such as &lt;em&gt;Tiered Compilation&lt;/em&gt; and &lt;em&gt;ReadyToRun&lt;/em&gt;, but there are &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/runtime-config/compilation"&gt;more options&lt;/a&gt; that might be interesting to explore.&lt;/p&gt;

&lt;h3&gt;Profile-Guided Optimizations (PGO)&lt;/h3&gt;

&lt;p&gt;This is one of the exciting features that got away this time. &lt;a href="https://github.com/dotnet/runtime/blob/main/docs/design/features/dotnet-pgo.md"&gt;Profile Guided Optimization (PGO)&lt;/a&gt; enables the .NET runtime to gather execution information that can be fed back into the C# compiler to produce a better executable. In essence, it's a smart optimizer that looks at real-world data to produce the best possible code.&lt;/p&gt;

&lt;p&gt;I don't know how one would instrument a Lambda function to collect the profile data, but if it is possible, it would be very interesting to make it part of a CI/CD pipeline. Something akin to the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Build and deploy unoptimized version of Lambda function&lt;/li&gt;
&lt;li&gt;Run integration tests against deployed Lambda function and collect profile data&lt;/li&gt;
&lt;li&gt;Re-build Lambda function with profile information&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;Parting Thoughts&lt;/h2&gt;

&lt;p&gt;While there are stones left unturned on this journey, I hope that some of this work can already be put to good use. I find measuring performance very rewarding, because it can be objectively assessed. I also think it's important because the faster our code runs, the less harm we do to the environment. Last, but not least, there is also an attractive notion of minimalism that is easily captured as &lt;strong&gt;only execute what is needed and nothing more.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you have any questions, suggestions, or corrections, please leave them in the comments and I will update these posts accordingly.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Baseline Performance for AWS Lambda .NET using Top-Level Statements</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Wed, 12 Oct 2022 18:03:35 +0000</pubDate>
      <link>https://dev.to/lambdasharp/baseline-performance-for-aws-lambda-net-using-top-level-statements-bi9</link>
      <guid>https://dev.to/lambdasharp/baseline-performance-for-aws-lambda-net-using-top-level-statements-bi9</guid>
      <description>&lt;p&gt;.NET 6 introduced top-level statements, which simplify the entry point of the application code. Unlike the previous style of Lambda definitions, this project creates an executable instead of an assembly. That means we have to provide our own Lambda host implementation. Fortunately, AWS already provides one in the &lt;a href="https://www.nuget.org/packages/Amazon.Lambda.RuntimeSupport/" rel="noopener noreferrer"&gt;Amazon.Lambda.RuntimeSupport package&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;Minimal Top-Level Lambda Function&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/MinimalTopLevel" rel="noopener noreferrer"&gt;MinimalTopLevel project&lt;/a&gt; defines a Lambda function that takes a stream and returns an empty response. It has no business logic and only includes required libraries. There is also no deserialization of a payload. This is the Lambda function using top-level statements with the least amount of overhead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Core&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.RuntimeSupport&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Serialization.SystemTextJson&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;LambdaBootstrapBuilder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;DefaultLambdaJsonSerializer&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RunAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ILambdaContext&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;CompletedTask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Benchmark Data for .NET 6 on x86-64&lt;/h2&gt;

&lt;p&gt;Again, we see that the duration of the INIT phase is not impacted until we exceed the 3,008 MB threshold, which also drives up cost.&lt;/p&gt;

&lt;p&gt;However, compared to the &lt;a href="https://dev.to/lambdasharp/baseline-performance-for-net-on-aws-lambda-32al"&gt;Minimal baseline project&lt;/a&gt;, this Lambda function has 20% to 100% longer cold start durations.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;235.420&lt;/td&gt;
&lt;td&gt;1,484.898&lt;/td&gt;
&lt;td&gt;1,720.318&lt;/td&gt;
&lt;td&gt;367.174&lt;/td&gt;
&lt;td&gt;24.05849&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;236.617&lt;/td&gt;
&lt;td&gt;744.375&lt;/td&gt;
&lt;td&gt;980.992&lt;/td&gt;
&lt;td&gt;151.328&lt;/td&gt;
&lt;td&gt;23.93210&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;236.002&lt;/td&gt;
&lt;td&gt;353.587&lt;/td&gt;
&lt;td&gt;589.589&lt;/td&gt;
&lt;td&gt;120.821&lt;/td&gt;
&lt;td&gt;24.15341&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;238.403&lt;/td&gt;
&lt;td&gt;163.425&lt;/td&gt;
&lt;td&gt;401.828&lt;/td&gt;
&lt;td&gt;115.392&lt;/td&gt;
&lt;td&gt;24.84696&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;234.304&lt;/td&gt;
&lt;td&gt;96.894&lt;/td&gt;
&lt;td&gt;331.198&lt;/td&gt;
&lt;td&gt;115.843&lt;/td&gt;
&lt;td&gt;26.32520&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;216.870&lt;/td&gt;
&lt;td&gt;92.632&lt;/td&gt;
&lt;td&gt;309.502&lt;/td&gt;
&lt;td&gt;117.367&lt;/td&gt;
&lt;td&gt;37.69996&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7ob92avxrik1488atuv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7ob92avxrik1488atuv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/MinimalTopLevel-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%20(Cold%20Start).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwba3faf5e35u2lho0bh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwba3faf5e35u2lho0bh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/MinimalTopLevel-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%20(Lifetime%20Cost).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Minimum Cold Start Duration&lt;/h2&gt;

&lt;p&gt;Fortunately, the situation improves quite a bit once we look at the optimal configuration for minimum cold start duration. Using top-level statements shifts some of the overhead from the INIT phase to the first INVOKE phase, but otherwise the total duration is very close to what was measured for the &lt;a href="https://dev.to/lambdasharp/baseline-performance-for-net-on-aws-lambda-32al"&gt;Minimal baseline project&lt;/a&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;178.743&lt;/td&gt;
&lt;td&gt;70.947&lt;/td&gt;
&lt;td&gt;249.690&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;190.382&lt;/td&gt;
&lt;td&gt;67.500&lt;/td&gt;
&lt;td&gt;257.882&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;189.152&lt;/td&gt;
&lt;td&gt;58.625&lt;/td&gt;
&lt;td&gt;247.777&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;176.368&lt;/td&gt;
&lt;td&gt;57.039&lt;/td&gt;
&lt;td&gt;233.407&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frywk8cylzbmue6rxrcde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frywk8cylzbmue6rxrcde.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/MinimalTopLevel-Net6-ANY-ANY-NoR2R-ANY-ANY%20(Minimal%20Cold%20Start).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Minimum Execution Cost&lt;/h2&gt;

&lt;p&gt;Again, the ARM64 architecture is the most cost-effective approach. The &lt;em&gt;ReadyToRun&lt;/em&gt;, &lt;em&gt;Tiered Compilation&lt;/em&gt;, and &lt;em&gt;PreJIT&lt;/em&gt; settings all help reduce cost a bit further for the minimal top-level project. That said, the minimum execution cost is ~8.5% higher when using top-level statements. This increased cost is most likely due to the higher memory configuration, which is required to compensate for the increased overhead of the INIT and first INVOKE phases.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;234.381&lt;/td&gt;
&lt;td&gt;168.534&lt;/td&gt;
&lt;td&gt;122.581&lt;/td&gt;
&lt;td&gt;24.08155&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;252.593&lt;/td&gt;
&lt;td&gt;167.558&lt;/td&gt;
&lt;td&gt;124.272&lt;/td&gt;
&lt;td&gt;24.09109&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;202.952&lt;/td&gt;
&lt;td&gt;124.406&lt;/td&gt;
&lt;td&gt;152.314&lt;/td&gt;
&lt;td&gt;23.88962&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;218.905&lt;/td&gt;
&lt;td&gt;118.831&lt;/td&gt;
&lt;td&gt;152.727&lt;/td&gt;
&lt;td&gt;23.82079&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6zgk9z0e76vhl2s5ijj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6zgk9z0e76vhl2s5ijj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/MinimalTopLevel-Net6-ANY-ANY-ANY-ANY-ANY%20(Minimal%20Lifetime%20Cost).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;It's been fun diving into the fundamentals of AWS Lambda for .NET functions. The next post summarizes my findings and shares thoughts on future projects that might be interesting to explore.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>The Surprising Cold Start Penalty in the AWS SDK for .NET</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Fri, 09 Sep 2022 21:07:17 +0000</pubDate>
      <link>https://dev.to/lambdasharp/the-surprising-cold-start-penalty-in-the-aws-sdk-for-net-246l</link>
      <guid>https://dev.to/lambdasharp/the-surprising-cold-start-penalty-in-the-aws-sdk-for-net-246l</guid>
      <description>&lt;p&gt;This post is about raising awareness of a performance penalty when initializing the AWS SDK for .NET.&lt;/p&gt;

&lt;p&gt;This is a startup tax incurred by all AWS Lambda functions using .NET. Fortunately, it's trivial to make it happen during the INIT phase where it's free. However, there is no way of avoiding it during a cold start. I can't help but think the initialization overhead should be much lower. &lt;/p&gt;

&lt;p&gt;For this benchmark, our code is almost identical to the baseline function, except that we initialize an S3 client in the constructor.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.IO&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Threading.Tasks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.S3&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.AwsSdk&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Fields ---&lt;/span&gt;
        &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;IAmazonS3&lt;/span&gt;&lt;span class="p"&gt;?&lt;/span&gt; &lt;span class="n"&gt;_s3Client&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Constructors ---&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;Function&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

            &lt;span class="c1"&gt;// initialize S3 client&lt;/span&gt;
            &lt;span class="n"&gt;_s3Client&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;AmazonS3Client&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Methods ---&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ProcessAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Cold Start Durations&lt;/h2&gt;

&lt;p&gt;The following table shows the new measurements using .NET 6, &lt;em&gt;Tiered Compilation&lt;/em&gt;, and &lt;em&gt;ReadyToRun&lt;/em&gt; with 3 different memory configurations: 1,024 MB, 1,769 MB, and 5,120 MB.&lt;/p&gt;

&lt;p&gt;Initializing the AWS SDK for .NET adds a 120+ ms penalty to the cold start duration. The impact remains the same no matter the memory configuration, since the initialization happens during the INIT phase.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;th&gt;Penalty&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;380.461&lt;/td&gt;
&lt;td&gt;41.773&lt;/td&gt;
&lt;td&gt;422.234&lt;/td&gt;
&lt;td&gt;147.21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;377.036&lt;/td&gt;
&lt;td&gt;29.365&lt;/td&gt;
&lt;td&gt;406.401&lt;/td&gt;
&lt;td&gt;141.856&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;349.227&lt;/td&gt;
&lt;td&gt;28.428&lt;/td&gt;
&lt;td&gt;377.655&lt;/td&gt;
&lt;td&gt;136.467&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;344.643&lt;/td&gt;
&lt;td&gt;29.678&lt;/td&gt;
&lt;td&gt;374.321&lt;/td&gt;
&lt;td&gt;128.797&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;339.3&lt;/td&gt;
&lt;td&gt;23.058&lt;/td&gt;
&lt;td&gt;362.358&lt;/td&gt;
&lt;td&gt;122.975&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;325.178&lt;/td&gt;
&lt;td&gt;22.468&lt;/td&gt;
&lt;td&gt;347.646&lt;/td&gt;
&lt;td&gt;124.422&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;Potential Cause&lt;/h2&gt;

&lt;p&gt;Looking at the code for the &lt;a href="https://github.com/aws/aws-sdk-net"&gt;AWS SDK for .NET&lt;/a&gt;, I suspect the culprit of the slow initialization is the &lt;a href="https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json"&gt;endpoints.json&lt;/a&gt; file. This 700+ KB megalodon JSON file is parsed every time the AWS SDK is initialized. Since this happens in the AWS Core assembly, the penalty is incurred by all AWS service clients.&lt;/p&gt;

&lt;p&gt;I hope this is something the AWS team fixes in the future. As a clueless layman, I would expect the endpoint definitions to belong in their respective packages. This would also produce a smaller Lambda deployment package, as the current &lt;em&gt;AWSSDK.Core.dll&lt;/em&gt; assembly size is an eye-watering 1.5 MB!&lt;/p&gt;

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;In the next post, I'm benchmarking the impact of using top-level statements with Lambda functions. This new way of writing Lambda code is aesthetically pleasing, but does it have a hidden cost?&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Benchmarking .NET JSON Serializers on AWS Lambda</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Wed, 31 Aug 2022 18:31:02 +0000</pubDate>
      <link>https://dev.to/lambdasharp/benchmarking-net-json-serializers-on-aws-lambda-279m</link>
      <guid>https://dev.to/lambdasharp/benchmarking-net-json-serializers-on-aws-lambda-279m</guid>
      <description>&lt;p&gt;Virtually all .NET code on AWS Lambda has to deal with JSON serialization. Historically, &lt;a href="https://www.newtonsoft.com/json" rel="noopener noreferrer"&gt;Newtonsoft Json.NET&lt;/a&gt; has been the go-to library. More recently, &lt;em&gt;System.Text.Json&lt;/em&gt; was introduced in .NET Core 3. Both libraries use reflection to build their serialization logic. The newest technique, called source generator, was introduced in .NET 6 and uses a compile-time approach that avoids reflection.&lt;/p&gt;

&lt;p&gt;So, now we have three approaches to choose from, which begs the question: Is there a clear winner or is it more nuanced?&lt;/p&gt;

&lt;p&gt;For these benchmarks, the code deserializes a fairly &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/blob/main/Docs/Methodology.md#json-serializer-measurements" rel="noopener noreferrer"&gt;bloated JSON data structure&lt;/a&gt; taken from the GitHub API documentation and then returns an empty response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Newtonsoft Json.NET
&lt;/h2&gt;

&lt;p&gt;This library has been around for so long and has been so popular that it broke the download counter on &lt;a href="https://www.nuget.org/" rel="noopener noreferrer"&gt;nuget.org&lt;/a&gt; when it exceeded 2 billion downloads. The counter has since been fixed, but this impressive milestone remains!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.IO&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Threading.Tasks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Core&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Serialization.Json&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;assembly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nf"&gt;LambdaSerializer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;JsonSerializer&lt;/span&gt;&lt;span class="p"&gt;))]&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.NewtonsoftJson&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Methods ---&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ProcessAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Root&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimum Cold Start Duration
&lt;/h3&gt;

&lt;p&gt;The 4 fastest cold start durations use the x86-64 architecture and &lt;em&gt;ReadyToRun&lt;/em&gt;. The fastest also uses &lt;em&gt;Tiered Compilation&lt;/em&gt;. Enabling the PreJIT option always adds latency, yet those configurations still make the top 4 cut.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;262.942&lt;/td&gt;
&lt;td&gt;186.097&lt;/td&gt;
&lt;td&gt;449.039&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;317.328&lt;/td&gt;
&lt;td&gt;151.456&lt;/td&gt;
&lt;td&gt;468.784&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;236.714&lt;/td&gt;
&lt;td&gt;170.028&lt;/td&gt;
&lt;td&gt;406.742&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;295.209&lt;/td&gt;
&lt;td&gt;137.727&lt;/td&gt;
&lt;td&gt;432.936&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdadbgaimjdskcyynrx3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdadbgaimjdskcyynrx3.png" alt="Newtonsoft Json.NET - Cold Start Duration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/NewtonsoftJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Cold%20Start).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum Execution Cost
&lt;/h3&gt;

&lt;p&gt;I'll admit, I was a bit surprised here. I would have expected ARM64 to be the obvious choice since the execution cost is 20% lower. However, that was not the case. Instead, we have a 50/50 split with x86-64 winning ever so slightly.&lt;/p&gt;

&lt;p&gt;Also interesting is that the cheapest configurations always enable the PreJIT option. That makes intuitive sense, since this option shifts some cost from the first INVOKE phase to the free INIT phase and otherwise incurs only a small overhead penalty.&lt;/p&gt;

&lt;p&gt;Similarly, &lt;em&gt;Tiered Compilation&lt;/em&gt; is disabled in all of them because it introduces additional overhead during the warm INVOKE phases.&lt;/p&gt;

&lt;p&gt;Most fascinating to me is that ARM64 is cheaper with 512 MB memory, while x86-64 is cheaper with 256 MB. This is probably just an oddity, but it highlights that nothing is ever obvious, which is why benchmarking the actual code is so important!&lt;/p&gt;
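&lt;p&gt;For context, both of these knobs can be toggled without code changes. A minimal sketch of the relevant settings (names as I understand them from the .NET runtime and AWS Lambda .NET documentation; defaults may vary by runtime version):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lambda function environment variables
DOTNET_TieredCompilation=0        # disable Tiered Compilation at runtime
AWS_LAMBDA_DOTNET_PREJIT=Always   # run PreJIT during the (free) INIT phase
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;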

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;346.884&lt;/td&gt;
&lt;td&gt;1598.711&lt;/td&gt;
&lt;td&gt;406.117&lt;/td&gt;
&lt;td&gt;26.88279408&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;348.615&lt;/td&gt;
&lt;td&gt;753.974&lt;/td&gt;
&lt;td&gt;238.541&lt;/td&gt;
&lt;td&gt;26.81680042&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;317.574&lt;/td&gt;
&lt;td&gt;1186.12&lt;/td&gt;
&lt;td&gt;377.718&lt;/td&gt;
&lt;td&gt;26.71600553&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;314.298&lt;/td&gt;
&lt;td&gt;562.768&lt;/td&gt;
&lt;td&gt;234.544&lt;/td&gt;
&lt;td&gt;26.84427746&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w8b5wz6ssmmv1wm4lui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w8b5wz6ssmmv1wm4lui.png" alt="Newtonsoft Json.NET - Execution Cost and Total Warm Execution Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/NewtonsoftJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Lifetime%20Cost).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  System.Text.Json - Reflection
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;System.Text.Json&lt;/em&gt; was introduced in .NET Core 3. The initial release was not feature-rich enough to be a compelling choice. However, that is no longer the case. By .NET 5, all my concerns were addressed, and it has been my preferred choice since. Sadly, we had to wait until .NET 6, which is &lt;a href="https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core" rel="noopener noreferrer"&gt;LTS&lt;/a&gt;, for it to become supported on AWS Lambda.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.IO&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Threading.Tasks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Core&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Serialization.SystemTextJson&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;assembly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="nf"&gt;LambdaSerializer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;DefaultLambdaJsonSerializer&lt;/span&gt;&lt;span class="p"&gt;))]&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.SystemTextJson&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Methods ---&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ProcessAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Root&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimum Cold Start Duration
&lt;/h3&gt;

&lt;p&gt;Similar to Json.NET, the 4 fastest cold start durations use the x86-64 architecture. Unlike the previous benchmark, all of them have &lt;em&gt;Tiered Compilation&lt;/em&gt; enabled. &lt;em&gt;ReadyToRun&lt;/em&gt; provides only a slight benefit, likely because most of the JSON serialization code lives in the .NET runtime libraries rather than in the deployed assemblies. As before, PreJIT makes things slower, but it's still among the 4 fastest configurations.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;231.55&lt;/td&gt;
&lt;td&gt;97.37&lt;/td&gt;
&lt;td&gt;328.92&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;276.791&lt;/td&gt;
&lt;td&gt;74.063&lt;/td&gt;
&lt;td&gt;350.854&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;226.864&lt;/td&gt;
&lt;td&gt;93.64&lt;/td&gt;
&lt;td&gt;320.504&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;273.615&lt;/td&gt;
&lt;td&gt;71.244&lt;/td&gt;
&lt;td&gt;344.859&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fqde6xbumsqb323wqpl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fqde6xbumsqb323wqpl.png" alt="System.Text.Json - Reflection - Cold Start Duration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/SystemTextJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Cold%20Start).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum Execution Cost
&lt;/h3&gt;

&lt;p&gt;As in the Json.NET benchmark, the 4 cheapest execution costs disable &lt;em&gt;Tiered Compilation&lt;/em&gt; and enable the PreJIT option. Also, the results are evenly split between ARM64 and x86-64.&lt;/p&gt;

&lt;p&gt;Again, the optimal configuration uses the x86-64 architecture with &lt;em&gt;ReadyToRun&lt;/em&gt; enabled. However, this time, all 4 optimal configurations agree on 256 MB for memory.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;335.019&lt;/td&gt;
&lt;td&gt;977.84&lt;/td&gt;
&lt;td&gt;344.601&lt;/td&gt;
&lt;td&gt;24.60815771&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;330.424&lt;/td&gt;
&lt;td&gt;966.123&lt;/td&gt;
&lt;td&gt;347.232&lt;/td&gt;
&lt;td&gt;24.57787356&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;302.287&lt;/td&gt;
&lt;td&gt;688.363&lt;/td&gt;
&lt;td&gt;341.735&lt;/td&gt;
&lt;td&gt;24.49208483&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;293.871&lt;/td&gt;
&lt;td&gt;679.57&lt;/td&gt;
&lt;td&gt;299.889&lt;/td&gt;
&lt;td&gt;24.28108858&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d0b4c9dje8zzo8ho9n0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d0b4c9dje8zzo8ho9n0.png" alt="System.Text.Json - Reflection - Execution Cost and Total Warm Execution Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/SystemTextJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Lifetime%20Cost).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  System.Text.Json - Source Generator
&lt;/h2&gt;

&lt;p&gt;New in .NET 6 is the ability to generate the JSON serialization code during compilation instead of relying on reflection at runtime.&lt;/p&gt;

&lt;p&gt;Personally, as someone who cares a lot about performance, I find source generators a really exciting addition to our developer toolbox. However, I don't consider this iteration to be production-ready, because it is missing some features I rely on. In particular, the lack of custom type converters to override the default JSON serialization behavior is a blocker for me. That said, for some smaller projects, it might be viable. My biggest recommendation here is to thoroughly validate the output to ensure any behavior changes are caught during development.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Text.Json.Serialization&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Core&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Amazon.Lambda.Serialization.SystemTextJson&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.SourceGeneratorJson&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;assembly&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;LambdaSerializer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;SourceGeneratorLambdaJsonSerializer&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;FunctionSerializerContext&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;))]&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.SourceGeneratorJson&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;JsonSerializable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Root&lt;/span&gt;&lt;span class="p"&gt;))]&lt;/span&gt;
&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;partial&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FunctionSerializerContext&lt;/span&gt; &lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;JsonSerializerContext&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;//--- Methods ---&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ProcessAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Root&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
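&lt;p&gt;While custom type converters were my blocker, some serializer behavior can be customized declaratively on the context class. A small sketch using the .NET 6 &lt;em&gt;JsonSourceGenerationOptions&lt;/em&gt; attribute (the &lt;em&gt;CustomSerializerContext&lt;/em&gt; name is mine; &lt;em&gt;Root&lt;/em&gt; is the benchmark payload type):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;using System.Text.Json.Serialization;

// Serializer options are baked in at compile time via attributes,
// instead of being configured at runtime on JsonSerializerOptions
[JsonSourceGenerationOptions(
    PropertyNamingPolicy = JsonKnownNamingPolicy.CamelCase,
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull
)]
[JsonSerializable(typeof(Root))]
public partial class CustomSerializerContext : JsonSerializerContext { }
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;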



&lt;h3&gt;
  
  
  Minimum Cold Start Duration
&lt;/h3&gt;

&lt;p&gt;This time, the 4 fastest cold starts all use &lt;em&gt;Tiered Compilation&lt;/em&gt; and &lt;em&gt;ReadyToRun&lt;/em&gt;. Since source generators create more code to JIT-compile, it makes sense that these options, whose very purpose is to reduce time spent jitting, improve cold start performance. Also, unlike the previous benchmarks, ARM64 and x86-64 are now competing for the top spot. PreJIT again slows things down a bit, but still makes it into the top 4.&lt;/p&gt;

&lt;p&gt;Despite ARM64 finally making an appearance in the &lt;em&gt;Minimum Cold Start Duration&lt;/em&gt; benchmark, the x86-64 architecture still secures the top two spots.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;249.244&lt;/td&gt;
&lt;td&gt;65.429&lt;/td&gt;
&lt;td&gt;314.673&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;276.097&lt;/td&gt;
&lt;td&gt;60.221&lt;/td&gt;
&lt;td&gt;336.318&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;240.88&lt;/td&gt;
&lt;td&gt;53.104&lt;/td&gt;
&lt;td&gt;293.984&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;265.776&lt;/td&gt;
&lt;td&gt;46.327&lt;/td&gt;
&lt;td&gt;312.103&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj90i1wef23uc9mt5m6t4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj90i1wef23uc9mt5m6t4.png" alt="System.Text.Json - Source Generator - Cold Start Duration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/SourceGeneratorJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Cold%20Start).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimum Execution Cost
&lt;/h3&gt;

&lt;p&gt;The results for this benchmark are a bit more complicated to parse. For the first time, we don't have symmetry across options. Instead, ARM64 secures 3 of the 4 cheapest spots. The same is true for the PreJIT option and the 256 MB memory configuration.&lt;/p&gt;

&lt;p&gt;Similar to the Json.NET benchmark, the cheapest configurations use &lt;em&gt;ReadyToRun&lt;/em&gt; and, as for all execution cost benchmarks, &lt;em&gt;Tiered Compilation&lt;/em&gt; is disabled.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;287.093&lt;/td&gt;
&lt;td&gt;702.015&lt;/td&gt;
&lt;td&gt;294.423&lt;/td&gt;
&lt;td&gt;23.52147561&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;311.507&lt;/td&gt;
&lt;td&gt;660.822&lt;/td&gt;
&lt;td&gt;295.178&lt;/td&gt;
&lt;td&gt;23.38668193&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;312.017&lt;/td&gt;
&lt;td&gt;315.322&lt;/td&gt;
&lt;td&gt;204.109&lt;/td&gt;
&lt;td&gt;23.66288998&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;294.279&lt;/td&gt;
&lt;td&gt;519.965&lt;/td&gt;
&lt;td&gt;298.581&lt;/td&gt;
&lt;td&gt;23.61061349&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc0p67kcm73cwsuacumt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc0p67kcm73cwsuacumt.png" alt="System.Text.Json - Source Generator - Execution Cost and Total Warm Execution Time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/SourceGeneratorsJson-Net6-ANY-ANY-ANY-ANY-01769%20(Minimal%20Lifetime%20Cost).png" rel="noopener noreferrer"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Here are our observed lower bounds for the JSON serialization libraries, as well as the &lt;a href="https://dev.to/lambdasharp/baseline-performance-for-net-on-aws-lambda-32al"&gt;baseline performance&lt;/a&gt; on .NET 6 for comparison. I've omitted .NET Core 3.1 since I no longer consider it a viable target runtime. However, you can explore the full result set in the &lt;a href="https://docs.google.com/spreadsheets/d/1pOVJQi9Q2COj0amqV06BT8Nbgacf0nuyHpFj_P7sYH4/edit?usp=sharing" rel="noopener noreferrer"&gt;interactive Google spreadsheet&lt;/a&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Baseline for .NET 6

&lt;ul&gt;
&lt;li&gt;Cold start duration: 223 ms&lt;/li&gt;
&lt;li&gt;Execution cost: 21.94 µ$&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Newtonsoft Json.NET

&lt;ul&gt;
&lt;li&gt;Cold start duration: 433 ms&lt;/li&gt;
&lt;li&gt;Execution cost: 26.72 µ$&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;System.Text.Json - Reflection

&lt;ul&gt;
&lt;li&gt;Cold start duration: 321 ms&lt;/li&gt;
&lt;li&gt;Execution cost: 24.28 µ$&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;System.Text.Json - Source Generator

&lt;ul&gt;
&lt;li&gt;Cold start duration: 294 ms&lt;/li&gt;
&lt;li&gt;Execution cost: 23.39 µ$&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;It shouldn't be a surprise that Json.NET, which has been around for a long time, has accumulated a lot of cruft. Json.NET is truly a Swiss army knife for serialization and this flexibility comes at a cost. It adds at least 210 ms to our cold start duration and it's also the most expensive JSON library to run.&lt;/p&gt;

&lt;p&gt;The newer &lt;em&gt;System.Text.Json&lt;/em&gt; library has a compelling performance and cost benefit over Json.NET: it only adds 100 ms to our cold start duration and is 9% cheaper to run.&lt;/p&gt;

&lt;p&gt;However, the clear winner is the new JSON source generator, with only 70 ms of cold start overhead compared to our baseline. Cost is also 12% lower than Json.NET. That said, its missing features mean it may not be a good choice for every project just yet.&lt;/p&gt;

&lt;p&gt;When it comes to minimizing cold start duration, the more memory, the better. These benchmarks used 1,769 MB, which unlocks most of the available vCPU performance, but not all of it. Full vCPU performance is achieved at 3,008 MB, which almost doubles the cost for a 10% improvement (&lt;a href="https://www.sentiatechblog.com/aws-re-invent-2020-day-3-optimizing-lambda-cost-with-multi-threading" rel="noopener noreferrer"&gt;source&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;For minimizing cost, 256 MB seems to be the preferred choice. &lt;em&gt;Tiered Compilation&lt;/em&gt; should never be used, but &lt;em&gt;ReadyToRun&lt;/em&gt; is beneficial. The weird thing about this configuration is that &lt;em&gt;ReadyToRun&lt;/em&gt; produces Tier0 code (i.e. dirty JIT without inlining, hoisting, or any of that delicious performance stuff). With &lt;em&gt;Tiered Compilation&lt;/em&gt; disabled, our code will never be optimized further, as far as I know.&lt;/p&gt;
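&lt;p&gt;For reference, a sketch of the project file settings that correspond to this cost-optimal configuration (standard MSBuild property names; verify against your own deployment tooling):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight xml"&gt;&lt;code&gt;&amp;lt;!-- .csproj publish settings for minimal execution cost --&amp;gt;
&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;TieredCompilation&amp;gt;false&amp;lt;/TieredCompilation&amp;gt;
  &amp;lt;PublishReadyToRun&amp;gt;true&amp;lt;/PublishReadyToRun&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;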

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;For the next post, I'm going to investigate the overhead introduced by the AWS SDK. Since most Lambda functions will use it, I thought it would be useful to understand what the initialization cost is.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Baseline Performance for .NET on AWS Lambda</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Tue, 30 Aug 2022 15:29:55 +0000</pubDate>
      <link>https://dev.to/lambdasharp/baseline-performance-for-net-on-aws-lambda-32al</link>
      <guid>https://dev.to/lambdasharp/baseline-performance-for-net-on-aws-lambda-32al</guid>
      <description>&lt;p&gt;I always like to understand what the lower bound looks like. What is the absolute fastest performance we can hope for? I find it insightful as it sets a baseline for everything else.&lt;/p&gt;

&lt;p&gt;A necessary warning here is the risk of extrapolating too much from such a trivial sample. We need to take the data for what it is: a baseline. It's not representative of real-world business logic. Simply adding some I/O operations would greatly increase the processing time. Usually I/O is 1,000x to 1,000,000x slower than code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimal Lambda Function
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/Minimal/"&gt;Minimal project&lt;/a&gt; defines a Lambda function that takes a stream and returns an empty response. It has no business logic and only includes required libraries. There is also no deserialization of a payload. This is the Lambda function with the least amount of overhead.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.IO&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;using&lt;/span&gt; &lt;span class="nn"&gt;System.Threading.Tasks&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;namespace&lt;/span&gt; &lt;span class="nn"&gt;Benchmark.Minimal&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;sealed&lt;/span&gt; &lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Function&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;//--- Methods ---&lt;/span&gt;
        &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;ProcessAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;Stream&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Stream&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Null&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benchmark Data for .NET 6 on x86-64
&lt;/h2&gt;

&lt;p&gt;The data neatly shows that the INIT phase is approximately the same for all memory configurations under the 3,008 MB threshold. As mentioned in the &lt;a href="https://dev.to/lambdasharp/anatomy-of-the-aws-lambda-lifecycle-17a5"&gt;Anatomy of the AWS Lambda Lifecycle&lt;/a&gt; post, the INIT phase always runs at full speed.&lt;/p&gt;

&lt;p&gt;The cold INVOKE phase is about 10x slower for 128 MB than it is for 1,024 MB. However, the sum of all warm INVOKE phases is only ~3x slower. Yet the cost of that improved performance is less than 5% higher.&lt;/p&gt;

&lt;p&gt;Surprisingly, even such a trivial example already illustrates the delicate balance between performance and cost.&lt;/p&gt;
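The lifetime cost in the table below can be reproduced with a simple model: a per-request charge plus a per-GB-second charge over the billed (cold + warm) durations, with the INIT phase free. The rates are my assumption based on published AWS Lambda x86-64 pricing at the time ($16.6667 per million GB-seconds, $0.20 per million requests); they are not stated in the post.

```python
# Sketch of the lifetime-cost model behind the table: 1 cold start
# followed by 100 warm invocations. Pricing rates are assumptions
# (published AWS x86-64 prices at the time), not taken from the post.

def lifetime_cost_microdollars(memory_mb, cold_used_ms, total_warm_used_ms,
                               gb_second_rate=16.6667, request_rate=0.2):
    """Cost in micro-dollars (µ$) for 1 cold + 100 warm invocations.

    The INIT phase is excluded because it is free of charge (under 10s).
    """
    billed_seconds = (cold_used_ms + total_warm_used_ms) / 1000.0
    gb_seconds = (memory_mb / 1024.0) * billed_seconds
    return 101 * request_rate + gb_seconds * gb_second_rate

# Reproduces the 128 MB row below to within rounding: ~22.25509 µ$
print(round(lifetime_cost_microdollars(128, 620.921, 365.519), 5))
```

Plugging in any other row of the table gives the same agreement, which suggests the reported costs were derived exactly this way.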

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;235.615&lt;/td&gt;
&lt;td&gt;620.921&lt;/td&gt;
&lt;td&gt;856.536&lt;/td&gt;
&lt;td&gt;365.519&lt;/td&gt;
&lt;td&gt;22.25509&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;238.296&lt;/td&gt;
&lt;td&gt;315.731&lt;/td&gt;
&lt;td&gt;554.027&lt;/td&gt;
&lt;td&gt;150.124&lt;/td&gt;
&lt;td&gt;22.14107&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;241.193&lt;/td&gt;
&lt;td&gt;136.89&lt;/td&gt;
&lt;td&gt;378.083&lt;/td&gt;
&lt;td&gt;124.686&lt;/td&gt;
&lt;td&gt;22.37980&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;239.972&lt;/td&gt;
&lt;td&gt;60.804&lt;/td&gt;
&lt;td&gt;300.776&lt;/td&gt;
&lt;td&gt;115.53&lt;/td&gt;
&lt;td&gt;23.13891&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;241.005&lt;/td&gt;
&lt;td&gt;37.623&lt;/td&gt;
&lt;td&gt;278.628&lt;/td&gt;
&lt;td&gt;116.322&lt;/td&gt;
&lt;td&gt;24.63246&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;218.112&lt;/td&gt;
&lt;td&gt;37.009&lt;/td&gt;
&lt;td&gt;255.121&lt;/td&gt;
&lt;td&gt;119.559&lt;/td&gt;
&lt;td&gt;33.24730&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--d8qXL7rg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%2520%28Cold%2520Start%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--d8qXL7rg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%2520%28Cold%2520Start%29.png" alt="Cold Start Duration" width="880" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%20(Cold%20Start).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0nGMGv91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%2520%28Lifetime%2520Cost%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0nGMGv91--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%2520%28Lifetime%2520Cost%29.png" alt="Lifetime Execution Cost and Total Warm Execution Time" width="880" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-x64-NoTC-NoR2R-NoPreJIT-ANY%20(Lifetime%20Cost).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimum Cold Start Duration for .NET 6
&lt;/h2&gt;

&lt;p&gt;Not surprisingly, the lowest cold start duration was achieved using the highest memory configuration. &lt;em&gt;Tiered Compilation&lt;/em&gt; also helped lower the number. However, &lt;em&gt;ReadyToRun&lt;/em&gt; did not make much of an impact, which is expected since our minimal project has almost no code.&lt;/p&gt;

&lt;p&gt;More notable is that the ARM64 architecture was slower than the x86-64 architecture at comparable memory configurations.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;211.006&lt;/td&gt;
&lt;td&gt;30.165&lt;/td&gt;
&lt;td&gt;241.171&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;213.085&lt;/td&gt;
&lt;td&gt;33.173&lt;/td&gt;
&lt;td&gt;246.258&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;215.754&lt;/td&gt;
&lt;td&gt;24.164&lt;/td&gt;
&lt;td&gt;239.918&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;5120MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;198.771&lt;/td&gt;
&lt;td&gt;24.094&lt;/td&gt;
&lt;td&gt;222.865&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TH25PSet--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-NoR2R-ANY-ANY%2520%28Minimal%2520Cold%2520Start%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TH25PSet--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-NoR2R-ANY-ANY%2520%28Minimal%2520Cold%2520Start%29.png" alt="Minimum Cold Start Duration for .NET 6" width="880" height="336"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-NoR2R-ANY-ANY%20(Minimal%20Cold%20Start).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Minimum Execution Cost for .NET 6
&lt;/h2&gt;

&lt;p&gt;Another unsurprising result is that the ARM64 architecture yields the lowest execution cost since its unit price is 20% lower. Similarly, the memory configuration is towards the bottom end at only 256 MB.&lt;/p&gt;

&lt;p&gt;More interesting is that &lt;em&gt;Tiered Compilation&lt;/em&gt; is always more expensive to operate. This makes intuitive sense, since it requires additional processing time to re-jit code. After that, it's a bit of a tossup between the &lt;em&gt;ReadyToRun&lt;/em&gt; and &lt;em&gt;PreJIT&lt;/em&gt; settings.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;266.026&lt;/td&gt;
&lt;td&gt;378.676&lt;/td&gt;
&lt;td&gt;158.064&lt;/td&gt;
&lt;td&gt;21.98914228&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;288.025&lt;/td&gt;
&lt;td&gt;371.274&lt;/td&gt;
&lt;td&gt;161.529&lt;/td&gt;
&lt;td&gt;21.97601788&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;264.304&lt;/td&gt;
&lt;td&gt;361.657&lt;/td&gt;
&lt;td&gt;164.619&lt;/td&gt;
&lt;td&gt;21.95426344&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;256MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;287.762&lt;/td&gt;
&lt;td&gt;361.285&lt;/td&gt;
&lt;td&gt;160.248&lt;/td&gt;
&lt;td&gt;21.93844936&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TXpdcaut--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-ANY-ANY-ANY%2520%28Minimal%2520Lifetime%2520Cost%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TXpdcaut--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-ANY-ANY-ANY%2520%28Minimal%2520Lifetime%2520Cost%29.png" alt="Lifetime Execution Cost and Total Warm Execution Time for .NET 6" width="880" height="346"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Net6-ANY-ANY-ANY-ANY-ANY%20(Minimal%20Lifetime%20Cost).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What about .NET Core 3.1?
&lt;/h2&gt;

&lt;p&gt;I debated whether to mention this since .NET Core 3.1 reaches end-of-life in December 2022, but the performance delta for the baseline case is just staggering.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A Lambda function using .NET Core 3.1 with 512 MB is 40% faster on cold start than one using .NET 6 with 5,120 MB!&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I'm just flabbergasted by this outcome. All I can do is remind myself that this baseline test is not representative of real-world code.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Cold Start&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;512MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;150.129&lt;/td&gt;
&lt;td&gt;6.903&lt;/td&gt;
&lt;td&gt;157.032&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1024MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;148.376&lt;/td&gt;
&lt;td&gt;6.081&lt;/td&gt;
&lt;td&gt;154.457&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;x86_64&lt;/td&gt;
&lt;td&gt;1769MB&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;148.338&lt;/td&gt;
&lt;td&gt;5.972&lt;/td&gt;
&lt;td&gt;154.31&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--vgYDK-WV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-NoR2R-ANY-ANY%2520%28Minimal%2520Cold%2520Start%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--vgYDK-WV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-NoR2R-ANY-ANY%2520%28Minimal%2520Cold%2520Start%29.png" alt="Minimum Cold Start Duration for .NET Core 3.1" width="880" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-NoR2R-ANY-ANY%20(Minimal%20Cold%20Start).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Similarly, execution cost is lower with .NET Core 3.1, but not as dramatically. Still, for .NET 6 there were just 4 configurations that achieved a cost under 22µ$. For .NET Core 3.1, there are 39 configurations under 21µ$!&lt;/p&gt;

&lt;p&gt;Interestingly, the four lowest-cost configurations follow a similar pattern: ARM64, 128 MB, no &lt;em&gt;Tiered Compilation&lt;/em&gt;, and a tossup between &lt;em&gt;ReadyToRun&lt;/em&gt; and &lt;em&gt;PreJIT&lt;/em&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Architecture&lt;/th&gt;
&lt;th&gt;Memory Size&lt;/th&gt;
&lt;th&gt;Tiered&lt;/th&gt;
&lt;th&gt;Ready2Run&lt;/th&gt;
&lt;th&gt;PreJIT&lt;/th&gt;
&lt;th&gt;Init&lt;/th&gt;
&lt;th&gt;Cold Used&lt;/th&gt;
&lt;th&gt;Total Warm Used (100)&lt;/th&gt;
&lt;th&gt;Cost (µ$)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;162.366&lt;/td&gt;
&lt;td&gt;102.693&lt;/td&gt;
&lt;td&gt;110.096&lt;/td&gt;
&lt;td&gt;20.55465044&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;186.627&lt;/td&gt;
&lt;td&gt;98.641&lt;/td&gt;
&lt;td&gt;112.327&lt;/td&gt;
&lt;td&gt;20.55161642&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;161.989&lt;/td&gt;
&lt;td&gt;88.677&lt;/td&gt;
&lt;td&gt;110.391&lt;/td&gt;
&lt;td&gt;20.53178133&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;arm64&lt;/td&gt;
&lt;td&gt;128MB&lt;/td&gt;
&lt;td&gt;no&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;yes&lt;/td&gt;
&lt;td&gt;185.923&lt;/td&gt;
&lt;td&gt;85.289&lt;/td&gt;
&lt;td&gt;117.811&lt;/td&gt;
&lt;td&gt;20.53850086&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_262WLbU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-ANY-ANY-ANY%2520%28Minimal%2520Lifetime%2520Cost%29.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_262WLbU--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-ANY-ANY-ANY%2520%28Minimal%2520Lifetime%2520Cost%29.png" alt="Lifetime Execution Cost and Total Warm Execution Time for .NET Core 3.1" width="880" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://raw.githubusercontent.com/LambdaSharp/LambdaSharp.Benchmark/main/Docs/Minimal-Core31-ANY-ANY-ANY-ANY-ANY%20(Minimal%20Lifetime%20Cost).png"&gt;Fullsize Image&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Based on the benchmarks, we can establish these lower bounds.&lt;/p&gt;

&lt;p&gt;For .NET 6:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold start duration: 223ms&lt;/li&gt;
&lt;li&gt;Execution cost: 21.94µ$&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For .NET Core 3.1:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cold start duration: 154ms&lt;/li&gt;
&lt;li&gt;Execution cost: 20.53µ$&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Unless anything fundamental changes, we should not expect to do better than these baseline values.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;In the next post, I'm going to benchmark JSON serializers. Specifically, the popular Newtonsoft JSON.NET library, the built-in &lt;em&gt;System.Text.Json&lt;/em&gt; namespace, and the new .NET 6 JSON source generators.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Benchmarking .NET on AWS Lambda</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Mon, 29 Aug 2022 17:50:11 +0000</pubDate>
      <link>https://dev.to/lambdasharp/benchmarking-net-on-aws-lambda-182a</link>
      <guid>https://dev.to/lambdasharp/benchmarking-net-on-aws-lambda-182a</guid>
      <description>&lt;p&gt;My motivation for benchmarking all the compiler, deployment, and runtime options has been to feed my curiosity. I've seen various blog posts recommending one setting over another, but they never provided a justification.&lt;/p&gt;

&lt;p&gt;Unfortunately, we can't use the outstanding &lt;a href="https://benchmarkdotnet.org/"&gt;BenchmarkDotNet&lt;/a&gt; tool with AWS Lambda. So, I built a &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark"&gt;benchmarking harness&lt;/a&gt; to collect data for all the deployment options, and hopefully, determine an "optimal" combination. &lt;/p&gt;

&lt;p&gt;As the first blog post of this series mentioned, there are two distinct cases we can optimize for: "Minimize Cold Start Duration" or "Minimize Operating Cost". Now that the groundwork has been laid, we can formally capture what that means.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimal Strategies
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;To &lt;em&gt;Minimize Cold Start Duration&lt;/em&gt;, we need to minimize the INIT and the first INVOKE phase. The optimal configuration yields the lowest duration, measured in milliseconds (ms), to process the request. For this measurement, we rely on the data reported by AWS Lambda in the logs. This is the same data that is used for billing and is the most accurate we have access to.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To &lt;em&gt;Minimize Execution Cost&lt;/em&gt;, we need to minimize the sum of all INVOKE phases (cold and warm) while taking into account the Lambda memory configuration and CPU architecture. The optimal configuration yields the lowest execution cost for 1 cold start followed by 100 warm invocations. AWS Lambda execution has an extremely low unit cost. To make the numbers more intuitive, I opted to report execution cost in millionths of a dollar, or micro-dollars (µ$).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benchmarked Options
&lt;/h2&gt;

&lt;p&gt;To leave no stone unturned, the benchmarking harness executes all possible combinations of the following options.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tiered Compilation: On and Off&lt;/li&gt;
&lt;li&gt;ReadyToRun: On and Off&lt;/li&gt;
&lt;li&gt;Lambda Memory: 128 MB, 256 MB, 512 MB, 1024 MB, 1769 MB, and 5120 MB&lt;/li&gt;
&lt;li&gt;CPU Architecture: x86-64 and ARM64&lt;/li&gt;
&lt;li&gt;.NET Runtime: .NET Core 3.1 and .NET 6&lt;/li&gt;
&lt;li&gt;Pre-JIT .NET: On and Off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These options produce 192 unique combinations that are benchmarked.&lt;/p&gt;
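The six option dimensions above multiply out as expected; a quick sketch enumerating the grid (the dictionary keys are my naming, not the harness's):

```python
# Enumerate the benchmark option grid: 2 x 2 x 6 x 2 x 2 x 2 = 192.
from itertools import product

options = {
    "tiered_compilation": [True, False],
    "ready_to_run": [True, False],
    "memory_mb": [128, 256, 512, 1024, 1769, 5120],
    "architecture": ["x86-64", "arm64"],
    "runtime": ["netcoreapp3.1", "net6.0"],
    "prejit": [True, False],
}

combinations = list(product(*options.values()))
print(len(combinations))  # → 192
```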

&lt;h2&gt;
  
  
  Benchmarking Approach
&lt;/h2&gt;

&lt;p&gt;For each project, the Lambda function is measured over 100 cold starts, each followed by 100 warm invocations. The results are then averaged.&lt;/p&gt;

&lt;p&gt;I debated using the median or a percentile value instead of the average. The challenge is that such values don't combine well. For example, summing the p99 value of the INIT phase with the p99 value of the first INVOKE phase doesn't yield the p99 of the total cold start. Furthermore, outliers happen in real life. I'm hoping that 100 cold and warm invocations are sufficient to fairly represent what should be expected in real-world situations.&lt;/p&gt;
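The concern about combining percentiles can be illustrated with synthetic numbers (illustrative only, not benchmark data): averages of per-invocation phases add up cleanly, but the sum of two p99 values generally overstates the p99 of the summed durations.

```python
# Toy demonstration that percentiles do not compose the way averages do.
import random
random.seed(42)

# Synthetic INIT and first-INVOKE durations in milliseconds.
init = [random.gauss(240, 20) for _ in range(1000)]
invoke = [random.gauss(60, 15) for _ in range(1000)]

def p99(samples):
    """99th percentile by nearest-rank."""
    return sorted(samples)[int(len(samples) * 0.99) - 1]

# Per-invocation cold-start totals.
total = [a + b for a, b in zip(init, invoke)]

# The sum of the phase p99s exceeds the p99 of the total cold start,
# because the worst INIT and worst INVOKE rarely coincide.
print(p99(init) + p99(invoke) > p99(total))
```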

&lt;p&gt;That said, the &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Data"&gt;raw measurements&lt;/a&gt; have been captured for each benchmark. That way, alternative analyses can be done without having to collect the data again.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmarked Projects
&lt;/h2&gt;

&lt;p&gt;My interest was in studying how the options impact the compute aspect of Lambda functions. Therefore, I opted to only benchmark projects that do not perform I/O operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Minimal Baseline
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/Minimal/"&gt;Minimal project&lt;/a&gt; establishes a baseline for all projects. It has no business logic and only includes required libraries.&lt;/p&gt;

&lt;h3&gt;
  
  
  JSON Serializers
&lt;/h3&gt;

&lt;p&gt;JSON serialization is necessary for virtually all Lambda functions. With .NET 6, there are three common approaches to handle this task.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/NewtonsoftJson/"&gt;NewtonsoftJson&lt;/a&gt;: using Newtonsoft JSON.NET&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/SourceGeneratorJson/"&gt;SourceGeneratorJson&lt;/a&gt;: using .NET 6 source generators for JSON parsing&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/SystemTextJson/"&gt;SystemTextJson&lt;/a&gt;: using System.Text.Json&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  AWS SDK
&lt;/h3&gt;

&lt;p&gt;Most Lambda functions will interact with other AWS services via the AWS SDK. The &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/AwsSdk/"&gt;AwsSdk project&lt;/a&gt; is used to benchmark the cost of initializing the SDK.&lt;/p&gt;

&lt;h3&gt;
  
  
  Top-Level Statements
&lt;/h3&gt;

&lt;p&gt;Available starting in .NET 6, Lambda functions can use top-level statements instead of declaring a class. What is the performance impact when doing so?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/AwsNewtonsoftJson/"&gt;AwsNewtonsoftJson&lt;/a&gt;: using AWS .NET SDK and Newtonsoft JSON.NET&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/SampleAwsNewtonsoftTopLevel/"&gt;SampleAwsNewtonsoftTopLevel&lt;/a&gt;: using AWS .NET SDK, Newtonsoft JSON.NET, and Top-Level Statements&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/SampleAwsSystemTextJsonTopLevel/"&gt;SampleAwsSystemTextJsonTopLevel&lt;/a&gt;: using AWS .NET SDK, System.Text.Json, and Top-Level Statements&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Minimal API
&lt;/h3&gt;

&lt;p&gt;Also new in .NET 6 is an approach to express ASP.NET routes using top-level statements. This sample was taken from the &lt;a href="https://aws.amazon.com/blogs/compute/introducing-the-net-6-runtime-for-aws-lambda/"&gt;.NET 6 support on AWS Lambda&lt;/a&gt; announcement blog post.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark/tree/main/Projects/SampleMinimalApi/"&gt;SampleMinimalApi&lt;/a&gt;: using ASP.NET Core Minimal API&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Benchmark Explorer
&lt;/h2&gt;

&lt;p&gt;The results from the benchmarks have been compiled into an &lt;a href="https://docs.google.com/spreadsheets/d/1pOVJQi9Q2COj0amqV06BT8Nbgacf0nuyHpFj_P7sYH4/edit?usp=sharing"&gt;interactive Google spreadsheet&lt;/a&gt;. Feel free to explore the data any way you like and draw your own conclusions. Any feedback on improving the analysis or visualization is welcome.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Finally, we can dive into the results and see what new insights we can gain! First up is the baseline performance measurement. While this is not critical for production code, it gives us a foundation to work on.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Compile, Deploy, and Runtime Performance Options</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Tue, 09 Aug 2022 15:39:32 +0000</pubDate>
      <link>https://dev.to/lambdasharp/compile-deploy-and-runtime-performance-options-16fi</link>
      <guid>https://dev.to/lambdasharp/compile-deploy-and-runtime-performance-options-16fi</guid>
      <description>&lt;p&gt;There are lots of options that impact performance. I actually found it a bit overwhelming, which is why I wanted to benchmark all the possible combinations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojt02mwcgtullnlh9uvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojt02mwcgtullnlh9uvl.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  C# Compiler Options
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Tiered Compilation
&lt;/h3&gt;

&lt;p&gt;This is a .NET runtime option that can be set at compile time. It instructs the .NET runtime to perform a &lt;em&gt;dirty&lt;/em&gt; JIT (called Tier0) that is generated faster but leads to less performant code. If the code is run often enough, it is replaced by an optimized version (called Tier1) later on.&lt;/p&gt;

&lt;p&gt;Without &lt;em&gt;Tiered Compilation&lt;/em&gt;, the jitter emits all code as Tier1. Optimizing start-up code can be wasteful, especially if the code is only run once. With this option enabled, the jitter waits 100ms before it starts optimizing methods that are invoked 30 times or more. This means that the time savings gained during the Lambda cold start come at the expense of subsequent warm invocations.&lt;/p&gt;

&lt;p&gt;The whole process is quite complex and fascinating. For more details, check out the &lt;a href="https://github.com/dotnet/runtime/blob/41419131095d36fb5b811600ad0dab3b0d804269/docs/design/features/tiered-compilation.md" rel="noopener noreferrer"&gt;&lt;em&gt;Tiered Compilation&lt;/em&gt; specification&lt;/a&gt;.&lt;/p&gt;
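For reference, the option is controlled from the project file; a minimal sketch using the standard MSBuild property (the property name comes from the .NET documentation, not from this post):

```xml
<PropertyGroup>
  <!-- Disable Tiered Compilation so all methods are jitted directly as Tier1 -->
  <TieredCompilation>false</TieredCompilation>
</PropertyGroup>
```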

&lt;h3&gt;
  
  
  ReadyToRun
&lt;/h3&gt;

&lt;p&gt;This .NET compiler option instructs the compiler to include pre-jitted code in the produced assembly. Note this option can increase the assembly size by 200% to 300%.&lt;/p&gt;

&lt;p&gt;During startup, the runtime uses the pre-jitted code, but only when it matches the CPU architecture of the execution environment. The pre-jitted code is not optimized and equivalent to that of a &lt;em&gt;dirty&lt;/em&gt; JIT (Tier0). When &lt;em&gt;Tiered Compilation&lt;/em&gt; is also enabled, the pre-jitted code is eventually optimized when it is invoked often enough.&lt;/p&gt;

&lt;p&gt;For more details, check out &lt;a href="https://docs.microsoft.com/en-us/dotnet/core/deploying/ready-to-run" rel="noopener noreferrer"&gt;the official page about ReadyToRun Compilation&lt;/a&gt;.&lt;/p&gt;
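In the project file, this looks like the sketch below; the `linux-x64` runtime identifier is an assumption for an x86-64 Lambda target (use `linux-arm64` for ARM64):

```xml
<PropertyGroup>
  <!-- Embed pre-jitted (Tier0-quality) native code in the published output -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- ReadyToRun code is architecture-specific; the RID must match the Lambda CPU -->
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
</PropertyGroup>
```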

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey3gl100671pj50l4gq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fey3gl100671pj50l4gq5.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Function Options
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Memory
&lt;/h3&gt;

&lt;p&gt;Performance of a Lambda execution environment is directly tied to its memory configuration. However, the relationship is not linear. Single-threaded performance maxes out at 3,008 MB, which provides the full capacity of 2 vCPU cores. After that, additional fractional cores are added until the maximum of 10,240 MB is reached, which provides 6 cores.&lt;/p&gt;

&lt;p&gt;For an in-depth analysis for Lambda memory configuration and the impact on performance, check out &lt;a href="https://www.sentiatechblog.com/aws-re-invent-2020-day-3-optimizing-lambda-cost-with-multi-threading" rel="noopener noreferrer"&gt;Optimizing Lambda Cost with Multi-Threading&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;An important detail is that performance is boosted during the INIT phase of the execution environment. It makes no difference if the Lambda function is configured for 128 MB or 3,008 MB. In both cases, the duration of the INIT phase will be the same and perform as if the Lambda had been configured for 3,008 MB. Only if it exceeds that threshold will the INIT phase run faster, assuming the code can use more than two cores.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU Architecture
&lt;/h3&gt;

&lt;p&gt;AWS Lambda supports two CPU architectures, depending on the region: x86 64-bit and ARM64. Cost for ARM64 is 20% lower than x86 for the same memory configuration. This makes it a very appealing choice, when available. &lt;/p&gt;

&lt;p&gt;As of this writing, the following regions don't yet support ARM64 for Lambda.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;US West - Northern California&lt;/li&gt;
&lt;li&gt;Africa - Cape Town&lt;/li&gt;
&lt;li&gt;Asia Pacific - Hong Kong, Jakarta, Osaka, and Seoul&lt;/li&gt;
&lt;li&gt;Canada - Central&lt;/li&gt;
&lt;li&gt;Europe - Milan, Paris, and Stockholm&lt;/li&gt;
&lt;li&gt;Middle East - Bahrain&lt;/li&gt;
&lt;li&gt;South America - Sao Paulo&lt;/li&gt;
&lt;li&gt;AWS GovCloud - US-East, and US-West&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  .NET Runtime
&lt;/h3&gt;

&lt;p&gt;At this time, only the .NET Core 3.1 and .NET 6 runtimes can be used for new function deployments. However, in typical AWS fashion, old functions continue to run. I can attest to that, as I still have some old .NET Core 1.0 functions chugging away.&lt;/p&gt;

&lt;p&gt;Beware that .NET Core 3.1 will reach end-of-life on December 13th, 2022. At some point thereafter, it will not be possible to create new .NET Core 3.1 functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  .NET Host for AWS Lambda
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pre-JIT .NET - &lt;code&gt;AWS_LAMBDA_DOTNET_PREJIT&lt;/code&gt;
&lt;/h3&gt;

&lt;p&gt;When this environment variable is set to "Always", it instructs the .NET host for AWS Lambda to prepare code during the INIT phase of the execution environment rather than to wait for the INVOKE phase.&lt;/p&gt;

&lt;p&gt;Its default use case is for &lt;a href="https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/" rel="noopener noreferrer"&gt;Provisioned Concurrency&lt;/a&gt;, which allows one or more Lambda execution environments to be pre-initialized to avoid cold starts. However, it can also be set to always perform the code preparation.&lt;/p&gt;

&lt;p&gt;The interesting property of this environment variable is that it moves some of the JIT-compilation overhead from the INVOKE phase to the INIT phase. The INIT phase always runs at the performance level of a 3,008 MB memory configuration, unless the function is configured with more memory. In addition, the INIT phase is free of charge, unless it exceeds 10 seconds.&lt;/p&gt;
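As a sketch, the environment variable can be set with the AWS CLI; the function name below is a placeholder for your own function.

```shell
# enable ahead-of-time code preparation during the INIT phase
# ("my-function" is a hypothetical function name)
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={AWS_LAMBDA_DOTNET_PREJIT=Always}"
```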

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;In the next post, I cover the benchmarking methodology.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Anatomy of the AWS Lambda Lifecycle</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Mon, 08 Aug 2022 15:40:37 +0000</pubDate>
      <link>https://dev.to/lambdasharp/anatomy-of-the-aws-lambda-lifecycle-17a5</link>
      <guid>https://dev.to/lambdasharp/anatomy-of-the-aws-lambda-lifecycle-17a5</guid>
      <description>&lt;p&gt;Before we dive into the nitty gritty, let's define some AWS Lambda terminology. Feel free to skip to the next post and come back to it later if need be.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthioiglx4vjhj1ywi5pv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthioiglx4vjhj1ywi5pv.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Lambda
&lt;/h2&gt;

&lt;p&gt;This is the name of the AWS service that allows us to run code in a serverless manner. That means we don't have to think about servers, virtual or physical. No need to worry about backups, patching, or maintenance windows (woohoo!).&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Function
&lt;/h2&gt;

&lt;p&gt;This is the definition of our deployment. It contains configuration options and a zip package with our code. However, this is not running code, just the definition for it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Execution Environment
&lt;/h2&gt;

&lt;p&gt;This is a runnable instance of our code that has been deployed into a secure, isolated runtime environment using the configuration of our Lambda Function. Additional execution environments get created as requests come in and we need more capacity to handle them. AWS Lambda will create them as needed and shut them down when they idle for too long. We can have one execution environment, 100s of them, or none. All managed automatically. Each execution environment gets the amount of memory and processing power assigned to it by its definition, as well as 512 MB of ephemeral storage. It's important to note that these execution environments are completely independent of each other as they share absolutely no state.&lt;/p&gt;

&lt;p&gt;For a more detailed description, check out the official page about the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html#runtimes-lifecycle" rel="noopener noreferrer"&gt;AWS Lambda execution environment&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lambda Invocation
&lt;/h2&gt;

&lt;p&gt;This is an invocation of one of our Lambda execution environments in response to a request. Although an execution environment can handle multiple invocations in a row, it only ever handles one invocation at a time. Hence, our code does not need to be thread-safe (unless we explicitly make it multi-threaded). Since the execution environment is reused between invocations, state in the ephemeral storage is shared from one invocation to the next. That means, if we're not careful, we can run out of ephemeral storage. Similarly, in-memory state is also kept between consecutive invocations. This allows us to initialize components we can reuse for the lifetime of the execution environment. An invocation is immediately aborted if it exceeds the duration or memory limits defined by the Lambda function. It's a situation we should strive to avoid! Finally, it's important to note that the execution environment is suspended, along with any background threads, as soon as our code returns a response to the request.&lt;/p&gt;

&lt;p&gt;For more details, check out this fantastic write-up on &lt;a href="https://aws.amazon.com/blogs/compute/understanding-aws-lambda-scaling-and-throughput/" rel="noopener noreferrer"&gt;understanding AWS Lambda scaling and throughput&lt;/a&gt;.&lt;/p&gt;
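The state-reuse behavior described above can be sketched as follows. Python is used here for brevity; the same lifecycle applies to .NET handlers, and all names are made up for illustration.

```python
# Sketch of per-execution-environment state reuse in a Lambda handler.
# Anything created outside the handler runs once, during the INIT phase,
# and survives across invocations of the same execution environment.
import time

EXPENSIVE_CLIENT = {"created_at": time.time()}  # stand-in for e.g. an SDK client
_invocation_count = 0

def handler(event, context=None):
    """Hypothetical handler; one invocation at a time per environment."""
    global _invocation_count
    _invocation_count += 1  # safe: invocations never run concurrently here
    return {
        "invocation": _invocation_count,
        "client_created_at": EXPENSIVE_CLIENT["created_at"],
    }

# two consecutive invocations reuse the same initialized client
first = handler({})
second = handler({})
assert first["client_created_at"] == second["client_created_at"]
```

This is why expensive setup (SDK clients, configuration loading) is conventionally hoisted out of the handler body.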

&lt;h2&gt;
  
  
  Lifecycle &amp;amp; Billing
&lt;/h2&gt;

&lt;p&gt;The following graphic is taken from the &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtime-environment.html#runtimes-lifecycle" rel="noopener noreferrer"&gt;AWS Lambda execution environment&lt;/a&gt; documentation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcz3uej7trw3mb73c7bc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcz3uej7trw3mb73c7bc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The billing of a Lambda invocation is based on the duration of the INVOKE phase only. The INIT phase is not billed unless provisioned concurrency is used, or the INIT phase exceeds 10s.&lt;/p&gt;

&lt;p&gt;Check out this excellent write-up on &lt;a href="https://bitesizedserverless.com/bite/when-is-the-lambda-init-phase-free-and-when-is-it-billed/" rel="noopener noreferrer"&gt;the INIT phase billing&lt;/a&gt; for more details.&lt;/p&gt;

&lt;p&gt;Additionally, the performance of the INIT phase is boosted for low memory configurations. Thus, unless the Lambda function is given a high memory limit (e.g. 5GB), the INIT phase takes the same amount of time for all configurations.&lt;/p&gt;
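A simplified model of this billing rule may help; the durations below are illustrative, and the model only captures the 10-second cutoff and the provisioned-concurrency case described above.

```python
# Simplified model of on-demand Lambda billing: only the INVOKE phase is
# billed, and the INIT phase is free unless provisioned concurrency is used
# or initialization exceeds 10 seconds.
def billed_ms(init_ms: float, invoke_ms: float, provisioned: bool = False) -> float:
    """Return the billed duration in milliseconds for one cold invocation."""
    if provisioned or init_ms > 10_000:
        return init_ms + invoke_ms  # INIT is billed too
    return invoke_ms                # INIT is free

assert billed_ms(init_ms=800, invoke_ms=120) == 120        # typical cold start
assert billed_ms(init_ms=12_000, invoke_ms=120) == 12_120  # slow INIT is billed
```

The practical takeaway is that work done during INIT is both faster (boosted CPU) and usually free, which is what makes shifting startup work into INIT attractive.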

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The next post explores the compilation and deployment options that impact performance.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Optimal Strategies for .NET on AWS Lambda</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Mon, 08 Aug 2022 15:39:00 +0000</pubDate>
      <link>https://dev.to/lambdasharp/optimal-strategies-for-net-on-aws-lambda-45kg</link>
      <guid>https://dev.to/lambdasharp/optimal-strategies-for-net-on-aws-lambda-45kg</guid>
      <description>&lt;p&gt;I care a lot about performance. Making my code faster makes me happy and the metric for success is trivial. It's especially trivial in AWS Lambda since execution time is automatically reported. There's also a financial incentive as it is tied to the billing model.&lt;/p&gt;

&lt;p&gt;And so, I've been wondering: &lt;strong&gt;Is there an optimal strategy for my .NET code on AWS Lambda?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7anb3qdhqsv2r9xoxio2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7anb3qdhqsv2r9xoxio2.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;AWS Lambda is its own little beast to understand and master. There are a variety of deployment options available that impact performance. Combined with the various compilation and runtime options of .NET code, this presented an interesting challenge. There is also the fundamental question: what does "optimal" mean?&lt;/p&gt;

&lt;p&gt;Well ⟪spoiler alert⟫ there isn't just one "optimal" strategy! Instead, we have to choose what we want to optimize for. I settled on examining two strategies. Both have their place and it's rarely (never?) possible to achieve both. Note that if some of the terminology is unfamiliar, fear not. I'm going to explain each concept as we dive into the material. And, if I miss something, please let me know in the comments and I'll amend the posts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 1: Minimize Cold Start Duration
&lt;/h2&gt;

&lt;p&gt;This strategy seeks to achieve the shortest response time in the worst-case scenario: a cold start of our AWS Lambda function. In this case, a new execution environment must be created to handle our request. The execution environment has to be initialized, our code has to be loaded into it, and then our logic needs to run to produce a response. All of this takes additional time when compared to subsequent requests. Correspondingly, this strategy trades some performance in later invocations for faster initialization, which may impact execution cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 2: Minimize Execution Cost
&lt;/h2&gt;

&lt;p&gt;This strategy minimizes the execution cost of the AWS Lambda instance, including the cold start followed by a number of warm invocations. In this case, we are willing to accept slower cold starts in exchange for better performance once everything is warmed up. This strategy leverages one of the lesser-known quirks of the AWS Lambda billing model (more on this later). It works best when we know that our code will be invoked frequently. For the purpose of this study, I assume that our Lambda function is invoked 100 times after a cold start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7osx6c5qs6s9ke9bjde.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7osx6c5qs6s9ke9bjde.png" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;
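To make the trade-off concrete, here is a toy cost comparison over one cold start plus 100 warm invocations. The durations are made-up placeholders, not benchmark results, and the model assumes a free INIT phase so only invocation time is billed.

```python
# Toy model: total billed time for one cold invocation followed by N warm ones.
def total_billed_ms(cold_invoke_ms: float, warm_invoke_ms: float,
                    warm_invocations: int = 100) -> float:
    """Billed milliseconds for a cold start plus N warm invocations."""
    return cold_invoke_ms + warm_invocations * warm_invoke_ms

# hypothetical numbers: strategy 1 starts fast, strategy 2 runs fast when warm
fast_cold = total_billed_ms(cold_invoke_ms=300, warm_invoke_ms=25)   # strategy 1
cheap_warm = total_billed_ms(cold_invoke_ms=900, warm_invoke_ms=10)  # strategy 2

# with enough warm traffic, a slower cold start can still win on total cost
assert cheap_warm < fast_cold
```

Under these made-up numbers, strategy 2 overtakes strategy 1 well before the 100th warm invocation, which is why invocation frequency drives the choice.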

&lt;h2&gt;
  
  
  Choosing a Strategy
&lt;/h2&gt;

&lt;p&gt;The preferred strategy depends on the purpose of our code. For handling synchronous invocations, such as API requests, minimizing cold starts can be preferable--especially when a human is waiting at the other end. But for asynchronous invocations, such as with EventBridge, minimizing execution cost is more important.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;In the next post, I'm going to dive into the lifecycle of an AWS Lambda instance and associated terminology. I'll then cover some of the compilation and runtime options for .NET code. Then I'll introduce the benchmarking methodology. Finally, I'll present my findings and conclusion. As with all studies, peer review and independent confirmation are critical. Therefore, all my code is also made available under a permissive open-source license in the &lt;a href="https://github.com/LambdaSharp/LambdaSharp.Benchmark" rel="noopener noreferrer"&gt;LambdaSharp.Benchmark GitHub repository&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; This is stating the obvious, but please check the date of this post. If it's older than 3 years, it's probably out-of-date! &lt;/p&gt;

</description>
      <category>aws</category>
      <category>dotnet</category>
      <category>serverless</category>
    </item>
    <item>
      <title>CloudWatch Logging for Web Apps (Part 3)</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Wed, 23 Sep 2020 23:16:29 +0000</pubDate>
      <link>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-3-59om</link>
      <guid>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-3-59om</guid>
      <description>&lt;p&gt;In the previous two posts, I covered the &lt;a href="https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-1-5935"&gt;CloudFormation template&lt;/a&gt; for creating a REST API to log to CloudWatch from a frontend app and then the &lt;a href="https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-2-b53"&gt;communication protocol&lt;/a&gt; of the REST API. This final post in the series covers the implementation of the frontend client to use the logging REST API.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;The frontend client is responsible for creating a CloudWatch log stream for each app session and then writing log messages to it in sequential batches. Each batch must include the sequence token returned by the previous batch, unless it is the first batch.&lt;/p&gt;

&lt;p&gt;For Blazor WebAssembly apps built with LambdaSharp, the implementation of the client resides in the &lt;a href="https://github.com/LambdaSharp/LambdaSharpTool/blob/main/src/LambdaSharp.App/LambdaSharpAppClient.cs"&gt;LambdaSharpAppClient&lt;/a&gt; class. The class is instantiated as a singleton and can be injected either directly or indirectly via the &lt;code&gt;ILogger&lt;/code&gt; interface. Only one of the following statements is needed for the frontend app to log to CloudWatch. The choice comes down to personal preference.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="n"&gt;@inject&lt;/span&gt; &lt;span class="n"&gt;LambdaSharpAppClient&lt;/span&gt; &lt;span class="n"&gt;AppClient&lt;/span&gt;
&lt;span class="n"&gt;@inject&lt;/span&gt; &lt;span class="n"&gt;ILogger&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;Index&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Logger&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Implementation
&lt;/h1&gt;

&lt;p&gt;CloudWatch Logs organizes log entries into log streams. A log stream is a chronological sequence of entries. Many log streams can exist concurrently. For Blazor WebAssembly apps--and other single-page apps--it is recommended to create a log stream on-demand when the first log message is generated.&lt;/p&gt;

&lt;p&gt;Note that the code in this post is modified for simplicity. The actual implementation covers a few more edge-cases that would be distracting. The complete implementation can be found &lt;a href="https://github.com/LambdaSharp/LambdaSharpTool/blob/main/src/LambdaSharp.App/LambdaSharpAppClient.cs"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Sending Log Messages
&lt;/h2&gt;

&lt;p&gt;The app client queues messages into an internal accumulator. This enables the implementation to send multiple messages at once to avoid being throttled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;readonly&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;PutLogEventsRequestEntry&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;_logs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;List&lt;/span&gt;&lt;span class="p"&gt;&amp;lt;&lt;/span&gt;&lt;span class="n"&gt;PutLogEventsRequestEntry&lt;/span&gt;&lt;span class="p"&gt;&amp;gt;();&lt;/span&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;SendMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// queue message for server-side logging&lt;/span&gt;
    &lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;PutLogEventsRequestEntry&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;Timestamp&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;DateTimeOffset&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UtcNow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;ToUnixTimeMilliseconds&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="n"&gt;Message&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nf"&gt;ArgumentNullException&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;nameof&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Timed Accumulator
&lt;/h2&gt;

&lt;p&gt;The accumulator is checked every second by a timer for pending messages. The timer callback first makes sure that any previous asynchronous operation is completed. It then attempts to flush any accumulated messages.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="n"&gt;_previousOperationTask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;OnTimer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;object&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(!(&lt;/span&gt;&lt;span class="n"&gt;_previousOperationTask&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="n"&gt;IsCompleted&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;// previous operation is still going; wait until next timer invocation to proceed&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// initialize invocation to FlushAsync(), but don't wait for it to finish&lt;/span&gt;
    &lt;span class="n"&gt;_previousOperationTask&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;FlushAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Batch Sending
&lt;/h2&gt;

&lt;p&gt;Before the first batch of accumulated messages can be sent, the client must ensure that a log stream has been created. It then chunks the accumulated messages into batches constrained to 1 MB in size or 10,000 messages, whichever limit is reached first. If the operation fails--for example, because the client is offline--the messages are inserted back into the accumulator.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;_logStreamName&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="n"&gt;_sequenceToken&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;Task&lt;/span&gt; &lt;span class="nf"&gt;FlushAsync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// check if any messages are pending&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(!&lt;/span&gt;&lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// check if a log stream must be created&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_logStreamName&lt;/span&gt; &lt;span class="p"&gt;==&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;_logStreamName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;AppInstanceId&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;CreateLogStreamAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;CreateLogStreamRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;LogStreamName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_logStreamName&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;WriteLine&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;$"*** ERROR: unable to create log stream: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;_logStreamName&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt; (Error: &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s"&gt;)"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;


    &lt;span class="c1"&gt;// NOTE (2020-08-06, bjorg): we limit the number of log message we send in the unlikely event that we have too many&lt;/span&gt;
    &lt;span class="c1"&gt;//  See: https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/API_PutLogEvents.html&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;MaxPayloadSize&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="n"&gt;_048_576&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;MaxMessageCount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;&lt;span class="n"&gt;_000&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="c1"&gt;// consume as many accumulated log messages as possible&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;payloadByteCount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;logs&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Take&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;MaxMessageCount&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;TakeWhile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt; &lt;span class="p"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;logMessageByteCount&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Encoding&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;UTF8&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;GetByteCount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="m"&gt;26&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;payloadByteCount&lt;/span&gt; &lt;span class="p"&gt;+&lt;/span&gt; &lt;span class="n"&gt;logMessageByteCount&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;MaxPayloadSize&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;payloadByteCount&lt;/span&gt; &lt;span class="p"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;logMessageByteCount&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}).&lt;/span&gt;&lt;span class="nf"&gt;ToList&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;RemoveRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Count&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// send log messages to CloudWatch Logs&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;PutLogEventsAsync&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="n"&gt;PutLogEventsRequest&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;LogStreamName&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_logStreamName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;LogEvents&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;SequenceToken&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;_sequenceToken&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;

        &lt;span class="c1"&gt;// on error, re-insert the log messages and try again later&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Error&lt;/span&gt; &lt;span class="p"&gt;!=&lt;/span&gt; &lt;span class="k"&gt;null&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;InsertRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;_sequenceToken&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NextSequenceToken&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

        &lt;span class="c1"&gt;// on exception, re-insert the log messages and try again later&lt;/span&gt;
        &lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;InsertRange&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;logs&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Disposal
&lt;/h1&gt;

&lt;p&gt;Finally, the client performs a final flush operation when being disposed to ensure that all pending messages in the accumulator are sent to CloudWatch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight csharp"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="n"&gt;ValueTask&lt;/span&gt; &lt;span class="n"&gt;IAsyncDisposable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;DisposeAsync&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;

    &lt;span class="c1"&gt;// stop timer and wait for any lingering timer operations to finish&lt;/span&gt;
    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_timer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;DisposeAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="c1"&gt;// wait for any in-flight operation to complete&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt;&lt;span class="p"&gt;(!(&lt;/span&gt;&lt;span class="n"&gt;_previousOperationTask&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="n"&gt;IsCompleted&lt;/span&gt; &lt;span class="p"&gt;??&lt;/span&gt; &lt;span class="k"&gt;true&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;_previousOperationTask&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// flush all remaining messages&lt;/span&gt;
    &lt;span class="k"&gt;while&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;_logs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;FlushAsync&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;That's it. I hope you enjoyed this &lt;em&gt;behind-the-scenes&lt;/em&gt; series on how LambdaSharp implemented CloudWatch Logs support for Blazor WebAssembly frontend apps. I hope you find it useful in your own endeavors.&lt;/p&gt;

&lt;p&gt;Happy Hacking!&lt;/p&gt;

</description>
      <category>blazor</category>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>serverless</category>
    </item>
    <item>
      <title>CloudWatch Logging for Web Apps (Part 2)</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Tue, 15 Sep 2020 19:46:55 +0000</pubDate>
      <link>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-2-b53</link>
      <guid>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-2-b53</guid>
      <description>&lt;p&gt;In the &lt;a href="https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-1-5935"&gt;previous post&lt;/a&gt;, I covered the CloudFormation template for creating a REST API to log to CloudWatch from a frontend app. This post covers the communication protocol of the REST API.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;Each app has a dedicated log group, created by the CloudFormation template, to make it easy to track all log messages for the app. Log groups contain log streams, which themselves contain chronologically ordered log entries.&lt;/p&gt;

&lt;p&gt;The app is responsible for creating the log stream. For single page apps (SPA), such as &lt;a href="https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor"&gt;Blazor WebAssembly&lt;/a&gt;, the log stream can be created when the app is loaded. Creating a log stream per app session has the benefit that the log entries show the chronological sequence of operations done by a user.&lt;/p&gt;

&lt;p&gt;Once a log stream is created, the app can then send log entries to it. Log entries are sent as a batch operation. After the first batch, the CloudWatch API requires a sequence token for each subsequent batch. The sequence token is obtained in the response from the preceding batch operation.&lt;/p&gt;

&lt;p&gt;I will leave the details of how to batch log entries for the next and final post. This post merely focuses on the protocol we will need to implement for it.&lt;/p&gt;
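&lt;p&gt;As a rough sketch, the whole protocol boils down to: create the log stream once per session, then thread the sequence token from each response into the next request. The &lt;code&gt;transport&lt;/code&gt; callable below is a placeholder for whatever HTTP client the frontend uses; it is an assumption of this sketch, not part of the API.&lt;/p&gt;

```python
def send_batches(transport, api_key, stream_name, batches):
    """Drive the logging protocol: create the log stream, then upload
    each batch, threading the sequence token from one response into
    the next request.

    `transport(method, path, headers, body)` is a stand-in for the
    app's HTTP client; it must return the parsed JSON response body.
    """
    headers = {"X-Api-Key": api_key, "Content-Type": "application/json"}

    # create the log stream once per app session
    transport("POST", "/.app/logs", headers, {"logStreamName": stream_name})

    # the first PUT omits the sequence token; later ones must include it
    token = None
    for events in batches:
        body = {"logStreamName": stream_name, "logEvents": events}
        if token is not None:
            body["sequenceToken"] = token
        response = transport("PUT", "/.app/logs", headers, body)
        token = response["nextSequenceToken"]
    return token
```

&lt;p&gt;Because the transport is injected, the protocol logic can be exercised against a fake transport without touching the network.&lt;/p&gt;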

&lt;h1&gt;
  
  
  API Key
&lt;/h1&gt;

&lt;p&gt;The CloudFormation template from the previous post created an API key using the CloudFormation stack ID to limit access to the REST API. This API key needs to be communicated to the frontend app via a JSON configuration file, for example. By basing the API key on the stack ID, the API key is different for each deployment, but ultimately a malicious actor could load the JSON file with the API key and spam the REST API. Unfortunately, for frontend apps, there is no technique to safely pass an API key without someone else getting a hold of it.&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://lambdasharp.net/"&gt;LambdaSharp&lt;/a&gt;, the API key is made of two parts: the stack ID and the build GUID of the app assembly. For this tutorial, I skipped the build GUID part, because it can only be done reliably with tooling and only applies to .NET apps. Note the LambdaSharp approach does not make it safer, only harder for a third party to obtain the API key.&lt;/p&gt;

&lt;h1&gt;
  
  
  REST API
&lt;/h1&gt;

&lt;p&gt;The logging REST API has two endpoints: one for creating a log stream and another for sending log entries to it.&lt;/p&gt;

&lt;h2&gt;
  
  
  POST:/.app/logs - Create Log Stream
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;POST:/.app/logs&lt;/code&gt; endpoint creates a new log stream in the app log group. A log stream is a sequence of log events that originate from an app instance.&lt;/p&gt;

&lt;p&gt;There is no limit on the number of log streams that can be created. There is a limit of 50 requests per second on this operation, after which requests are throttled.&lt;/p&gt;

&lt;p&gt;The log stream name must match the following guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Log stream names must be unique within the log group.&lt;/li&gt;
&lt;li&gt;Log stream names can be between 1 and 512 characters long.&lt;/li&gt;
&lt;li&gt;The ':' (colon) and '*' (asterisk) characters are not allowed.&lt;/li&gt;
&lt;/ul&gt;
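&lt;p&gt;The last two rules are easy to check client-side before calling the API; uniqueness can only be verified server-side. A minimal sketch:&lt;/p&gt;

```python
def is_valid_log_stream_name(name):
    """Check a candidate log stream name against the rules above:
    1 to 512 characters, with ':' and '*' disallowed."""
    return len(name) in range(1, 513) and not any(ch in name for ch in ":*")
```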

&lt;h3&gt;
  
  
  Request Syntax
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"logStreamName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Request Parameters
&lt;/h3&gt;

&lt;p&gt;The request accepts the following data in JSON format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;logStreamName&lt;/code&gt; (required): The name of the log stream. Minimum length of 1. Maximum length of 512. Value must match pattern: &lt;code&gt;[^:*]*&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Success Response (HTTP Status Code: 200)
&lt;/h3&gt;

&lt;p&gt;On success, the API responds with an empty JSON document.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bad Request Response (HTTP Status Code: 400)
&lt;/h3&gt;

&lt;p&gt;On a &lt;em&gt;Bad Request&lt;/em&gt; response, the body contains a message describing why the request was rejected. Additional details can be found in the API logs when enabled.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; request body is missing required fields&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Invalid request body"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; request validation error&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1 validation error detected: Value &lt;/span&gt;&lt;span class="se"&gt;\'\'&lt;/span&gt;&lt;span class="s2"&gt; at &lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s2"&gt;logStreamName&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s2"&gt; failed to satisfy constraint: Member must have length greater than or equal to 1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Internal Error Response (HTTP Status Code: 500)
&lt;/h3&gt;

&lt;p&gt;On an &lt;em&gt;Internal Error&lt;/em&gt; response, the body contains a generic message. The actual reason can be found in the API logs when enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Unexpected response from service."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  PUT:/.app/logs - Put Log Messages
&lt;/h2&gt;

&lt;p&gt;The &lt;code&gt;PUT:/.app/logs&lt;/code&gt; endpoint uploads a batch of log messages to the specified log stream.&lt;/p&gt;

&lt;p&gt;The request must include the sequence token obtained from the response of the previous call, unless it is the first request to a newly created log stream. Using the same &lt;code&gt;sequenceToken&lt;/code&gt; twice within a narrow time period may cause both calls to succeed, or one may be rejected.&lt;/p&gt;

&lt;p&gt;The batch of events must satisfy the following constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The maximum batch size is 1,048,576 bytes. This size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.&lt;/li&gt;
&lt;li&gt;None of the log events in the batch can be more than 2 hours in the future.&lt;/li&gt;
&lt;li&gt;None of the log events in the batch can be older than 14 days or older than the retention period of the log group.&lt;/li&gt;
&lt;li&gt;The log events in the batch must be in chronological order by their timestamp. The timestamp is the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC.&lt;/li&gt;
&lt;li&gt;A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.&lt;/li&gt;
&lt;li&gt;The maximum number of log events in a batch is 10,000.&lt;/li&gt;
&lt;li&gt;There is a quota of 5 requests per second per log stream. Additional requests are throttled. This quota cannot be changed.&lt;/li&gt;
&lt;/ul&gt;
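&lt;p&gt;The batch-size rule (UTF-8 bytes of every message plus 26 bytes of overhead per event) and the ordering and 24-hour-span rules can all be checked client-side before uploading. A sketch, assuming events are represented as objects with &lt;code&gt;message&lt;/code&gt; and &lt;code&gt;timestamp&lt;/code&gt; fields as described above:&lt;/p&gt;

```python
MAX_BATCH_BYTES = 1_048_576   # quota per batch
MAX_BATCH_EVENTS = 10_000
PER_EVENT_OVERHEAD = 26       # bytes CloudWatch adds per log event
MS_PER_24_HOURS = 24 * 60 * 60 * 1000

def batch_size(events):
    """Size of a batch as CloudWatch counts it: UTF-8 length of each
    message plus a fixed 26-byte overhead per event."""
    return sum(len(e["message"].encode("utf-8")) + PER_EVENT_OVERHEAD for e in events)

def batch_is_valid(events):
    """Check the size, count, ordering, and 24-hour-span constraints."""
    if not events or len(events) > MAX_BATCH_EVENTS:
        return False
    if batch_size(events) > MAX_BATCH_BYTES:
        return False
    timestamps = [e["timestamp"] for e in events]
    in_order = all(later >= earlier for earlier, later in zip(timestamps, timestamps[1:]))
    within_24_hours = MS_PER_24_HOURS >= timestamps[-1] - timestamps[0]
    return in_order and within_24_hours
```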

&lt;h3&gt;
  
  
  Request Syntax
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"logEvents"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
         &lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
         &lt;/span&gt;&lt;span class="nl"&gt;"timestamp"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;number&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"logStreamName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"sequenceToken"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"string"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Request Parameters
&lt;/h3&gt;

&lt;p&gt;The request accepts the following data in JSON format.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;logEvents&lt;/code&gt; (required): The log events. Minimum number of 1 item. Maximum number of 10,000 items.

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;message&lt;/code&gt; (required): The raw event message. Minimum length of 1.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;timestamp&lt;/code&gt; (required): The time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. Minimum value of 0.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;logStreamName&lt;/code&gt; (required): The name of the log stream. Minimum length of 1. Maximum length of 512. Value must match pattern: &lt;code&gt;[^:*]*&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sequenceToken&lt;/code&gt; (optional): The sequence token obtained from the response of the previous call. An upload to a newly created log stream does not require a sequence token. Using the same &lt;code&gt;sequenceToken&lt;/code&gt; twice within a narrow time period may cause both calls to succeed, or one may be rejected.&lt;/li&gt;
&lt;/ul&gt;
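&lt;p&gt;For the &lt;code&gt;timestamp&lt;/code&gt; field, a sketch of producing milliseconds since the Unix epoch, using Python's &lt;code&gt;time&lt;/code&gt; module as a stand-in for the frontend's clock API:&lt;/p&gt;

```python
import time

def now_milliseconds():
    """Current time as integer milliseconds since Jan 1, 1970 00:00:00
    UTC, the format the `timestamp` field expects."""
    return int(time.time() * 1000)
```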

&lt;h3&gt;
  
  
  Success Response (HTTP Status Code: 200)
&lt;/h3&gt;

&lt;p&gt;On success, the API responds with a JSON document containing the sequence token for the next request.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nextSequenceToken"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"49608818592289528730168753288679022865213175397425034930"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bad Request Response (HTTP Status Code: 400)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; request body is missing required fields&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Invalid request body"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; request validation error&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1 validation error detected: Value &lt;/span&gt;&lt;span class="se"&gt;\'\'&lt;/span&gt;&lt;span class="s2"&gt; at &lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s2"&gt;logStreamName&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s2"&gt; failed to satisfy constraint: Member must have length greater than or equal to 1"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Example:&lt;/strong&gt; The &lt;code&gt;sequenceToken&lt;/code&gt; field is either missing or reusing a previous token value&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"The given batch of log events has already been accepted. The next batch can be sent with sequenceToken: 49608818592289528730168753288679022865213175397425034930"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"nextSequenceToken"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"49608818592289528730168753288679022865213175397425034930"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
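&lt;p&gt;Conveniently, this error response echoes the expected token in &lt;code&gt;nextSequenceToken&lt;/code&gt;, so a client can recover from a duplicate or missing token by retrying once with the returned value. A sketch, where &lt;code&gt;put_batch&lt;/code&gt; is a placeholder for the HTTP call:&lt;/p&gt;

```python
def put_with_token_recovery(put_batch, body):
    """Send a batch; if the API rejects the sequence token but echoes
    the expected one in `nextSequenceToken`, retry once with it.

    `put_batch(body)` stands in for the HTTP call and returns a tuple
    of (status_code, parsed_json_response).
    """
    status, response = put_batch(body)
    if status == 400 and "nextSequenceToken" in response:
        # retry with the token the service told us it expected
        body = dict(body, sequenceToken=response["nextSequenceToken"])
        status, response = put_batch(body)
    return status, response
```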



&lt;h3&gt;
  
  
  Internal Error Response (HTTP Status Code: 500)
&lt;/h3&gt;

&lt;p&gt;On an &lt;em&gt;Internal Error&lt;/em&gt; response, the body contains a generic message. The actual reason can be found in the API logs when enabled.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"error"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Unexpected response from service."&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion - &lt;em&gt;To be concluded...&lt;/em&gt;
&lt;/h1&gt;

&lt;p&gt;In this post, we covered the communication protocol for frontend apps to log to CloudWatch directly. In the next post, we will conclude this series by implementing it in the frontend.&lt;/p&gt;

&lt;p&gt;Happy Hacking!&lt;/p&gt;

</description>
      <category>blazor</category>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>serverless</category>
    </item>
    <item>
      <title>CloudWatch Logging for Web Apps (Part 1)</title>
      <dc:creator>Steve Bjorg</dc:creator>
      <pubDate>Thu, 10 Sep 2020 19:11:27 +0000</pubDate>
      <link>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-1-5935</link>
      <guid>https://dev.to/lambdasharp/cloudwatch-logging-for-web-apps-part-1-5935</guid>
      <description>&lt;p&gt;In this post, I'm describing how to replicate the &lt;a href="https://lambdasharp.net/"&gt;LambdaSharp&lt;/a&gt; app capability to log to &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html"&gt;CloudWatch Logs&lt;/a&gt; using an &lt;a href="https://aws.amazon.com/api-gateway/"&gt;Amazon API Gateway&lt;/a&gt; REST API.&lt;/p&gt;

&lt;p&gt;Observability is a critical building block for developers. Therefore, it is an integral part of the LambdaSharp developer experience. For reference, this is how a &lt;a href="https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor"&gt;Blazor WebAssembly&lt;/a&gt; app is created and automatically wired for CloudWatch logging in LambdaSharp.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Module&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Sample.BlazorWebAssembly&lt;/span&gt;
&lt;span class="na"&gt;Items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;App&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyBlazorApp&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Yes, that is really it!&lt;/em&gt; Nothing additional is needed, but there are plenty of &lt;a href="https://lambdasharp.net/syntax/Module-App.html"&gt;additional capabilities&lt;/a&gt;. However, non-LambdaSharp developers may want to achieve the same capability for their apps using their preferred framework. That is the purpose of this post. It shows how to build the CloudWatch logging capability for any frontend app using any framework.&lt;/p&gt;

&lt;h1&gt;
  
  
  Overview
&lt;/h1&gt;

&lt;p&gt;This implementation does not use any Lambda functions. Instead, we enable logging to CloudWatch by directly integrating the API Gateway REST API with the CloudWatch Logs API using &lt;a href="https://velocity.apache.org/"&gt;Apache Velocity&lt;/a&gt; templates. This design means there is only minimal code involved, no Lambda cold-start latencies, and no Lambda invocation costs.&lt;/p&gt;

&lt;p&gt;The implementation is described in terms of CloudFormation resources using YAML notation, but the same outcome can be achieved by using the AWS Console instead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Logging REST API
&lt;/h2&gt;

&lt;p&gt;First, we need to create a new API Gateway resource. We define the top-level &lt;code&gt;.app&lt;/code&gt; resource to anchor our API. In LambdaSharp, this top-level resource name is configurable via CloudFormation parameters, which are omitted here.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApi&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::RestApi&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS::StackName}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;App&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API"&lt;/span&gt;

&lt;span class="na"&gt;RestApiAppResource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Resource&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ParentId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;RestApi.RootResourceId&lt;/span&gt;
    &lt;span class="na"&gt;PathPart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;.app&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  CloudWatch Log Group
&lt;/h2&gt;

&lt;p&gt;The CloudWatch Log Group should be created explicitly for each app to make it easy to distinguish log messages across apps. In addition, a log retention policy should be set to limit the amount of storage the log group uses and avoid being billed for it indefinitely.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;LogGroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Logs::LogGroup&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;RetentionInDays&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;90&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  API Gateway IAM Role
&lt;/h2&gt;

&lt;p&gt;API Gateway needs permission to create log streams in the log group and write to them. This is achieved by the following IAM role definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiRole&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::IAM::Role&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;AssumeRolePolicyDocument&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2012-10-17&lt;/span&gt;
      &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Sid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApiGatewayPrincipal&lt;/span&gt;
          &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
          &lt;span class="na"&gt;Principal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apigateway.amazonaws.com&lt;/span&gt;
          &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sts:AssumeRole&lt;/span&gt;
    &lt;span class="na"&gt;Policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;PolicyName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ApiLogsPolicy&lt;/span&gt;
        &lt;span class="na"&gt;PolicyDocument&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;Version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;2012-10-17&lt;/span&gt;
          &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Sid&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LogGroupPermission&lt;/span&gt;
              &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
              &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logs:CreateLogStream&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logs:PutLogEvents&lt;/span&gt;
              &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:${LogGroup}"&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:${LogGroup}:log-stream:*"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  API Validation
&lt;/h2&gt;

&lt;p&gt;A neat feature of the API Gateway REST API is that it can validate requests against a JSON schema model. This capability prevents unnecessary invocations of the backend when the incoming payload is invalid. Validation is enabled by associating each API Gateway method with the following validator declaration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiValidator&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::RequestValidator&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ValidateRequestBody&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;ValidateRequestParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  REST API
&lt;/h2&gt;

&lt;p&gt;This next section is a bit heavy because of how API Gateway resources, methods, and integrations are declared.&lt;/p&gt;

&lt;p&gt;First, we create the &lt;code&gt;logs&lt;/code&gt; resource associated with the API methods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiAppLogsResource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Resource&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ParentId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppResource&lt;/span&gt;
    &lt;span class="na"&gt;PathPart&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;logs"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we need to create the OPTIONS method to handle CORS requests. Note this implementation uses &lt;code&gt;Allow-Origin: '*'&lt;/code&gt;, which should be replaced with the actual host scheme and name from which the application is served. LambdaSharp uses CloudFormation parameters to make it configurable, but these were omitted for brevity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiAppLogsResourceOPTIONS&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Method&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;AuthorizationType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NONE&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ResourceId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResource&lt;/span&gt;
    &lt;span class="na"&gt;HttpMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;OPTIONS&lt;/span&gt;
    &lt;span class="na"&gt;Integration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;IntegrationResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;204&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Methods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'OPTIONS,POST,PUT'"&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Max-Age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'600'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;'&lt;/span&gt;
      &lt;span class="na"&gt;PassthroughBehavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WHEN_NO_MATCH&lt;/span&gt;
      &lt;span class="na"&gt;RequestTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"statusCode":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;200}'&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MOCK&lt;/span&gt;
    &lt;span class="na"&gt;MethodResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;204&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Empty'&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Headers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Methods&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Max-Age&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the browser is authorized via the OPTIONS request, we need to provide two additional endpoints: one for creating a new log stream using POST and another for writing to a log stream using PUT. In addition, we define a JSON schema model for each endpoint to validate requests before they are executed.&lt;/p&gt;

&lt;p&gt;Note that the app is responsible for creating a new log stream. For single-page apps (SPAs), a new log stream should be created each time the app loads. This is also the behavior of Blazor WebAssembly apps built with LambdaSharp.&lt;/p&gt;
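&lt;p&gt;As a sketch of what that looks like on the client, the app can mint a unique stream name on every load. The naming scheme below (an ISO timestamp plus a random suffix) is an assumption, not something mandated by the template; CloudWatch Logs only requires the name to be unique within the log group and free of &lt;code&gt;:&lt;/code&gt; and &lt;code&gt;*&lt;/code&gt; characters.&lt;/p&gt;

```typescript
// Sketch: generate a unique log stream name at app startup.
// The "<timestamp>-<random>" scheme is an assumption; CloudWatch Logs only
// requires uniqueness within the log group and forbids ':' and '*'.
function newLogStreamName(now: Date = new Date()): string {
  const stamp = now.toISOString().replace(/[:.]/g, "-"); // ':' is not a valid stream name character
  const suffix = Math.random().toString(36).slice(2, 10); // random suffix to avoid collisions
  return `${stamp}-${suffix}`;
}
```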

&lt;h3&gt;
  
  
  Create LogStream - POST:/.app/logs
&lt;/h3&gt;

&lt;p&gt;The POST method creates a new log stream in the associated log group. Some of the response handling relates to how errors are returned to the calling application. The emphasis is on providing useful feedback without revealing too many internal details.&lt;/p&gt;

&lt;p&gt;Similar to the OPTIONS method, this configuration uses &lt;code&gt;Allow-Origin: '*'&lt;/code&gt;, which should be replaced with the actual host scheme and name from which the application is served. LambdaSharp uses CloudFormation parameters to make this configurable, but they were omitted for brevity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiAppLogsResourcePOST&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Method&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;OperationName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CreateLogStream&lt;/span&gt;
    &lt;span class="na"&gt;ApiKeyRequired&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ResourceId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResource&lt;/span&gt;
    &lt;span class="na"&gt;AuthorizationType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NONE&lt;/span&gt;
    &lt;span class="na"&gt;HttpMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POST&lt;/span&gt;
    &lt;span class="na"&gt;RequestModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePOSTRequestModel&lt;/span&gt;
    &lt;span class="na"&gt;RequestValidatorId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiValidator&lt;/span&gt;
    &lt;span class="na"&gt;Integration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS&lt;/span&gt;
      &lt;span class="na"&gt;IntegrationHttpMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POST&lt;/span&gt;
      &lt;span class="na"&gt;Uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:${AWS::Partition}:apigateway:${AWS::Region}:logs:action/CreateLogStream"&lt;/span&gt;
      &lt;span class="na"&gt;Credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;RestApiRole.Arn&lt;/span&gt;
      &lt;span class="na"&gt;PassthroughBehavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WHEN_NO_TEMPLATES&lt;/span&gt;
      &lt;span class="na"&gt;RequestParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;integration.request.header.Content-Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'application/x-amz-json-1.1'"&lt;/span&gt;
        &lt;span class="na"&gt;integration.request.header.X-Amz-Target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'Logs_20140328.CreateLogStream'"&lt;/span&gt;
      &lt;span class="na"&gt;RequestTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
          &lt;span class="s"&gt;#set($body = $input.path('$'))&lt;/span&gt;
          &lt;span class="s"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"logGroupName": "${LogGroup}",&lt;/span&gt;
            &lt;span class="s"&gt;"logStreamName": "$body.logStreamName"&lt;/span&gt;
          &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;IntegrationResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;SelectionPattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200"&lt;/span&gt;
          &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;{ }&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;SelectionPattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;400"&lt;/span&gt;
          &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;#set($body = $input.path('$'))&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
              &lt;span class="s"&gt;#if($body.message.isEmpty())&lt;/span&gt;
                &lt;span class="s"&gt;"error": "Unknown error"&lt;/span&gt;
              &lt;span class="s"&gt;#else&lt;/span&gt;
                &lt;span class="s"&gt;"error": "$util.escapeJavaScript($body.message).replaceAll("\\'","'")"&lt;/span&gt;
              &lt;span class="s"&gt;#end&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
                &lt;span class="s"&gt;"error": "Unexpected response from service."&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;

    &lt;span class="na"&gt;MethodResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;RestApiAppLogsResourcePOSTRequestModel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Model&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CreateLogStream&lt;/span&gt;
    &lt;span class="na"&gt;ContentType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/json&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;Schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;$schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://json-schema.org/draft-04/schema#&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;logStreamName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logStreamName&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
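&lt;p&gt;From the browser, the POST method above can be exercised with a plain JSON request. The following TypeScript sketch only assembles the request; the &lt;code&gt;apiKey&lt;/code&gt; value and the base URL of the deployed stage are assumptions supplied by app configuration, not part of this template.&lt;/p&gt;

```typescript
// Sketch: assemble the CreateLogStream request for POST /.app/logs.
// The X-Api-Key header is needed because ApiKeyRequired is true; the
// body must satisfy the request model (logStreamName is required).
interface LogApiRequest {
  path: string;
  method: string;
  headers: Record<string, string>;
  body: string;
}

function buildCreateLogStreamRequest(apiKey: string, logStreamName: string): LogApiRequest {
  return {
    path: "/.app/logs",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Api-Key": apiKey, // enforced by ApiKeyRequired: true
    },
    // validated against RestApiAppLogsResourcePOSTRequestModel
    body: JSON.stringify({ logStreamName }),
  };
}
```

&lt;p&gt;A failed call comes back with a JSON body containing a single &lt;code&gt;error&lt;/code&gt; field, as produced by the integration response templates above.&lt;/p&gt;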



&lt;h3&gt;
  
  
  Append to Log Stream - PUT:/.app/logs
&lt;/h3&gt;

&lt;p&gt;Similar to the POST method, the PUT method validates incoming requests and limits what internal details are exposed when errors occur.&lt;/p&gt;

&lt;p&gt;Similar to the OPTIONS method, this configuration uses &lt;code&gt;Allow-Origin: '*'&lt;/code&gt;, which should be replaced with the actual host scheme and name from which the application is served. LambdaSharp uses CloudFormation parameters to make this configurable, but they were omitted for brevity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiAppLogsResourcePUT&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Method&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;OperationName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PutLogEvents&lt;/span&gt;
    &lt;span class="na"&gt;ApiKeyRequired&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;ResourceId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResource&lt;/span&gt;
    &lt;span class="na"&gt;AuthorizationType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;NONE&lt;/span&gt;
    &lt;span class="na"&gt;HttpMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PUT&lt;/span&gt;
    &lt;span class="na"&gt;RequestModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePUTRequestModel&lt;/span&gt;
    &lt;span class="na"&gt;RequestValidatorId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiValidator&lt;/span&gt;
    &lt;span class="na"&gt;Integration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS&lt;/span&gt;
      &lt;span class="na"&gt;IntegrationHttpMethod&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;POST&lt;/span&gt;
      &lt;span class="na"&gt;Uri&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:${AWS::Partition}:apigateway:${AWS::Region}:logs:action/PutLogEvents"&lt;/span&gt;
      &lt;span class="na"&gt;Credentials&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt;  &lt;span class="s"&gt;RestApiRole.Arn&lt;/span&gt;
      &lt;span class="na"&gt;PassthroughBehavior&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;WHEN_NO_TEMPLATES&lt;/span&gt;
      &lt;span class="na"&gt;RequestParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;integration.request.header.Content-Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'application/x-amz-json-1.1'"&lt;/span&gt;
        &lt;span class="na"&gt;integration.request.header.X-Amz-Target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'Logs_20140328.PutLogEvents'"&lt;/span&gt;
        &lt;span class="na"&gt;integration.request.header.X-Amzn-Logs-Format&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'json/emf'"&lt;/span&gt;
      &lt;span class="na"&gt;RequestTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
          &lt;span class="s"&gt;#set($body = $input.path('$'))&lt;/span&gt;
          &lt;span class="s"&gt;{&lt;/span&gt;
            &lt;span class="s"&gt;"logEvents": [&lt;/span&gt;
          &lt;span class="s"&gt;#foreach($logEvent in $body.logEvents)&lt;/span&gt;
                &lt;span class="s"&gt;{&lt;/span&gt;
                  &lt;span class="s"&gt;"message": "$util.escapeJavaScript($logEvent.message).replaceAll("\\'","'")",&lt;/span&gt;
                  &lt;span class="s"&gt;"timestamp": $logEvent.timestamp&lt;/span&gt;
                &lt;span class="s"&gt;}#if($foreach.hasNext),#end&lt;/span&gt;
          &lt;span class="s"&gt;#end&lt;/span&gt;
            &lt;span class="s"&gt;],&lt;/span&gt;
            &lt;span class="s"&gt;"logGroupName": "${LogGroup}",&lt;/span&gt;
            &lt;span class="s"&gt;"logStreamName": "$body.logStreamName",&lt;/span&gt;
            &lt;span class="s"&gt;"sequenceToken": #if($body.sequenceToken.isEmpty()) null#else "$body.sequenceToken"#end&lt;/span&gt;
          &lt;span class="s"&gt;}&lt;/span&gt;
      &lt;span class="na"&gt;IntegrationResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;SelectionPattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;200"&lt;/span&gt;
          &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
                &lt;span class="s"&gt;"nextSequenceToken": "$input.path('$.nextSequenceToken')"&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;SelectionPattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;400"&lt;/span&gt;
          &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;#set($body = $input.path('$'))&lt;/span&gt;
              &lt;span class="s"&gt;#if($body.expectedSequenceToken.isEmpty())&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
              &lt;span class="s"&gt;#if($body.message.isEmpty())&lt;/span&gt;
                &lt;span class="s"&gt;"error": "Unknown error"&lt;/span&gt;
              &lt;span class="s"&gt;#else&lt;/span&gt;
                &lt;span class="s"&gt;"error": "$util.escapeJavaScript($body.message).replaceAll("\\'","'")"&lt;/span&gt;
              &lt;span class="s"&gt;#end&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;
              &lt;span class="s"&gt;#else&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
              &lt;span class="s"&gt;#if($body.message.isEmpty())&lt;/span&gt;
                &lt;span class="s"&gt;"error": "unknown error",&lt;/span&gt;
              &lt;span class="s"&gt;#else&lt;/span&gt;
                &lt;span class="s"&gt;"error": "$util.escapeJavaScript($body.message).replaceAll("\\'","'")",&lt;/span&gt;
              &lt;span class="s"&gt;#end&lt;/span&gt;
                &lt;span class="s"&gt;"nextSequenceToken": "$body.expectedSequenceToken"&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;
              &lt;span class="s"&gt;#end&lt;/span&gt;

        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
          &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;'*'"&lt;/span&gt;
          &lt;span class="na"&gt;ResponseTemplates&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;application/x-amz-json-1.1&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|-&lt;/span&gt;
              &lt;span class="s"&gt;{&lt;/span&gt;
                &lt;span class="s"&gt;"error": "Unexpected response from service."&lt;/span&gt;
              &lt;span class="s"&gt;}&lt;/span&gt;

    &lt;span class="na"&gt;MethodResponses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;400&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;StatusCode&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;500&lt;/span&gt;
        &lt;span class="na"&gt;ResponseModels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;application/json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Empty&lt;/span&gt;
        &lt;span class="na"&gt;ResponseParameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;method.response.header.Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;false&lt;/span&gt;

&lt;span class="na"&gt;RestApiAppLogsResourcePUTRequestModel&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Model&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;PutLogEvents&lt;/span&gt;
    &lt;span class="na"&gt;ContentType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/json&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;Schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;$schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http://json-schema.org/draft-04/schema#&lt;/span&gt;
      &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
      &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;logEvents&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;array&lt;/span&gt;
          &lt;span class="na"&gt;items&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
              &lt;span class="na"&gt;properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="na"&gt;message&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
                &lt;span class="na"&gt;timestamp&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;integer&lt;/span&gt;
              &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;message&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;timestamp&lt;/span&gt;
        &lt;span class="na"&gt;logStreamName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
        &lt;span class="na"&gt;sequenceToken&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;string&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;null"&lt;/span&gt;
      &lt;span class="na"&gt;required&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logEvents&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;logStreamName&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
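&lt;p&gt;The sequence-token bookkeeping implied by the PUT method can be sketched on the client as follows. The field names mirror the request model above; the helper names are hypothetical. Note that the 400 mapping template copies &lt;code&gt;expectedSequenceToken&lt;/code&gt; into &lt;code&gt;nextSequenceToken&lt;/code&gt;, so both success and sequence-mismatch responses can supply the token for the next call.&lt;/p&gt;

```typescript
// Sketch: build the body for PUT /.app/logs. Field names mirror
// RestApiAppLogsResourcePUTRequestModel above. The sequenceToken is
// null for the first write to a stream and is then replaced by the
// nextSequenceToken returned from each call.
interface LogEvent {
  message: string;
  timestamp: number; // milliseconds since the Unix epoch
}

function buildPutLogEventsBody(
  logStreamName: string,
  logEvents: LogEvent[],
  sequenceToken: string | null
): string {
  return JSON.stringify({ logEvents, logStreamName, sequenceToken });
}

// The 400 mapping above surfaces expectedSequenceToken as
// nextSequenceToken, so a sequence mismatch can be retried with the
// token taken from the error response as well.
function nextToken(responseBody: { nextSequenceToken?: string }): string | null {
  return responseBody.nextSequenceToken ?? null;
}
```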



&lt;h1&gt;
  
  
  API Key &amp;amp; Usage Plan
&lt;/h1&gt;

&lt;p&gt;The following resources declare an API key and usage plan. By default, the API key is set to the Base64 encoding of the CloudFormation stack GUID. It is recommended to set the API key explicitly, since the frontend app needs access to it to use the logging REST API. The API key can be further obfuscated by combining it with a value internal to the app. In LambdaSharp, the API key is generated by combining the CloudFormation stack GUID with the compiled .NET Core assembly's identifier GUID.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::ApiKey&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS::StackName}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;App&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Key"&lt;/span&gt;
    &lt;span class="na"&gt;Enabled&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="no"&gt;true&lt;/span&gt;
    &lt;span class="na"&gt;StageKeys&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
        &lt;span class="na"&gt;StageName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiStage&lt;/span&gt;
    &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="s"&gt;Fn::Base64: !Select [ 2, !Split [ "/", !Ref AWS::StackId ]]&lt;/span&gt;

&lt;span class="na"&gt;RestApiUsagePlan&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::UsagePlan&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;ApiStages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;ApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
        &lt;span class="na"&gt;Stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiStage&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS::StackName}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;App&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Usage&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Plan"&lt;/span&gt;
    &lt;span class="na"&gt;Throttle&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;BurstLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;200&lt;/span&gt;
      &lt;span class="na"&gt;RateLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;

&lt;span class="na"&gt;RestApiUsagePlanKey&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::UsagePlanKey&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;KeyId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiKey&lt;/span&gt;
    &lt;span class="na"&gt;KeyType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;API_KEY&lt;/span&gt;
    &lt;span class="na"&gt;UsagePlanId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiUsagePlan&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
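&lt;p&gt;To make the default key value concrete, here is a small sketch (using a made-up stack id) of what the &lt;code&gt;Fn::Base64&lt;/code&gt;/&lt;code&gt;!Select&lt;/code&gt;/&lt;code&gt;!Split&lt;/code&gt; expression above computes:&lt;/p&gt;

```python
import base64

# Hypothetical stack id; CloudFormation stack ids have the form
# arn:aws:cloudformation:<region>:<account>:stack/<stack-name>/<guid>
stack_id = "arn:aws:cloudformation:us-west-2:123456789012:stack/MyApp/7cf4dd40-0000-1111-2222-333344445555"

# !Split [ "/", !Ref AWS::StackId ] followed by !Select [ 2, ... ]
# picks the GUID segment of the stack id
guid = stack_id.split("/")[2]

# Fn::Base64 then encodes the GUID, producing the default API key value
api_key = base64.b64encode(guid.encode("utf-8")).decode("ascii")
```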



&lt;h1&gt;
  
  
  API Deployment
&lt;/h1&gt;

&lt;p&gt;Finally, we define a stage called &lt;code&gt;LATEST&lt;/code&gt;, which points to the deployment resource. Note that CloudFormation runs the deployment only once. When the REST API changes, subsequent CloudFormation stack updates must be deployed manually. LambdaSharp uses the &lt;a href="https://lambdasharp.net/articles/Finalizer.html"&gt;Finalizer&lt;/a&gt; construct to ensure configuration changes are always applied automatically.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;RestApiStage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Stage&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;DeploymentId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApiDeployment&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;App API LATEST Stage&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
    &lt;span class="na"&gt;StageName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LATEST&lt;/span&gt;

&lt;span class="na"&gt;RestApiDeployment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ApiGateway::Deployment&lt;/span&gt;
  &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;${AWS::StackName}&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;App&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;API"&lt;/span&gt;
    &lt;span class="na"&gt;RestApiId&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RestApi&lt;/span&gt;
  &lt;span class="na"&gt;DependsOn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResource&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePOST&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePOSTRequestModel&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePUT&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;RestApiAppLogsResourcePUTRequestModel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Conclusion - &lt;em&gt;To be continued...&lt;/em&gt;
&lt;/h1&gt;

&lt;p&gt;In this post, we created the resources required to enable frontend apps to log directly to CloudWatch. In the next post, we will cover the protocol for logging via this REST API.&lt;/p&gt;

&lt;p&gt;Happy Hacking!&lt;/p&gt;

</description>
      <category>blazor</category>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
