<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pavel Fadeev</title>
    <description>The latest articles on DEV Community by Pavel Fadeev (@bearmug).</description>
    <link>https://dev.to/bearmug</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F131950%2F278dc9ad-7b4c-48cb-9bf1-434ab6b741fc.jpeg</url>
      <title>DEV Community: Pavel Fadeev</title>
      <link>https://dev.to/bearmug</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/bearmug"/>
    <language>en</language>
    <item>
      <title>GraalVM: build regular Micronaut Kotlin/Java lambdas, run natively</title>
      <dc:creator>Pavel Fadeev</dc:creator>
      <pubDate>Tue, 26 Nov 2019 20:21:19 +0000</pubDate>
      <link>https://dev.to/bearmug/graalvm-build-regular-micronaut-kotlin-java-lambdas-run-natively-1g0</link>
      <guid>https://dev.to/bearmug/graalvm-build-regular-micronaut-kotlin-java-lambdas-run-natively-1g0</guid>
      <description>&lt;p&gt;Finally! It’s time to forget about those cold starts and scheduled warm-up calls. Simply deploy Java (or Kotlin) natively compiled code to AWS Lambda! Or maybe not yet? The post is an attempt to answer this question with an experiment like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;deploy natively compiled (by GraalVM &lt;a href="https://www.graalvm.org/docs/reference-manual/native-image/"&gt;native-image tool&lt;/a&gt;) Kotlin and Java lambdas to AWS cloud&lt;/li&gt;
&lt;li&gt;deploy a Node.js lambda nearby, with the same setup&lt;/li&gt;
&lt;li&gt;bridge incoming calls through AWS API Gateway&lt;/li&gt;
&lt;li&gt;run basic tests to measure cold start timings and see behaviour after warm-up&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Project setup&lt;/h2&gt;

&lt;p&gt;The project boilerplate has been generated with &lt;a href="https://micronaut.io/"&gt;Micronaut&lt;/a&gt; tooling and provisioned to AWS infrastructure using the &lt;a href="https://serverless.com/"&gt;Serverless&lt;/a&gt; framework. Load testing is done with &lt;a href="https://k6.io/"&gt;K6&lt;/a&gt;. Feel free to explore the related &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/doc/aws-lambda.md"&gt;step-by-step setup guide&lt;/a&gt; to deploy the same setup and play with the load tests.&lt;/p&gt;

&lt;h2&gt;Load scenarios&lt;/h2&gt;

&lt;h3&gt;#1 “cold” starts evaluation&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;each lambda is deployed 10 times, which guarantees an init run for the first call after every deployment&lt;/li&gt;
&lt;li&gt;each deployment is followed by a 10-second load test. The load is strictly sequential: every request is issued only after the previous one completes&lt;/li&gt;
&lt;li&gt;client-side statistics are captured from K6 log output; AWS-side figures are taken from &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html"&gt;CloudWatch Logs Insights&lt;/a&gt; with simple &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/doc/aws-lambda.md#load-test-main-statistics-query"&gt;queries&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
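&lt;p&gt;&lt;em&gt;The strictly sequential methodology above can be sketched roughly as follows. This is a minimal Python illustration with a hypothetical run_sequential_load helper, not the actual K6 script:&lt;/em&gt;&lt;/p&gt;

```python
# Minimal sketch of the strictly sequential load loop: each request starts
# only after the previous one completes, so latencies never overlap and the
# very first latency after a fresh deployment is the "cold" start.
import time

def run_sequential_load(request_fn, duration_s=10.0):
    """Call request_fn back-to-back for duration_s seconds; return latencies in ms."""
    latencies = []
    deadline = time.monotonic() + duration_s
    while deadline > time.monotonic():
        started = time.monotonic()
        request_fn()  # one request at a time, no concurrency
        latencies.append((time.monotonic() - started) * 1000.0)
    return latencies

# Stand-in for a real HTTP call: a stub that "responds" in ~10 ms.
samples = run_sequential_load(lambda: time.sleep(0.01), duration_s=0.1)
cold, warm = samples[0], samples[1:]
```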

&lt;h3&gt;#2 “warm” runs&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;each lambda is deployed once and called once to trigger the initial (cold) run&lt;/li&gt;
&lt;li&gt;the deployment is followed by 10 iterations of the same 10-second sequential load test&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;load test scenarios can be found here: &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/config/load-test/aws-deploy-cold-call.sh"&gt;cold-start run&lt;/a&gt;, &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/config/load-test/aws-deploy-warm-call.sh"&gt;warm run&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Measurements and their visualization&lt;/h2&gt;

&lt;h3&gt;Client-side measurements&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;“cold” start outliers are clearly visible, reaching &lt;strong&gt;~7.5&lt;/strong&gt; sec with no effort&lt;/li&gt;
&lt;li&gt;on the bright side, the other min/max/percentile figures are distributed quite evenly across the tested lambdas, with the JVM/native max numbers leaning slightly towards bigger values&lt;/li&gt;
&lt;li&gt;these slight fluctuations could also be caused by the network layer; the AWS-side measurements in the next section can be used as a cross-check.
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--ViBQbhC0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/ur9hyxw0xoihmdoer5xw.png" alt="client-side latency measurements data"&gt;
&lt;/li&gt;
&lt;/ul&gt;
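&lt;p&gt;&lt;em&gt;To make the outlier effect concrete, here is a minimal Python sketch with made-up latency numbers, showing how a single “cold” response dominates the max while leaving the median untouched:&lt;/em&gt;&lt;/p&gt;

```python
# Why one "cold" outlier dominates the max but barely moves the median:
# a made-up latency sample (ms), one ~7.5 s cold start among 99 otherwise
# fast ~120 ms responses. Numbers are illustrative only.
import statistics

warm = [120.0] * 99
samples = [7500.0] + warm

median_ms = statistics.median(samples)  # stays at 120.0
max_ms = max(samples)                   # jumps to 7500.0
mean_ms = statistics.mean(samples)      # inflated to 193.8
```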

&lt;h3&gt;AWS-side measurements&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Node.js min numbers of &lt;strong&gt;0.84/0.79 ms(!)&lt;/strong&gt; are invisible on this log-scale chart&lt;/li&gt;
&lt;li&gt;alas, JVM-based lambdas show “cold” start numbers of up to &lt;strong&gt;5 sec&lt;/strong&gt;, with &lt;strong&gt;~2.5 sec&lt;/strong&gt; of init time on top of that. Predictably sad&lt;/li&gt;
&lt;li&gt;native Kotlin/Java lambdas’ performance looks fairly acceptable: even the worst iteration stayed close to &lt;strong&gt;180 ms&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;it is worth mentioning that the enterprise GraalVM version offers a way to tweak performance with &lt;a href="https://www.graalvm.org/docs/reference-manual/native-image/#profile-guided-optimizations"&gt;profile-guided optimizations&lt;/a&gt;. At the moment GraalVM Enterprise Edition is &lt;a href="https://www.graalvm.org/docs/faq/"&gt;licensed&lt;/a&gt; for free testing, evaluation, or development of non-production applications only. Still, there is hope to see this tooling in the CE version as well…&lt;/li&gt;
&lt;li&gt;init timings for native images (&lt;strong&gt;~330 ms&lt;/strong&gt;) are much closer to the Node.js ones (~160 ms)
&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--idQwIrGY--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://thepracticaldev.s3.amazonaws.com/i/j6v6p4lhwthecoh6r2m3.png" alt="AWS-side latency measurements data (init time is out of scope)"&gt;
&lt;/li&gt;
&lt;/ul&gt;
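&lt;p&gt;&lt;em&gt;One practical consequence of these timings is billing. At the time, AWS Lambda rounded billed duration up to the nearest 100 ms; the following minimal Python sketch (an illustration, not the project’s actual billing query) makes that concrete:&lt;/em&gt;&lt;/p&gt;

```python
# Billed-duration math under the 100 ms billing granularity AWS Lambda used
# at the time: duration is rounded up to the nearest 100 ms increment.
import math

def billed_ms(duration_ms, granularity_ms=100):
    return math.ceil(duration_ms / granularity_ms) * granularity_ms

native_worst = billed_ms(180)   # the ~180 ms native worst case bills as 200 ms
jvm_cold = billed_ms(5000)      # a ~5 s JVM cold start bills the full 5000 ms
node_min = billed_ms(0.84)      # even the 0.84 ms Node.js minimum bills as 100 ms
```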

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;exact numbers for the &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/doc/aws-lambda.md#client-side-measurements"&gt;client-side&lt;/a&gt; and &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/doc/aws-lambda.md#aws-side-measurements"&gt;AWS-side&lt;/a&gt; runs, including init timings and billing calculations, can be found here&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Summary notes&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Java/Kotlin JVM lambdas still suffer from “cold” start issues. These &lt;strong&gt;~5–7&lt;/strong&gt; seconds are simply a fact of life today. Further &lt;a href="https://github.com/andthearchitect/aws-lambda-java-runtime"&gt;manipulations&lt;/a&gt; with artefact size and classpath might help with init timings, but…&lt;/li&gt;
&lt;li&gt;GraalVM native lambdas show a sustainable performance profile. Their init time (&lt;strong&gt;~330 ms&lt;/strong&gt;) is comparable to the Node.js reference lambda (&lt;strong&gt;~160 ms&lt;/strong&gt;) and way better than the JVM’s (&lt;strong&gt;~2.5 sec&lt;/strong&gt;)&lt;/li&gt;
&lt;li&gt;At the same time, the GraalVM/Micronaut/Serverless combo instantly benefits from the existing Java/Kotlin ecosystem, rich tooling and an active community&lt;/li&gt;
&lt;li&gt;Node.js timings are really good and very stable: the median is barely different from the 99th percentile&lt;/li&gt;
&lt;li&gt;in fact, the Java- and Kotlin-based native images have the same size, &lt;strong&gt;14.62 MB&lt;/strong&gt;. JVM image sizes differ a little: &lt;strong&gt;38.58 MB&lt;/strong&gt; (Java) and &lt;strong&gt;44.37 MB&lt;/strong&gt; (Kotlin). And the Node.js size is just &lt;strong&gt;297 bytes&lt;/strong&gt; :)&lt;/li&gt;
&lt;li&gt;Last but not least, this project intentionally diverges from the autogenerated Micronaut template. This short journey has proven two things. #1: a lot of issues may arise during the &lt;a href="https://guides.gradle.org/migrating-build-logic-from-groovy-to-kotlin/"&gt;migration to *.gradle.kts&lt;/a&gt;, the restructuring of a multi-module Gradle project and &lt;a href="https://www.graalvm.org/docs/reference-manual/native-image/"&gt;native-image&lt;/a&gt; configuration tweaks. #2: these issues have either already been solved by others or are solvable with reasonable effort.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;You may run the described deployments and tests using the project’s &lt;a href="https://github.com/bearmug/serverless-playground/blob/master/doc/aws-lambda.md"&gt;GitHub repo&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>kotlin</category>
      <category>graalvm</category>
      <category>aws</category>
      <category>lambda</category>
    </item>
  </channel>
</rss>
