<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: María Arias de Reyna</title>
    <description>The latest articles on DEV Community by María Arias de Reyna (@delawen).</description>
    <link>https://dev.to/delawen</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F472605%2F82bc2173-0293-4dd3-86ef-ad311082717f.jpeg</url>
      <title>DEV Community: María Arias de Reyna</title>
      <link>https://dev.to/delawen</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/delawen"/>
    <language>en</language>
    <item>
      <title>How is Leyden improving Java Performance? Part 3 of 3</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Tue, 17 Mar 2026 11:20:00 +0000</pubDate>
      <link>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-3-of-3-2mo2</link>
      <guid>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-3-of-3-2mo2</guid>
      <description>&lt;p&gt;In part 1 of this series of 3 blog posts we introduced the specific performance challenges OpenJDK faces lowering application ‘startup’, ‘warmup’ and ‘initial footprint’ costs and provided an overview of what Leyden is doing to address those challenges. &lt;/p&gt;

&lt;p&gt;Part 2 described how to use the new capabilities offered by Leyden and presented test results which show that very significant progress has already been made and is set to continue. &lt;/p&gt;

&lt;p&gt;Part 3 provides a more detailed account of how Leyden’s proposed solution operates and presents a first look at tooling that allows you to assess the benefits that result and tune your application to make the most of what Leyden offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is inside the Ahead of Time Cache?
&lt;/h2&gt;

&lt;p&gt;Ideally, an AOT cache would simply include everything needed to allow a production run to skip straight through to its warmed-up state. However, in practice training runs don’t always cover everything that can happen at runtime, so the assets contained in any generated AOT cache will be more or less complete.&lt;/p&gt;

&lt;p&gt;In order to have some idea of how effective a training run has been, it’s helpful to be able to look at a cache and see what is in it. Full details of the tooling that allows you to do that are presented later in this post. To prepare for that, we first need an overview of the JVM assets that end up in the cache and how the JVM uses them. We will follow up with some examples showing how effectively this improves startup and warmup.&lt;/p&gt;

&lt;p&gt;So, let’s take a deeper look at what exactly is inside the AOT Cache. There are several different ways of classifying the contents: &lt;/p&gt;

&lt;p&gt;The most straightforward way to classify AOT cache assets is to distinguish between Static and Dynamic data.&lt;/p&gt;

&lt;p&gt;Static assets are data that are available in or directly derived from bytecode, data that exist, even if only implicitly, at build time.&lt;/p&gt;

&lt;p&gt;Dynamic assets are data that get generated, or are collected, at runtime as a side-effect of execution. Some of them record information that can be used to trigger compilation and drive feedback-driven optimizations, including speculative optimization, beyond what an ahead-of-time compiler would be able to do. They can also include the compiled code that is generated as a result of that compilation. &lt;/p&gt;

&lt;p&gt;Finally, they include training data, created as a training run progresses to track what the JVM has done and why. Training data identify what JVM assets need to be stored into the cache when it is created. They are also installed in the cache, indexing the other assets and helping identify how to use them in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d94n4orpjxb2gyr6tgy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9d94n4orpjxb2gyr6tgy.png" alt="Types of assets on the AOT Cache" width="749" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can also distinguish two types of data depending on their purpose: &lt;/p&gt;

&lt;p&gt;On one hand, there is the JVM data — metadata, heap data and code. This is a network of C++ objects that are used during normal JVM running to define and regulate Java execution. These objects must always exist, even when running without a cache, in order for the JVM to be able to run an app. This object network needs to be dumped to the archive on disk in a format that allows it to be quickly and correctly restored to the relevant memory areas of the production JVM in a valid (C++ Object) format and layout that matches the JVM’s expectations.&lt;/p&gt;

&lt;p&gt;On the other hand, we also have Leyden’s own cache management data, i.e. training data, which exists specifically to support creation and consumption of the AOT cache. Training data are also saved and restored as C++ object data, but the format and layout of these objects is determined solely by the Leyden cache management code. Their sole purpose is to track and regulate which assets get written to the AOT cache after training completes and which assets can or should be restored in production.&lt;/p&gt;

&lt;p&gt;Let’s see in detail what each data type means.&lt;/p&gt;

&lt;h3&gt;
  
  
  JVM Metadata
&lt;/h3&gt;

&lt;p&gt;Metadata stored in the AOT cache is a superset of what was stored in a CDS archive. The subset which overlaps with CDS is the static metadata, which represents the structure and hierarchy of classes in the JDK and application code. Primarily, it avoids the cost of parsing bytecode, because it is stored in the same format as the JVM’s own internal metadata model (classes, methods, fields, inheritance relationships) and can be mapped directly into memory. Having this information in the cache reduces the time HotSpot spends decoding class files and building the dependency graph.&lt;/p&gt;

&lt;p&gt;While the application starts, the Java heap fills with objects and instances that will be used at runtime. Some of those heap objects can be cached too because they are quite predictable: Strings hardcoded in the source code, java.lang.Class instances, some content of class static fields, objects needed to run lambdas, the module graph, and so on. These are all assets that are created in memory in the same way on every run.&lt;/p&gt;

&lt;p&gt;The heap data cached at the moment is restricted to very specific cases, as it has to behave exactly the same on each and every run, but the range of data cached is expanding with each JDK version.&lt;/p&gt;
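&lt;p&gt;To make this concrete, here is a minimal, hypothetical Java sketch (not taken from any real application) of the kinds of heap objects that are built identically on every run and are therefore candidates for AOT heap caching:&lt;/p&gt;

```java
// Hypothetical illustration: all of these objects are created in memory the
// same way on every run, which is the property that makes heap data cacheable.
public class CacheableConstants {
    // A string literal hardcoded in the source: interned identically each run.
    static final String GREETING = "hello";
    // A java.lang.Class instance for a JDK class loaded at startup.
    static final Class TYPE = java.util.List.class;

    public static void main(String[] args) {
        System.out.println(GREETING);
        System.out.println(TYPE.getName());
    }
}
```

&lt;p&gt;Static final fields like these are the predictable case: their initialized form can be archived instead of being rebuilt at startup.&lt;/p&gt;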

&lt;h3&gt;
  
  
  JVM Profile and Linkage Data
&lt;/h3&gt;

&lt;p&gt;The cache also includes dynamic JVM metadata i.e. MethodCounter, MethodData and ConstantPoolCache objects. These objects are created and attached to the static metadata methods and classes and their content is updated as a side-effect of executing method code. &lt;/p&gt;

&lt;p&gt;MethodCounter objects track how often the method they are attached to has been called. They are primarily used to trigger compilation via the baseline (C1) or optimizing (C2) compiler. The interpreter increments a method’s call count up to a threshold before scheduling a C1 compile, possibly including code that gathers further profile information. Instrumentation code in (Tier 2 and 3) C1 compiled methods also updates the call count and, when a higher threshold is reached, either upgrades to (Tier 4) C2 compiled code or reverts to (Tier 1) C1 compiled code which includes no instrumentation.&lt;/p&gt;

&lt;p&gt;MethodData objects record detailed information about their associated method’s hot and cold paths, argument types and other details of how it executes, most notably any history of speculative deoptimization. Apart from the deoptimization case, which applies to both C1 and C2 code, MethodData objects only receive updates via instrumentation code in (Tier 2 and 3) C1 compiled methods.&lt;/p&gt;
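&lt;p&gt;As an illustrative sketch (hypothetical code, not HotSpot internals), the method below shows the kind of behaviour this profiling captures: a call counter climbing towards the compile thresholds, and a heavily biased branch whose taken/not-taken frequencies let the JIT lay out the hot path first:&lt;/p&gt;

```java
// Hypothetical example of a method whose call counts and branch profile
// would be recorded in its MethodCounter and MethodData objects.
public class ProfiledLoop {
    static long sum;

    static void hotMethod(int i) {
        // This branch is taken only once per 1000 calls; the recorded
        // profile tells the JIT that the else-branch is the hot path.
        if (i % 1000 == 0) {
            sum -= 1;      // cold path
        } else {
            sum += i;      // hot path
        }
    }

    static long runLoop() {
        sum = 0;
        // Each call bumps the method's invocation counter towards the
        // C1 and then C2 compilation thresholds.
        for (int i = 0; i != 1_000_000; i++) {
            hotMethod(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(runLoop());
    }
}
```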

&lt;p&gt;ConstantPoolCache objects are attached to a class and track the linkage of call and field access sites in any of the class’s methods. Prelinking avoids work at the first call or first field access, which is especially valuable when the call is an invokedynamic, i.e. the bytecode that implements a lambda invocation. &lt;/p&gt;

&lt;p&gt;Linking a lambda involves running Java ‘bootstrap’ code that identifies a private class that owns the bytecode for the lambda body and asks it to construct and return a MethodHandle that can be used to execute the target. If a lambda can be run during training then the target class and method can be pre-loaded, and the MethodHandle stored in the heap and linked from the ConstantPoolCache, avoiding the need to run the ‘bootstrap’ in production. If the lambda is executed repeatedly in production, the called bytecode may even be inlined into the compiled code for the caller. Effectively, executing a lambda in training removes all setup overheads in production, making lambdas as cheap to use as a direct method call.&lt;/p&gt;
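&lt;p&gt;As a minimal illustration (hypothetical code, not from the example app), the lambda below compiles to an invokedynamic call site. Its first execution runs the LambdaMetafactory bootstrap, which generates a class for the lambda body and links a MethodHandle; that is exactly the work a training run lets Leyden pre-resolve:&lt;/p&gt;

```java
import java.util.function.IntUnaryOperator;

// The lambda expression below compiles to an invokedynamic instruction.
// On first execution the JVM runs the bootstrap method, generates the class
// implementing IntUnaryOperator and links the call site; later executions
// reuse the linked MethodHandle. Caching the linked state during training
// removes the bootstrap cost from production runs.
public class LambdaLink {
    static final IntUnaryOperator SQUARE = x -> x * x;

    public static void main(String[] args) {
        System.out.println(SQUARE.applyAsInt(7));
    }
}
```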

&lt;h3&gt;
  
  
  JVM Code and Code Management Data
&lt;/h3&gt;

&lt;p&gt;AdapterHandlers are a set of utilities used by HotSpot to marshal method parameters when performing certain types of call. AdapterHandlers can be cached, avoiding the need to generate them on demand. They are identified by their AdapterFingerprint and indexed via a table of AdapterHandlerEntry objects.&lt;/p&gt;

&lt;p&gt;Alongside these handlers, various StubBlobs needed by the runtime are also cached. These blobs contain JITted code that implements one or more ‘stub’ routines. Stub routines include architecture- and OS-specific code used by the JVM to perform operations that are hard to write in a platform-agnostic way. Examples include flushing code regions after update by the JIT or call linker, unwinding the stack when an exception occurs, and replacing a compiled stack frame with one or more interpreter frames when execution of a deopt trap forces a bail-out. There are also many stubs that provide hand-crafted, high-performance implementations of math, crypto or memory copy methods that are used in place of Java implementations on some architectures, especially where hand-crafted code can use specialized hardware instructions to outperform the JIT compiler. Much of the stub and adapter code has to be generated before the JDK can fully start up. Storing it in the cache and reloading it in production provides a small but noticeable performance improvement.&lt;/p&gt;

&lt;p&gt;Leyden premain also includes CompiledMethods, i.e. pre-compiled Java methods, in the cache. This includes both C1 and C2 code (Tiers 1 to 4) and, in some cases, versions of the same method compiled at different tiers. Having compiled code immediately available, especially Tier 4 code, is an enormous boost to performance. Lower-tier code may be useful when the method only reached that tier during training, or as a fallback if we need to deoptimize and reprofile. Pre-compiled Java methods are an enhancement we expect to be added to the mainline JDK soon.&lt;/p&gt;

&lt;h3&gt;
  
  
  Leyden Training Data
&lt;/h3&gt;

&lt;p&gt;Training data is part of the Leyden-specific code. It tracks which methods have actually been loaded, executed, and used during the training run, and how they have been used. Normally all loaded classes have associated class training data, but these may be omitted if, say, the class is loaded by a custom (user-defined) loader, is modified by an agent, or fails to resolve because of linkage errors. There is also a usage threshold: only methods that have been executed above that threshold have associated method training data. Likewise, compiled method training data only exists for methods actually compiled during training. This both keeps the cache footprint smaller and removes less useful data, so processing the cache is faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do I Know Leyden Is Helping?
&lt;/h2&gt;

&lt;p&gt;Depending on how well you train your deployment you may see different improvements in time to reach application start (startup time) and time to reach peak performance (warmup time). Log output is one useful way to measure these two metrics but the details will depend on what monitoring capabilities are available in your test or production environment. However, simply measuring these two times (or even recording warmup profiles) doesn’t help with the problem of explaining why, for some given training regime, you get a specific improvement or perhaps, in some cases, no measurable improvement.&lt;/p&gt;

&lt;p&gt;For any given AOT cache (or set of alternative caches) it is very helpful to have some idea of what assets were included or excluded in the training set, which ones were written into the cache and what benefit they provide during a production run. In particular, it is useful to have both aggregate statistics and information on individual assets and their relationships. The Leyden project has provided a tool precisely to address these needs. Let's see a practical example of how to diagnose an AOT Cache.&lt;/p&gt;

&lt;p&gt;For the purpose of this article, we are going to use the following example application: &lt;a href="https://github.com/Delawen/bad-good-cache" rel="noopener noreferrer"&gt;https://github.com/Delawen/bad-good-cache&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;This is a web application that has a simple API and a basic html interface to use it.&lt;/p&gt;

&lt;p&gt;The first thing we need to do is to compile this application on the root folder:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$ mvn clean package&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm6pz4hlp6od2017a0s7.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftm6pz4hlp6od2017a0s7.gif" alt="mvn clean package" width="200" height="124"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Training the application
&lt;/h2&gt;

&lt;p&gt;Once we have the jar created, we use it to start a training run:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$ java -XX:AOTCacheOutput=target/app.aot -Xlog:aot+map=trace,aot+map+oops=trace:file=target/aot.map:none:filesize=0 -Xlog:class+load=info,aot+resolve*=trace,aot+codecache+exit=debug,aot*=warning:file=target/training.log:level,tags -jar target/quarkus-app/quarkus-run.jar&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The arguments we are going to use are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;-XX:AOTCacheOutput=target/app.aot&lt;/strong&gt; Which will create an AOT file called &lt;code&gt;app.aot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-Xlog:aot+map=trace,aot+map+oops=trace:file=target/aot.map:none:filesize=0&lt;/strong&gt; Which will create a map file that indexes and describes the previously created AOT file.
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-Xlog:class+load=info,aot+resolve*=trace,aot+codecache+exit=debug,aot*=warning:file=target/training.log:level,tags&lt;/strong&gt; Which will generate training log files with relevant information&lt;/li&gt;
&lt;/ul&gt;
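&lt;p&gt;If you prefer to separate the training and assembly phases instead of producing the cache in one step, the same result can be obtained with the explicit two-phase workflow from JEP 483. The following is a sketch: the flags are standard JDK options, while the paths simply follow this example application:&lt;/p&gt;

```shell
# Phase 1 (training): record an AOT configuration while exercising the app.
java -XX:AOTMode=record -XX:AOTConfiguration=target/app.aotconf \
     -jar target/quarkus-app/quarkus-run.jar

# Phase 2 (assembly): turn the recorded configuration into an AOT cache.
# The application itself is not executed during this phase.
java -XX:AOTMode=create -XX:AOTConfiguration=target/app.aotconf \
     -XX:AOTCache=target/app.aot \
     -jar target/quarkus-app/quarkus-run.jar
```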

&lt;p&gt;To help us train the application, we are going to use the &lt;code&gt;oha&lt;/code&gt; tool, which runs a series of requests that simulate a user using the application:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$ oha --urls-from-file src/main/resources/urls.txt -n 100&lt;/em&gt;&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwsfv1ms80rzxo5rrkxg.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwsfv1ms80rzxo5rrkxg.gif" alt="Training run" width="760" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have trained the application, let's stop it with &lt;em&gt;ctrl+c&lt;/em&gt;. It will take some time to stop while it builds the cache. It will do both the training and assembly steps at once.&lt;/p&gt;

&lt;p&gt;We should have created three types of files:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;target/app.aot&lt;/strong&gt; : The AOT cache itself
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;target/aot.map&lt;/strong&gt; : The map file
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;target/training.log&lt;/strong&gt; : The logs for the training run&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now that we have the AOT cache, we can start a production run, in which we will also save log files:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$ java -XX:AOTCache=target/app.aot -Xlog:class+load=info,aot+resolve*=trace,aot+codecache+exit=debug,aot*=warning:file=target/production.log:level,tags -jar target/quarkus-app/quarkus-run.jar&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The arguments we are going to use are the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;-XX:AOTCache=target/app.aot&lt;/strong&gt; Make use of the AOT file called &lt;code&gt;app.aot&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;-Xlog:class+load=info,aot+resolve*=trace,aot+codecache+exit=debug,aot*=warning:file=target/production.log:level,tags&lt;/strong&gt; Which will generate a production log file with relevant information&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And we can use the application normally. Let's play a bit on &lt;a href="http://localhost:8080/" rel="noopener noreferrer"&gt;http://localhost:8080/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6z2lwbnno5gy8jti0wj.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6z2lwbnno5gy8jti0wj.gif" alt="Production Run" width="600" height="453"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On this run, we created the &lt;em&gt;production.log&lt;/em&gt; file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Analyzing the Cache
&lt;/h2&gt;

&lt;p&gt;After using it, we can stop it and analyze how the AOT Cache behaved with our AOT Cache diagnostics tool: &lt;a href="https://github.com/Delawen/leyden-analyzer" rel="noopener noreferrer"&gt;https://github.com/Delawen/leyden-analyzer&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;The first step is loading all the information into the tool, to run a proper analysis:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&amp;gt; load aotCache  --background target/aot.map&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&amp;gt; load trainingLog  --background target/training.log*&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;&amp;gt; load productionLog  --background target/production.log*&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv72kofa8l5sqihzox336.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv72kofa8l5sqihzox336.gif" alt="Analysis" width="560" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we are ready to start our analysis. A good place to start is the &lt;code&gt;info&lt;/code&gt; command, which shows a summarized view of what is inside the cache:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uiwzejfxl06c1s6j52n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uiwzejfxl06c1s6j52n.png" alt="Info" width="800" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Are we training the right thing?
&lt;/h3&gt;

&lt;p&gt;The first thing that should catch our attention is that more than 10% of the classes used in the production run were not cached. That's unusual, so let's dig into what those classes are. There are hundreds of them, so filtering by our package name will make the exploration easier:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qy162x59wrvcednnvj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1qy162x59wrvcednnvj6.png" alt="ls --loaded=production" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;What does this mean? Let's take a closer look:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauqd1030lo0ywgyksi9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fauqd1030lo0ywgyksi9k.png" alt="describe -i org.cutecats.rest.json.CatPhotoGenerator -hints" width="800" height="197"&gt;&lt;/a&gt;&lt;br&gt;
This class was not loaded during training but it was loaded during production. Something went wrong with our training. &lt;/p&gt;

&lt;p&gt;We can explore the class &lt;em&gt;org.cutecats.rest.json.CatPhotoGenerator&lt;/em&gt; by looking at the source code. There, we discover that it should be used by &lt;em&gt;org.cutecats.rest.json.CatResource&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsmt7f8c5m6drge89zlu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftsmt7f8c5m6drge89zlu.png" alt="describe -i org.cutecats.rest.json.CatResource" width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, this class was loaded both on training and production runs, and the metadata is included in the AOT Cache. But for some reason, none of its methods were profiled during the training run. This means that our training run did not make extensive use of this class. Maybe we should take a look at our training run. &lt;/p&gt;

&lt;p&gt;And indeed, there is an obvious mistake: the &lt;em&gt;urls.txt&lt;/em&gt;  file that oha used to create the requests only contains the static html pages. None of our Java classes are executed, although Quarkus loaded them at the beginning as services.&lt;/p&gt;

&lt;p&gt;Let's run the training again, changing the URLs to the Java endpoints instead of the HTML pages: &lt;a href="http://localhost:8080/cats" rel="noopener noreferrer"&gt;http://localhost:8080/cats&lt;/a&gt; and &lt;a href="http://localhost:8080/list" rel="noopener noreferrer"&gt;http://localhost:8080/list&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Don't forget to remove the log and AOT files from target/ after each try to get clean runs (the clean goal in the Maven command should take care of that).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlu30hylyznb4tf4kez.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fitlu30hylyznb4tf4kez.gif" alt="Training 2" width="1068" height="784"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we analyze the results again with our tool, we should see a different result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftouldogey5247oofxhzh.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftouldogey5247oofxhzh.gif" alt="Analysis 2" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodglwgkgke1f6iylhqqv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fodglwgkgke1f6iylhqqv.png" alt="Info" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We have increased the percentage of classes used in production that were cached, from 89% in our last attempt to 96%. That's an improvement. &lt;/p&gt;

&lt;h3&gt;
  
  
  Did we load all relevant classes during Training?
&lt;/h3&gt;

&lt;p&gt;Let's check again for classes loaded in production that were not cached:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6giwwa05hkx45hzjgcao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6giwwa05hkx45hzjgcao.png" alt="ls --loaded=production" width="800" height="80"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Something is still not working as intended. Maybe we should approach this from the other side: are we executing some testing code that replaces the real production code that should be executed during training?&lt;/p&gt;

&lt;p&gt;Let's check if there's something being stored in the cache that we don't really need:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc8op283birg0li59uue.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc8op283birg0li59uue.png" alt="ls -pn=org.cutecats.rest.json -t=Class" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We can see a suspicious class called &lt;em&gt;DummyPhotoGenerator&lt;/em&gt;. It is supposed to be used only for testing, not for real training and production. Using DummyPhotoGenerator instead of CatPhotoGenerator means that the code and classes CatPhotoGenerator depends on are never exercised. If we explore our source code, we discover that there is a “test” argument on the /cats endpoint that distinguishes between testing and production.&lt;/p&gt;

&lt;p&gt;To fix our training, we have to call the endpoint /cats with a test=false argument. &lt;strong&gt;Because the training run is not a test run.&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The training run has to be as close to production as possible. If we use test classes, not only will they be stored in the cache and be loaded on production run, but they may also hide real production code from being trained.&lt;/p&gt;
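&lt;p&gt;A simplified, hypothetical sketch of this pitfall (the class and parameter names echo the example app, but the code is illustrative, not copied from its source):&lt;/p&gt;

```java
// Hypothetical sketch: a request flag that silently swaps in a test
// implementation, hiding the production code path from the training run.
public class PhotoGeneratorSelector {
    interface PhotoGenerator { String generate(); }

    static class DummyPhotoGenerator implements PhotoGenerator {
        public String generate() { return "dummy"; } // test-only path
    }

    static class CatPhotoGenerator implements PhotoGenerator {
        public String generate() { return "cat"; }   // real production path
    }

    static PhotoGenerator select(boolean test) {
        // Training with test=true profiles DummyPhotoGenerator and leaves
        // CatPhotoGenerator (and everything it calls) untrained.
        return test ? new DummyPhotoGenerator() : new CatPhotoGenerator();
    }

    public static void main(String[] args) {
        System.out.println(select(false).generate());
    }
}
```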

&lt;p&gt;Let's try again, now using &lt;a href="http://localhost:8080/cats?test=false" rel="noopener noreferrer"&gt;http://localhost:8080/cats?test=false&lt;/a&gt;  in the urls.txt file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap5aht7vkwb5z6ndw9ec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap5aht7vkwb5z6ndw9ec.png" alt="Analysis 3" width="800" height="587"&gt;&lt;/a&gt;&lt;br&gt;
We have increased the percentage of classes cached a bit more, which is always a good sign. &lt;br&gt;
Do we have any other classes loaded during production that were not cached during training?&lt;br&gt;
Are there any testing classes loaded during training or production runs?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03a6piohlhtpe6o9pxu2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F03a6piohlhtpe6o9pxu2.png" alt="Looking for classes" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We made sure that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All our classes used in production are included in the cache&lt;/li&gt;
&lt;li&gt;None of our testing classes are included in the cache&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Although we still haven't reached the ideal of 100% of classes cached, we are really close (98%) and we can be happy with the list of classes cached. We can now focus on how good the profiling of the methods is. &lt;/p&gt;

&lt;h3&gt;
  
  
  Are our methods properly trained?
&lt;/h3&gt;

&lt;p&gt;Maybe you already noticed another important piece of information we have been ignoring until now: all our classes are labelled as "&lt;strong&gt;[Untrained]&lt;/strong&gt;". Let's take a closer look at that.&lt;/p&gt;

&lt;p&gt;Profiling is done on each method independently, so let's take a look at one of our methods that we know should be well trained. The describe command is pretty self-explanatory in this case:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0x89ecm3c7yfyxv17dh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd0x89ecm3c7yfyxv17dh.png" alt="describe -t=Method" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's follow the recommendation and make more requests during the training run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;$ oha --urls-from-file src/main/resources/urls.txt -n 10k&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;
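&lt;p&gt;The file passed to &lt;em&gt;--urls-from-file&lt;/em&gt; is a plain text file listing one URL per line, from which oha picks requests. A minimal sketch of generating one (the endpoint paths below are placeholders; list your application's real routes):&lt;/p&gt;

```shell
# Build a urls.txt for the training run. The paths below are
# placeholders; replace them with your application's real endpoints.
cat > urls.txt <<'EOF'
http://localhost:8080/nqueens/16
http://localhost:8080/fibonacci/100
EOF
wc -l < urls.txt
```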

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafd4p3y0b33anyhdphn3.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafd4p3y0b33anyhdphn3.gif" alt="Training" width="600" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And with 10 000 requests made, we can see that our method is now fully profiled and compiled at the highest level:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7cys8sp4ut7i9pj4lvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7cys8sp4ut7i9pj4lvy.png" alt="Info" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p8uizqz0ykh6yhxhd6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9p8uizqz0ykh6yhxhd6b.png" alt="describe trained method" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are still more improvements that can be made to the training, and they will depend greatly on your application, but now we have all the basics covered.&lt;/p&gt;

</description>
      <category>java</category>
      <category>openjdk</category>
      <category>leyden</category>
      <category>performance</category>
    </item>
    <item>
      <title>How is Leyden improving Java Performance? Part 2 of 3</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Tue, 17 Mar 2026 11:10:00 +0000</pubDate>
      <link>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-2-of-3-n5n</link>
      <guid>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-2-of-3-n5n</guid>
      <description>&lt;p&gt;In part 1 of this series of 3 blog posts we introduced the specific performance challenges OpenJDK faces lowering application ‘startup’, ‘warmup’ and ‘initial footprint’ costs and provided an overview of what Leyden is doing to address those challenges. &lt;/p&gt;

&lt;p&gt;Part 2 describes how to use the new AOT capabilities offered by Leyden and presents test results which show that very significant progress has already been made and is set to continue. &lt;/p&gt;

&lt;p&gt;Part 3 provides a more detailed account of how Leyden’s proposed solution operates, and offers a first look at tooling that allows you to assess the benefits that result and tune your application to make the most of what Leyden offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to use an AOT Cache
&lt;/h2&gt;

&lt;p&gt;To use an AOT cache (on JDK 25+), you need to add some JVM arguments to your app launch command. There are two ways of doing it: a two-step workflow and a three-step workflow.&lt;/p&gt;

&lt;p&gt;Joint Training and Assembly steps — writing of the AOT cache is performed in a forked Java runtime at training run exit:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Training+Assembly Run: &lt;em&gt;java&lt;/em&gt; &lt;strong&gt;&lt;em&gt;-XX:AOTCacheOutput=${aot-cache-file}&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;-jar app.jar&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Production Run: &lt;em&gt;java&lt;/em&gt; &lt;strong&gt;&lt;em&gt;-XX:AOTCache=${aot-cache-file}&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;-jar app.jar&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Step 1 of the two-step model runs your application until it exits (whether by means of some exit mechanism built into the application or simply by typing Ctrl-C on the console). At that point a separate Assembly JVM is forked to consume the training data collected during the training run and generate an AOT cache with the name supplied via the AOTCacheOutput command line option. The training JVM waits for the Assembly JVM to finish writing this file before it completes its own exit. &lt;/p&gt;

&lt;p&gt;Step 2 runs the production application using the AOT cache specified by the AOTCache command line option.&lt;/p&gt;

&lt;p&gt;Separate Training and Assembly steps — allows the assembly run to be executed independently without delaying the training run exit:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Training Run: &lt;em&gt;java&lt;/em&gt; &lt;strong&gt;&lt;em&gt;-XX:AOTMode=record -XX:AOTConfiguration=${aot-cache-conf-file}&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;-jar app.jar&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Assembly Run: &lt;em&gt;java&lt;/em&gt; &lt;strong&gt;&lt;em&gt;-XX:AOTMode=create -XX:AOTConfiguration=${aot-cache-conf-file} -XX:AOTCacheOutput=${aot-cache-file}&lt;/em&gt;&lt;/strong&gt; &lt;em&gt;-jar app.jar&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;Production Run: &lt;em&gt;java&lt;/em&gt; &lt;strong&gt;&lt;em&gt;-XX:AOTCache=${aot-cache-file}&lt;/em&gt;&lt;/strong&gt;  &lt;em&gt;-jar app.jar&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The three step model allows you to manage training and assembly as independent steps.&lt;/p&gt;

&lt;p&gt;Step 1 runs your application until it exits, at which point the training data collected during the training run is dumped to an AOT configuration file specified using the AOTConfiguration command line option. &lt;/p&gt;

&lt;p&gt;In step 2 this training data is passed to a new JVM using the same command line option and is used to generate an AOT cache to the file specified using the AOTCacheOutput command line option. &lt;/p&gt;

&lt;p&gt;Step 3 runs the production application using the AOT cache specified by the AOTCache command line option.&lt;/p&gt;
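&lt;p&gt;The three steps above can be sketched as a small dry-run script. It only prints the commands; the jar and cache file names are placeholders for your own application:&lt;/p&gt;

```shell
#!/bin/sh
# Dry-run sketch of the three-step AOT workflow (JDK 25+).
# The jar and cache file names are placeholders.
APP="app.jar"
CONF="app.aotconf"
CACHE="app.aot"

echo "Training:   java -XX:AOTMode=record -XX:AOTConfiguration=$CONF -jar $APP"
echo "Assembly:   java -XX:AOTMode=create -XX:AOTConfiguration=$CONF -XX:AOTCacheOutput=$CACHE -jar $APP"
echo "Production: java -XX:AOTCache=$CACHE -jar $APP"
```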

&lt;p&gt;The 3 step workflow is sometimes preferable because it allows the training JVM to exit more quickly. Dumping of training data is usually quick even if it is not instantaneous. Generation of the AOT cache takes substantially longer because there is a lot more work involved in sorting and laying out that data in a format that meets the JVM’s needs. &lt;/p&gt;

&lt;p&gt;Also, with the Leyden premain release, the Assembly JVM will perform a ‘cleanroom’ compilation of all the methods to be included in the cache, possibly compiling them at more than one compilation level. This adds more time to the cache generation step.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to properly execute the Training Run?
&lt;/h3&gt;

&lt;p&gt;The best way to train your application and generate the AOT cache is a &lt;em&gt;canary deployment&lt;/em&gt;, where you run your application in the real production environment with training enabled, allowing it to collect training data as it runs. However, that’s not always feasible, especially on containerized production environments that don’t have disk-write privileges.&lt;/p&gt;

&lt;p&gt;Depending on how your deployment is set up, this may be the type of circumstance where you choose the three-step training model, allowing your training JVM to exit quickly and relegating the assembly to a separate, follow-on deployment. Note that the assembly JVM does not run your application code, so it will not need access to resources like networks or databases.&lt;/p&gt;

&lt;p&gt;Recording requests made to your application and replaying them on a test server (either in real time or delayed) is also a very good way to generate the AOT cache, as it reproduces exactly the kind of behaviour you can expect in production. Alternatively, you can generate synthetic request data that simulates the behaviour you expect to encounter in real production, although that may reduce the relevance or accuracy of the resulting AOT cache assets.&lt;/p&gt;

&lt;p&gt;If you have a strong testing framework, and you are using Quarkus, you can always &lt;a href="https://quarkus.io/guides/getting-started-testing#generating-an-aot-cache-during-integration-tests" rel="noopener noreferrer"&gt;generate the AOT cache using integration tests&lt;/a&gt;. Note that you will need to run the methods repeatedly (probably several thousands of calls) to generate the proper compilation optimizations.&lt;/p&gt;

&lt;p&gt;The best results arise when the training run resembles a production run as closely as possible. However, whatever training method you employ, any cache produced will only be usable in production if you run with the same JVM and the same command line JVM options. &lt;/p&gt;

&lt;p&gt;You can add extra jars at the end of the production run classpath but the initial segment must be the same as the classpath provided during training. &lt;/p&gt;

&lt;p&gt;At the time of writing, you also need to deploy on the same CPU family and operating system. In upcoming versions, which will include compiled code in the cache, the production hardware must implement exactly the same CPU features as the hardware used for the training run. If the CPU features are not identical, compiled and stub code assets will be ignored (other cache assets will still be usable).&lt;/p&gt;

&lt;p&gt;Remember to follow these basic constraints when generating the cache: same hardware, same Java version, same Operating System, and same JVM arguments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should I start using AOT Cache in Java already?
&lt;/h2&gt;

&lt;p&gt;The short answer is &lt;strong&gt;yes&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Whether your application gets significantly faster now, or you are interested in testing it to help &lt;a href="https://openjdk.org/projects/leyden/" rel="noopener noreferrer"&gt;Leyden development&lt;/a&gt; move towards your interests, you should start using the AOT cache now. &lt;/p&gt;

&lt;p&gt;Note that you need at least JDK 25 to be able to use it. Performance gains are incremental with each new JDK release. The actual improvements that you can achieve using Leyden depend strongly on your application and how you use it. &lt;/p&gt;

&lt;p&gt;Let’s see some examples. We are going to run them over JDK 26.&lt;/p&gt;

&lt;h3&gt;
  
  
  Heavy Mathematical Example
&lt;/h3&gt;

&lt;p&gt;First we are going to use &lt;a href="https://github.com/Delawen/jvm-performance-benchmarks-rest-wrapper.git" rel="noopener noreferrer"&gt;a benchmark application&lt;/a&gt; that runs heavy mathematical operations via a REST API. We are going to train this application twice to compare how different training affects performance on production.&lt;/p&gt;

&lt;p&gt;This application makes use of an &lt;a href="https://github.com/quarkusio/quarkus/pull/52224" rel="noopener noreferrer"&gt;aot-jar from Quarkus&lt;/a&gt; which is optimized for Leyden and available since version 3.32.0.&lt;/p&gt;

&lt;p&gt;We are going to use a training run that randomly calls the following urls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;/nqueens/16 : to calculate the nqueens problem with a 16 board size
&lt;/li&gt;
&lt;li&gt;/fibonacci/100 : to calculate fibonacci series with input 100
&lt;/li&gt;
&lt;li&gt;/nqueens : to calculate the nqueens problem with either 16 or 8 as the board size
&lt;/li&gt;
&lt;li&gt;/fibonacci : to calculate fibonacci with a random number between 1 and 100&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The idea is to have a load that is partly random (as an API with real users would be) but has a preference over specific branches or loop unrolling sizes.&lt;/p&gt;
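&lt;p&gt;To make the workload concrete, here is a minimal, illustrative sketch (not the benchmark application's actual code) of the kind of computation behind those endpoints: an iterative Fibonacci and a bitmask backtracking n-queens solution counter:&lt;/p&gt;

```java
import java.math.BigInteger;

// Illustrative sketch of the kind of work the /fibonacci and /nqueens
// endpoints perform; not the benchmark application's actual code.
public class Workload {

    // Iterative Fibonacci; BigInteger because fibonacci(100) overflows a long.
    static BigInteger fibonacci(int n) {
        BigInteger a = BigInteger.ZERO, b = BigInteger.ONE;
        for (int i = 0; i < n; i++) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }

    // Count the solutions to the n-queens problem by bitmask backtracking.
    static long nQueens(int n) {
        return place(n, 0, 0L, 0L, 0L);
    }

    private static long place(int n, int row, long cols, long diag1, long diag2) {
        if (row == n) return 1;                                // all rows filled
        long count = 0;
        long free = ~(cols | diag1 | diag2) & ((1L << n) - 1); // open columns
        while (free != 0) {
            long bit = free & -free;                           // lowest free column
            free -= bit;
            count += place(n, row + 1, cols | bit,
                           (diag1 | bit) << 1, (diag2 | bit) >> 1);
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(10)); // 55
        System.out.println(nQueens(8));    // 92
    }
}
```

&lt;p&gt;Branchy, recursive code like this is exactly where profile-guided JIT compilation, and therefore the quality of the training run, matters.&lt;/p&gt;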

&lt;p&gt;We will do one training with 1000 requests and a second training with 60 000 requests. That should show how different amounts of training affect final performance. We are going to run the application on a Linux machine, assigning two cores to our application.&lt;/p&gt;

&lt;p&gt;Due to the kind of things Java 26 is storing in the AOT cache, we can theorize that there won’t be much difference between the different trainings when comparing startup time (startup of the application and opening the port), as most of the code run during initialization is run only once, so adding more requests to the training won’t improve that initialization time. This is something that may change in future developments as more assets are included in the AOT cache. &lt;/p&gt;

&lt;p&gt;By using the &lt;a href="https://github.com/Delawen/leyden-analyzer/" rel="noopener noreferrer"&gt;Java AOT Cache Diagnostics Tool&lt;/a&gt;, we can compare the contents of the cache generated on each training.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;1000 requests training&lt;/th&gt;
&lt;th&gt;60 000 requests training&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgqbt6fjhvzn9hg4tuh9.png" alt="Summary of the contents of the cache" width="800" height="980"&gt;&lt;/td&gt;
&lt;td&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gsxbnd1s0wp17s6qpvh.png" alt="Summary of the contents of the cache" width="800" height="980"&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As expected, both trainings seem to have cached the same amount of metadata (slightly above 99% of classes used), because the code loaded into memory in both cases should be more or less the same (a timeout or runtime exception thrown may explain small differences). This means that the startup time using either of the generated caches should be similar.&lt;/p&gt;

&lt;p&gt;The training with 60 000 requests has many more methods that are profiled and compiled at a higher level because it had a longer time to profile and optimize. That should lead to better outcomes on the warmup time. &lt;/p&gt;

&lt;p&gt;In any case, we should notice an improvement in startup time compared to the regular Java deployment, because we have a lot of metadata, profile and linkage data, and some heap data already cached ahead of time. And as we can see in the following graph, time to first response (which includes initialization) is already cut in half.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqygejm886j7mbi7hmyeq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqygejm886j7mbi7hmyeq.png" alt="Graph showing startup times" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other interesting measurement is how much and for how long response times are disrupted during the early stages of application execution. There is always some small variation in response times even when an app is fully warmed up – often referred to as &lt;em&gt;jitter&lt;/em&gt;. &lt;/p&gt;

&lt;p&gt;However, during warmup the housekeeping work that the JVM has to do can significantly increase jitter. Individual responses may be delayed because they require the thread to execute one-off operations such as loading or initializing a class, linking a call site or field access site, or updating profile data. Background JIT compilation will also steal CPU cycles, potentially pre-empting request handling in Java threads. Finally, early requests will mostly execute relatively slowly in the interpreter, while later requests will gradually respond more quickly as the JIT compiler delivers compiled code.&lt;/p&gt;

&lt;p&gt;In theory, this is where longer training sessions should have a bigger impact. Better training results in more cached classes and heap objects needed in production, more pre-linking of calls and accesses, more method profile data to allow earlier and better informed compilation. So, we should see not only an improvement compared to the regular java version, but also a difference between the two trained caches.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye2msxg4ad4qtpx87949.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fye2msxg4ad4qtpx87949.png" alt="Graph showing response times" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The graph above shows individual response times for requests for each of the three deployments, Traditional Java (no AOT), AOT trained with 1000 requests, and AOT trained with 60 000 requests. &lt;/p&gt;

&lt;p&gt;In all cases, the request rate is constant and within the peak capacity of the server. In all 3 cases the jitter slowly decays as the request count increases, eventually converging to a low, random variation. &lt;/p&gt;

&lt;p&gt;However, it is also very clear that &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;There is a lot more JVM housekeeping work being performed in the non-AOT case than when using AOT, so it reaches peak performance later.
&lt;/li&gt;
&lt;li&gt;The well-trained cache suffers less jitter, i.e. it removes a lot more housekeeping work, than the weakly trained one.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29sbgmnsw6ssl24c07nu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29sbgmnsw6ssl24c07nu.png" alt="Response Peak Times" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw0edxat8cv5ye1vgedq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkw0edxat8cv5ye1vgedq.png" alt="Response Peak Times" width="800" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that JDK 26 stores training data in the cache, but does not store compiled code. This means future versions of the JDK will show a much larger difference between weak and strong training regimes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Simple REST API
&lt;/h3&gt;

&lt;p&gt;Now, we are going to do the same with a &lt;a href="https://github.com/gsmet/quarkus-aot/tree/main/quarkus-hibernate-orm-simple" rel="noopener noreferrer"&gt;simple REST API application using Quarkus&lt;/a&gt; that connects to a database to extract data. This time we employ the Leyden premain JVM, which caches compiled code as well as all the other AOT cache assets mentioned earlier.&lt;/p&gt;

&lt;p&gt;We use a single training run with 10 000 requests, executing a test that calls the endpoint &lt;em&gt;/fruits&lt;/em&gt; repeatedly. In contrast with the previous example, in this case we are not going to observe as much advantage from speculative compilation because the code is simpler and the entrypoint is always called with the same parameters. But we should still see an improvement all the same.&lt;/p&gt;

&lt;p&gt;Let’s take a look at the time to first response to see if it is improved by Leyden AOT:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4f0m21ugh7r831whoo7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4f0m21ugh7r831whoo7u.png" alt="Graph with startup times" width="800" height="569"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Startup time in this example is slower than in the previous example because we have to initialize connections to the database and load the database model. &lt;/p&gt;

&lt;p&gt;Now, let’s take a look at the response times and see if the warmup time is also improved thanks to the AOT cache.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtk9g9m66vas1r9kjei7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdtk9g9m66vas1r9kjei7.png" alt="Response Times" width="800" height="318"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Both runs suffer jitter during the first requests, at which point most of the housekeeping work is completed. The dramatic drop off in jitter for the AOT run around request 45 indicates that at this point almost all loading, initialization and linking costs have been met and the necessary compiled code has been delivered. By contrast the non-AOT run is still suffering jitter even after 100 requests i.e. it has still not warmed up to reach peak performance.&lt;/p&gt;

&lt;p&gt;These are only a couple of examples that showcase how Leyden is already improving your startup and warmup time. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How far Leyden can help your application can only be discovered by trying it.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>leyden</category>
      <category>openjdk</category>
      <category>performance</category>
    </item>
    <item>
      <title>How is Leyden improving Java Performance? Part 1 of 3</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Tue, 17 Mar 2026 11:00:00 +0000</pubDate>
      <link>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-1-of-3-4n7e</link>
      <guid>https://dev.to/delawen/how-is-leyden-improving-java-performance-part-1-of-3-4n7e</guid>
      <description>&lt;p&gt;In this series of 3 blog posts we will explain how OpenJDK project Leyden is helping to improve a specific area of performance where Java has notably lagged behind other languages i.e. application ‘startup’, ‘warmup’, and ‘initial footprint’. &lt;/p&gt;

&lt;p&gt;Part 1 explains what those terms mean and why Java faces challenges in matching the behaviour of other languages. It then provides an overview of what Leyden has done to improve startup and warmup in existing JDK releases and what is planned for upcoming releases. &lt;/p&gt;

&lt;p&gt;Part 2 describes how to use the new capabilities offered by Leyden and presents test results which show that very significant progress has already been made and is set to continue. &lt;/p&gt;

&lt;p&gt;Part 3 provides a more detailed account of how Leyden’s proposed solution operates and presents a first look at tooling that allows you to assess the benefits that result and tune your application to make the most of what Leyden offers.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Brief History of Java Performance
&lt;/h2&gt;

&lt;p&gt;Java has been one of the most popular object-oriented programming languages for decades. Its success relies heavily on the fact that it offers a &lt;em&gt;portable, managed runtime&lt;/em&gt; that makes it easy and safe to resolve many common programming challenges. In particular, Java was the first portable language to make it straightforward for programmers to deliver multi-threaded applications which allocate and manage storage at runtime without risk of invalid memory accesses.&lt;/p&gt;

&lt;p&gt;The fact that Java remains popular still surprises some programmers, given that it belongs to the family of dynamic languages that, most notably, includes Lisp, Smalltalk and Self. Dynamic languages allow their code base to be incrementally defined as the program executes. That code base is often implemented using a language-specific virtual machine. Dynamic languages were traditionally executed by interpreting either the source code or an intermediate bytecode derived from the source. This often caused lower performance than native-compiled, non-dynamic languages.&lt;/p&gt;

&lt;p&gt;However, modern Java runtimes rely on powerful ‘just-in-time’ (JIT) compilers to translate bytecode to native machine code at runtime. JIT compilation, a technique originally tried in Smalltalk nearly 40 years ago, has improved Java performance by orders of magnitude from the early days of an interpreter-only runtime. The use of runtime execution profiling supports &lt;em&gt;feedback directed optimization&lt;/em&gt; and &lt;em&gt;speculative optimization&lt;/em&gt;. This has allowed Java JIT compilers to achieve peak performance that far exceeds what can be achieved with programs that are compiled ahead-of-time (AOT).&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Java takes time to reach peak performance
&lt;/h3&gt;

&lt;p&gt;The downside of dynamic class loading and JIT compilation is that a Java runtime takes some time to achieve this impressive peak performance. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6ovdzozdg8rxwymccts.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx6ovdzozdg8rxwymccts.png" alt="Image representing the normal flow of a java run, from cold to peak performance as described below" width="800" height="272"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a new Java application is launched, it is normally a ‘cold start’. Details of all the classes and methods the application needs to use are only available in a compact bytecode representation, stored on disk either in application-supplied class files or embedded in the Java platform’s jmod files. The Java Virtual Machine (JVM) has to parse and unwrap this bytecode, constructing its own ‘metadata’ model of the class and method base, one that the interpreter and compiled code can efficiently operate over. It also has to set the base state of each loaded class, running Java ‘static init’ code to populate the class’s static fields, before it can execute any of the class’s methods.&lt;/p&gt;
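&lt;p&gt;You can observe this lazy class loading directly through the platform's class-loading MXBean. A small sketch (the regex call is just an arbitrary way of touching library classes for the first time; the exact counts will vary by JDK version):&lt;/p&gt;

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Sketch: watch the loaded-class count grow as code touches new classes.
public class ColdStart {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        long before = cl.getLoadedClassCount();
        // First use of the regex library may trigger loading, parsing and
        // static initialization of its classes (and their dependencies).
        java.util.regex.Pattern.compile("a+b*");
        long after = cl.getLoadedClassCount();
        System.out.println("classes loaded so far: " + after);
        System.out.println("loaded around first regex use: " + (after - before));
    }
}
```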

&lt;p&gt;In addition, the JVM has to perform &lt;em&gt;dynamic linkage&lt;/em&gt;. When compilation or execution of a Java method first encounters a call (invoke bytecode) or a data access (get/putfield bytecode) the JVM has to &lt;em&gt;link&lt;/em&gt; that call or data access site. That involves replacing references to the target class and method/field, which occur as symbol names in the bytecode, with a direct memory reference. This identifies first the target metadata class, and then the target metadata method or field. If the target class has not yet been encountered during execution, this linking step may trigger further bytecode loading, parsing, and class initialization.&lt;/p&gt;

&lt;p&gt;The JVM normally starts off executing Java methods in the interpreter. Of course, it could always execute native code, compiling the Java method bytecode either immediately at load or lazily at first call. However, compilation takes time to complete so it is normally better done in the background while proceeding to interpret. Indeed, JIT compilation frequently pays off more when done selectively. Methods that only get called once or twice can take more cycles to compile than to simply interpret the bytecode.&lt;/p&gt;

&lt;p&gt;Furthermore, without runtime execution profile data as input, the compiler is unable to make informed, &lt;em&gt;feedback-directed&lt;/em&gt; optimizations that significantly improve performance of the compiled code. Most importantly, it cannot simplify the compiled code by &lt;em&gt;speculating&lt;/em&gt; that previous execution patterns will continue, replacing code that lies on untaken ’cold’ branches with traps. Speculative compilation, an optimization first used in the Self compiler over 30 years ago, reduces both the size and the complexity of bytecode that feeds into a specific compilation. That, in turn, enables deep inlining of method calls and offers the possibility to identify many more derived optimizations. The rare case where a trap on a cold branch gets executed is handled by &lt;em&gt;deoptimizing&lt;/em&gt; i.e. jumping back into the interpreter and recompiling the method with an updated branch profile.&lt;/p&gt;

&lt;h2&gt;
  
  
  Housekeeping considered harmful
&lt;/h2&gt;

&lt;p&gt;During early stages of application execution, the JVM housekeeping overheads listed above are at their highest. Class loading and initialization, class linking, and recording of method execution profile data occur frequently as side effects of execution, for both application and JDK runtime methods, impeding direct forward progress of the application. Method compilation proceeds in dedicated, background compiler threads, but this still steals CPU cycles, once again, impeding application progress.&lt;/p&gt;

&lt;p&gt;The impedance of JVM housekeeping work gradually decreases as more and more of the required JDK code and application code is linked into the runtime. At the same time, delivery of compiled code incrementally improves application execution speed. &lt;/p&gt;

&lt;p&gt;After some time, a steady state is reached where most or all classes are loaded and linked, most or all methods have been profiled, and all ‘hot’ methods have been compiled with highly efficient code. Very occasionally variation in input data or a phase change in program behaviour drives the application down a cold path, triggering deoptimization and incurring extra JVM overheads. However, by and large, applications mostly warm up and continue to run with steady peak performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Leyden Project ‘premain’ Experiment
&lt;/h2&gt;

&lt;p&gt;Project Leyden has been experimenting with reducing the impedance of JVM housekeeping tasks in &lt;a href="https://github.com/openjdk/leyden/tree/premain" rel="noopener noreferrer"&gt;the ‘premain’ branch of the project repository&lt;/a&gt;. The observation that drives the Leyden premain experiment is that, most of the time, the housekeeping operations that occur during an application run involve doing exactly or almost exactly the same work with the same result, certainly in the early stages where the impedance is high. On every run a lot of the same bytecode gets loaded and linked, the same classes get initialized, the same methods turn out to be hot, and end up getting compiled with the same or very similar profile information.&lt;/p&gt;

&lt;p&gt;This is especially true for the JDK runtime code that runs before entering the application main method, likewise for JDK library code that the application calls out to. The JVM will always load base classes like java.lang.Object, java.lang.Class, or java.util.String. The same String instances, hard coded as literals in JDK methods, are added to the heap on every single run. Container classes like List and HashTable are commonly reused for the same purposes.&lt;/p&gt;

&lt;p&gt;JDK classes are fixed for any given release so their class, method and field metadata will always be the same and they will always cross-reference each other (i.e. be linked) in exactly the same way. In fact the Leyden premain branch gets its name from its original focus, which was optimizing this JDK execution that happens before entering application main.&lt;/p&gt;

&lt;p&gt;The idea of profiting from this &lt;em&gt;identity&lt;/em&gt; of JDK metadata across runs is not new. Since JDK 13, &lt;a href="https://docs.oracle.com/en/java/javase/25/vm/class-data-sharing.html" rel="noopener noreferrer"&gt;Class Data Sharing (CDS)&lt;/a&gt; has been able to optimize away class loading and bytecode parsing for JDK classes by storing the JVM’s metadata model of the JDK classes in a CDS archive, allowing it to be reloaded ‘oven-ready’ on subsequent runs. &lt;/p&gt;

&lt;p&gt;That version of CDS provided an effective, albeit limited, warm-start capability for the JDK, halving the time taken for the JDK to start up, i.e. complete JDK initialization and enter the application main routine. CDS also helped application warmup by lowering the initial costs involved in callouts to JDK library code.&lt;/p&gt;

&lt;p&gt;With application classes there is no strong guarantee that the same classes will be present in the same format between one run and the next, or that classes loaded and used on one run will be loaded and used in the same way on subsequent runs. However, so long as the same jars appear in the classpath and the class bytecode is loaded without runtime-specific agent transformations, it is possible and, for many classes, quite probable that saved metadata will be reusable. &lt;/p&gt;

&lt;p&gt;More recent versions of CDS have supported saving and restoring metadata for application classes via a dynamic CDS archive, allowing the JVM to bypass loading and bytecode parsing costs for those classes on subsequent runs, improving both application startup and warmup.&lt;/p&gt;
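&lt;p&gt;As a sketch of what this looks like in practice (the jar, class and archive names here are placeholders), a dynamic CDS archive can be recorded on one run and reused on the next:&lt;/p&gt;

```shell
# Trial run: record the classes the application loads and write a
# dynamic CDS archive when the JVM exits (available since JDK 13).
java -XX:ArchiveClassesAtExit=app.jsa -cp app.jar com.example.Main

# Subsequent runs: map in the archived class metadata instead of
# re-loading and re-parsing the bytecode.
java -XX:SharedArchiveFile=app.jsa -cp app.jar com.example.Main
```

&lt;p&gt;These are JVM command-line flags, shown here only to illustrate the record-then-reuse workflow.&lt;/p&gt;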

&lt;p&gt;Leyden’s premain branch builds on this success, but it is addressing a bigger prize than just archived metadata. The broader internal JVM state (not just metadata but static field data, linkage data, method profiles, and compiled code) is slowly constructed during warmup and may vary depending on precisely what happens on each run. However, most of what is created on one run, if it could be saved in an archive (as CDS currently does with metadata), ought to be reusable on a subsequent run, short-circuiting the housekeeping overheads normally incurred to create it. &lt;/p&gt;

&lt;p&gt;Even if some saved state turns out not to be useful, because, say, a class was not referenced or a method not called in the subsequent run, the ability to reuse the rest of the state should still pay off. The cost of reloading the required state can be made much lower than the cost of recreating it, meaning the application can reach peak performance earlier, with less impedance from the JVM. The more reusable state that can be saved, the greater the reduction in impedance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Training and Production Runs
&lt;/h3&gt;

&lt;p&gt;So, the basic idea behind the experiment is to run your application twice:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Training Run: in which we cache the metadata, the profiling statistics, some heap data, compiled code, and so on.
&lt;/li&gt;
&lt;li&gt;Production Run: loads the previously (ahead of time) cached information, so the run starts hot. This is the “real” run in which we make use of our app.&lt;/li&gt;
&lt;/ol&gt;
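&lt;p&gt;With the AOT cache work that has reached mainline JDK 24+ (JEP 483), the two runs map onto JVM flags roughly as follows; the jar, class and file names are placeholders:&lt;/p&gt;

```shell
# 1. Training run: record an AOT configuration while exercising the app
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.Main

# 2. Assemble the AOT cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# 3. Production run: start hot by loading the cached state
java -XX:AOTCache=app.aot -cp app.jar com.example.Main
```

&lt;p&gt;This is command-line configuration rather than code; later JDK releases also offer shortcuts that combine the recording and assembly steps.&lt;/p&gt;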

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8axh0ndton697otj3v7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe8axh0ndton697otj3v7.png" alt="Left side there is a JAR blue block with an arrow (training run) pointing to an orange block called AOT cache. Both blocks have arrows to an orange production run." width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of course, this only makes sense if the training run accurately represents the production run. &lt;/p&gt;

&lt;p&gt;To achieve this, we need to respect the following constraints:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Same hardware: otherwise the compiled code may not be able to run, and the optimizations made may even work against performance in our production run.
&lt;/li&gt;
&lt;li&gt;Same Java version and source code: if we change the source code, anything cached that relates to it is invalidated and becomes useless.
&lt;/li&gt;
&lt;li&gt;Same operating system family: there are pieces of the JVM that behave differently on Linux, Windows, or macOS. We can’t just reuse our cached information if we change it.
&lt;/li&gt;
&lt;li&gt;Same JVM options (mostly): we could perhaps change some JVM options (like using a different garbage collector). But then the profiling statistics we cache, and the information about how the application behaves, may no longer be valid for the new configuration. Better not to play with these settings.
&lt;/li&gt;
&lt;li&gt;[optional] No custom classloaders: the cache will (for now) ignore classes loaded with a custom classloader. This means that part of the application will not be hot when run the second time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some of the AOT improvements developed in the Leyden branch have already been made available in the latest JDK LTS version at the time of writing (JDK 25). The plan for subsequent releases is that more features will be migrated, more things will be cached, and the performance gains will keep improving. The gains strongly depend on your app usage, the JDK version you are using, and how representative your training run is.&lt;/p&gt;

&lt;p&gt;In the next post we will explain how to use the new AOT capabilities that are available in JDK 25. We will also present test results which show that very significant progress has already been made and is set to continue with the capabilities arriving in JDK 26 and beyond.&lt;/p&gt;

</description>
      <category>openjdk</category>
      <category>java</category>
      <category>leyden</category>
      <category>performance</category>
    </item>
    <item>
      <title>Are online events the new normal?</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Wed, 28 Oct 2020 14:20:00 +0000</pubDate>
      <link>https://dev.to/delawen/are-online-events-the-new-normal-4m6d</link>
      <guid>https://dev.to/delawen/are-online-events-the-new-normal-4m6d</guid>
      <description>&lt;p&gt;Some of you may know I am going to co-chair the next &lt;a href="https://2021.foss4g.org/" rel="noopener noreferrer"&gt;FOSS4G&lt;/a&gt;. ( &lt;strong&gt;&lt;em&gt;Come Hell or High Water, but we are going to have a FOSS4G next year!&lt;/em&gt;&lt;/strong&gt; ) I have also participated on the organization of the last &lt;a href="https://www.apachecon.com/acna2020/" rel="noopener noreferrer"&gt;ApacheCON&lt;/a&gt; and some other minor online events. I have also spoken on many online events by now.&lt;/p&gt;

&lt;p&gt;And, let me tell you, I am not as optimistic as some people are about online events. I really hope this post ages badly and in the following years we get better options for online events.&lt;/p&gt;

&lt;p&gt;But as of today, &lt;strong&gt;I don’t feel we are anywhere close to replacing face-to-face events with online ones&lt;/strong&gt;. Streaming the talks is good, but we lose most of what makes a conference an amazing experience.&lt;/p&gt;

&lt;p&gt;Let me tell you why.&lt;/p&gt;

&lt;h2&gt;
  
  
  Online Events Drawbacks
&lt;/h2&gt;

&lt;p&gt;Online events allow people from all over the world to attend without having to travel. People juggling work and family life, and people who can’t afford or are not able to travel, will be at your event. It’s true, you will get attendees you wouldn’t have if it were a face-to-face event. And that’s good!&lt;/p&gt;

&lt;p&gt;But that’s where the advantages of an online event end. And where a lot of new issues are created.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timezones
&lt;/h3&gt;

&lt;p&gt;Have you ever tried to find a good time slot for a simple one hour meeting between an Australian and someone living in Los Angeles? Exactly.&lt;/p&gt;

&lt;p&gt;Now imagine allocating at least six good hours for streaming and interaction, trying to have everybody at least awake, no matter whether it is early or late for them. Spoiler: you can’t.&lt;/p&gt;

&lt;p&gt;Of course, you can always find some arbitrary schedule that fits your biggest group of potential attendees and force the rest of your attendees to adapt to it. Just make them imagine they are traveling and living in a different timezone, waking up at 3am to watch your talks, having lunch instead of breakfast and sleeping during the day. Seriously: is your event worth it?&lt;/p&gt;

&lt;p&gt;Do you remember those people that couldn’t travel to your face to face event but can attend your online event? A good percentage of them will not attend your online event if it doesn’t fit their timezone. It doesn’t matter if they have the means to do it.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hacking timezones
&lt;/h4&gt;

&lt;p&gt;Of course, there are patches for this. Like having a 24/7 event, so that at all times there’s something happening for the people who are awake. Which, in summary, just means having what amounts to separate events per region of the world that overlap a bit. You will end up interacting mostly with people in your own timezone region.&lt;/p&gt;

&lt;p&gt;Some events are replaying their talks at different time zones so everyone has the opportunity to attend all sessions. But what’s the fun of watching some past talk, when even the speaker is no longer available for questions? I can watch it next month and it will be the same.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bandwidth
&lt;/h3&gt;

&lt;p&gt;I just mentioned &lt;em&gt;those people that couldn’t travel to your face to face event but can attend your online event&lt;/em&gt;. If only that was as true as it sounds.&lt;/p&gt;

&lt;p&gt;Not everyone has a good internet connection at home. We can even say not everyone has an internet connection at hand to watch and participate in your event. Thinking otherwise just shows your privilege and how you forget that not everyone has access to good, stable bandwidth. Especially at home.&lt;/p&gt;

&lt;p&gt;For some people it is easier to travel to your face to face event than to get a good bandwidth to watch the streamed talks.&lt;/p&gt;

&lt;p&gt;So this group of people who wouldn’t travel to your event but would attend your online event just keeps shrinking. Your real potential attendees are those in the right timezone with the right technology.&lt;/p&gt;

&lt;h3&gt;
  
  
  Shadow Audience on Online Events
&lt;/h3&gt;

&lt;p&gt;This one is for speakers. There’s no way your talk will be the same if you can’t see, hear and &lt;em&gt;feel&lt;/em&gt; the audience as you can face to face. Online events, even if the audience has their cameras on, are isolating. It is never the same.&lt;/p&gt;

&lt;p&gt;Not having feedback on how well your talk is being received (or even worse: seeing a lot of cameras of distracted attendees doing something else) is a huge handicap that even the most experienced speaker will suffer. Not knowing if you should speed up or slow down to adapt to your audience is weird. Not knowing if your audience is laughing at your jokes or thinking you are a weirdo.&lt;/p&gt;

&lt;p&gt;And your speakers are probably not familiar with your platform, which means they will be extra nervous about messing up. If the mic fails in a room full of people, you notice immediately and can even just talk louder while it gets fixed. If the mic fails at an online event, you may not even notice until several minutes have passed.&lt;/p&gt;

&lt;p&gt;All these issues lead to talks that are not as good as those given face to face.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disengagement
&lt;/h3&gt;

&lt;p&gt;Let’s be fair: attendees will not be physically disconnected from their daily routines as they are when traveling to your event. Most of them will keep going to work, spending time with their families and even skipping some talks to do housework. These changes of context mean losing attention.&lt;/p&gt;

&lt;p&gt;Your event is just the side dish of their day to day, an extra appendix. They will never be as engaged as if they had to travel to a venue and be there physically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Language Barrier on Online Events
&lt;/h3&gt;

&lt;p&gt;I know, I know what you are thinking. No, María, this problem is the same, no matter if you are face to face or online. Let me guess: you are an English native speaker.&lt;/p&gt;

&lt;p&gt;Even with reliable broadband and high quality, smooth 60fps video, understanding a foreign speaker over video is harder than in person. Maybe because the audience is shadowed and the speaker has no feedback on how well they are being received. Or maybe it is just because we speak differently face to face than online. Maybe we are missing body language. But it happens.&lt;/p&gt;

&lt;p&gt;This could be fixed with subtitles in most cases. But what kind of subtitles? The ones that run on the fly and so frequently garble technical terms that the subtitles become confusing? Or the ones attached to a pre-recorded video, so the speaker is not really there?&lt;/p&gt;

&lt;h3&gt;
  
  
  No random meetings
&lt;/h3&gt;

&lt;p&gt;As you don’t have to move from one room to the next, sit next to someone in a room or even mingle during the coffee breaks, you just lose all those opportunities to meet both friends and strangers.&lt;/p&gt;

&lt;p&gt;Your event will be missing a space where attendees can meet more or less randomly. Some events match random attendees in private rooms or have general halls for random chatting. That’s a good idea, but it still doesn’t replace introducing yourself to someone specific who feels like a good match when you see them talking to someone you know, or taking a break while staying open to conversation.&lt;/p&gt;

&lt;p&gt;Do you want to have an Ice Breaker or a Gala Dinner? Maybe you can try &lt;a href="https://hubs.mozilla.com/" rel="noopener noreferrer"&gt;Mozilla Hubs&lt;/a&gt; for that. Too complicated? High hardware requirements? But what other social platforms are there that go beyond a chat and are not a simple video conference? How can we mimic walking around the venue and saying hi to old friends or asking someone about the drawing on their shirt?&lt;/p&gt;

&lt;h2&gt;
  
  
  Financial Side
&lt;/h2&gt;

&lt;p&gt;Contrary to popular belief, if you really want to have an online event that can compete with a face to face event, the cost will not be cheap. The proper software stack with the proper bandwidth can be as costly as the venue. And whatever you may be saving on catering you will spend in making the event more social and engaging.&lt;/p&gt;

&lt;p&gt;It’s true that a basic online event can be done very cheaply, as I show in the software stack section below. But still, it will be a cheap event and will not patch many of the social drawbacks described above.&lt;/p&gt;

&lt;p&gt;If you want to make the conference online, you have to pursue more avenues of economic viability than in traditional face-to-face events, because there are many things you can’t offer now (merchandise placement being the most obvious one).&lt;/p&gt;

&lt;h3&gt;
  
  
  Sponsorship of Online Events
&lt;/h3&gt;

&lt;p&gt;Obviously you will try to keep sponsors on board. But it would be insane to think sponsors will be willing to pay the same prices as in traditional events if you don’t offer anything above and beyond.&lt;/p&gt;

&lt;p&gt;If your conference has some kind of sponsorship booth area, you need some kind of replacement. How do you plan to do that? And how are you going to make attendees go through it? Now you don’t have halls where the booths are showing fancy stuff.&lt;/p&gt;

&lt;p&gt;If you host some kind of Business to Business slot, are you just going to run it as a normal online video session? How are you going to help them interact?&lt;/p&gt;

&lt;h3&gt;
  
  
  Tickets
&lt;/h3&gt;

&lt;p&gt;Most face-to-face events converted to online events are offering free tickets “in panic”, which will make it difficult for any online event to sell tickets at a reasonable price in the near future.&lt;/p&gt;

&lt;p&gt;To make sure people will want to pay for your specific event, you must prepare some extras besides the streaming. You have to offer paid attendees some kind of exclusive content, beyond the basic chats on the streams, to make the ticket price worth it.&lt;/p&gt;

&lt;p&gt;Maybe promote more panels where only paid users can participate and ask questions? Adding prizes or badges to users that participate more or attend more sessions?&lt;/p&gt;

&lt;p&gt;Some events are offering open-price donation tickets, which means you can have the ticket for the price you deem it worth. I have seen (from the inside) a number of events doing this. I won’t share actual numbers, but trust me: don’t count on this money.&lt;/p&gt;

&lt;h3&gt;
  
  
  Goodies
&lt;/h3&gt;

&lt;p&gt;Have you thought of an online shop to sell goodies branded with the conference? It is difficult to estimate whether this will work if you haven’t sold goodies before, but tech conference attendees are well known for collecting branded shirts. This is worth a try.&lt;/p&gt;

&lt;h3&gt;
  
  
  Dare Donations
&lt;/h3&gt;

&lt;p&gt;This was an idea that popped up while watching the &lt;a href="https://gamesdonequick.com/" rel="noopener noreferrer"&gt;Games Done Quick&lt;/a&gt; events. They keep people engaged and donating money through a whole week. They propose “dares” and people donate to vote on what to do. You have to find something that makes attendees engage and gives them some control over the type of content they are going to see.&lt;/p&gt;

&lt;p&gt;For example: If between 9:00 and 10:00 we get a minimum amount in donations, then some speaker will run a live demo on their presentation instead of a video. Or vote with donations what question to ask on a panel. Maybe even choose between two different talks.&lt;/p&gt;

&lt;p&gt;This works awesomely well at GDQ, but GDQ is a fun event with no consequences, and that’s not the tone of most tech conferences. I have no way to estimate whether this would work in our case. It may work if you choose the right “dares” and speakers are willing to collaborate.&lt;/p&gt;

&lt;p&gt;Probably easier at FOSS conferences, where camaraderie is a common denominator. Weirder at other types of conferences, especially academic ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Stack of Online Events
&lt;/h2&gt;

&lt;p&gt;There are many valid software stacks to use for your online events. Here is a summary of what I have experimented with and what I am comfortable with.&lt;/p&gt;

&lt;p&gt;Let’s start with a basic stack that covers minimum functionality for a cheap price and then proceed to more complete solutions.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Basic Stack
&lt;/h3&gt;

&lt;p&gt;This is the basic stack I have found that covers the minimum needed to offer a proper online conference: talk streaming and access to a VIP chat. The price can be ridiculously cheap if you use one of the popular free streaming platforms like Twitch or YouTube.&lt;/p&gt;

&lt;h4&gt;
  
  
  OBS Studio
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://obsproject.com/" rel="noopener noreferrer"&gt;OBS Studio&lt;/a&gt; is an old friend of streamers. And also FOSS. No matter what streaming platform you are going to use (something free or some custom server), it works. You can have combined screens and cams and different audio sources to get a beautifully orchestrated output.&lt;/p&gt;

&lt;p&gt;If you follow gamer streamers, you will probably be familiar with OBS already.&lt;/p&gt;

&lt;p&gt;If OBS Studio is too complex for you, your moderators or even your speakers, you can always try &lt;a href="https://streamyard.com/" rel="noopener noreferrer"&gt;StreamYard&lt;/a&gt;. It offers similar functionality in a browser-friendly interface, and it will allow you to easily stream to popular free platforms too. Personally I am very satisfied with this tool, even if it is not FOSS. It is really easy to use and the hardware and knowledge requirements are low.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mattermost
&lt;/h4&gt;

&lt;p&gt;To cover the social side, &lt;a href="https://mattermost.com/" rel="noopener noreferrer"&gt;Mattermost&lt;/a&gt; is my preferred option. It is a Slack-like messaging platform where you can offer special channels to your attendees according to their ticket (free or paid). There, you can host all the social interaction: asking the speakers questions, meeting other attendees or discussing things in common rooms.&lt;/p&gt;

&lt;h4&gt;
  
  
  Website
&lt;/h4&gt;

&lt;p&gt;Although this stack covers all the minimal requirements for an online conference, it is messy, with decoupled parts. Your attendees may feel lost.&lt;/p&gt;

&lt;p&gt;Attendees will have two windows or tabs in the browser: Mattermost and the streaming. The links to each streamed talk may be on some schedule on some webpage. It is worth adding some development time to create a website that contains everything in a usable format and gives some sense of being a single component.&lt;/p&gt;

&lt;h4&gt;
  
  
  Offline Talks for Online Events
&lt;/h4&gt;

&lt;p&gt;If you are afraid your software stack may fail at the worst moment, you can ask your speakers to send their recorded presentations before the online event. You can even ask them for optional subtitles to overcome any language barrier.&lt;/p&gt;

&lt;p&gt;Asking for recordings in advance gives you time to test that the video is right and to schedule the release at the proper time and date. This greatly reduces the possibility of real-time failures. It also ensures all videos have the proper duration, making the schedule more stable.&lt;/p&gt;

&lt;p&gt;This also gives the speakers the opportunity to edit their videos, adding special effects or anything that may be too risky to run in real time.&lt;/p&gt;

&lt;p&gt;The speakers can also be asked to be in the chat at the proper time and date to interact with the attendees, so the Q&amp;amp;A can run as usual.&lt;/p&gt;

&lt;p&gt;But the event will probably look less dynamic with all these pre-recorded talks: no cross-referencing of talks you just watched, no changes mid-talk because someone made a good point.&lt;/p&gt;

&lt;h3&gt;
  
  
  Promising Software Stacks
&lt;/h3&gt;

&lt;p&gt;If you don’t want to use the free streaming options and/or you are looking for a more holistic solution, there are some platforms and software stacks you can check. Take into account that these options require you to have the proper bandwidth, which means they are going to be more expensive than the previously described minimum viable plan.&lt;/p&gt;

&lt;p&gt;So the first thing you have to ask yourself before jumping into any of these platforms is: is it worth the risk of using our own/rented streaming servers instead of something like YouTube or Twitch?&lt;/p&gt;

&lt;p&gt;The most attractive (to me) alternatives right now are the following.&lt;/p&gt;

&lt;h4&gt;
  
  
  BigBlueButton
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://bigbluebutton.org/" rel="noopener noreferrer"&gt;BBB&lt;/a&gt; is a FOSS Conferencing System for online learning. Not designed as a conference software, it provides the users with class-like rooms and a chat per room. As a tool for teachers I felt it was very complete. As a tool for an online event, I felt things missing. It felt like many disconnected talks, not like a full schedule.&lt;/p&gt;

&lt;h4&gt;
  
  
  Venueless
&lt;/h4&gt;

&lt;p&gt;From the creators of &lt;a href="http://pretix.eu/" rel="noopener noreferrer"&gt;Pretix&lt;/a&gt; and &lt;a href="http://pretalx.org" rel="noopener noreferrer"&gt;Pretalx&lt;/a&gt;, &lt;a href="http://venueless.org/" rel="noopener noreferrer"&gt;Venueless&lt;/a&gt; aspires to be the FOSS option for conferences. It is built on top of Big Blue Button, offering a wrapper that should make the experience more “conference-like”. I have to confess I haven’t tried it yet, but seeing how Pretix and Pretalx work, I’m confident this is going to be a big one. If you are considering using BBB, you should take a look at this one. If only because it provides features on top of it.&lt;/p&gt;

&lt;p&gt;Both options have the huge advantage of being FOSS, which means that if you don’t like or need something in particular, you can dedicate your own developers to contributing it. Even if you use third-party hosting, with proper care your contributions will make it into the main release, so any third-party hosting will offer them to you.&lt;/p&gt;

&lt;h4&gt;
  
  
  Hopin
&lt;/h4&gt;

&lt;p&gt;&lt;a href="http://Hopin.to" rel="noopener noreferrer"&gt;Hopin.to&lt;/a&gt; was used on ApacheCON. It is not FOSS, which means you are completely dependent on what the company behind it wants to implement. For me, that’s a big red flag, specially on online events, which is something the software stacks are not yet mature enough. You cannot be confident yet that a restricted licensed software will be mature enough to provide you with everything you need.&lt;/p&gt;

&lt;p&gt;On the social side, Hopin offered fancy features like matching you with random attendees, mimicking what a coffee break would be, or offering specific booth rooms so you can talk with sponsors.&lt;/p&gt;

&lt;p&gt;Truth is, I am usually very social at face-to-face events: I talk with random people and visit the booths. But nothing was &lt;em&gt;calling&lt;/em&gt; me to use these features in the online version of this conference. It’s not as if you see someone you want to talk to during the coffee break and just approach them. It feels forced to me. The risk of being matched with a complete stranger with nothing in common and having an uncomfortable conversation was high.&lt;/p&gt;

&lt;h2&gt;
  
  
  Online Events Experiences
&lt;/h2&gt;

&lt;p&gt;I’m sure all of you have some experience with online events by now. Let me share some of what I have experienced and why I still think online events can’t replace face-to-face ones.&lt;/p&gt;

&lt;h3&gt;
  
  
  EsLibre experience
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://eslib.re/2020/" rel="noopener noreferrer"&gt;EsLibre&lt;/a&gt; used Big Blue Button as the main conference platform, with &lt;a href="https://rocket.chat/" rel="noopener noreferrer"&gt;RocketChat&lt;/a&gt; as the sidechat and &lt;a href="https://hubs.mozilla.com/" rel="noopener noreferrer"&gt;MozillaHub&lt;/a&gt; for the Gala Dinner and social “after beers”. The experience was pretty good… except everything was disconnected, as we were using three different components completely decoupled.&lt;/p&gt;

&lt;p&gt;And still, it was not the same as face to face.&lt;/p&gt;

&lt;h3&gt;
  
  
  ApacheCON experience
&lt;/h3&gt;

&lt;p&gt;ApacheCON used Hopin. It was in general a good experience, well organized, and &lt;a href="https://blogs.apache.org/conferences/entry/apachecon-home-2020-was-a" rel="noopener noreferrer"&gt;we had more attendees than ever.&lt;/a&gt; But that may be misleading: most of the almost six thousand registered attendees logged in at some point in the event, but you rarely saw more than a thousand connected at the same time.&lt;/p&gt;

&lt;p&gt;We used the “three timezone regions” approach, which in the end left us with three different groups of attendees that didn’t mingle with each other.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Online Events can’t match Social Face to Face Events&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;At least, not for me, not with the technology and the social capabilities we have right now.&lt;/p&gt;

&lt;p&gt;I really, really hope this post ages badly. Because there are a lot of people who can’t travel to face-to-face events, and they are missing all the awesomeness of a full event experience.&lt;/p&gt;

&lt;p&gt;And, truth be told, having everyone travel several times per year to different conferences is not ecological, nor sustainable in the long term.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://delawen.com/2020/10/are-online-events-the-new-normal/" rel="noopener noreferrer"&gt;Are online events the new normal?&lt;/a&gt; appeared first on &lt;a href="https://delawen.com" rel="noopener noreferrer"&gt;my personal blog&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>events</category>
      <category>opensource</category>
    </item>
    <item>
      <title>What’s all this integration agitation, anyway?</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Tue, 06 Oct 2020 18:05:00 +0000</pubDate>
      <link>https://dev.to/delawen/what-s-all-this-integration-fuss-anyway-35pg</link>
      <guid>https://dev.to/delawen/what-s-all-this-integration-fuss-anyway-35pg</guid>
      <description>&lt;p&gt;You may have heard any or all of these keywords before: middleware, integration, orchestration. And you may be wondering why and when to use them. Take a walk with me to understand when and how integration frameworks are useful.&lt;/p&gt;

&lt;p&gt;Imagine you are in charge of meeting a new need of your company. There is no complete software stack for what you need, so you will have to involve your team to create something new. Even if you reuse some components, you have to make them interact and talk to each other.&lt;/p&gt;

&lt;p&gt;You are an experienced software engineer and have previously solved many of the requirements with components you are already familiar with. But now you have to orchestrate all these components together and make them work like clockwork. Now you need a proper &lt;em&gt;integration&lt;/em&gt;: you want all of them to cooperate smoothly in your architecture.&lt;/p&gt;

&lt;p&gt;The first thing any good developer thinks about is building custom software that acts as the glue between all these components. Maybe adding some fancy extra functionality. And (why not?), as you are at the beginning of an exciting new project, you probably want to try all those new technologies you have been reading and hearing about. Whatever the fad buzzword is now, you are willing to try it.&lt;/p&gt;

&lt;p&gt;Although this may be appealing, your inner experienced engineer tells you to stop. There’s &lt;a rel="noreferrer noopener nofollow" href="https://www.freecodecamp.org/news/what-is-middleware-with-example-use-cases/"&gt;something else&lt;/a&gt; you have read about, these &lt;a rel="noreferrer noopener nofollow" href="https://dzone.com/articles/when-use-apache-camel"&gt;integration frameworks&lt;/a&gt;. Could they be useful here?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Integration Paradigm
&lt;/h2&gt;

&lt;p&gt;As much as we would like to start a clean new project from scratch and throw all our ingenuity into it, we shouldn’t reinvent the wheel. Let’s take a look at what this middleware or integration software is.&lt;/p&gt;

&lt;p&gt;Middleware, or integration software, can help us orchestrate and automate the interaction between different applications, APIs, third party services or any other software piece we may have to connect.&lt;/p&gt;

&lt;p&gt;A proper integration tool should provide us with the following features: transformation, integration patterns and connectors to existing protocols and components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Transformations
&lt;/h3&gt;

&lt;p&gt;When we connect different components of an architecture, they rarely speak the same languages or, in this case, data formats. Some components will output XML that has to be fed to the following component in JSON form. Maybe we even need to add or remove some attributes in that JSON data.&lt;/p&gt;

&lt;p&gt;We need an easy way to transform the data traveling from one component to the next so it fits properly.&lt;/p&gt;

&lt;p&gt;If we want to do this with our own script, there are many libraries that can help us do this, like &lt;a href="https://github.com/FasterXML/jackson" rel="noreferrer noopener"&gt;Jackson&lt;/a&gt; or &lt;a href="https://docs.python.org/3/library/xml.html" rel="noreferrer noopener"&gt;Python’s built-in XML libraries&lt;/a&gt;. We even have the &lt;a href="https://www.w3schools.com/xml/xsl_client.asp" rel="noreferrer noopener"&gt;XSLT&lt;/a&gt; language to transform XML. But to use any of these properly, we first have to learn how to use them. And any code we write will have to be maintained and upgraded.&lt;/p&gt;

&lt;p&gt;An integration framework lets us define the mapping between the output of one component and the input of the next, so we can forget about the explicit implementation. The less code we have to maintain, the better.&lt;/p&gt;
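&lt;p&gt;To make the trade-off concrete, here is a minimal sketch of what a hand-rolled transformation looks like in plain Java, using only the JDK’s built-in DOM parser (the element names are hypothetical). Every renamed field or new attribute means editing and re-testing this code, which is exactly the maintenance burden an integration framework takes off our hands.&lt;/p&gt;

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class CatXmlToJson {

    // Hand-rolled XML to JSON transformation: extract two fields
    // from an XML payload and emit them as a JSON object.
    public static String transform(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        String name = doc.getElementsByTagName("name").item(0).getTextContent();
        String image = doc.getElementsByTagName("image").item(0).getTextContent();
        // No escaping and no error handling: real code would need both.
        return String.format("{\"name\": \"%s\", \"image\": \"%s\"}", name, image);
    }

    public static void main(String[] args) throws Exception {
        // prints {"name": "Whiskers", "image": "cat.jpg"}
        System.out.println(transform(
            "<cat><name>Whiskers</name><image>cat.jpg</image></cat>"));
    }
}
```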

&lt;h3&gt;
  
  
  Enterprise Integration Patterns
&lt;/h3&gt;

&lt;p&gt;Not all workflows in the architecture will be linear. Some steps will require broadcasting, some will require conditional branching, and some will require waiting for the output of several components to conflate their data. These interaction patterns have been studied for a long time. And, as with software design patterns, you can classify and study them to create better integrations.&lt;/p&gt;

&lt;p&gt;You can find all of these patterns clearly explained in the classic &lt;a rel="noreferrer noopener" href="https://www.enterpriseintegrationpatterns.com/"&gt;Enterprise Integration Patterns&lt;/a&gt; book.&lt;/p&gt;
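&lt;p&gt;As a flavour of what these patterns boil down to, here is a minimal, framework-free sketch of one of them, the content-based router: each message goes to the first destination whose predicate matches it, and unmatched messages fall through to a dead letter channel. This is only an illustration of the pattern; the class and method names are made up.&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class ContentBasedRouter {
    private final List<Predicate<String>> predicates = new ArrayList<>();
    private final List<List<String>> destinations = new ArrayList<>();
    private final List<String> deadLetter = new ArrayList<>();

    // Register a route: messages matching the predicate go to the
    // returned destination list.
    public List<String> when(Predicate<String> predicate) {
        List<String> destination = new ArrayList<>();
        predicates.add(predicate);
        destinations.add(destination);
        return destination;
    }

    // Dispatch a message to the first matching destination,
    // or to the dead letter channel if nothing matches.
    public void route(String message) {
        for (int i = 0; i < predicates.size(); i++) {
            if (predicates.get(i).test(message)) {
                destinations.get(i).add(message);
                return;
            }
        }
        deadLetter.add(message);
    }

    public List<String> deadLetter() {
        return deadLetter;
    }
}
```

&lt;p&gt;An integration framework gives you this (and splitters, aggregators, dead letter channels, and so on) already implemented, tested and composable, instead of each team writing its own version.&lt;/p&gt;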

&lt;h3&gt;
  
  
  Connectors
&lt;/h3&gt;

&lt;p&gt;All of the above is useless if we can’t connect to (and from) the specific component we need. &lt;/p&gt;

&lt;p&gt;Our ideal integration framework should support common protocols like FTP, HTTP or JDBC. It should also be able to connect to common components like mail servers, messaging services or Atom feeds. We could even claim that no integration tool is good unless it also supports specific well-known services, like sending a message through a Telegram bot or storing information in Elasticsearch.&lt;/p&gt;

&lt;p&gt;Being able to seamlessly connect from one component to the next without having to worry about the specifics of their interfaces is what distinguishes an average integration tool from a good integration tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  Apache Camel
&lt;/h2&gt;

&lt;p&gt;Let’s talk about something less abstract. At this point you may be wondering where you can find a good integration framework.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://camel.apache.org/" rel="noreferrer noopener"&gt;Apache Camel&lt;/a&gt; is not only one of the most active projects inside the Apache Software Foundation, it is also the lightest and most complete integration framework available. And on top of it, it is also &lt;strong&gt;Free and Open Source Software&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;Camel is a veteran of the integration world. It has support for hundreds of components, protocols and formats. Some of these come in very handy, allowing the user, for example, to connect to any REST API they need. &lt;/p&gt;
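&lt;p&gt;For comparison, this is roughly the plumbing you would write by hand with the JDK’s own HTTP client just to prepare a call to one REST API, before even dealing with retries, serialization or error handling. In a Camel route the same call collapses into a single .to(...) step, as we will see in the example below.&lt;/p&gt;

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.time.Duration;

public class ManualRestCall {

    // Build (but do not send) a GET request to a public REST API.
    // Actually sending it is one more step:
    //   client.send(request, HttpResponse.BodyHandlers.ofString())
    public static HttpRequest buildRequest() {
        return HttpRequest.newBuilder(
                URI.create("https://api.thecatapi.com/v1/images/search"))
            .timeout(Duration.ofSeconds(10))
            .header("Accept", "application/json")
            .GET()
            .build();
    }

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        System.out.println(buildRequest().uri());
    }
}
```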

&lt;p&gt;Camel has its own DSL, a simplified language to easily define workflows step by step. &lt;/p&gt;

&lt;h3&gt;
  
  
  Camel-K
&lt;/h3&gt;

&lt;p&gt;Camel is also available on Knative. This means we can use it in a serverless environment, making sure the orchestration between services runs and scales properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Camel K Orchestration Example
&lt;/h2&gt;

&lt;p&gt;This example demonstrates how to orchestrate integrations using Camel-K and &lt;a href="https://kafka.apache.org/" rel="noreferrer noopener"&gt;Kafka&lt;/a&gt; as a messaging service. We are going to implement two integrations that interact through a database to simulate how cat adoptions work. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/openshift-integration/camel-k-example-orchestration" rel="noreferrer noopener"&gt;The full example can be found on Github.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/openshift-integration/camel-k-example-orchestration/blob/master/images/flux_diagram.svg" rel="noreferrer noopener"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fopenshift-integration%2Fcamel-k-example-orchestration%2Fraw%2Fmaster%2Fimages%2Fflux_diagram.svg" alt="Flux diagram" width="800" height="400"&gt;&lt;/a&gt;Two integration workflows that simulate how cat adoptions work&lt;/p&gt;

&lt;p&gt;One integration will store cats coming from Kafka to the database waiting for a person to adopt them. The second integration will receive people interested in adopting and will match cats with them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cat Input from Kafka to Database
&lt;/h3&gt;

&lt;p&gt;First, we are going to implement storing incoming cat messages in the database. &lt;/p&gt;

&lt;p&gt;As you can see, the Camel DSL is very intuitive: this integration listens to the proper Kafka broker and, for every message that arrives, unmarshals the JSON to extract the data and pushes it to the database. The Cat class is just a simple bean with getters and setters for its attributes.&lt;/p&gt;

&lt;pre&gt;
// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;
import model.Cat;

public class CatInput extends RouteBuilder {
  &lt;a class="mentioned-user" href="https://dev.to/override"&gt;@override&lt;/a&gt;
  public void configure() throws Exception {

    //Listen to kafka cat broker
    from("kafka:cat?brokers=my-cluster-kafka-bootstrap:9092")
    .log("Message received from Kafka : ${body}")
    .unmarshal().json(JsonLibrary.Gson, Cat.class)
  
    //Store it on the database with a null person
    .setBody().simple("INSERT INTO cat (name, image) VALUES ('${body.name}', '${body.image}')")
    .to("jdbc:postgresBean")
  
    //Write some log to know it finishes properly
    .log("Cat stored.");
  }
}
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Person Input from Kafka to Adopt
&lt;/h3&gt;

&lt;p&gt;Now we are going to implement the reception of people wanting to adopt a cat.&lt;/p&gt;

&lt;p&gt;This integration is a bit more complex, as we are going to introduce a conditional choice: if there is a cat available in the database, it will be assigned to the person. If there is no cat (otherwise), a message will be returned saying no cat is available.&lt;/p&gt;

&lt;pre&gt;
// camel-k: language=java

import org.apache.camel.builder.RouteBuilder;

public class PersonInput extends RouteBuilder {
  &lt;a class="mentioned-user" href="https://dev.to/override"&gt;@override&lt;/a&gt;
  public void configure() throws Exception {
    //Listen to kafka person broker
    from("kafka:person?brokers=my-cluster-kafka-bootstrap:9092")
    .log("Message received from Kafka : ${body}")
    .log("${body} wants to adopt a cat")
    
    //Store the name of the person
    .setProperty("person", simple("${body}"))
    
    //Search for a lonely cat
    .log("...looking for available cats...")
    .setBody().simple("SELECT id, name, image FROM cat WHERE person is NULL LIMIT 1;")
    .to("jdbc:postgresBean")
    
    .choice()
      .when(header("CamelJdbcRowCount").isGreaterThanOrEqualTo(1))
        .setProperty("catname", simple("${body[0][name]}"))
        .setProperty("catimage", simple("${body[0][image]}"))
        .setProperty("catid", simple("${body[0][id]}"))
        .log("Cat found called ${exchangeProperty.catname} with ID ${exchangeProperty.catid}")
        //There's a cat available, adopt it!
        .setBody().simple("UPDATE cat SET person='${exchangeProperty.person}' WHERE id=${exchangeProperty.catid}")
        .to("jdbc:postgresBean")
  
        //Write some log to know it finishes properly
        .setBody().simple("Congratulations! ${exchangeProperty.catname} adopted ${exchangeProperty.person}. See how happy they are at ${exchangeProperty.catimage}.")
        .to("log:info")
      .otherwise()
        //Write some log to know it finishes properly
        .setBody().simple("We are sorry, there's no cat looking for a family at this moment.")
        .to("log:info")
    .end();
  }
}
&lt;/pre&gt;

&lt;h3&gt;
  
  
  Feeding data automatically
&lt;/h3&gt;

&lt;p&gt;As an extra step in this exercise, we are going to implement a final job that sends random new cat data to the Kafka “cat” topic on a timer.&lt;/p&gt;

&lt;p&gt;The complexity in this class is not on the Camel side, but in the random generator of cat names.&lt;/p&gt;

&lt;pre&gt;
// camel-k: language=java dependency=camel:gson

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class AutoCat extends RouteBuilder {
  &lt;a class="mentioned-user" href="https://dev.to/override"&gt;@override&lt;/a&gt;
  public void configure() throws Exception {

    // Processor that builds the cat payload: a random name plus the image URL
    Processor processor = new Processor() {

      String[] title = new String[] { "", "Lady", "Princess", "Mighty", "Your Highness", "Little", "Purry", "Empress", "Doctor", "Professor" };
      String[] firstname = new String[] { "Dewey", "Butter", "Merlin", "Epiphany", "Blasfemy", "Metaphor", "Fuzzy",
          "Whity", "Astro", "Salty", "Smol", "Whiskers", "Scully" };
      String[] lastname = new String[] { "", "Luna", "Wild", "Dragonis", "Firefly", "Puff", "Purrcy", "Priss",
          "Catsie" };

      Random r = new Random();

      &lt;a class="mentioned-user" href="https://dev.to/override"&gt;@override&lt;/a&gt;
      public void process(Exchange exchange) throws Exception {
        Map&amp;lt;String, String&amp;gt; map = new HashMap&amp;lt;String, String&amp;gt;();
        map.put("image", exchange.getProperty("catimage").toString());

        StringBuilder name = new StringBuilder();
        name.append(title[r.nextInt(title.length)]);
        name.append(" ");
        name.append(firstname[r.nextInt(firstname.length)]);
        name.append(" ");
        name.append(lastname[r.nextInt(lastname.length)]);

        exchange.setProperty("catname", name.toString());
        map.put("name", name.toString().trim());

        exchange.getMessage().setBody(map);

      }

    };

    // Fire every 10 seconds
    from("timer:java?period=10s")
      
      // Take a random image
      .to("https://api.thecatapi.com/v1/images/search")
      .unmarshal().json(JsonLibrary.Gson)
      .log("A new cat arrived today ${body[0][url]}")
      .setProperty("catimage", simple("${body[0][url]}"))

      // name cat and prepare json
      .process(processor)
      .log("${body}")
      .marshal().json(JsonLibrary.Gson)
      .log("We named them ${exchangeProperty.catname}")

      // Send it to Kafka cat broker
      .to("kafka:cat?brokers=my-cluster-kafka-bootstrap:9092")

      // Write some log to know it finishes properly
      .log("Cat is looking for a family.");

  }
}
&lt;/pre&gt;

&lt;p&gt;Now you are ready to implement your own orchestrations with Kafka and Camel K.&lt;/p&gt;

&lt;p&gt;The post &lt;a rel="nofollow noopener noreferrer" href="http://delawen.com/2020/10/what-is-integration/"&gt;What’s all this integration agitation, anyway?&lt;/a&gt; appeared first on &lt;a rel="nofollow noopener noreferrer" href="http://delawen.com"&gt;Délawen's blog&lt;/a&gt; written by &lt;a rel="nofollow noopener noreferrer" href="http://delawen.com/author/delawen/"&gt;María Arias de Reyna Domínguez&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>freedom</category>
      <category>integration</category>
    </item>
    <item>
      <title>From Cat to Hat</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Fri, 19 Jul 2019 10:54:39 +0000</pubDate>
      <link>https://dev.to/delawen/from-cat-to-hat-468o</link>
      <guid>https://dev.to/delawen/from-cat-to-hat-468o</guid>
      <description>&lt;p&gt;On June I joined Red Hat as a Senior Software Engineer. I always said that I preferred to work on a small company before a big company, because on a big company you can’t be anything more than a number. That you can’t really grow professionally if you are such a small piece on a … &lt;a href="http://delawen.com/2019/07/from-cat-to-hat/" rel="noopener noreferrer"&gt;Continue reading "From Cat to Hat"&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://delawen.com/2019/07/from-cat-to-hat/" rel="noopener noreferrer"&gt;From Cat to Hat&lt;/a&gt; appeared first on &lt;a href="http://delawen.com" rel="noopener noreferrer"&gt;Délawen&lt;/a&gt; written by &lt;a href="http://delawen.com/author/delawen/" rel="noopener noreferrer"&gt;María Arias de Reyna Domínguez&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>communities</category>
      <category>freelibreopensources</category>
      <category>redhat</category>
    </item>
    <item>
      <title>I want to fix the diversity on my event, help me!</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Sun, 07 Jul 2019 19:34:16 +0000</pubDate>
      <link>https://dev.to/delawen/i-want-to-fix-the-diversity-on-my-event-help-me-2i5f</link>
      <guid>https://dev.to/delawen/i-want-to-fix-the-diversity-on-my-event-help-me-2i5f</guid>
      <description>&lt;p&gt;If someone linked you this post is probable you are organizing an event where diversity and inclusivity is an issue and they want to help you fix that. If you want, you can jump to the subsection that better adjust to your case. Remember: diversity is not a TL;DR, you probably need to read the … &lt;a href="http://delawen.com/2019/07/i-want-to-fix-the-diversity-on-my-event-help-me/" rel="noopener noreferrer"&gt;Continue reading "I want to fix the diversity on my event, help me!"&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://delawen.com/2019/07/i-want-to-fix-the-diversity-on-my-event-help-me/" rel="noopener noreferrer"&gt;I want to fix the diversity on my event, help me!&lt;/a&gt; appeared first on &lt;a href="http://delawen.com" rel="noopener noreferrer"&gt;Délawen&lt;/a&gt; written by &lt;a href="http://delawen.com/author/delawen/" rel="noopener noreferrer"&gt;María Arias de Reyna Domínguez&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>communities</category>
      <category>feminism</category>
      <category>freedom</category>
      <category>event</category>
    </item>
    <item>
      <title>How can I get a diverse and inclusive team?</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Wed, 05 Jun 2019 17:25:47 +0000</pubDate>
      <link>https://dev.to/delawen/how-can-i-get-a-diverse-and-inclusive-team-2gak</link>
      <guid>https://dev.to/delawen/how-can-i-get-a-diverse-and-inclusive-team-2gak</guid>
      <description>&lt;p&gt;TL;DR: If you are really interested on improving the inclusivity and diversity on your team, you need more than a TL;DR&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://delawen.com/2019/06/how-can-i-get-a-diverse-team/" rel="noopener noreferrer"&gt;How can I get a diverse and inclusive team?&lt;/a&gt; appeared first on &lt;a href="http://delawen.com" rel="noopener noreferrer"&gt;Délawen&lt;/a&gt; written by &lt;a href="http://delawen.com/author/delawen/" rel="noopener noreferrer"&gt;María Arias de Reyna Domínguez&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>communities</category>
      <category>feminism</category>
      <category>event</category>
    </item>
    <item>
      <title>Why do we need diversity on our teams?</title>
      <dc:creator>María Arias de Reyna</dc:creator>
      <pubDate>Mon, 07 May 2018 21:09:45 +0000</pubDate>
      <link>https://dev.to/delawen/why-do-we-need-diversity-on-our-teams-1e4d</link>
      <guid>https://dev.to/delawen/why-do-we-need-diversity-on-our-teams-1e4d</guid>
      <description>&lt;p&gt;Let’s overlook the statistical justice of having a diverse team. Let’s overlook the social justice of working on a team with high diversity. Go directly to why you, someone interested in having business at the end of the year, are also interested in having a diverse team… &lt;a href="http://delawen.com/2018/05/why-do-we-need-diversity-on-our-teams/" rel="noopener noreferrer"&gt;Continue reading "Why do we need diversity on our teams?"&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="http://delawen.com/2018/05/why-do-we-need-diversity-on-our-teams/" rel="noopener noreferrer"&gt;Why do we need diversity on our teams?&lt;/a&gt; appeared first on &lt;a href="http://delawen.com" rel="noopener noreferrer"&gt;Délawen&lt;/a&gt; written by &lt;a href="http://delawen.com/author/delawen/" rel="noopener noreferrer"&gt;María Arias de Reyna Domínguez&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>feminism</category>
    </item>
  </channel>
</rss>
