<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MY.GAMES</title>
    <description>The latest articles on DEV Community by MY.GAMES (@mygames).</description>
    <link>https://dev.to/mygames</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1104393%2Fc62cdc70-c17f-4973-91b9-ee6bc71a068d.jpg</url>
      <title>DEV Community: MY.GAMES</title>
      <link>https://dev.to/mygames</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mygames"/>
    <language>en</language>
    <item>
      <title>Optimizing the Ever-Growing Balance in the War Robots Project</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Tue, 16 Sep 2025 10:00:10 +0000</pubDate>
      <link>https://dev.to/mygames/optimizing-the-ever-growing-balance-in-the-war-robots-project-18p</link>
      <guid>https://dev.to/mygames/optimizing-the-ever-growing-balance-in-the-war-robots-project-18p</guid>
      <description>&lt;p&gt;Hello! My name is Sergey Kachan, and I’m a client developer on the War Robots project.&lt;/p&gt;

&lt;p&gt;War Robots has been around for many years, and during this time the game has accumulated a huge variety of content: robots, weapons, drones, titans, pilots, and so on. And for all of this to work, we need to store a large amount of different types of information. This information is stored in “balances.”&lt;/p&gt;

&lt;p&gt;Today I’m going to talk about how balances are structured in our project, what’s happened to them over the past 11 years, and how we’ve dealt with it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Balances in the Project
&lt;/h2&gt;

&lt;p&gt;Like any other project, War Robots can be divided into two parts: meta and core gameplay.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meta gameplay (metagaming)&lt;/strong&gt; is any activity that goes beyond the core game loop but still affects the gameplay. This includes purchasing and upgrading game content, as well as participating in social or event activities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core gameplay (core gameplay loop)&lt;/strong&gt; is the main repeating cycle of actions that the player performs in the game to achieve their goals. In our case, it’s robot battles on specific maps.&lt;/p&gt;

&lt;p&gt;Each part of the project needs its own balance, so we also split balances into two categories — meta and core.&lt;/p&gt;

&lt;p&gt;War Robots also has so-called &lt;strong&gt;Skirmish modes&lt;/strong&gt;, which require their own separate balances.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;Skirmish mode&lt;/strong&gt; is a modification of existing modes or maps with different characteristics or rules. Skirmish modes are often event-based, available to players during various holidays, mainly for fun. For example, players might be able to kill each other with a single shot or move around in zero gravity.&lt;/p&gt;

&lt;p&gt;So, in total, we have four balances: a meta and a core balance for the default modes, and another meta/core pair for the Skirmish modes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxw9du8uw36332zimnyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxw9du8uw36332zimnyr.png" alt=" " width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over 11 years, War Robots has accumulated a ton of awesome content:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;95 robots&lt;/li&gt;
&lt;li&gt;21 titans&lt;/li&gt;
&lt;li&gt;175 different weapons&lt;/li&gt;
&lt;li&gt;40 drones&lt;/li&gt;
&lt;li&gt;16 motherships&lt;/li&gt;
&lt;li&gt;a huge number of skins, remodels, modules, pilots, turrets, ultimate versions of content, and maps&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And as you can imagine, to make all of this work we need to store information about behavior, stats, availability, prices, and much, much more.&lt;/p&gt;

&lt;p&gt;As a result, our balances have grown to an indecent size:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8gevxg0xki73q0ib5iv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg8gevxg0xki73q0ib5iv.png" alt=" " width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After some quick calculations, we found that a player would need to download &lt;strong&gt;44.6 MB&lt;/strong&gt; of balance data. That’s quite a lot!&lt;/p&gt;

&lt;p&gt;We really didn’t want to force players to download such large amounts of data every time a balance changed. And distributing that much data via CDN isn’t exactly cheap either.&lt;/p&gt;

&lt;p&gt;Just to remind you: War Robots has reached &lt;strong&gt;300 million registered users&lt;/strong&gt;. In 2024, our monthly active audience was &lt;strong&gt;4.7 million people&lt;/strong&gt;, and &lt;strong&gt;690 thousand players&lt;/strong&gt; logged in every day.&lt;/p&gt;

&lt;p&gt;Now imagine the amount of data. A lot, right? We thought so too. So, we decided to do everything we could to cut down the size of our balances!&lt;/p&gt;

&lt;h2&gt;
  
  
  Hunting Down the Problem
&lt;/h2&gt;

&lt;p&gt;The first step was to analyze the balances and try to figure out: &lt;em&gt;“What’s taking up so much space?”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Manually going through everything was the last thing we wanted to do — it would’ve taken ages. So, we wrote a set of tools that collected and aggregated all the information we needed about the balances.&lt;/p&gt;

&lt;p&gt;The tool would take a balance file as input and, using reflection, iterate through all the structures, gathering data on what types of information we stored and how much space each one occupied.&lt;/p&gt;

&lt;p&gt;The results were discouraging:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Meta Balance:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5n118wrzh0v2hw6lfc6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv5n118wrzh0v2hw6lfc6.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Balance:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj079hh2v8ak1kdntnea7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj079hh2v8ak1kdntnea7.png" alt=" " width="800" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After analyzing the situation, we realized that &lt;strong&gt;strings were taking up far too much space&lt;/strong&gt;, and something had to be done about it.&lt;/p&gt;

&lt;p&gt;So, we built another tool. This one scanned the balance file and generated a map of all the strings along with the number of times each one was duplicated.&lt;/p&gt;
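&lt;p&gt;Conceptually, the counting part of that tool is very simple: walk every string the balance contains and tally occurrences. A simplified sketch (the class and method names here are ours for illustration; the real tool gathers the strings via reflection first):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static class BalanceStringScanner
{
  // Builds a map of string -&amp;gt; number of occurrences across the balance.
  public static Dictionary&amp;lt;string, int&amp;gt; CountDuplicates(IEnumerable&amp;lt;string&amp;gt; allStrings)
  {
    var counts = new Dictionary&amp;lt;string, int&amp;gt;();
    foreach (var s in allStrings)
      counts[s] = counts.TryGetValue(s, out var n) ? n + 1 : 1;
    return counts;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Sorting that map by count immediately shows which strings are the worst offenders.&lt;/p&gt;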

&lt;p&gt;The results weren’t encouraging either. Some strings were repeated &lt;strong&gt;tens of thousands of times!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We had found the problem. Now the question was: &lt;em&gt;how do we fix it?&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing the Balances
&lt;/h2&gt;

&lt;p&gt;For obvious reasons, we couldn’t just get rid of strings altogether. Strings are used for things like localization keys and various IDs. But what we could do was eliminate the &lt;strong&gt;duplication&lt;/strong&gt; of strings.&lt;/p&gt;

&lt;p&gt;The idea was as simple as it gets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a list of &lt;strong&gt;unique strings&lt;/strong&gt; for each balance (essentially, a dedicated storage).&lt;/li&gt;
&lt;li&gt;Send this list along with the data.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class BalanceMessage
{
  public BalanceMessageData Data;
  public StringStorage Storage;
  public string Version;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;StringStorage is essentially a wrapper around a list of strings. When we build the string storage, each balance structure remembers the index of the string it needs. Later, when retrieving data, we just pass the index and quickly get the value.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class StringStorage
{
   public List&amp;lt;string&amp;gt; Values;
   public string GetValue(StringIdx id) =&amp;gt; Values[id];
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
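&lt;p&gt;Building the storage is where the deduplication actually happens: each string is interned exactly once, and callers get back its index. A minimal sketch of how this might look, building on the StringStorage and StringIdx types shown in this article (the StringStorageBuilder name is hypothetical, not our actual class):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class StringStorageBuilder
{
  private readonly List&amp;lt;string&amp;gt; _values = new List&amp;lt;string&amp;gt;();
  private readonly Dictionary&amp;lt;string, int&amp;gt; _indexByValue = new Dictionary&amp;lt;string, int&amp;gt;();

  // Returns the index of an already-interned string, or appends a new one.
  public StringIdx GetOrAdd(string value)
  {
    if (!_indexByValue.TryGetValue(value, out var index))
    {
      index = _values.Count;
      _values.Add(value);
      _indexByValue.Add(value, index);
    }
    return new StringIdx(index);
  }

  public StringStorage Build() =&amp;gt; new StringStorage { Values = _values };
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;No matter how many balance entries reference the same localization key, it is stored and transmitted exactly once.&lt;/p&gt;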



&lt;p&gt;Instead of passing the strings themselves inside the balance structures, we began passing the index of where the string is stored in the string storage.&lt;/p&gt;

&lt;p&gt;Before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SomeBalanceMessage
{
  public string Id;
  public string Name;
  public int Amount;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SomeBalanceMessageV2
{
  public StringIdx Id;
  public StringIdx Name;
  public int Amount;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;StringIdx is basically just a wrapper around an int. This way, we completely eliminated direct string transfers inside the balance structures.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public readonly struct StringIdx : IEquatable&amp;lt;StringIdx&amp;gt;
{
  private readonly int _id;


  internal StringIdx(int value) {_id = value; }


  public static implicit operator int(StringIdx value) =&amp;gt; value._id;


 public bool Equals(StringIdx other) =&amp;gt; _id == other._id;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach alone cut the number of transmitted strings by a factor of tens.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvoqsf4i4fbzvl42odxl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvoqsf4i4fbzvl42odxl.png" alt=" " width="800" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not bad, right?&lt;/p&gt;

&lt;p&gt;But that was just the beginning — we didn’t stop there.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reworking the Data Protocol
&lt;/h2&gt;

&lt;p&gt;For transmitting and processing balance structures, we had been using &lt;strong&gt;MessagePack&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;MessagePack is a binary data serialization format designed as a more compact and faster alternative to JSON. It’s meant for efficient data exchange between applications or services, allowing a significant reduction in data size — especially useful where performance and bandwidth matter.&lt;/p&gt;

&lt;p&gt;Initially, we used MessagePack in its JSON-like map mode, where every field is serialized together with a &lt;strong&gt;string key&lt;/strong&gt;. That’s certainly convenient, but also quite space-consuming. So we decided to sacrifice some flexibility and switch to compact array serialization with &lt;strong&gt;integer keys&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Before:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SomeBalanceMessage
{
  [Key("id")]
  public string Id;

  [Key("name")]
  public string Name;

  [Key("amount")]
  public int Amount;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class SomeBalanceMessageV2
{
  [Key(0)]
  public StringIdx Id;

  [Key(1)]
  public StringIdx Name;

  [Key(2)]
  public int Amount;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also stopped sending empty collections — we now transmit null values instead. This reduced both the overall data size and the time required for serialization and deserialization.&lt;/p&gt;
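&lt;p&gt;That empty-collection trick boils down to a small normalization pass around serialization. A hedged sketch (the helper names are ours for illustration, not the actual project code):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static class BalancePacking
{
  // Before serialization: empty collections are replaced with null,
  // so nothing is allocated or encoded for them.
  public static List&amp;lt;T&amp;gt; Pack&amp;lt;T&amp;gt;(List&amp;lt;T&amp;gt; list) =&amp;gt;
    (list == null || list.Count == 0) ? null : list;

  // After deserialization: readers that expect a list can normalize back.
  public static List&amp;lt;T&amp;gt; Unpack&amp;lt;T&amp;gt;(List&amp;lt;T&amp;gt; list) =&amp;gt;
    list ?? new List&amp;lt;T&amp;gt;();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Across tens of thousands of balance entries, skipping empty collections adds up to measurable savings in both bytes and allocations.&lt;/p&gt;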

&lt;h2&gt;
  
  
  Testing the Changes
&lt;/h2&gt;

&lt;p&gt;A golden rule of good development (and one that will save you a lot of nerves) is to always implement new features in a way that lets you quickly roll them back if something goes wrong. For that reason, we add all new features behind “toggles.” To make this work, we had to support two versions of balances at the same time: the old one and the optimized one.&lt;/p&gt;

&lt;p&gt;During development, we needed to make sure that all data was transferred correctly. Old and new balances — regardless of format or structure — had to produce the exact same values. And remember, the optimized balances had changed their structure drastically, but that wasn’t supposed to affect anything except their size.&lt;/p&gt;

&lt;p&gt;To achieve this, we wrote a large number of unit tests for each balance.&lt;/p&gt;

&lt;p&gt;At first, we compared all fields “head-on” — checking every single one explicitly. This worked, but it was time-consuming, and even the smallest change in the balances would break the tests, forcing us to rewrite them constantly. This slowed us down and was quite distracting.&lt;/p&gt;

&lt;p&gt;Eventually, we had enough of that and came up with a more convenient testing approach for comparing balances.&lt;/p&gt;

&lt;p&gt;Reflection came to the rescue again. We took two versions of the balance structures, e.g. SomeBalanceMessage and SomeBalanceMessageV2, and iterated over them — comparing field counts, names, and values. If anything didn’t match, we tracked down the problem. This solution saved us a huge amount of time later on.&lt;/p&gt;
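&lt;p&gt;The comparison itself can be sketched with plain reflection. This is a deliberately simplified illustration (in the real tests, StringIdx fields are first resolved back to strings through the storage before comparing; it also assumes &lt;code&gt;using System.Linq&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public static class BalanceComparer
{
  // Compares two message versions field by field, matching fields by name.
  // Returns human-readable mismatches; an empty list means the messages agree.
  public static List&amp;lt;string&amp;gt; FindMismatches(object oldMsg, object newMsg)
  {
    var mismatches = new List&amp;lt;string&amp;gt;();
    var oldFields = oldMsg.GetType().GetFields();
    var newFields = newMsg.GetType().GetFields().ToDictionary(f =&amp;gt; f.Name);

    if (oldFields.Length != newFields.Count)
      mismatches.Add("field count differs");

    foreach (var oldField in oldFields)
    {
      if (!newFields.TryGetValue(oldField.Name, out var newField))
      {
        mismatches.Add($"missing field: {oldField.Name}");
        continue;
      }
      if (!Equals(oldField.GetValue(oldMsg), newField.GetValue(newMsg)))
        mismatches.Add($"value differs: {oldField.Name}");
    }
    return mismatches;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;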

&lt;h2&gt;
  
  
  Optimization Results
&lt;/h2&gt;

&lt;p&gt;Thanks to these optimizations, we managed to reduce both the size of the files transmitted over the network and the time it takes to deserialize them on the client. We also decreased the amount of memory required on the client side after balance deserialization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;File Size&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtcxpkj728lnm287p9q4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjtcxpkj728lnm287p9q4.png" alt=" " width="800" height="168"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deserialization Time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyldrtfkflm7v520okzyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyldrtfkflm7v520okzyc.png" alt=" " width="800" height="156"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data in Memory&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnmp6kgm6zvw7na4scah.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnmp6kgm6zvw7na4scah.png" alt=" " width="800" height="119"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;We were fully satisfied with the results: the balance files shrank by more than 80%, traffic went down, and the players were happy.&lt;/p&gt;

&lt;p&gt;To sum it up: be careful with the data you transmit, and don’t send anything unnecessary.&lt;/p&gt;

&lt;p&gt;Strings are best kept in a dedicated storage of unique values to avoid duplicates. And if your custom data (prices, stats, etc.) also contains a lot of repetition, try packing it into similar unique storages as well. This will save you many megabytes — and a lot of money on maintaining CDN servers.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>programming</category>
      <category>development</category>
    </item>
    <item>
      <title>Microservices: Is It Worth the Trouble?</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Mon, 18 Aug 2025 07:42:43 +0000</pubDate>
      <link>https://dev.to/mygames/microservices-is-it-worth-the-trouble-3a69</link>
      <guid>https://dev.to/mygames/microservices-is-it-worth-the-trouble-3a69</guid>
      <description>&lt;p&gt;Hi, I’m Stanislav Yablonskiy, Lead Server Developer at Pixonic (MY.GAMES). And today, let’s discuss microservices.&lt;/p&gt;

&lt;p&gt;Microservices are an approach to software development (primarily backend development) where functionality is broken down into the smallest possible components, each of which operates independently. Each component exposes its own API, may have its own database, and can be written in its own programming language; components communicate with each other over the network.&lt;/p&gt;

&lt;p&gt;Microservices are very popular nowadays, but they introduce significant overhead in terms of network, memory, and CPU. Every call turns into serialization plus sending and receiving data over the network. In addition, classic database transactions are no longer possible across services, which leads to either distributed transactions or eventual consistency. Distributed transactions are slow and expensive, while eventual consistency means the results of operations may not appear immediately, and data may temporarily be inconsistent.&lt;/p&gt;

&lt;p&gt;Using microservices forces developers to write more code in each individual service due to the difficulties of accessing already written logic from other services. Sometimes it’s hard to reuse existing code, or you might not even know it exists — since other people may be working on a different project. Let’s talk more about the overheads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservices’ Overheads
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Debug Overhead
&lt;/h3&gt;

&lt;p&gt;Debugging becomes much more difficult with microservices. A regular debugger is almost useless in such conditions since you can’t debug all services at once. Without a properly set up system of logging, tracing, and metrics, debugging is nearly impossible until the problem is localized. This means you need a special environment where not only the service being debugged is running, but also all its dependencies (other services, databases, queues, etc.).&lt;/p&gt;

&lt;h3&gt;
  
  
  HTTP Overhead
&lt;/h3&gt;

&lt;p&gt;The HTTP protocol has a lot of built-in functionality. It supports various routes, parameter-passing methods, response codes, and is supported by many ready-to-use services (including proxies). But it’s not lightweight — it forces every service to implement a lot of not-so-efficient code to parse and generate paths, headers, and so on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Protobuf Overhead
&lt;/h3&gt;

&lt;p&gt;Every message must be serialized before being sent over the network and deserialized when received.&lt;/p&gt;

&lt;p&gt;When using protobuf for message exchange, you need to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;create objects,&lt;/li&gt;
&lt;li&gt;convert them to byte arrays,&lt;/li&gt;
&lt;li&gt;and immediately discard them after use.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This creates a lot of extra work for the garbage collector or the dynamic memory manager.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network Overhead
&lt;/h3&gt;

&lt;p&gt;Transmitting data over the network increases service response time, and it drives up memory and CPU consumption even when the microservices are running on the same host.&lt;/p&gt;

&lt;h3&gt;
  
  
  Memory Overhead
&lt;/h3&gt;

&lt;p&gt;Sending and receiving messages requires maintaining additional data structures, using separate threads, and synchronizing them. Each separate process, especially one running in a container, consumes a significant amount of memory just by existing.&lt;/p&gt;

&lt;h3&gt;
  
  
  CPU Overhead
&lt;/h3&gt;

&lt;p&gt;Naturally, all this inter-process and inter-container communication requires computing resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Database Overhead
&lt;/h3&gt;

&lt;p&gt;Normal transactions are impossible when operations span multiple microservices. Distributed transactions are much slower and require complex — often manual — coordination. This increases the time cost both for development and for executing such operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network Disk Overhead
&lt;/h3&gt;

&lt;p&gt;Microservice containers are often run on network-mounted disks. This increases latency, reduces performance (IOPS), and makes both unpredictable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Borders Overhead
&lt;/h3&gt;

&lt;p&gt;Designing and developing microservices brings difficulties in evolving and refactoring a project. &lt;/p&gt;

&lt;p&gt;It’s not easy to change the responsibility zone of a service. You can’t just rename or delete something. You can’t simply move code from one service to another.&lt;/p&gt;

&lt;p&gt;This usually requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a lot of time and effort,&lt;/li&gt;
&lt;li&gt;several API versions,&lt;/li&gt;
&lt;li&gt;and complex migrations before functionality can be redistributed between services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In addition, if you want to update or replace a library, you’ll need to do it across all projects, not just one.&lt;/p&gt;

&lt;h3&gt;
  
  
  Infrastructure Overhead
&lt;/h3&gt;

&lt;p&gt;You can’t just “do microservices.” You’ll need infrastructure — no, INFRASTRUCTURE:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;containers (each containing copies of shared libraries),&lt;/li&gt;
&lt;li&gt;Kubernetes,&lt;/li&gt;
&lt;li&gt;cloud services,&lt;/li&gt;
&lt;li&gt;queues (RabbitMQ, Kafka),&lt;/li&gt;
&lt;li&gt;configuration sync tools (Zookeeper, Etcd, Consul), and so on.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All this requires massive resources from both machines and people.&lt;/p&gt;

&lt;h3&gt;
  
  
  Independent Deploy Overhead
&lt;/h3&gt;

&lt;p&gt;Supporting independent deployments means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;each service must be deployable separately,&lt;/li&gt;
&lt;li&gt;each must have its own CI/CD pipeline,&lt;/li&gt;
&lt;li&gt;and the hardest part — API versioning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each service will have to support multiple API versions simultaneously. And the callers will have to track these versions and update their calls in time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Distributed Ball of Mud
&lt;/h3&gt;

&lt;p&gt;There is a near-100% chance that you won’t get your service boundaries right from the beginning. Instead of clean microservices, you’ll end up with a distributed ball of mud — where functionality is poorly distributed, external calls trigger entire chains of internal service calls, and the whole thing is terribly slow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is the Monolith Really That Scary?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Modular Monoliths
&lt;/h3&gt;

&lt;p&gt;Modular monoliths allow you to avoid most of the microservice overhead — while still providing separation that can be used later if necessary.&lt;/p&gt;

&lt;p&gt;This approach involves writing the application (primarily the backend) as a single service split into individual modules with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;clearly defined boundaries, and&lt;/li&gt;
&lt;li&gt;minimal interdependencies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This makes it possible to split them into services if scaling really requires it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Wait, You Can Do That?
&lt;/h3&gt;

&lt;p&gt;Many benefits attributed to microservice architecture can be achieved in a monolith:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Modularity&lt;/strong&gt; can be implemented with language features: classes, namespaces, projects, assemblies;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple databases&lt;/strong&gt; — possible, if truly needed;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multiple languages&lt;/strong&gt; — also possible, for example combining C/C++/C#/Java with scripting languages like JavaScript, Python, or Erlang for higher-level development;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Interop&lt;/strong&gt; — many platforms support calling C/C++ from Java, C#, Python, JavaScript, or Erlang;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Message queues&lt;/strong&gt; — just use the appropriate data structure.&lt;/li&gt;
&lt;/ul&gt;
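&lt;p&gt;For instance, the last point needs nothing more than a thread-safe collection from the standard library. A minimal C# sketch of an in-process message queue between modules:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Collections.Concurrent;

public class InProcessQueue&amp;lt;T&amp;gt;
{
  private readonly BlockingCollection&amp;lt;T&amp;gt; _queue = new BlockingCollection&amp;lt;T&amp;gt;();

  // Producer side: any module can publish a message.
  public void Publish(T message) =&amp;gt; _queue.Add(message);

  // Consumer side: blocks until a message arrives.
  public T Consume() =&amp;gt; _queue.Take();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;No brokers, no serialization, no network: a call that would have been an HTTP request between services becomes an in-memory hand-off.&lt;/p&gt;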

&lt;p&gt;And when you want to debug — one keypress, and the whole application is at your fingertips.&lt;/p&gt;

&lt;h3&gt;
  
  
  Actor Frameworks
&lt;/h3&gt;

&lt;p&gt;Actor frameworks allow you to build microservices — without the microservices.&lt;/p&gt;

&lt;p&gt;All logic is split into classes (actors) that communicate only via a message bus (queues).&lt;/p&gt;

&lt;p&gt;These actors can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;exist within a single process, or&lt;/li&gt;
&lt;li&gt;be distributed across multiple processes.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This way, you get the microservice programming model, but most infrastructure is handled by the framework itself.&lt;/p&gt;
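&lt;p&gt;A toy illustration of the model, not tied to any specific framework such as Akka.NET or Orleans: each actor owns a mailbox and processes messages one at a time, so no locks are needed around its state. In a real framework, the equivalent of ProcessMailbox runs on the framework’s own scheduler.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using System.Collections.Concurrent;

public class CounterActor
{
  private readonly ConcurrentQueue&amp;lt;int&amp;gt; _mailbox = new ConcurrentQueue&amp;lt;int&amp;gt;();

  public int Total { get; private set; }

  // Other actors communicate with this one only by posting messages.
  public void Post(int amount) =&amp;gt; _mailbox.Enqueue(amount);

  // Called from the actor's own scheduler thread; drains the mailbox,
  // mutating state from a single logical thread only.
  public void ProcessMailbox()
  {
    while (_mailbox.TryDequeue(out var amount))
      Total += amount;
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;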

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Architecture should be chosen based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;project requirements,&lt;/li&gt;
&lt;li&gt;available resources,&lt;/li&gt;
&lt;li&gt;and team expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Microservices are not a silver bullet. They’re useful for huge projects and teams — but the monolith is not obsolete and is not technical debt by default.&lt;/p&gt;

&lt;p&gt;What matters most is the balance between flexibility and complexity, scalability and maintainability — so that the system you build is effective and sustainable.&lt;/p&gt;

</description>
      <category>softwaredevelopment</category>
      <category>gamedev</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Meta Tutorial in War Robots: how it works and why it’s useful</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Thu, 12 Jun 2025 12:20:04 +0000</pubDate>
      <link>https://dev.to/mygames/meta-tutorial-in-war-robots-how-it-works-and-why-its-useful-4kkm</link>
      <guid>https://dev.to/mygames/meta-tutorial-in-war-robots-how-it-works-and-why-its-useful-4kkm</guid>
      <description>&lt;p&gt;Hi everyone! My name is Alexey Tsigelnikov, and I’m a developer on the War Robots project. Like any respectable game with multiple layers of content, we have tutorials — not only to help players understand the gameplay but also, importantly, to help them navigate the meta layer. In this article, I’ll talk about the second type of tutorial — why it exists and how it works.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why a tutorial is needed
&lt;/h2&gt;

&lt;p&gt;War Robots consists of two parts: the battle part and the meta part. In the battle part, the player engages in combat against other players and bots. In the meta part, the player manages their hangar — buying, selling, and upgrading robots and weapons, spending and earning currency, joining clans, completing quests, and more.&lt;/p&gt;

&lt;p&gt;This article focuses on the tutorial system implemented in the meta part of the game. We built it to help users better understand how the game’s various mechanics work: its goal is to show what actions players need to take to achieve certain outcomes — for example, tutorials explain how to buy a new robot, upgrade weapons, obtain a pilot, and perform other similar tasks.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhpivqgjoly4uwf1uc9u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqhpivqgjoly4uwf1uc9u.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;The garage screen tutorial shows how to enter a battle — and motivates the player to start it&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The tutorial consists of a set of markers that introduce players to new mechanics and guide them through interacting with them.&lt;/p&gt;

&lt;p&gt;Players are incentivized to complete tutorials through quest rewards. These may be granted automatically upon completing a tutorial (accompanied by a popup), or the player may need to manually claim them from the quest window.&lt;/p&gt;

&lt;h2&gt;
  
  
  War Robots tutorial architecture
&lt;/h2&gt;

&lt;p&gt;The tutorial system is configured in three parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visual settings for the tutorial&lt;/strong&gt;, stored in a table-based config on the server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tutorial components placed in the prefabs&lt;/strong&gt; of UI screens and dialog windows. This is configured on the client.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tutorial definitions and markers&lt;/strong&gt;, also in a table-based config stored on the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;General tutorial settings&lt;/strong&gt; are received from the server when the player logs in. These include the following flags:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Show helper character: yes/no&lt;/li&gt;
&lt;li&gt;Enable screen dimming: yes/no&lt;/li&gt;
&lt;li&gt;Allow removing dimming only by clicking the highlighted element: yes/no&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p0000d6pg9p3kpx9nj2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2p0000d6pg9p3kpx9nj2.png" alt="Image description" width="800" height="502"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Tutorial Element Schema (marker, dimming, helper character)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up tutorial components in dialog box prefabs.&lt;/strong&gt; To display a tutorial marker on screen, a TutorialTarget component must be added to the UI element, with the appropriate parameters configured:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funn0yq0qr13mces519w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funn0yq0qr13mces519w1.png" alt="Image description" width="706" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Canvas&lt;/em&gt;&lt;/strong&gt; — if this field is filled, the element will be highlighted above the dimming layer.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Optional Canvases&lt;/em&gt;&lt;/strong&gt; — if populated, elements from these canvases will also be highlighted when dimming is active.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Parent Window&lt;/em&gt;&lt;/strong&gt; — used if the marker is part of a dialog window.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Is Bottom Character Position&lt;/em&gt;&lt;/strong&gt; — determines whether the helper character appears at the bottom or top of the screen.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Terminate Tutorial Button&lt;/em&gt;&lt;/strong&gt; — if set, the tutorial ends early when the button is clicked.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Up / Down / Left / Right&lt;/em&gt;&lt;/strong&gt; — enable directional arrows for the marker.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwt3bqnnxq2al0d1qnxw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwt3bqnnxq2al0d1qnxw.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Circle&lt;/em&gt;&lt;/strong&gt; — enables a circular element in the center of the marker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Layer&lt;/em&gt;&lt;/strong&gt; — defines the rendering layer for the marker.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;em&gt;Id&lt;/em&gt;&lt;/strong&gt; — unique identifier of the tutorial marker.&lt;/li&gt;
&lt;/ul&gt;
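&lt;p&gt;Put together, a simplified sketch of such a component might look like this (field names mirror the inspector parameters above; the exact types and names are assumptions for illustration):&lt;/p&gt;

```csharp
using UnityEngine;

// Hypothetical sketch of the marker component attached to UI elements.
public class TutorialTarget : MonoBehaviour
{
    public Canvas canvas;                  // if set, the element is raised above the dimming layer
    public Canvas[] optionalCanvases;      // also highlighted while dimming is active
    public GameObject parentWindow;        // set when the marker lives inside a dialog window
    public bool isBottomCharacterPosition; // helper character at the bottom (true) or top (false)
    public bool terminateTutorialButton;   // clicking this element ends the tutorial early
    public bool up, down, left, right;     // directional arrows around the marker
    public bool circle;                    // circular element in the center of the marker
    public int layer;                      // rendering layer for the marker
    public string id;                      // unique marker identifier matched against the config
}
```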

&lt;p&gt;&lt;strong&gt;List of tutorials and their markers.&lt;/strong&gt; On the server, a tutorial config stores information about tutorials, their markers, and the text for the helper character. For example, tutor1 contains markers with the following Ids: GO_TO_BATTLE, GO_TO_GARAGE, BUY_SLOT.&lt;/p&gt;


&lt;p&gt;The GO_TO_BATTLE marker displays the helper character with text localized using the tag “hello_text”. Note: the order of markers in the config does not determine their display priority. Any marker will appear as long as its Id is specified in the TutorialTarget component.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxiktdw5b82nip05ztg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyxiktdw5b82nip05ztg7.png" alt="Image description" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How It Works.&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;When the game launches, the client receives the general tutorial settings (e.g. whether to show dimming or display helper character text).&lt;/li&gt;
&lt;li&gt;The server sends the current active tutorial to the client, which includes a list of marker IDs.&lt;/li&gt;
&lt;li&gt;When a UI screen (prefab) is opened, components on that screen check if their Id exists in the active tutorial.&lt;/li&gt;
&lt;li&gt;If a match is found, the tutorial marker is activated and displayed.&lt;/li&gt;
&lt;/ol&gt;
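&lt;p&gt;The check in step 3 can be sketched as a method on the marker component itself (a simplified illustration; the method and field names here are assumptions):&lt;/p&gt;

```csharp
// Illustrative sketch: when a screen opens, each marker component asks
// whether its Id is part of the tutorial currently sent by the server.
public void OnScreenOpened(string[] activeTutorialMarkerIds)
{
    bool isActive = System.Array.IndexOf(activeTutorialMarkerIds, id) != -1;
    // Activate the marker visuals only if the server listed this Id.
    markerRoot.SetActive(isActive);
}
```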

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi3q9q2sb1n46hucf9ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foi3q9q2sb1n46hucf9ak.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Tutorial workflow (config, server, token list, prefab)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; The marker itself does not know the step order, reward triggers, or when a tutorial is completed. These decisions are handled entirely by the server.&lt;/p&gt;

&lt;p&gt;For example, in the tutorial for purchasing a robot, the marker simply highlights the relevant UI buttons. Only after the player successfully buys the robot does the server register the action, grant the reward, and potentially send a new tutorial to the client.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pros and Cons of this approach
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Pros:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;No new code is required to add new tutorials.&lt;/li&gt;
&lt;li&gt;Easy to scale and expand. Game designers can create new tutorials using existing markers.&lt;/li&gt;
&lt;li&gt;Tutorial progress is stored server-side.&lt;/li&gt;
&lt;li&gt;Tutorials can be updated without releasing a new client version — config is server-side, and prefabs can be updated via bundles.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Cons:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Requires synchronization of marker IDs between server and client.&lt;/li&gt;
&lt;li&gt;String-based marker IDs are prone to errors in the config.&lt;/li&gt;
&lt;li&gt;Debugging and validation tools are required.&lt;/li&gt;
&lt;li&gt;Every tutorial needs manual testing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To make it easier to customize and test tutorials, we have created the following tools:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial marker search tool.&lt;/strong&gt; Helps locate and configure tutorial markers within prefabs. Users can select a tutorial, see all related prefabs and markers, and click to open a prefab with the target object auto-selected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a6tv9b9rvvq2wzf46xs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2a6tv9b9rvvq2wzf46xs.png" alt="Image description" width="800" height="835"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Tutorial marker search tool&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cheat tool for playing tutorials.&lt;/strong&gt; Allows testing any tutorial by forcing its markers into either an “incomplete” or “completed” state.&lt;/p&gt;

&lt;h2&gt;
  
  
  Plans for the future
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Move tutorial config to the client.&lt;/strong&gt; This will eliminate server dependency and reduce the likelihood of configuration errors. The config will still be updateable via bundles, without needing a new client build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Render all tutorial markers in a single canvas.&lt;/strong&gt; This will improve performance and simplify rendering logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  To sum it up
&lt;/h2&gt;

&lt;p&gt;The current tutorial system effectively introduces players to various game mechanics using a combination of markers and helper character dialogues.&lt;/p&gt;

&lt;p&gt;It offers powerful customization options — including visual settings, directional arrows, character text, and filters for specific robot or weapon types.&lt;/p&gt;

&lt;p&gt;Game designers can create and modify tutorials and deploy updates without requiring programmer involvement.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>unity3d</category>
    </item>
    <item>
      <title>How to optimize UIs in Unity: slow performance causes and solutions</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Tue, 27 May 2025 15:33:24 +0000</pubDate>
      <link>https://dev.to/mygames/how-to-optimize-uis-in-unity-slow-performance-causes-and-solutions-3inh</link>
      <guid>https://dev.to/mygames/how-to-optimize-uis-in-unity-slow-performance-causes-and-solutions-3inh</guid>
      <description>&lt;p&gt;Hello! I’m Sergey Begichev, Client Developer at Pixonic (MY.GAMES). In this post, I’ll be discussing UI optimization in Unity3D. While rendering a set of textures may seem simple, it can lead to significant performance issues. For instance, in our War Robots project, unoptimized UI versions accounted for up to 30% of the total CPU load — an astonishing figure!&lt;/p&gt;

&lt;p&gt;Typically, this problem arises under two conditions: one, when there are numerous dynamic objects and two, when designers create layouts that prioritize reliable scaling across different resolutions. Even a small UI can generate a noticeable load under these circumstances. Let’s explore how this works, identify the causes of the load, and discuss potential solutions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Unity’s recommendations
&lt;/h2&gt;

&lt;p&gt;First, let’s review &lt;a href="https://unity.com/how-to/unity-ui-optimization-tips" rel="noopener noreferrer"&gt;Unity’s recommendations&lt;/a&gt; for UI optimization, which I have summarized into six key points:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Split up your canvases into sub-canvases&lt;/li&gt;
&lt;li&gt;Remove unnecessary Raycast Target&lt;/li&gt;
&lt;li&gt;Avoid using expensive elements (Large List, Grid views etc.)&lt;/li&gt;
&lt;li&gt;Avoid layout groups&lt;/li&gt;
&lt;li&gt;Hide canvas instead of Game Object (GO)&lt;/li&gt;
&lt;li&gt;Use animators optimally&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While points 2 and 3 are intuitively clear, the rest of the recommendations can be problematic to imagine in practice. For instance, the advice to “split up your canvases into sub-canvases” is certainly valuable, but Unity doesn’t provide clear guidelines on the principles behind this division. Speaking for myself, in practical terms, I want to know where it makes the most sense to implement sub-canvases.&lt;/p&gt;

&lt;p&gt;Consider the advice to “avoid layout groups.” While they can contribute to high UI load, many large UIs come with multiple layout groups, and reworking everything can be time-consuming. Moreover, layout designers who eschew layout groups may find themselves spending significantly more time on their tasks. Therefore, it would be helpful to understand when such groups should be avoided, when they can be beneficial, and what actions to take if we cannot eliminate them.&lt;/p&gt;

&lt;p&gt;This ambiguity in Unity’s recommendations is a core issue — it’s often unclear on what principles we should apply for these suggestions.&lt;/p&gt;

&lt;h2&gt;
  
  
  UI construction principles
&lt;/h2&gt;

&lt;p&gt;To optimize UI performance, it’s essential to understand how Unity constructs the UI. We can broadly identify three key stages in this process:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Layout.&lt;/strong&gt; Initially, Unity arranges all UI elements based on their sizes and designated positions. These positions are calculated in relation to screen edges and other elements, forming a chain of dependencies.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Batching.&lt;/strong&gt; Next, Unity groups individual elements into batches for more efficient rendering. Drawing one large element is always more efficient than rendering multiple smaller ones. (For a deeper dive into batching, refer to &lt;a href="https://medium.com/my-games-company/batching-tamed-reducing-batches-via-ui-mask-optimization-4d346175140d" rel="noopener noreferrer"&gt;this article&lt;/a&gt;.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rendering.&lt;/strong&gt; Finally, Unity draws the collected batches. The fewer batches there are, the faster the rendering process will be.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;While there are other elements involved in the process, these three stages account for the majority of issues, so for now, let’s focus on them.&lt;/p&gt;

&lt;p&gt;Ideally, when our UI remains static — meaning nothing moves or changes — we can build the layout once, create a single large batch, and render it efficiently.&lt;/p&gt;

&lt;p&gt;However, if we modify the position of even one element, we must recalculate its position and rebuild the affected batch. If other elements depend on this position, we’ll then need to recalculate their positions too, causing a cascading effect throughout the hierarchy. And the more elements that need adjustment, the higher the batching load becomes.&lt;/p&gt;

&lt;p&gt;So, changes in a layout can trigger a ripple effect throughout the entire UI, and our goal is to minimize the number of changes. (Alternatively, we can aim to isolate changes to prevent a chain reaction.)&lt;/p&gt;

&lt;p&gt;As a practical example, this issue is particularly pronounced when using layout groups. Each time a layout is rebuilt, every LayoutElement performs a GetComponent operation, which can be quite resource-intensive.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multiple tests
&lt;/h2&gt;

&lt;p&gt;Let’s examine a series of examples to compare the performance results. (All tests were conducted using Unity version 2022.3.24f1 on a Google Pixel 1 device.)&lt;/p&gt;

&lt;p&gt;In this test, we’ll create a layout group featuring a single element, and we’ll analyze two scenarios: one where we change the size of the element, and another where we’re utilizing the FillAmount property.&lt;/p&gt;

&lt;p&gt;RectTransform changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre7ou3h3j6fz67ywinsu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fre7ou3h3j6fz67ywinsu.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FillAmount changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnqiwdmtr41k4mxym3rj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnqiwdmtr41k4mxym3rj.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the second example, we’ll try to do the same thing, but in a layout group with 8 elements. In this case, we’ll still only be changing one element.&lt;/p&gt;

&lt;p&gt;RectTransform changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu4cbj0q75zrxschmsh7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu4cbj0q75zrxschmsh7.png" alt="Image description" width="800" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FillAmount changes:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fun3x4guiwnbyw78kkucz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fun3x4guiwnbyw78kkucz.png" alt="Image description" width="800" height="169"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If, in the previous example, changes to the RectTransform resulted in a load of 0.2 ms on the layout, this time the load increases to 0.7 ms. Similarly, the load from batching updates rises from 0.65 ms to 1.10 ms.&lt;/p&gt;

&lt;p&gt;Although we’re still modifying just one element, the increased size of the layout significantly impacts the load during the rebuild.&lt;/p&gt;

&lt;p&gt;In contrast, when we adjust the FillAmount of an element, the layout load does not grow even with a larger number of elements. This is because modifying FillAmount does not trigger a layout rebuild; it causes only a slight increase in the batching update load.&lt;/p&gt;

&lt;p&gt;Clearly, using FillAmount is the more efficient choice in this scenario. However, the situation becomes more complex when we alter the scale or position of an element. In these cases, it’s challenging to replace Unity’s built-in mechanisms that don’t trigger layout rebuild.&lt;/p&gt;

&lt;p&gt;This is where SubCanvases come into play. Let’s examine the results when we encapsulate a changeable element within a SubCanvas.&lt;/p&gt;

&lt;p&gt;We’ll create a layout group with 8 elements, one of which will be housed within a SubCanvas, and then modify its transform.&lt;/p&gt;

&lt;p&gt;RectTransform changes in SubCanvas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu98n6ffx0vlil1he9px0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu98n6ffx0vlil1he9px0.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the results indicate, encapsulating a single element within a SubCanvas almost eliminates the load on the layout; this is because SubCanvas isolates all changes, preventing a rebuild in the higher levels of the hierarchy.&lt;/p&gt;

&lt;p&gt;However, it’s important to note that changes within the canvas will not influence the positioning of elements outside of it. Therefore, if we expand the elements too much, there exists a risk that they may overlap with neighboring elements.&lt;/p&gt;

&lt;p&gt;Let’s proceed by wrapping 8 layout elements in a SubCanvas:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf0xr8mv3yfzq2hlejl8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkf0xr8mv3yfzq2hlejl8.png" alt="Image description" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This example demonstrates that, while the load on the layout remains low, the batching update has doubled. This means that, although dividing elements into multiple SubCanvases helps reduce the load on layout build, it increases the load on batch assembly. Consequently, this could lead us to a net negative effect overall.&lt;/p&gt;

&lt;p&gt;Now, let’s conduct another experiment. First, we’ll create a layout group with 8 elements and then modify one of the layout elements using the animator.&lt;/p&gt;

&lt;p&gt;The animator will adjust the RectTransform to a new value:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwulzh8u6lz36oznsi3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftwulzh8u6lz36oznsi3r.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, we see the same result as in the second example, where we changed the values manually. This is logical: it makes no difference what mechanism changes the RectTransform.&lt;/p&gt;

&lt;p&gt;The animator changes RectTransform to a similar value:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99nnizuetfautiritfd9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F99nnizuetfautiritfd9.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Animators previously faced an issue where they would continuously overwrite the same value every frame, even if that value remained unchanged. This would inadvertently trigger a layout rebuild. Fortunately, newer versions of Unity have resolved this problem, eliminating the need to switch to alternative tweening methods solely for performance improvements.&lt;/p&gt;

&lt;p&gt;Now, let’s examine how changing the text value behaves within a layout group with 8 elements and whether it triggers a layout rebuild.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1xprnql4uleab2yjqe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1xprnql4uleab2yjqe6.png" alt="Image description" width="800" height="172"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We see that the rebuild is also triggered.&lt;/p&gt;

&lt;p&gt;Now we’ll change the value of TextMeshPro in the layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqwp8rqr0eecct6e2v2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqwp8rqr0eecct6e2v2z.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;TextMeshPro also triggers a layout rebuild, and it even appears to put more load on batching and rendering than regular Text.&lt;/p&gt;

&lt;p&gt;Changing the TextMeshPro value in a SubCanvas inside a layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fur552at6y8w1ymrwprx7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fur552at6y8w1ymrwprx7.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;SubCanvas has effectively isolated the changes, preventing layout rebuild. Yet, while the load on batching updates has decreased, it remains relatively high. This becomes a concern when working with text, as each letter is treated as a separate texture. Modifying the text consequently affects multiple textures.&lt;/p&gt;

&lt;p&gt;Now, let’s evaluate the load incurred when turning a GameObject (GO) on and off within the layout group.&lt;/p&gt;

&lt;p&gt;Turning on and off a GameObject inside a layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n9d7sd5icj3r7swu0bk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7n9d7sd5icj3r7swu0bk.png" alt="Image description" width="800" height="174"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, turning on or off a GO also triggers a layout rebuild.&lt;/p&gt;

&lt;p&gt;Turning on a GO inside a SubCanvas with a layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ekznyscnt4sns611ynt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ekznyscnt4sns611ynt.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this case, SubCanvas also helps to relieve the load.&lt;/p&gt;

&lt;p&gt;Now let’s check the load if we turn the entire GO with a layout group on or off:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbc88of9vwlbj69ni0jk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbc88of9vwlbj69ni0jk.png" alt="Image description" width="800" height="171"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the results show, the load reached its highest level yet. Enabling the root element triggers a layout rebuild for the child elements, which, in turn, results in significant load on both batching and rendering.&lt;/p&gt;

&lt;p&gt;So, what can we do if we need to enable or disable entire UI elements without creating excessive load? Instead of enabling and disabling the GO itself, you can simply disable the Canvas or the Canvas Group component. Additionally, setting the alpha channel of the Canvas Group to 0 can achieve the same effect while avoiding performance issues.&lt;/p&gt;
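&lt;p&gt;As a sketch, hiding a window this way might look as follows (assuming a CanvasGroup sits on the window root; the helper name is illustrative):&lt;/p&gt;

```csharp
using UnityEngine;

// Hide/show a UI block without SetActive, so no layout rebuild is triggered.
public static class UiVisibility
{
    public static void SetVisible(CanvasGroup group, bool visible)
    {
        group.alpha = visible ? 1f : 0f; // invisible, but the layout stays intact in memory
        group.interactable = visible;    // disable input while hidden
        group.blocksRaycasts = visible;  // let clicks pass through the hidden block
    }
}
```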

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d6n7skh85qie8o2419a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1d6n7skh85qie8o2419a.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s what happens to the load when we disable the Canvas Group component. Since the GO remains enabled while the canvas is disabled, the layout is preserved but simply not displayed. This approach not only results in a low layout load but also significantly reduces the load on batching and rendering.&lt;/p&gt;

&lt;p&gt;Next, let’s examine the impact of changing the SiblingIndex within the layout group.&lt;/p&gt;

&lt;p&gt;Changing SiblingIndex inside a layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv9pxrx2bo2zlgt2yct5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv9pxrx2bo2zlgt2yct5.png" alt="Image description" width="800" height="186"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As observed, the load remains significant, at 0.7 ms for updating the layout. This clearly indicates that modifications to the SiblingIndex also trigger a layout rebuild.&lt;/p&gt;

&lt;p&gt;Now, let’s experiment with a different approach. Instead of changing the SiblingIndex, we’ll swap the textures of two elements within the layout group.&lt;/p&gt;

&lt;p&gt;Swapping textures of two elements in a layout group of 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwzuimx4c8v6nizyfn9f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqwzuimx4c8v6nizyfn9f.png" alt="Image description" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As we can see, the situation has not improved; in fact, it has gotten worse. Replacing the texture also triggers a rebuild.&lt;/p&gt;

&lt;p&gt;Now, let’s create a custom layout group. We’ll construct 8 elements and simply swap the positions of two of them.&lt;/p&gt;

&lt;p&gt;Custom layout group with 8 elements:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmebgkwfx6dp0o52azxv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsmebgkwfx6dp0o52azxv.png" alt="Image description" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The load has indeed significantly decreased — and this is expected. In this example, the script simply swaps the positions of two elements, eliminating heavy GetComponent operations and the need to recalculate the positions of all elements. As a result, there is less updating required for batching. While this approach seems like a silver bullet, it’s important to note that performing calculations in scripts also contributes to the overall load.&lt;/p&gt;

&lt;p&gt;As we introduce more complexity into our layout group, the load will inevitably increase, but it won’t necessarily reflect in the Layout section since the calculations occur in scripts. So, it’s crucial to monitor the efficiency of the code ourselves. However, for simple layout groups, custom solutions can be an excellent option.&lt;/p&gt;
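&lt;p&gt;A minimal custom arrangement along these lines could be sketched as follows (a deliberately simplified illustration, not a drop-in replacement for Unity’s layout groups):&lt;/p&gt;

```csharp
using UnityEngine;

// Positions pre-assigned children in a horizontal row once, then swaps two of
// them by exchanging anchored positions: no GetComponent calls, no rebuild of
// the other elements.
public class SimpleRowLayout : MonoBehaviour
{
    public RectTransform[] items; // assigned in the inspector, so no GetComponent at runtime
    public float itemWidth = 100f;
    public float spacing = 10f;

    void Start()
    {
        // Lay the row out once instead of recalculating it on every change.
        for (int i = 0; i != items.Length; i++)
            items[i].anchoredPosition = new Vector2(i * (itemWidth + spacing), 0f);
    }

    public void Swap(int a, int b)
    {
        Vector2 tmp = items[a].anchoredPosition;
        items[a].anchoredPosition = items[b].anchoredPosition;
        items[b].anchoredPosition = tmp;
    }
}
```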

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;p&gt;Rebuilding the layout presents a significant challenge. To address this issue, we must identify its root causes, which can vary. Here are the primary factors that lead to layout rebuilds:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Animation of elements: movement, scale, rotation (any change of the transform)&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Replacing sprites&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Rewriting text&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Turning GameObjects (GO) on and off, adding/removing GO&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Changing sibling index&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It’s important to highlight a few aspects that no longer pose problems in newer versions of Unity but which did in earlier ones: overwriting the same text and repeatedly setting the same value with an animator.&lt;/p&gt;

&lt;p&gt;Now that we’ve identified the factors that trigger a layout rebuild, let’s summarize our solution options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Wrap a GameObject (GO) that triggers a rebuild in a SubCanvas.&lt;/strong&gt;&lt;br&gt;
This approach isolates changes, preventing them from affecting other elements up the hierarchy. However, be cautious — too many SubCanvases can significantly increase the load on batching.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Turn on and off the SubCanvas or Canvas Group instead of the GO.&lt;/strong&gt;&lt;br&gt;
Use an object pool rather than creating new GOs. This method preserves the layout in memory, allowing for quick activation of elements without the need for a rebuild.&lt;/p&gt;
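
&lt;p&gt;As a quick illustration of this option (a sketch with invented names, assuming the element already has a CanvasGroup attached):&lt;/p&gt;

```csharp
// Illustrative sketch: hide a UI element via its CanvasGroup instead of
// SetActive, so the computed layout stays in memory and re-showing the
// element does not trigger a rebuild.
using UnityEngine;

public class PopupVisibility : MonoBehaviour
{
    [SerializeField] private CanvasGroup _canvasGroup;

    public void SetVisible(bool visible)
    {
        // Instead of gameObject.SetActive(visible), which dirties the layout:
        _canvasGroup.alpha = visible ? 1f : 0f;
        _canvasGroup.interactable = visible;
        _canvasGroup.blocksRaycasts = visible;
    }
}
```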

&lt;p&gt;&lt;strong&gt;3. Utilize shader animations.&lt;/strong&gt;&lt;br&gt;
Changing the texture using a shader will not trigger a layout rebuild. However, keep in mind that textures might overlap with other elements. This method effectively serves a similar purpose as using SubCanvases, but it does require writing a shader.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Replace Unity’s layout group with a custom layout group.&lt;/strong&gt;&lt;br&gt;
One of the key issues with Unity’s layout groups is that each LayoutElement calls GetComponent during rebuilding, which is resource-intensive. Creating a custom layout group can address this issue, but it has its own challenges. Custom components may have specific operational requirements that you need to understand for effective use. Nonetheless, this approach can be more efficient, especially for simpler layout group scenarios.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>ui</category>
      <category>productivity</category>
      <category>gamedev</category>
    </item>
    <item>
      <title>5 ways to automatically make your automated tests awesome</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Wed, 12 Feb 2025 13:30:11 +0000</pubDate>
      <link>https://dev.to/mygames/5-ways-to-automatically-make-your-automated-tests-awesome-20nf</link>
      <guid>https://dev.to/mygames/5-ways-to-automatically-make-your-automated-tests-awesome-20nf</guid>
      <description>&lt;p&gt;I’m Alexey Fedotkin, and I focus on Automated Testing at MY.GAMES (Pixonic). In this post, I’d like to share valuable insights that my team and I have gained over the past few years while developing automated tests for the War Robots project.&lt;/p&gt;

&lt;p&gt;Quick disclaimer: since the effectiveness of different approaches and techniques often depends on the specific development context, the project itself, and the team involved, our experience is particularly relevant to similar projects (especially mobile games and shooters). That said, many of these insights are universal and can benefit everyone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip 1: Emphasize Transparency
&lt;/h2&gt;

&lt;p&gt;An often-overlooked principle is that “any business process should be transparent and understandable.” But why is this important? This is because ambiguity breeds questions and mistrust, which can lead to wasted time and resources. Furthermore, this can result in negative feedback about your engineers’ work — potentially creating more significant problems down the line.&lt;/p&gt;

&lt;p&gt;For instance, we’ve hosted several meetups where we discussed our testing processes and capabilities. As a result, we noticed a significant increase in review requests for various feature branches coming directly from the development team.&lt;/p&gt;

&lt;p&gt;Why? This shift occurred because our colleagues understood that automated tests aren’t just some abstract concept handled by a few people in the studio. Rather, they’re valuable tools that can eliminate errors and streamline future merges.&lt;/p&gt;

&lt;p&gt;After this, we received ideas for test improvements and refinements, focusing the AQA team’s efforts on what the project genuinely needs. Ultimately, this type of collaboration makes it easier for everyone to deliver high-quality content to production.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip 2: Ask Questions
&lt;/h2&gt;

&lt;p&gt;While this point leans more towards traditional QA than automation, it’s crucial to continually ask questions about what users want and what gaps exist. People often won’t voice their needs or the tools that could simplify their tasks unless prompted directly.&lt;/p&gt;

&lt;p&gt;As an example of this, we utilize large tables to describe game balances, and these contain significant amounts of data. Occasionally, errors crop up in these tables: they’re maintained by people, and mistakes are a natural part of the process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854gi94repjp9iuawpe7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F854gi94repjp9iuawpe7.jpg" alt="Image description" width="753" height="47"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;just a “small” JSON&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After discussing things with the game designers, we identified the specific rules the data must adhere to, the associated limitations, and, crucially, the need for data verification — something which we often lack the necessary resources to do. Yet, this oversight contributes to periodic errors and inaccuracies in production data. And while these issues can be resolved quickly without requiring fixes in the game client or a full review cycle, they can still impact player experience and perceptions of the game.&lt;/p&gt;

&lt;p&gt;To solve this, we created a script that scans all of the relevant tables for inconsistencies against established rules. This allows us to identify and correct errors before rolling out the balance updates to production.&lt;/p&gt;
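
&lt;p&gt;The actual script is project-specific, but the idea can be sketched like this (the entry fields and rules below are invented for the example):&lt;/p&gt;

```csharp
// Illustrative sketch of a pre-release balance check: scan entries against
// the rules agreed with the game designers and report every violation.
// The fields and thresholds here are invented for the example.
using System.Collections.Generic;

public record BalanceEntry(string Id, int Damage, float ReloadSeconds);

public static class BalanceValidator
{
    public static List<string> Validate(IEnumerable<BalanceEntry> entries)
    {
        var errors = new List<string>();
        foreach (BalanceEntry e in entries)
        {
            if (e.Damage <= 0)
                errors.Add($"{e.Id}: damage must be positive, got {e.Damage}");
            if (e.ReloadSeconds < 0.1f)
                errors.Add($"{e.Id}: reload time suspiciously low ({e.ReloadSeconds}s)");
        }
        return errors; // an empty list means the table is safe to roll out
    }
}
```

&lt;p&gt;Running a check like this in CI before each balance rollout surfaces rule violations before they ever reach production.&lt;/p&gt;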
&lt;h2&gt;
  
  
  Tip 3: Set Priorities
&lt;/h2&gt;

&lt;p&gt;Writing tests is a resource-intensive and time-consuming endeavor. Moreover, software development is rife with unexpected challenges — ranging from environmental issues to shifts in team priorities regarding product direction. Thus, it’s essential to keep clarifying which content is most critical, which should be prioritized for testing, and which will deliver the greatest value to the product.&lt;/p&gt;

&lt;p&gt;To demonstrate a case of this in the wild, we developed a prioritization map for content coverage. This map allows the entire team to easily access Confluence and view which elements will definitely be included in the automated regression testing for the upcoming release, as well as those which may not make it and will require closer attention from the test engineers.&lt;/p&gt;

&lt;p&gt;Aim to test game mechanics rather than the content itself. This approach allows for broader coverage while minimizing testing time.&lt;/p&gt;

&lt;p&gt;Normal test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[TestCase(Equip.Scald)]
[CustomRetry]
[Description("Verify that Blast has a cumulative effect and is applied after filling the corresponding scale")]
public void Blast_Cumulative_Effect_Test(Equip blastEquip)
{
   PrepareCoreEnvironmentHelper
       .SetLevelAndSkipPopups()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;“Overkill” approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[TestCase(Equip.Scald)]
[TestCase(Equip.Daeja)]
[TestCase(Equip.Exodus)]
[TestCase(Equip.Quarker)]
[TestCase(Equip.Vengeance)]
[CustomRetry]
[Description("Verify that Blast has a cumulative effect and is applied after filling the corresponding scale")]
public void Blast_Cumulative_Effect_Test(Equip blastEquip)
{
   PrepareCoreEnvironmentHelper
       .SetLevelAndSkipPopups()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Case in point, consider five rocket launchers that each fire single shots and apply “Effect A” upon impact. The mechanics are defined in the code only once and remain consistent; only the balance numbers and visual skins change (accuracy in these aspects is verified at a different level, not through functional tests). Therefore, a single test can effectively verify the functionality of the code that describes Effect A and the application of damage numbers from the balance for the rocket launcher. As the number of tests grows, efficiency becomes critical — even when you have a robust infrastructure for executing them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip 4: Maintain Your Test Farm
&lt;/h2&gt;

&lt;p&gt;It’s important to monitor your testing environment closely. If you work remotely and are not often in the office, consider setting up a camera to stream the testing process in real time. This will facilitate easier analysis of tests and help you quickly identify the causes of any failures.&lt;/p&gt;

&lt;p&gt;Additionally, create utility scripts for routine tasks like restarting the system, clearing space, or checking device statuses. These tools will extend the lifespan of your equipment and reduce failures, resulting in fewer test restarts and less wasted time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaknrdoxsdv950vneium.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjaknrdoxsdv950vneium.jpg" alt="Image description" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tip 5: Continuously Enhance Your Code
&lt;/h2&gt;

&lt;p&gt;Here’s an undisputed truth: you should always prioritize maintenance and improvement of your test code. Regardless of how diligently you write your code or how carefully you avoid design flaws, errors will inevitably occur. Mistakes are part of the human experience. However, so is the ability to rectify them — whether they pertain to code or the testing process! After all, in essence, any test case is a set of steps to get from point A to point B.&lt;/p&gt;

&lt;p&gt;As your number of tests grows, so will the frequency of these “paths” through the screens. Don’t hesitate to streamline them, including seeking assistance from developers in the process. Request updates to old tests, acquire new cheats, and adjust your test cases to reflect the current state of the project. Continuous improvement in this area will always yield benefits.&lt;/p&gt;

&lt;p&gt;For instance, in our game, we have drones that serve as support units for robots. Some of these drones have mechanics that activate upon dealing specific damage values within a certain timeframe. Previously, the testing process required us to select the appropriate weapons, set shooting durations, and define error margins. Now, we’ve developed a cheat that can apply the required damage from one specified mech to another.&lt;/p&gt;

&lt;p&gt;Now it looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Tools.BattleActionsHelper.ApplyDamageToSpecifiedMech(ToolsPack, Players.SecondPlayer, Players.FirstPlayer, EquipSlotsList.FirstSlot, damageAmount);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;This runs in about a second and a half, and it doesn’t depend on rebalances or on the speed of the weapon’s particles&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;And here’s how it was before refactoring:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ToolsPack[DroneAbilityHelper.SecondPlayer].CommonActions.PressSelectedElementDuringTimeInterval(BattleUiKeys.FireAllWeaponsFor4ThSlotButton, ShootingTimeMs);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;You had to shoot, but first you had to decide whether to fire all the weapons or only some of them, account for distance and dispersion, and then wait until the required number of shots hit the target. Any rebalance could make the tests unstable… in general, everything was difficult&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Is it cool? Absolutely! Is it efficient? Definitely! So, why didn’t we implement this sooner? The answer is simple: there was no demand for it. That said, as the project has evolved, this need has emerged, making its implementation relevant now.&lt;/p&gt;




&lt;p&gt;Inevitably, you’ll encounter challenges and obstacles with your tests. It’s important to remember that there’s no one-size-fits-all solution, so achieving perfection is unlikely. However, this doesn’t mean you can’t address the issues at hand — especially since many of these hurdles have already been navigated by industry colleagues whose experiences can provide valuable insights!&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>automation</category>
      <category>testing</category>
      <category>programming</category>
    </item>
    <item>
      <title>Singularity: Streamlining Game Development with a Universal Framework</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Wed, 15 Jan 2025 11:51:35 +0000</pubDate>
      <link>https://dev.to/mygames/singularity-streamlining-game-development-with-a-universal-framework-4i90</link>
      <guid>https://dev.to/mygames/singularity-streamlining-game-development-with-a-universal-framework-4i90</guid>
      <description>&lt;p&gt;Hello! I'm Andrey Makhorin, Server Developer at Pixonic (MY.GAMES). In this article, I'll share how my team and I created a universal solution for backend development. You'll learn about the concept, its outcome, and how our system, called Singularity, performed in real-world projects. I'll also go deep into the challenges we faced.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;When a game studio is starting out, it's crucial to quickly formulate and implement a compelling idea: dozens of hypotheses are tested, and the game undergoes constant changes; new features are added and unsuccessful solutions are revised or discarded. However, this process of rapid iteration, coupled with tight deadlines and a short planning horizon, can lead to the accumulation of technical debt.&lt;/p&gt;

&lt;p&gt;With existing technical debt, reusing old solutions is complicated: their accumulated issues have to be resolved first. This is obviously not optimal. But there is another way: a “universal framework”. By designing generic, reusable components (such as layout elements, windows, and libraries that implement network interactions), studios can significantly reduce the time and effort required to develop new features. This approach not only reduces the amount of code developers need to write, it also ensures the code has already been tested, resulting in less time spent on maintenance.&lt;/p&gt;

&lt;p&gt;We’ve discussed feature development within the context of one game, but now let’s look at the situation from another angle: for any game studio, reusing small pieces of code within a project can be an effective strategy for streamlining production, but eventually, they'll need to create a new hit game. Reusing solutions from an existing project could, in theory, accelerate this process, but two significant hurdles arise. First of all, the same technical debt issues apply here, and second, any old solutions were likely tailored to the specific requirements of the previous game, making them ill-suited for the new project.&lt;/p&gt;

&lt;p&gt;These problems are compounded by further ones: the database design may not meet the new project's requirements, the technologies may be outdated, and the new team may lack the necessary expertise.&lt;/p&gt;

&lt;p&gt;Furthermore, the core system is often designed with a specific genre or game in mind, making it difficult to adapt to a new project.&lt;/p&gt;

&lt;p&gt;Again, this is where a universal framework comes into play, and while creating games that are vastly different from one another may seem like an insurmountable challenge, there are examples of platforms that have successfully tackled this problem: PlayFab, Photon Engine, and similar platforms have demonstrated their ability to significantly reduce development time, allowing developers to focus on building games rather than infrastructure.&lt;/p&gt;

&lt;p&gt;Now, let’s jump into our story.&lt;/p&gt;

&lt;h2&gt;
  
  
  The need for Singularity
&lt;/h2&gt;

&lt;p&gt;For multiplayer games, a robust backend is essential. Case in point: our flagship game, War Robots. It’s a mobile PvP shooter, it has been around for over 10 years and it’s accumulated numerous features requiring backend support. And while our server code was tailored to the project's specifics, it was using technologies that had become outdated.&lt;/p&gt;

&lt;p&gt;When it came time to develop a new game, we realized that trying to reuse War Robots’ server components would be problematic. The code was too project-specific and required expertise in technologies that the new team lacked.&lt;/p&gt;

&lt;p&gt;We also recognized that the new project's success was not guaranteed, and, even if it did succeed, we’d eventually need to create yet another new game, and we’d be facing the same "blank slate" problem. To avoid this and do some future-proofing, we decided to identify the essential components required for game development and then create a universal framework that could be used across all future projects.&lt;/p&gt;

&lt;p&gt;Our goal was to provide developers with a tool that would spare them the need to repeatedly design backend architectures, database schemas, interaction protocols, and specific technologies. We wanted to free folks from the burden of implementing authorization, payment processing, and user information storage, allowing them to focus on the game's core aspects: gameplay, design, business logic, and more.&lt;/p&gt;

&lt;p&gt;Additionally, we wanted not only to accelerate development with our new framework, but also to enable client programmers to write server-side logic without deep knowledge of networking, DBMS, or infrastructure.&lt;/p&gt;

&lt;p&gt;Beyond that, by standardizing a set of services, our DevOps team would be able to treat all game projects similarly, with only the IP addresses changing. This would enable us to create reusable deployment script templates and monitoring dashboards.&lt;/p&gt;

&lt;p&gt;Throughout the process, we made architectural decisions that took into account the possibility of reusing the backend in future games. This approach ensured that our framework would be flexible, scalable, and adaptable to diverse project requirements.&lt;/p&gt;

&lt;p&gt;(It’s also worth noting that the development of the framework was not an island – it was created in parallel with another project.)&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating the platform
&lt;/h2&gt;

&lt;p&gt;We decided to give Singularity a set of functions agnostic to the genre, setting, or core gameplay of a game, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authentication&lt;/li&gt;
&lt;li&gt;User Data Storage&lt;/li&gt;
&lt;li&gt;Game Settings and Balance Parsing&lt;/li&gt;
&lt;li&gt;Payment Processing&lt;/li&gt;
&lt;li&gt;AB Testing Distribution&lt;/li&gt;
&lt;li&gt;Analytics Service Integration&lt;/li&gt;
&lt;li&gt;Server Admin Panel&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These functions are fundamental to any multi-user mobile project (at the very least, they’re relevant to projects developed in Pixonic).&lt;/p&gt;

&lt;p&gt;In addition to these core functions, Singularity was designed to accommodate more project-specific features closer to the business logic. These capabilities are built using abstractions, making them reusable and extensible across different projects.&lt;/p&gt;

&lt;p&gt;Some examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Quests&lt;/li&gt;
&lt;li&gt;Offers&lt;/li&gt;
&lt;li&gt;Friends list&lt;/li&gt;
&lt;li&gt;Matchmaking&lt;/li&gt;
&lt;li&gt;Rating tables&lt;/li&gt;
&lt;li&gt;Online status of players&lt;/li&gt;
&lt;li&gt;In-game notifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdji1v1d3e95i1q3f9ju.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdji1v1d3e95i1q3f9ju.jpeg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Technically, the Singularity platform consists of four components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Server SDK: This is a set of libraries based on which game programmers can develop their servers.&lt;/li&gt;
&lt;li&gt;Client SDK: Also a set of libraries, but for developing a mobile application.&lt;/li&gt;
&lt;li&gt;A set of ready-made microservices: These are ready-made servers that do not require modification. Among them are the authentication server, balance server and others.&lt;/li&gt;
&lt;li&gt;Extension libraries: These libraries already implement various features, such as offers, quests, etc. Game programmers can enable these extensions if their game requires it.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Up next, let’s examine each of these components.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server SDK
&lt;/h2&gt;

&lt;p&gt;Some services, like the profile service and matchmaking, require game-specific business logic. To accommodate this, we've designed these services to be distributed as libraries. By then building on top of these libraries, developers can create applications that incorporate command handlers, matchmaking logic, and other project-specific components.&lt;/p&gt;

&lt;p&gt;This approach is analogous to building an ASP.NET application, where the framework provides low-level HTTP protocol functionality while the developer focuses on creating controllers and models that contain the business logic.&lt;/p&gt;

&lt;p&gt;For example, let's say we want to add the ability to change usernames within the game. To do this, the programmers would need to write a command class that includes the new username and a handler for this command.&lt;/p&gt;

&lt;p&gt;Here's an example of a ChangeNameCommand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class ChangeNameCommand : ICommand
{
       public string Name { get; set; }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An example of this command handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class ChangeNameCommandHandler : ICommandHandler&amp;lt;ChangeNameCommand&amp;gt;
{
       private IProfile Profile { get; }


       public ChangeNameCommandHandler(IProfile profile)
       {
           Profile = profile;
       }


       public void Handle(ICommandContext context, ChangeNameCommand command)
       {
           Profile.Name = command.Name;
       }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the handler must be initialized with an IProfile implementation, which is handled automatically through dependency injection. Some models, such as IProfile, IWallet, and IInventory, are available for implementation without additional steps. However, these models may not be very convenient to work with due to their abstract nature, providing data and accepting arguments that are not tailored to specific project needs.&lt;/p&gt;

&lt;p&gt;To make things easier, projects can define their own domain models, register them similarly to handlers, and inject them into constructors as needed. This approach allows for a more tailored and convenient experience when working with data.&lt;/p&gt;

&lt;p&gt;Here's an example of a domain model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class WRProfile
{
       public readonly IProfile Raw;

       public WRProfile(IProfile profile)
       {
           Raw = profile;
       }

       public int Level
       {
           get =&amp;gt; Raw.Attributes["level"].AsInt();
           set =&amp;gt; Raw.Attributes["level"] = value;
       }
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By default, the player profile does not contain the Level property. However, by creating a project-specific model, this kind of property can be added, making it easy to read or change a player’s level in command handlers.&lt;/p&gt;

&lt;p&gt;An example of a command handler using the domain model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class LevelUpCommandHandler : ICommandHandler&amp;lt;LevelUpCommand&amp;gt;
{
       private  WRProfile Profile { get; }

       public LevelUpCommandHandler(WRProfile profile)
       {
           Profile = profile;
       }

       public void Handle(ICommandContext context, LevelUpCommand command)
       {
           Profile.Level += 1;
       }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That code clearly demonstrates that the business logic for a specific game is insulated from the underlying transport or data storage layers. This abstraction allows programmers to focus on the core game mechanics without worrying about transactionality, race conditions, or other common backend issues.&lt;/p&gt;

&lt;p&gt;Further still, Singularity offers extensive flexibility for enhancing game logic. The player's profile is a collection of "key-typed value" pairs, enabling game designers to easily add any properties, just as they envision.&lt;/p&gt;

&lt;p&gt;Beyond the profile, the player entity in Singularity is made up of several essential components designed to maintain flexibility. Notably, this includes a wallet that tracks the amount of each currency within it as well as an inventory that lists the player's items.&lt;/p&gt;

&lt;p&gt;Interestingly, items in Singularity are abstract entities similar to profiles; each item has a unique identifier and a set of key-typed value pairs. So, an item doesn't necessarily need to be a tangible object like a weapon, clothing, or resource in the game world. Instead, it can represent any general description issued uniquely to players, like a quest or offer. In the following section, I’ll detail how these concepts are implemented within a specific game project.&lt;/p&gt;

&lt;p&gt;One key difference in Singularity is that items store a reference to a general description in the balance. While this description remains static, the properties of the individual item issued can change. For example, players can be given the ability to change weapon skins.&lt;/p&gt;

&lt;p&gt;Additionally, we have robust options for migrating player data. In traditional backend development, the database schema is often tightly coupled with the business logic, and changes to an entity’s properties typically require direct schema modifications.&lt;/p&gt;

&lt;p&gt;However, the traditional approach is unsuitable for Singularity because the framework lacks awareness of the business properties associated with a player entity, and the game development team lacks direct access to the database. Instead, migrations are designed and registered as command handlers that operate without direct repository interaction. When a player connects to the server, their data is fetched from the database. If any migrations registered on the server have not yet been applied to this player, they are executed, and the updated state is saved back to the database.&lt;/p&gt;

&lt;p&gt;The list of applied migrations is also stored as a player property, and this approach has another significant advantage: it allows migrations to be staggered over time. This allows us to avoid downtimes and performance issues that massive data changes might otherwise cause, such as when adding a new property to all player records and setting it to a default value.&lt;/p&gt;
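
&lt;p&gt;To illustrate the scheme, a migration might look roughly like this (the interface and property names are our own sketch, not the actual Singularity API):&lt;/p&gt;

```csharp
// Hypothetical sketch: a migration registered on the server. When a player
// connects, any migration whose Id is missing from their applied-migrations
// property runs once, and the updated profile is saved back.
public interface IMigration
{
    string Id { get; }
    void Apply(IProfile profile);
}

public class AddPrestigeLevelMigration : IMigration
{
    public string Id => "2024-06-add-prestige-level";

    public void Apply(IProfile profile)
    {
        // Give an existing player the new property with a default value;
        // this runs lazily per player, so there is no mass database update.
        if (!profile.Attributes.ContainsKey("prestigeLevel"))
            profile.Attributes["prestigeLevel"] = 0;
    }
}
```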

&lt;h2&gt;
  
  
  Client SDK
&lt;/h2&gt;

&lt;p&gt;Singularity offers a straightforward interface for backend interaction, allowing project teams to focus on game development without worrying about issues of protocol or network communication technologies. (That said, the SDK does provide the flexibility to override default serialization methods for project-specific commands if necessary.)&lt;/p&gt;

&lt;p&gt;The SDK enables direct interaction with the API, but it also includes a wrapper that automates routine tasks. For instance, executing a command on the profile service generates a set of events that indicate changes in the player’s profile. The wrapper applies these events to the local state, ensuring the client maintains the current version of the profile.&lt;/p&gt;

&lt;p&gt;Here’s an example of a command call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var result = _sandbox.ExecSync(new LevelUpCommand())

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Ready-made Microservices
&lt;/h2&gt;

&lt;p&gt;Most services within Singularity are designed to be versatile and do not require customization for specific projects. These services are distributed as pre-built applications and can be utilized across various games.&lt;/p&gt;

&lt;p&gt;The suite of ready-made services includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A gateway for client requests&lt;/li&gt;
&lt;li&gt;An authentication service&lt;/li&gt;
&lt;li&gt;A service for parsing and storing settings and balance tables&lt;/li&gt;
&lt;li&gt;An online status service&lt;/li&gt;
&lt;li&gt;A friends service&lt;/li&gt;
&lt;li&gt;A leaderboard service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some services are fundamental to the platform and must be deployed, such as the authentication service and gateway. Others are optional, like the friends service and leaderboard, and can be excluded from the environment of games that do not require them.&lt;/p&gt;

&lt;p&gt;I'll touch on the issues related to managing a large number of services later, but for now, it's essential to emphasize that optional services should remain optional. As the number of services grows, the complexity and onboarding threshold for new projects also increase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extension Libraries
&lt;/h2&gt;

&lt;p&gt;While Singularity’s core framework is quite capable, significant features can be implemented independently by project teams without modifying the core. When functionality is identified as potentially beneficial for multiple projects, it can be developed by the framework team and released as separate extension libraries. These libraries can then be integrated and used in game command handlers.&lt;/p&gt;

&lt;p&gt;Some example features that might apply here are quests and offers. From the core framework’s perspective, these entities are simply items assigned to players. However, extension libraries can imbue these items with specific properties and behavior, transforming them into quests or offers. This capability allows for dynamic modification of item properties, enabling the tracking of quest progress or recording the last date an offer was presented to the player.&lt;/p&gt;
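
&lt;p&gt;A sketch of what such an extension wrapper might look like (IItem and its members are assumed here; the real extension API may differ):&lt;/p&gt;

```csharp
// Hypothetical sketch: an extension library view that treats a generic
// Singularity item as a quest. The static quest description lives in the
// balance; the item's own key-typed values hold per-player state.
public class QuestItem
{
    private readonly IItem _raw; // generic item issued to the player (assumed)

    public QuestItem(IItem raw) => _raw = raw;

    public int Progress
    {
        get => _raw.Attributes["progress"].AsInt();
        set => _raw.Attributes["progress"] = value;
    }

    public bool IsComplete(int goal) => Progress >= goal;
}
```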

&lt;h2&gt;
  
  
  Results so far
&lt;/h2&gt;

&lt;p&gt;Singularity has been successfully implemented in one of our latest globally available games, Little Big Robots, and this has given the client developers the power to handle the server logic themselves. Additionally, we've been able to create prototypes that utilize existing functionality without the need for direct support from the platform team.&lt;/p&gt;

&lt;p&gt;However, this universal solution is not without its challenges. As the number of features has expanded, so has the complexity of interacting with the platform. Singularity has evolved from a simple tool into a sophisticated, intricate system—similar in some ways to the transition from a basic push-button phone to a fully-featured smartphone.&lt;/p&gt;

&lt;p&gt;While Singularity has alleviated the need for developers to dive into the complexities of databases and network communication, it has also introduced its own learning curve. Developers now need to understand the nuances of Singularity itself, which can be a significant shift.&lt;/p&gt;

&lt;p&gt;These challenges affect everyone from developers to infrastructure administrators. These professionals often have deep expertise in deploying and maintaining well-known solutions like Postgres and Kafka. However, Singularity is an internal product, necessitating that they acquire new skills: they need to learn the intricacies of Singularity's clusters, differentiate between required and optional services, and understand which metrics are critical for monitoring.&lt;/p&gt;

&lt;p&gt;It's true that, within a company, developers can always reach out to the platform's creators for advice, but this process inevitably demands time. Our goal is to minimize the barrier to entry as much as possible. Achieving this necessitates comprehensive documentation for each new feature, which can slow down development, but is nonetheless considered an investment in the platform's long-term success. Moreover, robust unit and integration test coverage is essential to ensure system reliability.&lt;/p&gt;

&lt;p&gt;Singularity heavily relies on automated testing because manual testing would require developing separate game instances, which is impractical. Automated tests can catch the vast majority—that is, 99%—of errors. However, there's always a small percentage of issues that only become evident during specific game tests. This can impact release timelines because the Singularity team and project teams often work asynchronously. A blocking error might be found in code written long ago, and the platform development team may be occupied with another critical task. (This challenge is not unique to Singularity and can occur in custom backend development as well.)&lt;/p&gt;

&lt;p&gt;Another significant challenge is managing updates across all projects that use Singularity. Typically, there is one flagship project that drives the framework's development with a constant stream of feature requests and enhancements. Interaction with this project's team is close-knit; we understand their needs and how they can leverage our platform to solve their problems.&lt;/p&gt;

&lt;p&gt;While some flagship projects are closely involved with the framework team, other games in early development stages often operate independently, relying solely on existing functionality and documentation. This can sometimes lead to redundant or suboptimal solutions, as developers might misunderstand the documentation or misuse the available features. To mitigate this, it's crucial to facilitate knowledge sharing through presentations, meetups, and team interchanges, although such initiatives do require a considerable investment of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Future
&lt;/h2&gt;

&lt;p&gt;Singularity has already demonstrated its value across our games and is poised to further evolve. While we do plan to introduce new features, our primary focus right now is on ensuring that these enhancements do not complicate the platform’s usability for project teams.&lt;/p&gt;

&lt;p&gt;Beyond this, we need to lower the barrier to entry, simplify deployment, and add flexibility in analytics so that projects can plug in their own solutions. This is a challenge for the team, but we believe, and see in practice, that the effort invested in our solution will pay off in full!&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>server</category>
      <category>backend</category>
      <category>development</category>
    </item>
    <item>
      <title>Game analytic power: how we process more than 1 billion events per day</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Fri, 24 Nov 2023 13:46:28 +0000</pubDate>
      <link>https://dev.to/mygames/game-analytic-power-how-we-process-more-than-1-billion-events-per-day-ieo</link>
      <guid>https://dev.to/mygames/game-analytic-power-how-we-process-more-than-1-billion-events-per-day-ieo</guid>
      <description>&lt;p&gt;When we operate our mech shooter, War Robots, and our other games, we want to know what’s going on there; it’s also nice to know what’s happening on the servers — this means we need to collect events. Obviously, it’s impossible to do this manually, necessitating some kind of system. To that end, at our studio, we have AppMetr, which has been helping us out for years.&lt;/p&gt;

&lt;p&gt;In this article, we’ll talk about how we collect events from our mobile devices and servers, how we store them, and why we don’t use ready-made analytical databases.&lt;/p&gt;

&lt;p&gt;We first thought about the need to collect and analyze data back in 2011. At that time, the Pixonic studio was growing and changing rapidly, so we needed a very flexible analytical system.&lt;/p&gt;

&lt;p&gt;Our first team of developers didn’t find a suitable solution, so a team member began the creation of our own system. We called it AppMetr, and built it in Java, using Cassandra to store events — we chose these technologies for one simple reason: the team had experience working with them. Additionally, we used Storm for distributed event processing, and Kafka for queues, and all of this was running on Linux servers.&lt;/p&gt;

&lt;p&gt;Simultaneously, our main title, the mobile game War Robots, began a period of very active growth. In 6 months, the number of events received by AppMetr increased from 150 to 1 billion events per day. The old architecture couldn’t keep up, so we had to make quick decisions.&lt;/p&gt;

&lt;p&gt;AppMetr does three things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It collects events&lt;/li&gt;
&lt;li&gt;It stores them&lt;/li&gt;
&lt;li&gt;It allows us to perform analytical queries on them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An event is a simple JSON object with a name and some set of attributes. (However, we don’t know in advance which events, names, or attributes will come to us.)&lt;/p&gt;

&lt;p&gt;Event sets are combined into batches, which are transmitted to us over the network. We also support all the standard, JSON-permissible data types, and we use a flexible data scheme, which is very convenient — different development teams can come up with new events and make changes to old ones (for example, by changing the type for some attribute).&lt;/p&gt;
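&lt;p&gt;Conceptually, an event and a batch can be modeled in a few lines. This is only a sketch (assuming Java 16+ records); the attribute names and values are made up, and it shows how the same attribute may arrive with different types across events:&lt;/p&gt;

```java
import java.util.List;
import java.util.Map;

public class EventBatch {
    // An event is just a name plus an open set of attributes;
    // the schema is not fixed in advance.
    record Event(String name, Map<String, Object> attributes) {}

    public static void main(String[] args) {
        List<Event> batch = List.of(
            new Event("levelUp", Map.of("userId", 42, "level", 7)),
            // the same attribute may later arrive with another type:
            new Event("levelUp", Map.of("userId", "42-abc", "level", 8)));
        System.out.println(batch.size()); // 2 events in this batch
    }
}
```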

&lt;p&gt;That said, this flexible data scheme has its own problems and there aren’t many analytical databases on the market that use it. This is because it’s quite difficult to create or maintain such a system. And if you try to create it yourself, then without fail, over time, your code will become more and more complicated, it will be difficult to optimize, and everything will work slowly. Compressing data without a scheme is also not an easy task. Fortunately, in our case, we managed to partially bypass these problems — but more on that later.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AppMetr works
&lt;/h2&gt;

&lt;p&gt;We wrote libraries for various programming languages and platforms that accumulate events in memory, form batches, compress them, and send them via the HTTPS protocol to our web servers.&lt;/p&gt;

&lt;p&gt;The first one to accept HTTP requests was NGINX. It simply redirected these requests to our processing services, where we separated them, filtered out duplicates and enriched the events with additional attributes (for example, we’d add the current level to each player event).&lt;/p&gt;

&lt;p&gt;The problem is that, due to poor mobile connections, clients sometimes make several attempts to send the next batch of data. As a result, we ended up with duplicates. Initially, these duplicates were filtered by the serial number of the batch, which was incremented with each dispatch and stored in the client settings (the so-called “shared preferences”). But it turned out that shared preferences can sometimes be rolled back, so their values can no longer be trusted.&lt;/p&gt;

&lt;p&gt;So now, we filter duplicates differently: we calculate the checksum from the client’s request body, save it in Cassandra for six months, and if the client repeats the request within six months — we ignore it. In any case, we must immediately accept events, even if some of our servers are overloaded or unavailable, and when clients are sending many more events than usual (this happens, for example, when the game is being featured in the store).&lt;/p&gt;
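&lt;p&gt;As a rough sketch of the idea (not our production code), the dedup check can be modeled with a SHA-256 checksum over the request body and an in-memory map with expiry timestamps standing in for Cassandra rows with a six-month TTL:&lt;/p&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

public class DedupFilter {
    private final Map<String, Long> firstSeen = new HashMap<>();
    private final long ttlMillis; // stands in for Cassandra's row TTL

    public DedupFilter(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    private static String sha256Hex(byte[] body) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest(body)) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // True if an identical request body was already seen within the TTL window.
    public boolean isDuplicate(byte[] body, long nowMillis) {
        String checksum = sha256Hex(body);
        Long seenAt = firstSeen.get(checksum);
        if (seenAt != null && nowMillis - seenAt < ttlMillis) return true;
        firstSeen.put(checksum, nowMillis);
        return false;
    }

    public static void main(String[] args) {
        DedupFilter filter = new DedupFilter(1_000L);
        byte[] batch = "{\"events\":[]}".getBytes(StandardCharsets.UTF_8);
        System.out.println(filter.isDuplicate(batch, 0));     // false: first delivery
        System.out.println(filter.isDuplicate(batch, 500));   // true: a client retry
        System.out.println(filter.isDuplicate(batch, 2_000)); // false: TTL expired
    }
}
```

&lt;p&gt;The key property is that the checksum is derived from the body itself, so it survives client-side state being rolled back, unlike a batch serial number.&lt;/p&gt;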

&lt;p&gt;To do all this, we created a simple microservice on Vert.x called API Proxy. We put it between NGINX and our processing services. This microservice stores the entire incoming HTTP request in Kafka, and responds with “OK” to the client. Thanks to the queue of processing HTTP requests in Kafka, we can now easily endure load peaks and various data center failures, because the second microservice that forwards HTTP requests from Kafka to our processing logic is able to repeat the request, unlike the client.&lt;/p&gt;

&lt;p&gt;Now, events enriched with additional attributes are sent to the Tracking service, which groups them by event name and date of occurrence; it also groups events into five-minute blocks. The Tracking service records these blocks to Cassandra, each time under a new key.&lt;/p&gt;

&lt;p&gt;That’s how it all works now, but we previously did it differently: we formed aggregates from all incoming events per day and per five minutes. The only catch was that, quite often, clients were sending events “from the past”. Therefore, we had to generate these aggregates again and rewrite the corresponding keys in Cassandra. This frequent overwriting of data in Cassandra led to a catastrophic slowdown during subsequent readings. In addition, the re-formation of aggregates heavily loaded the network and our infrastructure, making querying very slow for us.&lt;/p&gt;

&lt;p&gt;As a result, we abandoned the creation of aggregates and began a path towards happiness. Tracking now saves an immutable data block in Cassandra every five minutes. If the event is rare, we store this block in Cassandra for one node, if it’s frequent, then we store it for several nodes. In other words, we measure the speed of the incoming stream of events during some window and dynamically calculate how many Cassandra nodes we’ll need to store the blocks of this type of event.&lt;/p&gt;
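&lt;p&gt;A minimal sketch of that calculation might look as follows; the per-node capacity threshold and the node cap are illustrative numbers, not our actual configuration:&lt;/p&gt;

```java
public class BlockPlacement {
    // Hypothetical capacity: how many events/sec one Cassandra node
    // comfortably absorbs for a single event type (illustrative number).
    static final double EVENTS_PER_NODE = 5000.0;
    static final int MAX_NODES = 8;

    // Measure the incoming rate over a window and derive how many
    // nodes should store blocks of this event type.
    public static int nodesFor(long eventsInWindow, double windowSeconds) {
        double rate = eventsInWindow / windowSeconds;
        int nodes = (int) Math.ceil(rate / EVENTS_PER_NODE);
        return Math.max(1, Math.min(MAX_NODES, nodes));
    }

    public static void main(String[] args) {
        System.out.println(nodesFor(300, 300));       // rare event: 1 node
        System.out.println(nodesFor(3_000_000, 300)); // hot event: 2 nodes
    }
}
```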

&lt;p&gt;It’s also nice when the hard drives on our database servers are loaded evenly. To do this, we change the partition key for the saved blocks every day, but in this case, it’s easy since this key contains the current date.&lt;/p&gt;

&lt;p&gt;What is the block of events that we save in Cassandra? It consists of columns for each attribute and meta information. This meta information contains the compression settings for each column (we also store the minimum and maximum values in each column). We can read any column from this block independently of other blocks. For example, if we want to calculate player levels, we’ll read the Level column, and if we want to calculate the levels of players and their UserID, then we will read both columns at the same time, and the same event will correspond to the same ordinal values in both columns.&lt;/p&gt;

&lt;p&gt;As mentioned above, we don’t have a fixed data scheme, and we don’t know what type of value we’ll receive. It may turn out that UserID can be sent as a number, and after two seconds as a string. We’re trying to determine the most general data type that will be used for a given column in a given block, and in another block the same column may have a different type.&lt;/p&gt;
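&lt;p&gt;Here is one way such type widening could be sketched in Java. The three-level type ladder (long, double, string) is a simplification of what a real system would track:&lt;/p&gt;

```java
public class ColumnTyper {
    // Ordered from most specific to most general; a deliberate simplification.
    enum Kind { LONG, DOUBLE, STRING }

    static Kind kindOf(String raw) {
        try { Long.parseLong(raw); return Kind.LONG; } catch (NumberFormatException ignored) {}
        try { Double.parseDouble(raw); return Kind.DOUBLE; } catch (NumberFormatException ignored) {}
        return Kind.STRING;
    }

    // The column's type within a block is the most general kind seen in it.
    static Kind columnType(String[] values) {
        Kind widest = Kind.LONG;
        for (String v : values) {
            Kind k = kindOf(v);
            if (k.ordinal() > widest.ordinal()) widest = k;
        }
        return widest;
    }

    public static void main(String[] args) {
        System.out.println(columnType(new String[]{"1", "2", "3"}));  // LONG
        System.out.println(columnType(new String[]{"1", "2.5"}));     // DOUBLE
        System.out.println(columnType(new String[]{"1", "user-42"})); // STRING
    }
}
```

&lt;p&gt;A UserID column that receives only numbers in one block is stored numerically there, while another block of the same column may be widened to strings.&lt;/p&gt;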

&lt;p&gt;In addition, we compress attribute columns independently of each other, analyze the stream of values, cardinalities, and other parameters of this stream, and choose the most optimal compression algorithm. When a block is formed, we try to roughly estimate how big the column will be after compression using each of the methods — and we’ll use the winner. In addition, we try to compress the column using ZSTD, but if the size of the column doesn’t decrease, we save it in Cassandra as is. In this form, we store about 2.5 trillion events in Cassandra, which take up about 180 terabytes of data (considering a replication factor of 3).&lt;/p&gt;
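&lt;p&gt;The “compress, but keep the raw bytes if it didn’t help” step can be sketched like this. We use the JDK’s Deflater purely as a stdlib stand-in for ZSTD, which is not in the standard library:&lt;/p&gt;

```java
import java.util.Arrays;
import java.util.zip.Deflater;

public class ColumnCompressor {
    // Compress a column; if the result is not smaller than the input,
    // return the column as-is (the real system records this choice in
    // the block's meta information).
    public static byte[] compressOrRaw(byte[] column) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(column);
        deflater.finish();
        byte[] buf = new byte[column.length + 64];
        int len = deflater.deflate(buf);
        boolean finished = deflater.finished();
        deflater.end();
        if (finished && len < column.length) return Arrays.copyOf(buf, len);
        return column; // compression did not help: keep raw
    }

    public static void main(String[] args) {
        byte[] repetitive = new byte[4096]; // zeros compress extremely well
        byte[] out = compressOrRaw(repetitive);
        System.out.println(out.length < repetitive.length); // true
    }
}
```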

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk26kobruurzgzqjxg098.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk26kobruurzgzqjxg098.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why not use a ready-made solution?
&lt;/h2&gt;

&lt;p&gt;There is this opinion that it’s better to use ready-made solutions instead of creating them yourself. For some people, this really works — but not for us. All of the third-party solutions we’ve found were significantly more expensive than the cost of our team and our servers. Thus, we use our AppMetr.&lt;/p&gt;

&lt;p&gt;We could use ready-made analytical databases, like ClickHouse, but we have certain problems with it. The main one is that, over the years of AppMetr’s work, tens of thousands of different events with different attributes have accumulated, and transferring them to the built-in ClickHouse data scheme would not be an easy task. We also can’t create a new table in ClickHouse for each type of event, because this would be an anti-pattern.&lt;/p&gt;

&lt;p&gt;So, we created our own system, and in Java, even though Java is considered slow. It’s unlikely, of course, that AppMetr will ever reach ClickHouse speeds, but we don’t need that right now. We recently conducted a small experiment purely for ourselves: we launched a query for a year based on hundreds of millions of events filtered by one attribute and grouped by date. This query was executed on one server at a speed of 72 million events per second, and we were completely satisfied with this result.&lt;/p&gt;

&lt;p&gt;Another thing: we also wrote SQL support for our system, but it’s very limited and only works with standard simple queries. That is, there is no “JOIN” or subquery support, and this is a problem because our analysts need the ability to perform complex queries.&lt;/p&gt;

&lt;p&gt;We decided not to waste time reinventing the wheel and simply installed &lt;a href="https://trino.io/" rel="noopener noreferrer"&gt;Trino&lt;/a&gt; on our servers. It’s a full-featured SQL query engine that works on your data. Now our analysts can use it to work with data from AppMetr and execute queries at different levels of complexity.&lt;/p&gt;

&lt;p&gt;Actually, AppMetr is very popular within our company due to its flexibility: any employee can create a new project, distribute access to it, add dashboards with various charts, share a link to a chart with a colleague, click on various filtering expressions, add groupings, and so on. Developers and analysts can take data from AppMetr using the SQL API and process it in Jupyter or locally. And game developers can integrate the AppMetr SDK into their product, come up with a name for an event, add some attributes, and see it on the charts in minutes.&lt;/p&gt;

&lt;p&gt;Let’s move on to the most interesting part and talk about how we perform queries on collected events. Actually, everything is pretty simple. We have several query servers. A user’s initial SQL query arrives at a random server, where we parse it and form a query execution plan, which we then send to the appropriate query server.&lt;/p&gt;

&lt;p&gt;It turns out that writing your own SQL dialect is quite simple; we took the ready-made ANTLR4 library, wrote some simple grammar for it, implemented a visitor, and now we parse SQL queries without any problems. Our query servers are partitioned by the name of the event, so the query coordinator always knows which server to send the user’s request to. And if the server is unavailable, then the request is forwarded to the next one by ring.&lt;/p&gt;
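&lt;p&gt;The ring-based routing can be sketched roughly as follows: hash the event name to find the home server, and walk the ring to the next one on failure. The hash function and the availability bookkeeping here are illustrative:&lt;/p&gt;

```java
import java.util.Arrays;

public class QueryRouter {
    private final boolean[] available;

    public QueryRouter(int servers) {
        available = new boolean[servers];
        Arrays.fill(available, true);
    }

    public void markDown(int server) { available[server] = false; }

    // Partition by event name; on failure, forward to the next server
    // along the ring.
    public int route(String eventName) {
        int n = available.length;
        int home = Math.floorMod(eventName.hashCode(), n);
        for (int i = 0; i < n; i++) {
            int candidate = (home + i) % n;
            if (available[candidate]) return candidate;
        }
        throw new IllegalStateException("no query server available");
    }

    public static void main(String[] args) {
        QueryRouter ring = new QueryRouter(4);
        int home = ring.route("levelUp");
        ring.markDown(home);
        System.out.println(ring.route("levelUp") == (home + 1) % 4); // true
    }
}
```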

&lt;p&gt;The query servers cache the event blocks they download from Cassandra on their local hard drives, so they can take full advantage of the operating system’s page cache and work with local data very efficiently. It’s easy for us to cache event blocks on query-servers because they are immutable.&lt;/p&gt;

&lt;p&gt;We cache blocks using an LRU algorithm, meaning the least-recently-used block is deleted first, and we use SSD disks for the disk cache. On each query server we have a simple folder scheme; there is a folder for each day and each event. It’s worth noting that the first component in this folder structure is the server number; this is needed so that any query server can take on the load of the previous one in the ring.&lt;/p&gt;
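&lt;p&gt;In Java, this eviction policy is only a few lines on top of LinkedHashMap in access order; a minimal sketch, not our actual cache:&lt;/p&gt;

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: LinkedHashMap in access order evicts the
// least-recently-used entry once capacity is exceeded.
public class BlockCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public BlockCache(int capacity) {
        super(16, 0.75f, true); // true = iterate in access order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        BlockCache<String, byte[]> cache = new BlockCache<>(2);
        cache.put("day1/level", new byte[0]);
        cache.put("day1/userId", new byte[0]);
        cache.get("day1/level");              // touch: now most recently used
        cache.put("day2/level", new byte[0]); // evicts day1/userId
        System.out.println(cache.containsKey("day1/level"));  // true
        System.out.println(cache.containsKey("day1/userId")); // false
    }
}
```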

&lt;p&gt;And since we use the column-oriented database approach, we have a separate file for each event attribute in this folder. We also have two files with meta information: these indicate which columns and which blocks have already been requested from Cassandra, as well as the compression settings for each column.&lt;/p&gt;

&lt;p&gt;Before we used to store data in the cache in a different way. For each type of event from each day, we recorded data in one large file and read it from top to bottom, skipping attribute columns that were unnecessary for this request. Now, we store different event columns in different files; the amount of data read by our application from the disk has not changed, but the reading speed has increased significantly.&lt;/p&gt;

&lt;p&gt;We only use sequential reading and sequential writing. If we need to record a new block of data to the cache, we simply append the new block’s columns to the end of the corresponding column files on disk; and when we start executing a query, we download from Cassandra only the attribute columns necessary to execute this query. Also, before reading a column, we map it entirely into memory, which further helps increase speed. (It’s worth noting that you can memory-map files even larger than your RAM.)&lt;/p&gt;
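&lt;p&gt;A minimal sketch of this append-then-map pattern using Java NIO (a toy temp file, not our storage format):&lt;/p&gt;

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ColumnFileDemo {
    // Sequential write: a new block's column bytes go to the end of the file.
    static void append(Path file, byte[] block) throws IOException {
        Files.write(file, block, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    // Sequential read: map the whole column file into memory; the OS page
    // cache does the heavy lifting, and files may even exceed RAM.
    static byte[] readAll(Path file) throws IOException {
        try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            byte[] out = new byte[(int) channel.size()];
            mapped.get(out);
            return out;
        }
    }

    // Demo: append two blocks, then read the column back via one mapping.
    static int demo() {
        try {
            Path file = Files.createTempFile("level-column", ".bin");
            append(file, new byte[]{1, 2, 3});
            append(file, new byte[]{4, 5});
            int length = readAll(file).length;
            Files.delete(file);
            return length;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 5: both appended blocks read back
    }
}
```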

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnzebcv3p5x29c11000e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmnzebcv3p5x29c11000e.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How AppMetr executes queries
&lt;/h2&gt;

&lt;p&gt;Any analytic query that comes into AppMetr goes through four simple steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We download data blocks from Cassandra to the cache on query-servers&lt;/li&gt;
&lt;li&gt;We filter these data blocks&lt;/li&gt;
&lt;li&gt;We group events from these blocks, taking into account the aggregation functions that are specified in the query&lt;/li&gt;
&lt;li&gt;We perform post-processing of the query results: for example, we get the 10 most popular keys&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let’s take a closer look at some of these steps.&lt;/p&gt;

&lt;p&gt;When we download data blocks from Cassandra, the query server knows which blocks it already has in the cache and which ones it doesn’t, so it only requests new information. And if several query servers want to use the same data at the same time, or rather, several users want to use the same data to execute a query, then the query server will download them from Cassandra only once.&lt;/p&gt;

&lt;p&gt;When we go through the filtering step, we form a criterion tree in memory. Each criterion receives some column of block attributes as input, filters this column by some expression, and returns a bit set with the result. Each ordinal value in the original column in this bit-set will correspond to bit 0 or 1. One means that this value satisfies the criterion filtering condition. Bit sets obtained from different criteria are combined using logical expressions and a final bit set is formed with the result of filtering, which is passed to the next stage of processing.&lt;/p&gt;
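&lt;p&gt;The criterion-to-bit-set idea can be illustrated with java.util.BitSet; the columns and predicates below are made-up examples:&lt;/p&gt;

```java
import java.util.BitSet;
import java.util.function.Predicate;

public class CriterionFilter {
    // Each criterion turns one attribute column into a bit set:
    // bit i is set when row i satisfies the condition.
    static BitSet apply(String[] column, Predicate<String> condition) {
        BitSet result = new BitSet(column.length);
        for (int i = 0; i < column.length; i++) {
            if (condition.test(column[i])) result.set(i);
        }
        return result;
    }

    public static void main(String[] args) {
        String[] country = {"US", "DE", "US", "FR"};
        String[] level   = {"5", "30", "41", "50"};
        BitSet a = apply(country, "US"::equals);
        BitSet b = apply(level, v -> Integer.parseInt(v) >= 40);
        a.and(b); // combine the two criteria with a logical AND
        System.out.println(a); // {2}: only row 2 matches both
    }
}
```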

&lt;p&gt;The next step is to group the data. The input of the grouping is the initial block of the event and a bit-set with the filtering result. We iterate over the original event block using the bit set and group the events in memory using a regular HashMap.&lt;/p&gt;

&lt;p&gt;It’s worth noting that we always perform grouping in memory, but if the size of the grouping map exceeds the limits, then we sort it by key and record it to a file on disk. We create a new grouping map in memory. When it fills up, we spill it to disk again. Upon completion of the grouping, we have several sorted files on the disk and one map in memory, which we also sort. Since there are multiple sorted data sources, we can use a merge sort to combine them. We create a merge iterator for these data sources and pass it to the next stage of processing.&lt;/p&gt;
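&lt;p&gt;That final merge of sorted sources is a classic k-way merge. Here is a compact sketch using a priority queue, with small in-memory lists standing in for the spilled files and the final in-memory map:&lt;/p&gt;

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSortedRuns {
    // Merge several sorted runs into one sorted stream: a k-way merge
    // driven by a min-heap over the current head of each run.
    public static List<String> merge(List<List<String>> runs) {
        // heap entries: {runIndex, positionInRun}, ordered by current key
        PriorityQueue<int[]> heap = new PriorityQueue<>(
            Comparator.comparing((int[] e) -> runs.get(e[0]).get(e[1])));
        for (int r = 0; r < runs.size(); r++) {
            if (!runs.get(r).isEmpty()) heap.add(new int[]{r, 0});
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] head = heap.poll();
            List<String> run = runs.get(head[0]);
            out.add(run.get(head[1]));
            if (head[1] + 1 < run.size()) heap.add(new int[]{head[0], head[1] + 1});
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> merged = merge(List.of(
            List.of("apple", "mango"),
            List.of("banana", "zebra"),
            List.of("cherry")));
        System.out.println(merged); // [apple, banana, cherry, mango, zebra]
    }
}
```

&lt;p&gt;A real merge iterator would pull lazily from file readers instead of lists, which is what makes deferred, memory-bounded processing possible.&lt;/p&gt;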

&lt;p&gt;This grouping pattern had its issues, namely data accumulating in memory and then being abruptly released and spilled to disk. Previously, we used Java 8, the G1 Garbage Collector, and a 30-gigabyte heap on each server, and we had to deal with the fact that the Garbage Collector stopped collecting garbage in the old generation, giving us long Full GC pauses. We looked at the GC logs and saw this message: GC concurrent-mark-reset-for-overflow.&lt;/p&gt;

&lt;p&gt;It seemed that we had enough heap, but Full GC still ran often. It turned out that G1 lacked an internal stack to store our temporary object graph. We increased the MarkStackSize parameter, after which G1 began collecting garbage in the old generation as well, and the long Full GC pauses stopped.&lt;/p&gt;

&lt;p&gt;Once we’ve calculated the results of query execution, we must send them to the client. What could go wrong here? Actually, a lot. The fact is that we use merge iterators, which are a form of deferred computation that we cannot evaluate all at once, otherwise we’d get an OutOfMemory error. So, we stream the query result to the client with back-pressure support, while encoding the response in CSV format. If the client starts to read the result more slowly, we likewise iterate more slowly over the merge iterator, reading data from the disk more slowly. As a result, streaming helps us return large query results.&lt;/p&gt;

&lt;p&gt;To speed up the execution of requests within a single server, we use servers with multiple cores, and AppMetr utilizes them all. All the data in AppMetr is partitioned by day, so different days can be processed in parallel on different cores, and these different threads on different cores don’t share any common state; we can greatly optimize our code, without using concurrent data structures, which helps a lot.&lt;/p&gt;

&lt;p&gt;In addition, we try to use only native data types, we do not use String if we need to filter a string in some substring, and perform all operations on byte arrays. And, of course, we make the minimum number of allocations of new temporary objects.&lt;/p&gt;

&lt;p&gt;Finally, let’s take a look at how we share the resources of the query server between queries running in parallel. To do this, we have a solution that has shown great results in practice. For context, we received the task of utilizing server resources as efficiently as possible, but at the same time allowing the parallel execution of queries. To do this, each query server runs one instance of our application. We allocate a certain amount of RAM and a certain number of processor cores to this application; a separate thread of execution starts for each processor core in the application, and the memory allocated to the query-server is divided equally between these threads.&lt;/p&gt;

&lt;p&gt;A thread can’t use the memory allocated to another thread. That is, in parallel on one query server, we can execute as many queries as there are cores in the server, and if it’s impossible to allocate a guaranteed core to a new query, then this query is queued.&lt;/p&gt;

&lt;p&gt;Let’s consider this situation as an example: a new user request comes in, we allocate a guaranteed core to it, the request begins executing, and at that moment the rebalancing service is triggered. It considers the number of currently running requests and the number of cores in the server. This service can add an extra thread of execution to any active request, or take that thread away and pass it to another request. The rebalancing service takes various nuances into account; for example, additional threads are of little use for a query that covers only a single day.&lt;/p&gt;




&lt;p&gt;It’s not difficult to create your own specialized column store, and it will work quickly even in Java. In this case, you don’t have to aggregate the event or set up a data scheme in advance. In our case, we created AppMetr for our tasks — it helps us quickly collect all the necessary information and is equally useful to employees from various departments.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>analytics</category>
    </item>
    <item>
      <title>Unity Realtime Multiplayer, Part 1: Networking Basics</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Mon, 31 Jul 2023 11:38:40 +0000</pubDate>
      <link>https://dev.to/mygames/unity-realtime-multiplayer-part-1-networking-basics-34lh</link>
      <guid>https://dev.to/mygames/unity-realtime-multiplayer-part-1-networking-basics-34lh</guid>
      <description>&lt;p&gt;Network interaction is critical considering for most modern games, whether mobile, console, PC, or VR . It doesn't matter if you're creating a simple multiplayer game or an ambitious MMO — network programming knowledge is key.&lt;/p&gt;

&lt;p&gt;Hello everyone, I'm Dmitrii Ivashchenko, a Lead Software Engineer at MY.GAMES. This series of articles, "Unity Networking Landscape in 2023", will cover critical aspects and constraints of network environments, delve into various protocols (including TCP, UDP, and WebSocket), and highlight the significance of the Reliable UDP protocol. We'll explore the impact of NAT on real-time multiplayer games and guide you on preparing game data for network transmission.&lt;/p&gt;

&lt;p&gt;We'll look at topics ranging from the basics to more advanced concepts like transport protocols, network architecture patterns, ready-made solutions for Unity, and more. We'll analyze both official Unity solutions and third-party tools to help you find the optimal choice for your projects.&lt;/p&gt;

&lt;p&gt;In this first post, we'll cover the critical elements of network programming and look at the obstacles and issues developers often face when creating games that feature networking.&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding the infrastructure
&lt;/h1&gt;

&lt;p&gt;The Internet is a complex system comprising various devices, each with unique functions. Let's talk about some of those. Typically, an individual's connection to the Internet begins with a device such as a computer or a smartphone. These connect to a local network through routers or modems, which enable communication between the local network and the ISP. &lt;/p&gt;

&lt;p&gt;The ISP has larger routers and switches that manage traffic from multiple local networks. These, in turn, connect to the backbone of the Internet: a complicated network of high-capacity routers and fiber-optic cables spanning continents and oceans. Separate companies, known as backbone providers, are responsible for maintaining this backbone.&lt;/p&gt;

&lt;p&gt;Additionally, data centers house powerful servers where websites, applications, and online services reside. When you request access to a website or online service, your request travels through this extensive network to the relevant server, and subsequently, the data is sent back along the same path.&lt;/p&gt;

&lt;h1&gt;
  
  
  &lt;strong&gt;Network restrictions&lt;/strong&gt;
&lt;/h1&gt;

&lt;p&gt;Before diving into the world of TCP, UDP, Relay Servers, and real-time multiplayer game development, it's critical to have a solid understanding of network systems as a whole. This involves understanding the roles and functions of devices like hubs and routers and an awareness of any potential issues that can arise from the operation of these devices and mediums. &lt;/p&gt;

&lt;p&gt;Network technologies aren't isolated from the physical world and are subject to several physical limitations: bandwidth, latency, connection reliability — all of these factors are important to consider when developing networked games. &lt;/p&gt;

&lt;p&gt;Understanding these basic principles and constraints will help you better evaluate the possible solutions and strategies required for the successful network integration of your games.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bandwidth
&lt;/h2&gt;

&lt;p&gt;Bandwidth is the maximum amount of data that can be transmitted through a network in a specific period. Data transmission speeds directly depend on the available bandwidth: the more bandwidth, the more data can be simultaneously uploaded.&lt;/p&gt;

&lt;p&gt;Bandwidth is measured in bits per second (bps) or multiples thereof, such as megabits per second (Mbps), and connections come in two types: symmetric (with equal upload and download speeds) and asymmetric (with different upload and download speeds). &lt;/p&gt;

&lt;p&gt;Symmetric connections are usually used for wired networks, like fiber-optic networks, while asymmetric connections are used in wireless networks, as is the case with mobile data.&lt;/p&gt;

&lt;p&gt;High bandwidth means more data can be transmitted in less time, which is absolutely essential for real-time multiplayer games.&lt;/p&gt;
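&lt;p&gt;As a rough worked example (ignoring protocol overhead and congestion), transfer time is simply payload size divided by bandwidth:&lt;/p&gt;

```java
public class TransferTime {
    // Seconds to move a payload over a link: size in megabytes,
    // bandwidth in megabits per second (8 bits per byte).
    public static double seconds(double megabytes, double mbps) {
        double megabits = megabytes * 8;
        return megabits / mbps;
    }

    public static void main(String[] args) {
        // A 50 MB asset bundle over a 100 Mbps link:
        System.out.println(seconds(50, 100)); // 4.0 seconds
    }
}
```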

&lt;h2&gt;
  
  
  Round-Trip Time
&lt;/h2&gt;

&lt;p&gt;RTT, or Round-Trip Time, measures the time it takes for a data packet to travel from the sender to the receiver and then back again. This is an essential metric in networked games as it affects the latency that players may experience during gameplay. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--nP05BMx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkjvym6ewjl3pzzdycr8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--nP05BMx5--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rkjvym6ewjl3pzzdycr8.png" alt="Image description" width="800" height="478"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When RTT is high, players may experience delays which can negatively impact gameplay. Therefore, game developers should strive to minimize RTT to provide a smoother and more responsive gameplay experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Network delays
&lt;/h2&gt;

&lt;p&gt;A network delay (often referred to as "lag") is the time required to transmit a data packet from sender to receiver. Even small network delays can significantly affect gameplay in games with high responsiveness requirements, such as first-person shooters. &lt;/p&gt;

&lt;p&gt;Although data travels at speeds close to the speed of light, distance still causes delays. Delays also arise from the infrastructure required for the Internet to function, and they cannot be fully eliminated: transmission through physical cables, queuing in network devices such as routers and switches, and processing on the sending and receiving devices all add time. That said, this infrastructure can still be optimized to reduce delays.&lt;/p&gt;

&lt;h3&gt;
  
  
  The speed of light and network latency
&lt;/h3&gt;

&lt;p&gt;Let's talk about how the transmission medium impacts network latency. Data carried as light through optical fiber doesn't travel at exactly the speed of light. In reality, light in an optical fiber travels slower than it would in a vacuum, since the material of the fiber affects its speed. &lt;/p&gt;

&lt;p&gt;(The maximum speed of light is approximately 299 million meters per second, or 186 thousand miles per second, but again, this is only possible under ideal vacuum conditions.)&lt;/p&gt;

&lt;p&gt;So, with optical fiber, light travels at a somewhat slower rate, relatively speaking. Let's also note that data transmission through copper wiring is significantly slower than through optical fiber, because optical fibers have greater bandwidth and are less susceptible to interference than copper wires.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--EpH-CQ1y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruieeqsmnaeu081xz71m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--EpH-CQ1y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ruieeqsmnaeu081xz71m.png" alt="Image description" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The table above assumes that data packets travel over optical fiber along a great circle between cities which, in reality, is rarely the case. A data packet's route usually passes through many intermediate points (“hops”), each adding its own delay, so the actual travel time can be significantly longer. Even so, a data packet transmitted over optical fiber (at speeds approaching that of light) requires more than 150 milliseconds to complete the round-trip journey from Amsterdam to Sydney and back.&lt;/p&gt;
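&lt;p&gt;We can sanity-check that round-trip figure with a quick calculation. Here's a sketch in Python, assuming light in fiber travels at roughly two thirds of its vacuum speed and taking about 16,650 km as the great-circle distance between Amsterdam and Sydney:&lt;/p&gt;

```python
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FACTOR = 0.67             # light in fiber travels at roughly 2/3 c

def fiber_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over a direct fiber run, ignoring hops."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2.0 * 1000.0

# Amsterdam to Sydney, roughly 16,650 km along a great circle:
print(round(fiber_rtt_ms(16_650)))  # ~166 ms, already over 150 ms with zero hops
```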

&lt;p&gt;While people are not particularly sensitive to millisecond delays, research has shown that a delay of 100-200 ms is already noticeable to the human brain, and once it exceeds 300 ms, we perceive it as a slow reaction.&lt;/p&gt;

&lt;p&gt;To keep network latency under 100 ms, content needs to be served from locations as geographically close to users as possible. We must also carefully control the path that data packets take, providing a clear route with as little congestion as possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jitter
&lt;/h2&gt;

&lt;p&gt;Jitter is a variation or "fluctuation" in network delays; it describes a change in the delay time between successive data packets. When data packets arrive at irregular intervals, this indicates network transmission instability. This can be caused by various factors, including network congestion, changes in traffic, and equipment deficiencies.&lt;/p&gt;

&lt;p&gt;Even if an average delay is deemed acceptable, high jitter can cause problems, especially in real-time applications such as online gaming, or those involving internet telephony where delay consistency is essential.&lt;/p&gt;
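&lt;p&gt;One simple way to put a number on jitter is to look at how much the gaps between consecutive packet arrivals deviate from their average (real protocols, such as RTP, use a smoothed estimator instead; this is only an illustrative sketch):&lt;/p&gt;

```python
def jitter_ms(arrival_times_ms):
    """Mean absolute deviation of inter-arrival gaps from their average —
    one simple way to quantify jitter."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Packets meant to arrive every 50 ms, wobbling between 30 and 70 ms:
print(jitter_ms([0, 50, 120, 150, 210]))  # → 12.5
```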

&lt;p&gt;If the amount of jitter is too large, players may experience lag or "stuttering" when moving game characters or objects. This can also lead to packet loss, where data packets do not reach their destination or arrive too late to be useful. &lt;/p&gt;

&lt;p&gt;Jitter can also affect the overall fairness of the game. For instance, if one player has high jitter and another does not, the latter will have an advantage because their actions will be registered and displayed faster.&lt;/p&gt;

&lt;h2&gt;
  
  
  Packet Loss
&lt;/h2&gt;

&lt;p&gt;Packet loss is a situation when one or more packets of data fail to reach their destination. This can happen for various reasons, such as network issues, traffic overload, or equipment problems. &lt;/p&gt;

&lt;p&gt;In real-time games, where up-to-date information is critical, packet loss can cause noticeable problems, including characters "freezing," disappearing objects, or game state inconsistencies among players. &lt;/p&gt;

&lt;p&gt;Packet loss can lead to an outright interruption of gameplay, since necessary information may be lost during transmission.&lt;/p&gt;

&lt;p&gt;Therefore, it's important to develop mechanisms to cope with packet loss or minimize its impact on gameplay.&lt;/p&gt;
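&lt;p&gt;A common building block for such mechanisms is tagging every packet with a sequence number, so the receiver can spot gaps. A minimal sketch, assuming one monotonically increasing sequence number per packet:&lt;/p&gt;

```python
def lost_packets(received_seqs):
    """Detect losses as gaps in per-packet sequence numbers."""
    received = set(received_seqs)
    expected = range(min(received), max(received) + 1)
    return [seq for seq in expected if seq not in received]

# Packets 3, 6 and 7 never arrived:
print(lost_packets([1, 2, 4, 5, 8]))  # → [3, 6, 7]
```

Once a gap is detected, the game can decide whether to request a resend or simply carry on with newer data.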

&lt;h2&gt;
  
  
  Tick Rate
&lt;/h2&gt;

&lt;p&gt;The tick rate, or simulation rate, refers to the frequency at which the game generates and manages data each second. During a tick, the server processes the received data and performs simulations before sending the outcomes to the clients. The server then rests until the next tick. A faster tick rate means that clients will get new data from the server sooner, reducing the delay between the player and server and improving hit registration responsiveness.&lt;/p&gt;

&lt;p&gt;A tick rate of 60Hz is more efficient than 30Hz because it decreases the time between simulation steps, leading to less delay. Additionally, this rate allows the server to transmit 60 updates per second, which reduces the round-trip delay between the client and server by around 33ms (about 16ms from client to server and another 16ms from server to client). &lt;/p&gt;

&lt;p&gt;However, gameplay issues such as rubber banding, teleporting players, rejected hits, and physics failures may arise when the server struggles to process ticks within the allotted interval for each tick rate. For instance, if a server is set to a 60Hz tick rate but cannot complete the necessary simulations and data transmission within the approximately 16.67 milliseconds (1 second / 60) available for each tick, these issues can occur.&lt;/p&gt;
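&lt;p&gt;The per-tick time budget mentioned above follows directly from the tick rate, as this small sketch shows:&lt;/p&gt;

```python
def tick_budget_ms(tick_rate_hz: float) -> float:
    """Time the server has to finish simulating and sending one tick."""
    return 1000.0 / tick_rate_hz

print(round(tick_budget_ms(30), 2))  # → 33.33 ms per tick
print(round(tick_budget_ms(60), 2))  # → 16.67 ms per tick
```

If a single simulation step ever takes longer than this budget, the server falls behind and the symptoms described above appear.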

&lt;h1&gt;
  
  
  Dealing with limitations
&lt;/h1&gt;

&lt;p&gt;As we discussed in the sections on delay and packet loss, delay is a problem we need to address, and jitter makes creating a seamless gaming experience even more challenging. &lt;/p&gt;

&lt;p&gt;If we ignore delay and take no steps to mitigate it, we end up with a "dumb terminal." A dumb terminal doesn't need to comprehend the simulation it shows the client; it only sends input data from the client to the server and receives the resulting state from the server to display.&lt;/p&gt;

&lt;p&gt;This approach prioritizes accuracy, ensuring the correct user state is always displayed. However, it has several drawbacks:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It can lead to delay and an unstable gaming experience if the server's update frequency is not adequate. The game will run at the server's pace, regardless of the client's potential frame rate. This can degrade a high-frequency game into a low-quality experience with a noticeable input delay.&lt;/li&gt;
&lt;li&gt;Delays in responsiveness may be acceptable in some game genres, but not all. An outdated visualization of the game world can make aiming accurately at other players difficult. Players have to anticipate their actions, aiming earlier to compensate for the delay.&lt;/li&gt;
&lt;li&gt;In the worst-case scenario, players may miss their target entirely. The enemy might appear to be ahead in time by 100-150 ms compared to the display, even if they are not moving erratically. This discrepancy can cause players to miss even if their aim was spot-on according to their screen.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Therefore, while the "dumb terminal" approach ensures accurate state representation, it can potentially lower the quality of the gaming experience due to its inherent limitations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Client-Side interpolation
&lt;/h3&gt;

&lt;p&gt;When we combine the chaos of RTT oscillations and jitter, the result is an undesirable gaming experience. Infrequent updates from the server, as well as poor network conditions, can cause visual instability. However, there are ways to minimize the impact of delay and jitter, like client-side interpolation.&lt;/p&gt;

&lt;p&gt;With client-side interpolation, the client smoothly interpolates the state of objects over time instead of simply relying on their positions sent from the server. This method is cautious, as it only smooths the transition between the actual states sent from the server.&lt;/p&gt;

&lt;p&gt;In a topology with a trusted server, the client typically displays a state that is roughly half of the RTT behind the actual simulation state on the server. However, for client-side interpolation to function correctly, the client must stay behind the last state received from the server, which adds a delay equal to the interpolation period. To prevent stuttering, this period should be no shorter than the packet-sending period; that way, by the time the client finishes interpolating to the most recent state, a new state has arrived and the process repeats.&lt;/p&gt;
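&lt;p&gt;At its core, this is a linear blend between two buffered server snapshots. A toy sketch (positions are 2D tuples here purely for illustration):&lt;/p&gt;

```python
def interpolate(p0, p1, t):
    """Linearly blend between two buffered server snapshots (0 <= t <= 1)."""
    return tuple(a + (b - a) * t for a, b in zip(p0, p1))

# Render a frame halfway between the two most recent server positions:
prev_pos, next_pos = (0.0, 0.0), (10.0, 4.0)
print(interpolate(prev_pos, next_pos, 0.5))  # → (5.0, 2.0)
```

The client advances `t` each frame based on how much of the interpolation period has elapsed, so movement stays smooth even between server updates.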

&lt;h3&gt;
  
  
  Dead Reckoning
&lt;/h3&gt;

&lt;p&gt;To minimize the impact of non-periodic state updates, some developers use the extrapolation method, also known as Dead Reckoning (DR). This technique involves predicting a game object's future position, rotation, and velocity based on its last known values. For instance, if the player sends a packet every third frame with the object's current position, rotation, and velocity, Bolt's extrapolation algorithm can estimate where the object will be for the next three frames until new data arrives.&lt;/p&gt;
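&lt;p&gt;At its simplest, dead reckoning just advances the last known state along its velocity. Here's a toy sketch of that idea (real implementations, with techniques like projected velocity blending, are considerably more involved):&lt;/p&gt;

```python
def extrapolate(position, velocity, dt):
    """Predict where an object will be `dt` seconds after its last known state."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Last update: at (100, 50), moving 20 units/s along x. Where is it 0.1 s later?
print(extrapolate((100.0, 50.0), (20.0, 0.0), 0.1))  # → (102.0, 50.0)
```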

&lt;p&gt;In this case, it's important to note that we can still use the same guessing method if a new packet doesn’t arrive as predicted. But the longer we guess into the future, the higher the chances of making an error; to address this, the DR algorithm utilizes "projected velocity blending" to make corrections once actual data is received.&lt;/p&gt;

&lt;p&gt;Extrapolation reduces the need for artificial packet delays in gaming, resulting in faster displays of real-time actions for players. It also deals with lost or missing packets more effectively when working with games with many players. This means that missing position, rotation, and velocity information does not cause any delays in gameplay.&lt;/p&gt;

&lt;p&gt;Although DR can be helpful, it is not as precise as interpolation. Additionally, using DR can be challenging in an FPS game where you want authoritative shooting with lag compensation. Because extrapolation involves estimating values, what each player sees on their screen may differ: aiming directly at a player moving perpendicular to you could still produce a miss, which wouldn't happen with interpolated values.&lt;/p&gt;

&lt;h3&gt;
  
  
  Client-side prediction
&lt;/h3&gt;

&lt;p&gt;Interpolation and extrapolation on the client side reduce delays, but the game can still feel "sluggish". This is where "Client-Side Prediction" comes in: immediately after pressing a button, the player character starts moving, removing the feeling of sluggishness. If done correctly, this prediction will be almost identical to the server's calculations.&lt;/p&gt;

&lt;p&gt;Client-Side Prediction causes differences between what the server and client see. This can lead to "unexpected" visual effects. It is important to take into account unprocessed player actions and reapply them after each server update.&lt;/p&gt;
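&lt;p&gt;Reapplying unprocessed actions after each server update is often called reconciliation. A toy sketch of the idea, using one-dimensional movement and illustrative names:&lt;/p&gt;

```python
def reconcile(server_position, pending_inputs, apply_input):
    """Snap back to the authoritative server state, then replay every input
    the server hasn't processed yet, keeping the local view responsive."""
    position = server_position
    for move in pending_inputs:
        position = apply_input(position, move)
    return position

# Toy 1-D movement where each input shifts the player by its value:
print(reconcile(10.0, [1.0, 1.0, 1.0], lambda pos, move: pos + move))  # → 13.0
```

If the prediction was correct, the replayed result matches what the client already showed and the correction is invisible to the player.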

&lt;p&gt;Despite improvements, there is still a significant delay between any server update and the moment the player sees it. This leads to scenarios where the player, for example, makes a perfect shot, but misses because they are aiming at an outdated position of another player. This is where the area of debate known as Lag Compensation begins.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lag compensation
&lt;/h3&gt;

&lt;p&gt;Lag compensation is a controversial technique aimed at solving the problem where, for instance, a player makes a perfect shot but misses because they were aiming at another player’s “outdated” position. &lt;/p&gt;

&lt;p&gt;The principle of lag compensation is that the server can recreate the world state at any time. When the server receives your data packet with information about the shot, it recreates the world at the moment of the shot and decides whether it hit or missed.&lt;/p&gt;
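&lt;p&gt;The rewind idea can be sketched as keeping a time-sorted history of world states on the server and picking the one that was in effect at the moment of the shot (an illustrative sketch; the state strings stand in for full world snapshots):&lt;/p&gt;

```python
import bisect

def rewind(history, shot_time):
    """Return the stored world state that was in effect at `shot_time`.
    `history` is a time-sorted list of (timestamp, state) pairs."""
    times = [t for t, _ in history]
    i = bisect.bisect_right(times, shot_time) - 1
    return history[max(i, 0)][1]

history = [(0.00, "state@0ms"), (0.05, "state@50ms"), (0.10, "state@100ms")]
print(rewind(history, 0.07))  # → state@50ms
```

The server then runs its hit test against the rewound state rather than the current one.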

&lt;p&gt;Unfortunately, lag compensation is susceptible to cheating. If the server trusts player-sent timestamps, a player can "trick" the server by sending a shot later but faking that it was performed some time before that.&lt;/p&gt;

&lt;p&gt;For this reason, lag compensation should be avoided. The three client-side techniques described above don't require the server to trust the client, and they are not susceptible to abuses like this.&lt;/p&gt;

&lt;p&gt;We'll explore all these techniques in more detail later in this series, when we learn how to transmit data in the fastest, most compact, and most reliable way possible.&lt;/p&gt;

&lt;h1&gt;
  
  
  Concluding part 1
&lt;/h1&gt;

&lt;p&gt;Your players will be gaming from various devices, behind different router models, and serviced by a diverse selection of providers. Sometimes they'll be connected through an optical fiber cable for high-speed internet, other times, they might use a Wi-Fi connection, or even 3G mobile internet. This means the network conditions can vary widely, affecting latency, packet loss, and overall connection stability. As a game developer, it's crucial to understand these different environments and design your network handling to ensure the best possible gaming experience. A challenging task, no doubt, but done properly, a high-level implementation of these practices is what sets successful multiplayer games apart from the rest.&lt;/p&gt;

&lt;p&gt;In the next section, we'll discuss the main data transmission protocols, like TCP, UDP, and WebSockets.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>gamedev</category>
    </item>
    <item>
      <title>Exploring Unity DOTS and ECS: is it a game changer?</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Tue, 18 Jul 2023 12:08:33 +0000</pubDate>
      <link>https://dev.to/mygames/exploring-unity-dots-and-ecs-is-it-a-game-changer-55fh</link>
      <guid>https://dev.to/mygames/exploring-unity-dots-and-ecs-is-it-a-game-changer-55fh</guid>
      <description>&lt;p&gt;Unity DOTS allows developers to use the full potential of modern processors and deliver highly optimized, efficient games – and we think it’s worth paying attention to. &lt;/p&gt;

&lt;p&gt;It’s been over five years since Unity first announced development of their Data-Oriented Technology Stack (DOTS). Now, with the release of the long-term support (LTS) version, Unity 2022.3.0f1, we’ve finally seen an official release. But why is Unity DOTS so critical to the game development industry, and what advantages does this technology offer?&lt;/p&gt;

&lt;p&gt;Hello, everyone! My name is Denis Kondratev, and I'm a Unity Developer at MY.GAMES. If you've been eager to understand what Unity DOTS is and whether it's worth exploring, this is the perfect opportunity to delve into this fascinating topic, and in this article – we’ll do just that.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is the Entity Component System (ECS)?
&lt;/h2&gt;

&lt;p&gt;At its core, DOTS implements the Entity Component System (ECS) architectural pattern. To simplify this concept, let’s describe it like this: ECS is built upon three fundamental elements: Entities, Components, and Systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Entities&lt;/strong&gt;, on their own, lack any inherent functionality or description. Instead, they serve as containers for various Components, which bestow them with specific characteristics for game logic, object rendering, sound effects, and more.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt;, in turn, come in different types and just store data without independent processing capabilities of their own. &lt;/p&gt;

&lt;p&gt;Completing the ECS framework are &lt;strong&gt;Systems&lt;/strong&gt;, which process Components, handle Entity creation and destruction, and manage the addition or removal of Components.&lt;/p&gt;

&lt;p&gt;For instance, when creating a "Space Shooter" game, the playground will feature multiple objects: the player's spaceship, enemies, asteroids, loot, you name it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HKy8DXl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/671ll1hti9wj2b6ll215.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HKy8DXl8--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/671ll1hti9wj2b6ll215.jpg" alt="Image description" width="800" height="1219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of these objects are considered entities in their own right, devoid of any distinct features. However, by assigning different components to them, we can imbue them with unique attributes.&lt;/p&gt;

&lt;p&gt;To demonstrate, considering that all these objects possess positions on the game field, we can create a position component that holds the object's coordinates. Furthermore, for the player's spaceship, enemies, and asteroids, we can incorporate health components; the system responsible for handling object collisions will govern the health of these entities. Additionally, we can attach an enemy type component to the enemies, enabling the enemy control system to govern their behavior based on their assigned type.&lt;/p&gt;

&lt;p&gt;While this explanation provides a simplistic, rudimentary overview, the reality is somewhat more complex. Nonetheless, I trust that the fundamental concept of ECS is clear. With that out of the way, let's delve into the advantages of this approach.&lt;/p&gt;
&lt;h2&gt;
  
  
  The benefits of the Entity Component System
&lt;/h2&gt;

&lt;p&gt;One of the main advantages of the Entity Component System (ECS) approach is the architectural design it promotes. Object-oriented programming (OOP) carries a significant legacy with patterns like inheritance and encapsulation, and even experienced programmers can make architectural mistakes in the heat of development, leading to refactoring or tangled logic in long-term projects.&lt;/p&gt;

&lt;p&gt;In contrast, ECS provides a simple and intuitive architecture. Everything falls naturally into isolated components and systems, making it easier to understand and develop using this approach; even novice developers quickly grasp this approach with minimal errors.&lt;/p&gt;

&lt;p&gt;ECS follows a composite approach, where isolated components and behavior systems are created instead of complex inheritance hierarchies. These components and systems can be easily added or removed, allowing for flexible changes to entity characteristics and behavior – this approach greatly enhances code reusability.&lt;/p&gt;

&lt;p&gt;Another key advantage of ECS is performance optimization. In ECS, data is stored in memory in a contiguous and optimized manner, with identical data types placed close to each other. This optimizes data access, reduces cache misses, and improves memory access patterns. Moreover, systems composed of separate data blocks are easier to parallelize across different processes, leading to exceptional performance gains compared to traditional approaches.&lt;/p&gt;
&lt;h2&gt;
  
  
  Exploring the packages of Unity DOTS
&lt;/h2&gt;

&lt;p&gt;Unity DOTS encompasses a set of technologies provided by Unity Technologies that implement the ECS concept in Unity. It includes several packages designed to enhance different aspects of game development; let’s cover a few of those now.&lt;/p&gt;

&lt;p&gt;The core of DOTS is the &lt;strong&gt;Entities&lt;/strong&gt; package, which facilitates the transition from familiar MonoBehaviours and GameObjects to the Entity Component System approach. This package forms the foundation of DOTS-based development.&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Unity Physics&lt;/strong&gt; package introduces a new approach to handling physics in games, achieving remarkable speed through parallelized computations.&lt;/p&gt;

&lt;p&gt;Additionally, the &lt;strong&gt;Havok Physics for Unity&lt;/strong&gt; package allows integration with the modern Havok Physics engine. This engine offers high-performance collision detection and physical simulation, powering popular games such as The Legend of Zelda: Breath of the Wild, Doom Eternal, Death Stranding, Mortal Kombat 11, and more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lASbLXWS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97oekp39lb7nm54wzxz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lASbLXWS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97oekp39lb7nm54wzxz1.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Death Stranding, like many other video games, uses the popular Havok Physics engine&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Entities Graphics&lt;/strong&gt; package focuses on rendering in DOTS. It enables efficient gathering of rendering data and works seamlessly with existing render pipelines like the Universal Render Pipeline (URP) or High Definition Render Pipeline (HDRP).&lt;/p&gt;

&lt;p&gt;One more thing, Unity has also been actively developing a networking technology called Netcode. It includes packages like Unity Transport for low-level multiplayer game development, Netcode for GameObjects for traditional approaches, and the noteworthy &lt;strong&gt;Unity Netcode for Entities&lt;/strong&gt; package, which aligns with DOTS principles. These packages are relatively new and will continue to evolve in the future.&lt;/p&gt;
&lt;h2&gt;
  
  
  Enhancing performance in Unity DOTS and beyond
&lt;/h2&gt;

&lt;p&gt;Several technologies closely related to DOTS can be used within the DOTS framework and beyond. The &lt;strong&gt;Job System&lt;/strong&gt; package provides a convenient way to write code with parallel computations. It revolves around dividing work into small chunks called jobs, which perform computations on their own data. The Job System evenly distributes these jobs across threads for efficient execution.&lt;/p&gt;

&lt;p&gt;To ensure code safety, the Job System supports the processing of blittable data types. Blittable data types have the same representation in managed and unmanaged memory and require no conversion when passed between managed and unmanaged code. Examples of blittable types include byte, sbyte, short, ushort, int, uint, long, ulong, float, double, IntPtr, and UIntPtr. One-dimensional arrays of blittable primitive types and structures containing exclusively blittable types are also considered blittable.&lt;/p&gt;

&lt;p&gt;However, types containing a variable array of blittable types are not considered blittable themselves. To address this limitation, Unity has developed the Collections package, which provides a set of unmanaged data structures for use in jobs. These collections store their data in unmanaged memory using Unity mechanisms, and it is the developer's responsibility to deallocate them using the Dispose() method.&lt;/p&gt;

&lt;p&gt;Another important package is the &lt;strong&gt;Burst Compiler&lt;/strong&gt;, which can be used with the Job System to generate highly optimized code. Although it comes with certain code usage limitations, the Burst compiler provides a significant performance boost.&lt;/p&gt;
&lt;h2&gt;
  
  
  Measuring performance with Job System and Burst Compiler
&lt;/h2&gt;

&lt;p&gt;As mentioned, Job System and Burst Compiler are not direct components of DOTS but provide valuable assistance in programming efficient and fast parallel computations. Let's test their capabilities using a practical example: implementing &lt;a href="https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life"&gt;Conway's Game of Life algorithm&lt;/a&gt;. In this algorithm, a field is divided into cells, each of which can be either alive or dead. During each turn, we check the number of live neighbors for each cell, and their states are updated according to specific rules.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--gwwsHu_---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkf90411juud3ynao1h6.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--gwwsHu_---/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkf90411juud3ynao1h6.gif" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here’s the implementation of this algorithm using the traditional approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void SimulateStep()
{
    Profiler.BeginSample(nameof(SimulateStep));

    for (var i = 0; i &amp;lt; width; i++)
    {
        for (var j = 0; j &amp;lt; height; j++)
        {
            var aliveNeighbours = CountAliveNeighbours(i, j);
            var index = i * height + j;

            var isAlive = aliveNeighbours switch
            {
                2 =&amp;gt; _cellStates[index],
                3 =&amp;gt; true,
                _ =&amp;gt; false
            };

            _tempResults[index] = isAlive;
        }
    }

    _tempResults.CopyTo(_cellStates);
    Profiler.EndSample();
}

private int CountAliveNeighbours(int x, int y)
{
    // Start at -1 if the cell itself is alive: the loop below will visit it,
    // but a cell is not its own neighbour.
    var count = _cellStates[x * height + y] ? -1 : 0;

    for (var i = x - 1; i &amp;lt;= x + 1; i++)
    {
        if (i &amp;lt; 0 || i &amp;gt;= width) continue;

        for (var j = y - 1; j &amp;lt;= y + 1; j++)
        {
            if (j &amp;lt; 0 || j &amp;gt;= height) continue;

            // Index with height, matching SimulateStep's i * height + j layout
            if (_cellStates[i * height + j])
            {
                count++;
            }
        }
    }

    return count;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I’ve added markers to Profiler to measure the time taken for the calculations. The states of the cells are stored in a one-dimensional array called &lt;strong&gt;_cellStates&lt;/strong&gt;. We initially write the temporary results to &lt;strong&gt;_tempResults&lt;/strong&gt; and then copy them back to &lt;strong&gt;_cellStates&lt;/strong&gt; upon completing the calculations. This approach is necessary because writing the final result directly to &lt;strong&gt;_cellStates&lt;/strong&gt; would affect subsequent calculations.&lt;/p&gt;

&lt;p&gt;I created a field of 1000x1000 cells and ran the program to measure the performance. Here are the results:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--yNHGSz4E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fyna39wu41adfsfimgtz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--yNHGSz4E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fyna39wu41adfsfimgtz.png" alt="Image description" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As seen from the results, the calculations took 380 ms.&lt;/p&gt;

&lt;p&gt;Now let's apply the Job System and Burst Compiler to improve the performance. First, we will create the Job responsible for executing the Conway's Game of Life algorithm.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public struct SimulationJob : IJobParallelFor
{
    public int Width;
    public int Height;
    [ReadOnly] public NativeArray&amp;lt;bool&amp;gt; CellStates;
    [WriteOnly] public NativeArray&amp;lt;bool&amp;gt; TempResults;

    public void Execute(int index)
    {
        var i = index / Height;
        var j = index % Height;
        var aliveNeighbours = CountAliveNeighbours(i, j);

        var isAlive = aliveNeighbours switch
        {
            2 =&amp;gt; CellStates[index],
            3 =&amp;gt; true,
            _ =&amp;gt; false
        };

        TempResults[index] = isAlive;
    }

    private int CountAliveNeighbours(int x, int y)
    {
        // Start at -1 if the cell itself is alive: the loop below will visit it,
        // but a cell is not its own neighbour.
        var count = CellStates[x * Height + y] ? -1 : 0;

        for (var i = x - 1; i &amp;lt;= x + 1; i++)
        {
            if (i &amp;lt; 0 || i &amp;gt;= Width) continue;

            for (var j = y - 1; j &amp;lt;= y + 1; j++)
            {
                if (j &amp;lt; 0 || j &amp;gt;= Height) continue;

                // Index with Height, matching Execute's index / Height layout
                if (CellStates[i * Height + j])
                {
                    count++;
                }
            }
        }

        return count;
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have assigned the &lt;strong&gt;[ReadOnly]&lt;/strong&gt; attribute to the &lt;strong&gt;CellStates&lt;/strong&gt; field, allowing unrestricted access to all values of the array from any thread. However, for the &lt;strong&gt;TempResults&lt;/strong&gt; field, which has the &lt;strong&gt;[WriteOnly]&lt;/strong&gt; attribute, writing can only be done through the index specified in the &lt;strong&gt;Execute(int index)&lt;/strong&gt; method. Attempting to write a value to a different index will generate a warning. This ensures data safety when working in a multi-threaded mode.&lt;/p&gt;

&lt;p&gt;Now, from the regular code, let's launch our Job:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private void SimulateStepWithJob()
{
    Profiler.BeginSample(nameof(SimulateStepWithJob));

    var job = new SimulationJob
    {
        Width = width,
        Height = height,
        CellStates = _cellStates,
        TempResults = _tempResults
    };

    var jobHandler = job.Schedule(width * height, 4);
    jobHandler.Complete();
    job.TempResults.CopyTo(_cellStates);
    Profiler.EndSample();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After copying all the necessary data, we schedule the execution of the job using the &lt;strong&gt;Schedule()&lt;/strong&gt; method. It's important to note that this scheduling doesn't immediately execute the calculations: these actions are initiated from the main thread, and the execution happens through workers distributed among different threads. To wait for the job to complete, we use &lt;strong&gt;jobHandler.Complete()&lt;/strong&gt;. Only then can we copy the obtained result back to &lt;strong&gt;_cellStates&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let's measure the speed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--D1JDV7bi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kx2s2b698hm2sc0ydzve.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--D1JDV7bi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kx2s2b698hm2sc0ydzve.png" alt="Image description" width="800" height="682"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The execution speed has increased almost tenfold, with a run now taking approximately 42 ms. In the Profiler window, we can see the workload distributed among 17 workers, slightly fewer than the thread count of the test machine (an Intel® Core™ i9-10900 with 10 cores and 20 threads). The absolute results will differ on processors with fewer cores, but the Job System still puts all available cores to work.&lt;/p&gt;

&lt;p&gt;But that's not all: it's time to enable the Burst Compiler, which provides significant code optimization at the cost of certain restrictions. To enable it, simply add the &lt;strong&gt;[BurstCompile]&lt;/strong&gt; attribute to &lt;strong&gt;SimulationJob&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[BurstCompile]
public struct SimulationJob : IJobParallelFor
{
    ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's measure again:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--0YQ11-1P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ps2vefztwdrthhl2r1wz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--0YQ11-1P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ps2vefztwdrthhl2r1wz.png" alt="Image description" width="800" height="657"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results exceed even the most optimistic expectations: speed has increased almost 200 times over the initial result, and computing 1 million cells now takes no more than 2 ms. In the Profiler, the sections executed by Burst-compiled code are highlighted in green.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Multithreaded computation isn’t always necessary, and the Burst Compiler can’t always be used, but the clear trend in processor development is toward multi-core architectures, which means we should be prepared to harness their full power. ECS, and Unity DOTS specifically, fit this paradigm perfectly.&lt;/p&gt;

&lt;p&gt;I believe that Unity DOTS deserves attention, at the very least. While it may not be the best solution for every case, ECS can prove its worth in many games.&lt;/p&gt;

&lt;p&gt;The Unity DOTS framework, with its data-oriented, multithreaded approach, offers serious potential for optimizing Unity games: by adopting the Entity Component System architecture and leveraging the Job System and Burst Compiler, developers can unlock new levels of performance and scalability. As hardware continues its shift toward more cores, that capability will only grow in value.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>gamedev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Divide and Conquer: A Deterministic and Scripted Match-3 Engine</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Tue, 11 Jul 2023 09:12:24 +0000</pubDate>
      <link>https://dev.to/mygames/divide-and-conquer-a-deterministic-and-scripted-match-3-engine-35la</link>
      <guid>https://dev.to/mygames/divide-and-conquer-a-deterministic-and-scripted-match-3-engine-35la</guid>
      <description>&lt;p&gt;This is the story of how we separated the simulation logic from the presentation in our game Storyngton Hall to make code execution predictable, to test the functionality as much as possible, and to free the core base from custom logic.&lt;/p&gt;

&lt;p&gt;Hello, I’m Pavel Schevaev, CTO at BIT.GAMES, MY.GAMES. Our focus today is Storyngton Hall, a Match-3 game with elements of home and garden design and renovation, with a plot perfect for romance-lovers filled with exciting tales of lords and ladies.&lt;/p&gt;

&lt;h2&gt;
  
  
  A little backstory
&lt;/h2&gt;

&lt;p&gt;While we had already developed games with Match-3 elements before Storyngton Hall, a lot of time had passed, and none of those titles had a codebase which fit our new project. So, we faced the following problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A lack of determinism and replayability: there was no way to track bugs in a player session, nor to understand what had happened&lt;/li&gt;
&lt;li&gt;The model was tightly intertwined with the presentation: for instance, you couldn’t turn off the visual elements and “rewind” the simulation&lt;/li&gt;
&lt;li&gt;The core code contained highly customized gameplay logic, like classes such as &lt;em&gt;Honey&lt;/em&gt;, &lt;em&gt;Ferret&lt;/em&gt;, or &lt;em&gt;Rose&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We set out with a plan: create a small functional core in C# with a compact API. We would then develop it through testing, isolate the simulation from the presentation as much as possible, and introduce the concept of determinism. Finally, we’d make sure all custom gameplay logic was implemented in scripts (in our case, with BHL).&lt;/p&gt;

&lt;p&gt;And that last point brings us to our next topic of discussion, BHL itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  BHL uncovered
&lt;/h2&gt;

&lt;p&gt;We actually wrote BHL some time ago, and this is the language our gameplay programmers actively use today. &lt;a href="https://github.com/bitdotgames/bhl"&gt;BHL&lt;/a&gt; is an interpreted scripting language with convenient primitives for pseudo-parallel code, hot reload support, and support for downloading bytecode from the server, which lets us ship various patches and fixes without submitting a new build to the store.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conceptual separation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--m09TZ8Gv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd32byernyd40zbl7m5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--m09TZ8Gv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gd32byernyd40zbl7m5l.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since we had decided to separate simulation from presentation, an analogy with client-server programming suggests itself. The “server” is the deterministic simulation, which lets you subscribe to all significant events; the “client” is the presentation, which affects the server through player input. Each side lives separately, ticking at its own rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is determinism?
&lt;/h2&gt;

&lt;p&gt;Let’s take a moment to talk about determinism itself with regard to game engines. A deterministic engine gives us two things. First, the ability to replay a player’s session. Second, control over the difficulty of the Match-3 elements: for example, game designers can choose seeds with either simplified or normal gameplay in order to aid players under certain conditions.&lt;/p&gt;

&lt;p&gt;One of the most popular ways to implement determinism is to use the “random seed” technique.&lt;/p&gt;

&lt;p&gt;A random seed is a number used as a randomizer parameter. During a player’s session, all queries to the randomizer will return some pseudo-random sequence of numbers, and, in subsequent game sessions, queries to the randomizer using the same random seed will return an identical sequence of numbers.&lt;/p&gt;
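&lt;p&gt;As a minimal illustration of the idea (using C#’s standard &lt;strong&gt;System.Random&lt;/strong&gt; rather than our own randomizer):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Two randomizers created with the same seed produce identical
// pseudo-random sequences, so a replayed session sees the same "luck".
var first = new Random(42);
var second = new Random(42);

for (int i = 0; i &amp;lt; 5; i++)
{
    int a = first.Next(0, 100);
    int b = second.Next(0, 100);
    Console.WriteLine($"{a} == {b}"); // always equal
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that for cross-platform determinism a game usually ships its own pseudo-random generator, since standard-library implementations may differ between runtimes.&lt;/p&gt;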

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NpvF3Vrq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0lgjhscem964qlg8tyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NpvF3Vrq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0lgjhscem964qlg8tyt.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Test-driven development
&lt;/h2&gt;

&lt;p&gt;We had a mandatory requirement: any functionality implemented in the core code should have test coverage. Here’s an example of a simple test case that tests the effect of gravity on chips:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void TestSimpleGravityFall() {
    var sim = new M3Sim(4, 2);

    sim.SpawnAt(new M3Pos(0,1), new M3Chip(2));
    sim.SpawnAt(new M3Pos(1,1), new M3Chip(2));
    sim.SpawnAt(new M3Pos(2,0), new M3Chip(2));
    sim.SpawnAt(new M3Pos(3,1), new M3Chip(2));

    Assert.AreEqual(
@"
--2-
22-2
",
    sim.ToString());

    sim.TickUntilIdle();

    Assert.AreEqual(
@"
----
2222
",
    sim.ToString());
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let’s break down what’s going on there:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We create a simulation object&lt;/li&gt;
&lt;li&gt;Chips are placed&lt;/li&gt;
&lt;li&gt;We make sure they are in certain positions&lt;/li&gt;
&lt;li&gt;Our simulation is “ticked” until it goes idle — TickUntilIdle&lt;/li&gt;
&lt;li&gt;We make sure that a chip placed above other chips falls and lies in the same row with them&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have already conducted several thousand tests like these.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--2RFrfV76--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0731o2nhjmfr3h8kglj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--2RFrfV76--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e0731o2nhjmfr3h8kglj.png" alt="Image description" width="450" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Simulation as a plugin model
&lt;/h2&gt;

&lt;p&gt;We planned to organize the simulation as a plug-in model where any component can be replaced with a different implementation; using interfaces is a proven way to accomplish this. (We could have tried ECS, but we decided to be careful this time and to follow “the well-beaten path”.)&lt;/p&gt;

&lt;p&gt;The simulation receives external input in two ways: player input and a periodic update request (in other words, a “tick”). It also allows subscriptions to all significant events, and there are quite a few of those.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G9uo_JgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tk2ubdmhcj4myh53ill1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G9uo_JgM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tk2ubdmhcj4myh53ill1.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What happens in a simulation tick
&lt;/h2&gt;

&lt;p&gt;Let’s go through all components of the simulation in a strict order, so we have a clear understanding of what is being addressed, and when. Below you can see how we “tick” cell objects, spawners, matching, gravity, and more:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void Tick() {
    TickCellObjects();
    TickMatches();
    TickReplaces();
    TickSpawner();
    TickGravity();
    TickGoals();
    TickTurnsLeft();
    TickShuffle();
    TickCombo();
    TickFieldZone();
    ...
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The simulation model provides us with many events that can be subscribed to: spawns, new chips, landing, destruction, or the fact that a chip has destroyed a wall, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void AttachToModel() { 
       m3.OnSpawnNew += OnSpawnNewChip;
       m3.OnSpawnNewMat += OnSpawnNewMat;
       m3.OnSpawnNewBlocker += OnSpawnNewBlocker;
       m3.OnChangeGoal += OnChangeGoal;
       m3.OnLanded += OnLandedChip;
       m3.OnMoveOnBelt += OnMoveOnBelt;
       m3.OnDamage += OnDamageChip;
       m3.OnMatch += OnMatchChips;
       m3.OnReplace += OnReplaceChips;
       m3.OnDestroy += OnDestroyChip;
       m3.OnShuffle += OnShuffleChips;
       m3.OnDestroyWall += OnDestroyWall;
       m3.OnDamageBlocker += OnDamageBlocker;
       m3.OnDestroyBlocker += OnDestroyBlocker;
       m3.OnDestroyBlocked += OnDestroyBlocked;
       m3.OnNextZoneSwitch += OnNextZoneSwitch;
       m3.OnNextFieldSwitch += OnNextFieldSwitch;
       m3.OnComboEnd += OnComboEnd;
       ...
     }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  First UI debugging
&lt;/h2&gt;

&lt;p&gt;At first, only one person was involved in development: we had neither an artist nor a layout designer. But we did have a debugging UI in Unity. It was primitive, but it worked.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/B4akS-pjztw"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;These were the preliminary results:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The simulation worked with the discrete movement of chips. All calculations were integer-based: chips moved between cells in one tick and they had no intermediate position in space. Because of this, there were no non-deterministic floating-point calculations to worry about&lt;/li&gt;
&lt;li&gt;The debugging UI was playable, the tests worked fine, and they validated the model. It seemed we just needed to add a beautiful visualization to this model.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;But… problems started as soon as the first real UI appeared.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NbfGeaz3XLo"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This video shows each “chip” slowing down when passing over cells (under the influence of gravity). The reason for this turned out to be simple: the absence of an intermediate position for the chips in space and discrete movement. Trying to fix it using only visualization methods seemed extremely time-consuming, so we decided to deal with it differently:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We introduced an intermediate position of the chips in the space between the cells.&lt;/li&gt;
&lt;li&gt;We increased the simulation tick rate, empirically settling on 20 Hz.&lt;/li&gt;
&lt;li&gt;We made the presentation interpolate the model at the maximum frame rate.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The problem was that we implemented the intermediate position of the chips using a float, and, as you probably know, floating-point math is bad for determinism: the same calculation can return different results on different hardware.&lt;/p&gt;

&lt;p&gt;We ultimately settled on a standard solution: fixed-point math based on integer calculations.&lt;/p&gt;

&lt;p&gt;But there were certain drawbacks to the fixed-point implementation: accuracy suffered, it was not as fast as a float-based implementation, and its functionality was limited to add, mul, sqrt, abs, cos, sin, and atan.&lt;/p&gt;

&lt;p&gt;But, given that we were not working on some kind of shooter, we knew we could deal with it.&lt;/p&gt;

&lt;p&gt;We found an &lt;a href="https://stackoverflow.com/questions/605124/fixed-point-math-in-c/616015"&gt;implementation&lt;/a&gt; on Stack Overflow, made some small changes, and were generally happy with everything:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public struct FInt 
{  
   // Create a fixed-int number from parts.  
   // For example, to create 1.5 pass in 1 and 500. 
   // For 1.005 this would be 1 and 5.
   public static FInt FromParts( int PreDecimal, int PostDecimal = 0)
   ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This implementation was also convenient because it overloaded the basic arithmetic operators, so the existing computation code was left practically untouched.&lt;/p&gt;
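&lt;p&gt;To show the gist, here is a from-scratch sketch of such a struct (a hypothetical Q16.16 type with illustrative names, not the actual &lt;strong&gt;FInt&lt;/strong&gt; code):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public struct Fixed
{
    const int Shift = 16;   // Q16.16: 16 fractional bits
    readonly long raw;

    Fixed(long raw) { this.raw = raw; }

    public static Fixed FromInt(int v) =&amp;gt; new Fixed((long)v &amp;lt;&amp;lt; Shift);

    // Overloading the arithmetic operators lets the existing
    // "float-looking" math compile unchanged, while every operation
    // stays purely integer-based and therefore deterministic.
    public static Fixed operator +(Fixed a, Fixed b) =&amp;gt; new Fixed(a.raw + b.raw);
    public static Fixed operator -(Fixed a, Fixed b) =&amp;gt; new Fixed(a.raw - b.raw);
    public static Fixed operator *(Fixed a, Fixed b) =&amp;gt; new Fixed((a.raw * b.raw) &amp;gt;&amp;gt; Shift);
    public static Fixed operator /(Fixed a, Fixed b) =&amp;gt; new Fixed((a.raw &amp;lt;&amp;lt; Shift) / b.raw);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With a type like this, an expression such as &lt;strong&gt;chip.fpos + fall_dirn * chip.fall_velocity * fixed_dt&lt;/strong&gt; produces bit-identical results on any hardware.&lt;/p&gt;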

&lt;p&gt;For example, the code below calculates gravity, and it reads just like math with Unity vectors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var fall_dir = chip.fall_target - chip.fpos;
  var fall_dirn = fall_dir.Normalized();

  var new_fpos = chip.fpos + (fall_dirn * chip.fall_velocity * fixed_dt);
  var new_fall_dir = chip.fall_target - new_fpos;

  chip.fall_velocity += FALL_ACCEL * fixed_dt;
  if(chip.fall_velocity &amp;gt; MAX_FALL_VELOCITY)
    chip.fall_velocity = MAX_FALL_VELOCITY;

  chip.fpos = new_fpos;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Fixed ticks and interpolation
&lt;/h2&gt;

&lt;p&gt;Because the simulation “ticks” a fixed number of times per second, we needed to introduce interpolation on the presentation side. The videos below clearly show how this works.&lt;/p&gt;

&lt;p&gt;No interpolation:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/_u2zSGj1Ezg"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Interpolated:&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/zngRW6DYFmQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;
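&lt;p&gt;On the presentation side, the idea boils down to blending the last two simulated positions by the fraction of the fixed step that has elapsed. A minimal sketch outside of Unity, with illustrative names:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;const float FixedDt = 1f / 20f; // the simulation ticks at 20 Hz

// Called every rendered frame; timeSinceTick is the time elapsed
// since the last simulation tick.
static float InterpolateX(float prevX, float currX, float timeSinceTick)
{
    float alpha = Math.Clamp(timeSinceTick / FixedDt, 0f, 1f);
    return prevX + (currX - prevX) * alpha; // linear interpolation
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The presentation thus always renders slightly in the past, between the two most recent simulation states.&lt;/p&gt;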
&lt;h2&gt;
  
  
  Fixed ticks and buffering
&lt;/h2&gt;

&lt;p&gt;We also found another interesting detail: although the simulation logic lives separately and knows nothing about the visual elements, it must still set aside time for various interactions to play out on the presentation side.&lt;/p&gt;

&lt;p&gt;For example, our chip cannot visually disappear instantly after being damaged, so the simulation allocates a certain number of fixed ticks for the chip to “die”. During this interval, the presentation is free to visualize the process of chip disappearance as it pleases:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;void DoDamage(M3Chip c, M3DamageType damage_type) {
    Error.Assert(c.life &amp;gt; 0);
    c.SetLife(c.life - 1);

    c.damage_sequence_ticks = (int)(EXPLODE_TIME / FIXED_DT);

    OnDamage(c, damage_type);
  }


 void TickChip(M3Chip c) {
    ...
    if(c.damage_sequence_ticks &amp;gt; 0) {
      --c.damage_sequence_ticks;
      if(c.damage_sequence_ticks == 0) {
        if(c.life == 0)
          c.is_dead = true;
      }
     ...
    }
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Event forwarding
&lt;/h2&gt;

&lt;p&gt;Moving on to scripting: we use BHL, and all the main events from the simulation are forwarded to the scripts. Various custom effects, eye candy, voiceovers, and so on are handled exclusively within the scripts.&lt;/p&gt;

&lt;p&gt;For example, in the script code below, a beautiful “twitch” begins where the tile moves, as if on a spring, and a sound effect is played for a chip landing event:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fdDIbQmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qwxtpbg2v863053pqhx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fdDIbQmK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9qwxtpbg2v863053pqhx.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing various types of chips
&lt;/h2&gt;

&lt;p&gt;To implement various types of chips — for example, bombs — we could have introduced various types of classes. However, this solution was rather rigid and not very extensible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--HQvtJ9XF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrpvm7khh5l34v3xnpe6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--HQvtJ9XF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrpvm7khh5l34v3xnpe6.png" alt="Image description" width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We took a different approach and introduced the concept of “activation” — a functionality that could be associated with any type of chip.&lt;/p&gt;

&lt;p&gt;In the example below, activation is associated with chip type 14; there is destruction around it when the chip is tapped.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--R6sRVEY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vshwsjpvxr5jxayh0ncw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--R6sRVEY9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vshwsjpvxr5jxayh0ncw.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When we implemented this, it became possible to create various types of “activations” in BHL scripts.&lt;/p&gt;

&lt;p&gt;In the example below, you can see the same code as before, but with BHL: upon activation, a function starts which destroys the surrounding chips following a given pattern:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--C2BNjb48--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pmxko1zqo4q43uy6sg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--C2BNjb48--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5pmxko1zqo4q43uy6sg9.png" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Complex functionality demonstrated: Beetle chips
&lt;/h2&gt;

&lt;p&gt;Now, let’s look at logic more complex than that of ordinary chips. Take, for example, the Beetle chip: a unique, insect-like chip with both a special gameplay function and a corresponding animation.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/TWS092kBDNk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;If we break the Beetle’s behavior down into its building blocks, we can highlight the following stages.&lt;/p&gt;

&lt;p&gt;In the logic: the target chip is marked as inaccessible, and after an interval of time, the marked chip is destroyed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Nuu3LEmk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od7ec5plfxnnfqgqewu1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Nuu3LEmk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/od7ec5plfxnnfqgqewu1.png" alt="Image description" width="456" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At the same time, a visual is displayed showing all the “beauty”: we see the effect of the beetle taking off, then the beetle flying along the trajectory, and finally, an explosion effect.&lt;/p&gt;

&lt;p&gt;This is how it looks in the BHL script:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--evACgtOS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bs8yidpkxmsfxv1oluh0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--evACgtOS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bs8yidpkxmsfxv1oluh0.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The green section of the code is responsible for simulation, the red section — for presentation.&lt;/p&gt;

&lt;p&gt;Here we start two tasks, one for the simulation and one for the presentation. The two tasks tick at different frequencies and are synchronized through a special synchronization channel; this particular pattern was borrowed from Go.&lt;/p&gt;

&lt;p&gt;You can see how the logic and presentation are processed in the editor:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/O8LWNvmvFLQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;In the debug area at the bottom of the screen, we see the chip is simply marked and then destroyed, while the visual part and all the effects are visible in the upper presentation area.&lt;/p&gt;

&lt;p&gt;You’ve probably already noticed that the entire Beetle logic was scripted using BHL. This was implemented by the gameplay developers and was completely isolated from the Match-3 core.&lt;/p&gt;

&lt;h2&gt;
  
  
  Complex functionality demonstrated 2: Big Bomb chips
&lt;/h2&gt;

&lt;p&gt;Another example is the implementation of the Big Bomb chip.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/jD8RiKjk80c"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This implementation is similar to the Beetle:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jGNKLTp4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02vj9myzwzbg0cioq6a4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jGNKLTp4--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/02vj9myzwzbg0cioq6a4.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, the explosion wave begins, following a certain “pattern”. The necessary presentation logic is played out in the red-highlighted section. All of this is consistent with the already familiar pattern of the synchronization channel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pleasant Bonuses: playable replays
&lt;/h2&gt;

&lt;p&gt;Simply put, player replays work as follows: we record the random seed, and then, for each player input, we record:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The simulation tick number&lt;/li&gt;
&lt;li&gt;Input type and parameters&lt;/li&gt;
&lt;li&gt;The checksum for the field state to make sure there are no discrepancies&lt;/li&gt;
&lt;/ul&gt;
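&lt;p&gt;A recorded replay could be modeled roughly like this (the field names here are illustrative, not our actual format):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One recorded input: together with the random seed, this is enough
// to re-run the session deterministically.
struct ReplayEntry
{
    public int Tick;       // simulation tick at which the input happened
    public int InputType;  // e.g. swap, tap, booster use
    public int X, Y;       // input parameters
    public uint Checksum;  // field-state checksum to catch divergence
}

// A replay is just a seed plus an ordered list of inputs.
class Replay
{
    public int Seed;
    public List&amp;lt;ReplayEntry&amp;gt; Entries = new List&amp;lt;ReplayEntry&amp;gt;();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;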

&lt;p&gt;And that’s all — this is enough for replay. Below you can see an example of this in action:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/aG6Y7ajBPGk"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;The game session starts, and the player actively interacts with the game for some time. We then stop the game and turn on the replay session, which was recorded automatically. A special debugging UI opens in which you can step through the replay and see what happened at each stage. Very convenient.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--shxBzW0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxo7uq638nlc88l4rdtb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--shxBzW0G--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sxo7uq638nlc88l4rdtb.png" alt="Image description" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A replay can be saved both in text and in visual form. We usually use text (binary data in base64 format), which is especially convenient for sending via email and messengers. As for the visual form, the last screenshot of the field is saved as a PNG with the replay code embedded in it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--oq8NQC1y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5x3ot62urkv5yibcj2s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--oq8NQC1y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5x3ot62urkv5yibcj2s.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Replays allowed us to reliably reproduce errors caught by our Android test farm. In short, every night we run all our levels on ten devices, and bug reports arrive in Slack. We can then:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Watch replays with errors&lt;/li&gt;
&lt;li&gt;Understand what happened&lt;/li&gt;
&lt;li&gt;Fix everything quickly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Disconnecting visuals from the simulation logic
&lt;/h2&gt;

&lt;p&gt;Once we had separated everything properly, we were able to legitimately fast-forward the simulation to award the rewards at the end of a level, and to introduce a quick check of levels by a bot.&lt;/p&gt;

&lt;p&gt;All Match-3 players know there is a certain sequence of actions that happens after passing a level, which you want to skip: bombs explode, rewards are obtained, points are awarded, and so on.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public void SkipM3Rewarding(UIM3 ui) {
    DetachUIFromModel(ui);

    while(!m3.IsIdle())
      m3.Tick();

    AttachUIToModel(ui);
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we detach the UI from the model, tick until the simulation becomes idle, and then reattach it.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Hp8Qtn0khps"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Please note: after our main character, Jane, appears, all the “fireworks” are skipped, but at the same time everything is legitimately played out in the simulation, all coins are accurately counted, and then awarded. Instantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Checking levels using a bot
&lt;/h2&gt;

&lt;p&gt;Our changes allowed us to make a quick bot for conducting basic reviews of the game designers’ work. This is how it looks in the editor:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/m33J1yqbaJY"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Let’s say a game designer is creating a new level and wants to test how it runs. To do this, they run a special bot which checks, using several dozen various seeds, if the level is beatable based on its heuristics. At the end of the bot’s execution, they can see the run’s statistics with various graphs.&lt;/p&gt;
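&lt;p&gt;The bot’s outer loop can be sketched as follows; everything beyond &lt;strong&gt;M3Sim&lt;/strong&gt; and &lt;strong&gt;TickUntilIdle&lt;/strong&gt; (the seeded constructor, &lt;strong&gt;IsWon&lt;/strong&gt;, the move heuristic) is an assumption for illustration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative sketch: check whether a level is beatable across many seeds.
int wins = 0;
foreach (var seed in seeds)
{
    var sim = new M3Sim(width, height, seed); // hypothetical seeded constructor
    while (!sim.IsWon() &amp;amp;&amp;amp; sim.TurnsLeft() &amp;gt; 0)
    {
        sim.ApplyInput(PickHeuristicMove(sim)); // heuristic chooses a move
        sim.TickUntilIdle();                    // simulate without any visuals
    }
    if (sim.IsWon()) wins++;
}
// The win ratio over all seeds feeds the statistics shown to the designer.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Because no presentation is attached, thousands of such runs can complete very quickly.&lt;/p&gt;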

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Implementing a deterministic simulation, custom logic scripting, and a separation of simulation from visuals brings great benefits. The downside is the adjustment period: working this way requires a certain amount of “pattern breaking” as well as discipline, but it all pays off in the end.&lt;/p&gt;

&lt;p&gt;In the future, we plan to use this scheme in new titles where rather complex interactions are required. But for small projects like hyper-casual games, such labor costs would be redundant.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>programming</category>
    </item>
    <item>
      <title>No need to pause: how we update games without downtime</title>
      <dc:creator>MY.GAMES</dc:creator>
      <pubDate>Fri, 23 Jun 2023 09:53:47 +0000</pubDate>
      <link>https://dev.to/mygames/no-need-to-pause-how-we-update-games-without-downtime-3419</link>
      <guid>https://dev.to/mygames/no-need-to-pause-how-we-update-games-without-downtime-3419</guid>
      <description>&lt;p&gt;Hello everyone! We’re Dmitriy Apanasevich, Lead Developer at MY.GAMES, and Mikhail Alekseev, Developer at MY.GAMES. In this article, we reveal how our team implemented a game update process without downtime, with step-by-step guides to our architecture and our setup in practice. Plus, we discuss potential difficulties, and how and why we use blue-green deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blue-green deployment
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Why did we use a blue-green deployment strategy?&lt;/strong&gt; Let’s start here. The blue-green deployment (BGD) pattern comes with a number of architectural and operational costs, so before implementing BGD, you should ask yourself whether these costs are worth the desired results. In our case, the answer was yes; our team had two main reasons to use BGD:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Downtime is expensive – even one minute costs a lot of money.&lt;/li&gt;
&lt;li&gt;We make mobile games. Publishing a new version of the mobile client in stores isn’t instant – the new version must be reviewed by the store, which can take several days. Thus, supporting more than one game server instance at a time is a business requirement.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How this looks in practice
&lt;/h2&gt;

&lt;p&gt;Now, let’s discuss how our game update setup looks in practice. So, we have:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;A client&lt;/li&gt;
&lt;li&gt;Two servers: Alpha and Beta&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iryJ0JR6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44k9z3vcx8nj9spafqt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iryJ0JR6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/44k9z3vcx8nj9spafqt0.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We need to switch traffic from Alpha (1) to Beta (2) while making sure players don’t notice anything. This low-profile process of changing the game server involves the following parties: the game server, the client, and a special account server (whose only task is to provide the client with the address of the game server for connection). The account server knows the game server’s address and its status (live/stopped). The status is a piece of meta-information about the game server, unrelated to whether or not it’s actually running.&lt;/p&gt;

&lt;p&gt;Based on the status of the game server, the account server provides the client with the appropriate address for connection:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--KV1vuJLD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vnfrbfor8z3e8gywwot.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--KV1vuJLD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0vnfrbfor8z3e8gywwot.png" alt="Image description" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s go through the illustration above:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Alpha is live, Beta is stopped&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The player enters the game&lt;/strong&gt;. The client accesses the account server, according to which the current live game server is Alpha. The account server sends the Alpha address to the client, and the client connects to this address.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;At some point, Alpha is declared as stopped, and Beta as live&lt;/strong&gt;. Alpha sends a broadcast reconnect event to all its connected clients. Upon receiving a reconnect, the client again contacts the account server, which provides the Beta address this time. Upon receiving it, the client connects to Beta without the player noticing, and the game continues as usual.
Thus, we have achieved zero downtime when updating the server.&lt;/li&gt;
&lt;/ol&gt;
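&lt;p&gt;The handshake above can be sketched in a few lines of Python. This is a deliberately simplified model – the real account server, statuses, and transport are more involved, and all names here are made up for illustration:&lt;/p&gt;

```python
# Simplified model of the reconnect handshake: one account server, two game
# servers, and a client that silently migrates between them.

class AccountServer:
    def __init__(self, servers):
        # servers: name -> status ("live" or "stopped")
        self.servers = servers

    def resolve(self):
        # Hand out the address of whichever game server is currently live.
        for name, status in self.servers.items():
            if status == "live":
                return name
        raise RuntimeError("no live game server")

class GameClient:
    def __init__(self, account_server):
        self.account = account_server
        self.connected_to = None

    def login(self):
        self.connected_to = self.account.resolve()

    def on_reconnect_event(self):
        # The old server broadcast a reconnect: ask the account server
        # again and silently switch to the new live server.
        self.login()

account = AccountServer({"alpha": "live", "beta": "stopped"})
client = GameClient(account)
client.login()  # the client lands on alpha

# Switch-over: Beta is declared live, Alpha stopped; Alpha broadcasts reconnect.
account.servers.update(alpha="stopped", beta="live")
client.on_reconnect_event()  # the client migrates to beta unnoticed
```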

&lt;h2&gt;
  
  
  Making further improvements
&lt;/h2&gt;

&lt;p&gt;But there is still room for improvement in this scheme. For instance:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;QA specialists would like to run a final test of a new game server version before letting players join.&lt;/li&gt;
&lt;li&gt;We would like the client to be able to complete certain activities (for example, battles) on the same game server where they started.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To implement these features, let’s introduce a new game server status: staging. While a server has the staging status, it can be accessed by:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;QA specialists&lt;/li&gt;
&lt;li&gt;Ordinary players, provided that the client specifies the preferred game server in the login request&lt;/li&gt;
&lt;/ol&gt;
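&lt;p&gt;Sketched as code, the account server’s address resolution with the staging status might look like this. It’s a simplified model; the function, argument, and status names are assumptions, not our actual implementation:&lt;/p&gt;

```python
# Hypothetical sketch of the staging rule: QA reaches the staging server,
# and an ordinary player reaches it only by naming it in the login request.

def resolve_address(servers, is_qa=False, preferred=None):
    # servers: name -> status, one of "live", "staging", "stopped"
    if preferred is not None and servers.get(preferred) == "staging":
        # An ordinary player may opt into a staging server by naming it
        # in the login request, e.g. to finish a battle started there.
        return preferred
    if is_qa:
        # QA specialists are routed to the staging server, if there is one.
        for name, status in servers.items():
            if status == "staging":
                return name
    # Everyone else goes to the live server.
    for name, status in servers.items():
        if status == "live":
            return name
    raise RuntimeError("no live game server")

servers = {"alpha": "live", "beta": "staging"}
```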

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--NKiqFisc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v11p6bl1xp1c89cc6f4d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--NKiqFisc--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v11p6bl1xp1c89cc6f4d.png" alt="Image description" width="800" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let’s walk through the above illustration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Alpha is live, Beta is stopped&lt;/strong&gt;. Client is connected to Alpha.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alpha remains live, Beta becomes staging&lt;/strong&gt;. After the status change, QA can go to Beta, while all other clients, as before, are sent to Alpha.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;After checking, Beta becomes live, and Alpha becomes staging&lt;/strong&gt;. Alpha sends a broadcast reconnect event; but if the player is in a battle, the client can ignore the reconnect and continue running on Alpha.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;As soon as Alpha acquires the stopped status, any new attempt to access the game will be sent to Beta.&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The scheme described above is quite flexible: it can be used both for rolling out servers for new game versions (1.0 -&amp;gt; 2.0), and for updating the current version – for example, to fix bugs.&lt;/p&gt;

&lt;p&gt;But this means that rolling out a new game version requires maintaining backward compatibility of the client-server protocol, so that players who haven’t managed to update still have the opportunity to keep playing on the new version’s server. And when several servers of different versions run simultaneously, we naturally also have to support forward and backward compatibility at the data level (IMDG, database, interserver interaction).&lt;/p&gt;

&lt;p&gt;We decided that we didn’t want to do the double work of compatibility support, and agreed on a strict correspondence between client and server versions: version X clients are handled strictly by a version X server, and version Y clients by a version Y server.&lt;/p&gt;

&lt;p&gt;At the same time, within the same version, changes to the server implementation are allowed as long as they don’t affect the protocol of interaction with the client. This frees us from the cost of maintaining protocol backward compatibility, and it leaves us plenty of room to maneuver.&lt;/p&gt;

&lt;p&gt;Accordingly, the account server must now know each game server’s version, and the client must tell the account server which game server version it wants to connect to.&lt;/p&gt;
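&lt;p&gt;With versions in the picture, address resolution becomes a lookup over (version, status) pairs. Here’s a hedged sketch; the data shapes are invented for the example:&lt;/p&gt;

```python
# Sketch of version-aware routing: the client states its desired game
# server version, and the account server only returns a live server of
# exactly that version.

def resolve_by_version(servers, client_version):
    # servers: list of {"name": ..., "version": ..., "status": ...}
    for server in servers:
        if server["version"] == client_version and server["status"] == "live":
            return server["name"]
    # No live server of that version: the client is asked to update.
    return None

servers = [
    {"name": "alpha", "version": "1.0", "status": "live"},
    {"name": "beta", "version": "2.0", "status": "live"},
]
```

&lt;p&gt;Returning nothing for an unknown or retired version is exactly what a hard update relies on: once no live server of version 1.0 remains, a 1.0 client can only be told to update.&lt;/p&gt;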

&lt;p&gt;Below is a complete diagram of the process of releasing a new game version and updating the servers of the same version (transitions to staging are skipped):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--9a-cE_6y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zjahyh9nkla582agttk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--9a-cE_6y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zjahyh9nkla582agttk.png" alt="Image description" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;At some point, version 2.0 will be released to replace 1.0.&lt;/li&gt;
&lt;li&gt;After QA specialists have carried out a final check of Beta, we activate the so-called soft update: Beta goes live, and the 2.0 client becomes available to a certain percentage of players in the stores. If no critical bugs are found, version 2.0 is opened to 100% of players.
Unlike with BGD, when a new version is rolled out this way, the server of the previous version doesn’t send out reconnects.&lt;/li&gt;
&lt;li&gt;If we detect any errors during the update process, we fix them and, using the BGD process described earlier, transfer players from Beta to Gamma, which contains fixes for the detected errors. (However, players from client 1.0 can still play on Alpha.)&lt;/li&gt;
&lt;li&gt;After some time, we activate the so-called hard update: Alpha is stopped, all login attempts from 1.0 are prohibited, and the player receives a message asking them to update the client.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The described server update process is so easy that we entrusted it to our QA specialists. They instruct the account server to change the status of the game server and send reconnects to game servers. All actions are performed in a simple Game Tool server interface:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--jakd2Nwn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wycsow64twar8l4dgch0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--jakd2Nwn--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wycsow64twar8l4dgch0.png" alt="Image description" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The entire state of the account server lives in its own database; thus, account server instances are completely stateless.&lt;/li&gt;
&lt;li&gt;For example, during a BGD from Alpha to Beta, a QA specialist uses Game Tool to instruct the account server to change Beta’s status from stopped to staging in order to start the final check. The account server writes the updated state to the database on command.&lt;/li&gt;
&lt;li&gt;After the check, once Beta is transferred to live, a QA specialist uses Game Tool to instruct Alpha to send a reconnect to its connected clients so that they migrate to Beta.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Limitations
&lt;/h2&gt;

&lt;p&gt;As mentioned before, running more than one game server instance at the same time imposes certain restrictions on the development process.&lt;/p&gt;

&lt;p&gt;Two of the most common problems we have to deal with are maintaining forward and backward compatibility at the data level and synchronizing some of the game servers’ background activities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Supporting compatibility
&lt;/h2&gt;

&lt;p&gt;We use PostgreSQL as persistent data storage and Hazelcast as cache. When working with PostgreSQL, we follow several rules:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;DDL migrations must be backward compatible. For example, when we want to delete a column in a table, we first release a version where the column is no longer used, and then a version where the column is deleted from the table.&lt;/li&gt;
&lt;li&gt;DDL migrations shouldn’t lock the table. For example, index creation is only allowed with the CONCURRENTLY option.&lt;/li&gt;
&lt;li&gt;Instead of mass DML migrations, we prefer “client” migrations which are performed when a player logs in; this influences only relevant data.&lt;/li&gt;
&lt;/ol&gt;
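&lt;p&gt;Rule 3 – lazy “client” migrations – can be sketched as follows. The schema versions and field names are made up; the point is that each player’s row is upgraded individually at login, instead of one massive UPDATE over the whole table:&lt;/p&gt;

```python
# Sketch of a lazy "client" migration: a player's row is upgraded the first
# time they log in on the new version, so only relevant data is touched.

def migrate_on_login(player_row):
    # Bring a single player's row up to the current schema, step by step.
    version = player_row.get("schema", 1)
    if version == 1:
        # v1 -> v2: split the old 'currency' field into soft/hard currency.
        player_row["soft_currency"] = player_row.pop("currency", 0)
        player_row["hard_currency"] = 0
        version = 2
    player_row["schema"] = version
    return player_row

row = migrate_on_login({"id": 42, "currency": 100})
```

&lt;p&gt;Rows of players who never log in are simply never migrated, which is also why such migrations must tolerate data in any previous schema version.&lt;/p&gt;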

&lt;p&gt;We provide forward and backward data compatibility in Hazelcast using a third-party serialization framework (Kryo).&lt;/p&gt;

&lt;h2&gt;
  
  
  Synchronizing activities
&lt;/h2&gt;

&lt;p&gt;Here’s a perfect example of a background activity that needs synchronization: the distribution of rewards at the end of a game event, where only one server should send out the rewards. Sometimes it doesn’t matter which server does it; in that case, synchronization is implemented using a distributed CAS (compare-and-set) on Hazelcast.&lt;/p&gt;
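&lt;p&gt;The CAS-based variant can be modeled in a few lines. Note that this is only a local stand-in for Hazelcast’s distributed primitives: a lock-protected flag plays the role of the distributed compare-and-set, and threads play the role of game servers:&lt;/p&gt;

```python
import threading

# Local model of the distributed CAS: whichever "server" flips the flag
# from False to True distributes the rewards; everyone else does nothing.

class CasFlag:
    def __init__(self):
        self._lock = threading.Lock()
        self._value = False

    def compare_and_set(self, expected, new):
        # Atomically set the value only if it still equals `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def distribute_rewards_once(flag, server_name, log):
    # Every game server calls this when the event ends; exactly one
    # of them wins the CAS and actually sends out the rewards.
    if flag.compare_and_set(False, True):
        log.append(server_name)

flag, log = CasFlag(), []
threads = [threading.Thread(target=distribute_rewards_once, args=(flag, name, log))
           for name in ("alpha", "beta", "gamma")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

&lt;p&gt;Whichever server wins the compare-and-set performs the distribution; the rest see the flag already set and skip it.&lt;/p&gt;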

&lt;p&gt;More often than not, though, we want to control which server distributes the rewards – that synchronization is carried out using distributed voting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonuses
&lt;/h2&gt;

&lt;p&gt;“&lt;em&gt;Don’t worry if it doesn’t work right. If everything did, you’d be out of a job.&lt;/em&gt;” — Mosher’s Law of Software Engineering&lt;/p&gt;

&lt;p&gt;Software will always have bugs; there’s no escaping them. But since we have a way to roll out game server updates without downtime, we can also use it to fix bugs detected in game mechanics and to perform optimizations.&lt;/p&gt;

&lt;p&gt;A particularly nice result of this feature is that in the event of a critical error for any game activity, the player can continue participating in other activities. Thanks to BGD, we can quickly roll out a fix without interrupting gameplay, and the next attempt to take part in the original game activity is likely to be successful.&lt;/p&gt;

&lt;p&gt;The ability to fix bugs on the client through the server deserves special attention.&lt;br&gt;
As we mentioned at the very beginning, publishing a new version of the mobile client in the stores takes some time, so client bugs that have reached the player are, to a certain extent, more dangerous than server bugs.&lt;/p&gt;

&lt;p&gt;However, in our work, we’ve faced situations where we’ve tweaked the sent events and responses, and the server managed to “persuade” the client to act in such a way that the bug either became invisible or disappeared altogether.&lt;/p&gt;

&lt;p&gt;Of course, we aren’t always that “lucky”, and we put a lot of effort into managing without hotfixes – but if something does slip through, BGD comes to the rescue, sometimes even in the most unexpected cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  No downtime
&lt;/h2&gt;

&lt;p&gt;We’ve tried to describe our game update process in detail, shed light on the difficulties we’ve faced, and share some interesting results related to our architectural decisions. We hope our experience will be useful to you in your work!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>backend</category>
      <category>gamedev</category>
    </item>
  </channel>
</rss>
