<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergio Garcia Moratilla</title>
    <description>The latest articles on DEV Community by Sergio Garcia Moratilla (@sgmoratilla).</description>
    <link>https://dev.to/sgmoratilla</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F156002%2F317aab11-6e36-4dc6-b261-34431c1112a7.jpg</url>
      <title>DEV Community: Sergio Garcia Moratilla</title>
      <link>https://dev.to/sgmoratilla</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sgmoratilla"/>
    <language>en</language>
    <item>
      <title>Modernizing a CI+CD pipeline with Github Actions</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 07 Jun 2022 14:41:33 +0000</pubDate>
      <link>https://dev.to/playtomic/modernizing-a-cicd-pipeline-with-github-actions-12gn</link>
      <guid>https://dev.to/playtomic/modernizing-a-cicd-pipeline-with-github-actions-12gn</guid>
      <description>&lt;p&gt;&lt;a href="https://www.sgmoratilla.com/2019-04-15-playtomic-pipeline" rel="noopener noreferrer"&gt;Our CI+CD has been working for 5 years long&lt;/a&gt;. You know, if it ain't broken, don't fix it. But the company is not the same. It's time to update it!&lt;/p&gt;

&lt;p&gt;Let me sum up our current setup. I am going to be brief, I promise. We have an on-premise Jenkins cluster. That is, we manage (and maintain) a bunch of hosts that run Jenkins slaves and a host that runs the master. They sit within our on-premise VPN, a remnant of our first hosting provider.&lt;/p&gt;

&lt;p&gt;Our production backend runs a containerized system on top of a Docker Swarm cluster. Our container registry is Nexus, which allows us to deploy our services in the Swarm.&lt;/p&gt;

&lt;p&gt;Both systems run within their own independent VPNs.&lt;/p&gt;

&lt;p&gt;Our code repositories are in Github. Our Jenkins listens for changes in Github. When we merge to develop/production, Jenkins pushes the image to Nexus, connects via ssh to the managers of the cluster, and runs the deployment command (&lt;em&gt;docker stack deploy&lt;/em&gt;). We have a common Jenkins pipeline for all our services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6qncjgrls3ebqtx2dll.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh6qncjgrls3ebqtx2dll.png" alt="CI+CD Pipeline with Jenkins" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple.&lt;/li&gt;
&lt;li&gt;Security on our side (VPN certificates + ssh keys are on our servers).&lt;/li&gt;
&lt;li&gt;Jenkins is commonly known.&lt;/li&gt;
&lt;li&gt;Stable: we haven't had to change it a lot all this time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to maintain the Jenkins machines and keep them updated.&lt;/li&gt;
&lt;li&gt;As the team grows, the Jenkins cluster has to grow too to be able to run more jobs.&lt;/li&gt;
&lt;li&gt;We have only configured Java 8 and 11.&lt;/li&gt;
&lt;li&gt;It still runs on our old hosting provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have been pretty happy with this setup so far, but maintenance is something that we always want to simplify at Playtomic: the fewer systems, the better. Besides, new versions of Java require new versions of the JDK, Maven, and thus... Jenkins. We had already been using Github Actions in other projects here, so we knew it could be a fine replacement for Jenkins. Each workflow defines the environment it needs to run (for example, the Java version or the architecture), so it makes complete sense to use Github Actions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Replacing Jenkins with Github Actions
&lt;/h2&gt;

&lt;p&gt;We rewrote the Jenkins pipeline as a Github Actions workflow. We had two main concerns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What about security? We use ssh keys to access the cluster. We could store them in Github secrets, but we don't like keeping such an important piece of our security with a third party.&lt;/li&gt;
&lt;li&gt;Cost might become a problem in the future: Github Actions is charged by the minute of computation, and we have 50 services.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We discovered that Github Actions lets you run self-hosted runners... so that's the solution to both problems. At this very moment we can afford the minutes that we are spending, so we are adding a host just for deployments. We added a t3.nano, which is pretty cheap. The ssh keys are still 100% under our control, as they are installed on that machine.&lt;/p&gt;
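As an illustration of what that looks like (the job names, script, and host below are hypothetical stand-ins, not Playtomic's actual workflow), the build can stay on GitHub-hosted runners while the deploy job targets the self-hosted host:

```yaml
# Hypothetical sketch: build on GitHub-hosted runners, deploy from our own host.
name: ci-cd
on:
  push:
    branches: [develop, production]
jobs:
  build:
    runs-on: ubuntu-latest          # GitHub-hosted minutes
    steps:
      - uses: actions/checkout@v3
      - run: ./build-and-push.sh    # hypothetical script: build image, push to Nexus
  deploy:
    needs: build
    runs-on: [self-hosted, deploy]  # our t3.nano; ssh keys never leave it
    steps:
      - run: ssh swarm-manager "docker stack deploy -c stack.yml our-service"
```

Labeling the runner (here with a hypothetical `deploy` label) keeps regular CI jobs off the deployment host.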

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftcpkmkw1f2vttifuakw4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftcpkmkw1f2vttifuakw4.png" alt="CI+CD Pipeline with Github Actions" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pros:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Still a common pipeline (workflow), but everyone can use new stuff (a new Java version, different languages, ...) without installing more tools in Jenkins.&lt;/li&gt;
&lt;li&gt;Security is still on our side.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Cons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We need to monitor the cost.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Containerized Github Action runner
&lt;/h3&gt;

&lt;p&gt;Worried about the cost? If you have a cluster, &lt;a href="https://www.sgmoratilla.com/2022-06-07-docker-multiarch-github-actions-runner/" rel="noopener noreferrer"&gt;you can run as many copies of the Github Actions runner as you want!&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  On-demand self-hosted runners
&lt;/h3&gt;

&lt;p&gt;This would be a huge improvement to control the cost while still being able to scale the number of runners: &lt;br&gt;
&lt;a href="https://github.com/machulav/ec2-github-runner" rel="noopener noreferrer"&gt;https://github.com/machulav/ec2-github-runner&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Separate the CI from the CD
&lt;/h3&gt;

&lt;p&gt;Separating the CI and CD pipelines is a good idea, as their lifecycles are pretty different.&lt;br&gt;
In our current setup, the workflow is responsible for running the deployment command. If that command fails, the whole pipeline fails, even though the build was successful. &lt;/p&gt;

&lt;p&gt;We have already tested ArgoCD in Kubernetes so that the CD is handled by the cluster itself.&lt;br&gt;
If you are running Kubernetes, you can already &lt;a href="https://dev.to/2021-10-28-flux-vs-argocd/"&gt;do that with FluxCD or ArgoCD&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sq4wqemb5gsatoyyw06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3sq4wqemb5gsatoyyw06.png" alt="CI+CD Pipeline with Github Actions and ArgoCD" width="800" height="535"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Replace Nexus with Github Packages
&lt;/h3&gt;

&lt;p&gt;Nexus requires a lot of space (it stores all the containers, libraries, packages, ... of your organization). We don't want to maintain that storage or be responsible for the backups, so we are considering migrating to Github Packages too.&lt;/p&gt;

</description>
      <category>cicd</category>
      <category>githubactions</category>
      <category>jenkins</category>
    </item>
    <item>
      <title>Let's talk about performance and MongoDB</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Wed, 16 Feb 2022 10:21:38 +0000</pubDate>
      <link>https://dev.to/playtomic/lets-talk-about-performance-and-mongodb-4048</link>
      <guid>https://dev.to/playtomic/lets-talk-about-performance-and-mongodb-4048</guid>
      <description>&lt;p&gt;If you, a backend developer, had to describe your job, what would you say? We usually talk a lot about servers, clusters, layers, algorithms, software stacks, memory consumption, ...&lt;/p&gt;

&lt;p&gt;We put data into databases and we get it back as fast as we can.&lt;br&gt;
Databases are our cornerstone, so why don't we talk about them more often? Instead, we are always relying on our ORMs.&lt;/p&gt;

&lt;p&gt;My best advice? Simplify your queries. Simplify your data models. Simplify your access patterns.&lt;/p&gt;
&lt;h1&gt;
  
  
  Performance in MongoDB
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;If you are using MongoDB, these two metrics will be your best friends:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scanned Documents/Returned ratio&lt;/li&gt;
&lt;li&gt;IOPS &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scanned docs/returned&lt;/strong&gt;: how many documents you read from disk vs how many you actually return in your &lt;code&gt;find()&lt;/code&gt; or &lt;code&gt;aggregate()&lt;/code&gt;. Ideally this ratio is 1 (every document read is returned). The only way to get there? All your queries must be covered by indexes. Indexes live in memory (as long as they fit), so MongoDB doesn't have to read documents from disk and filter them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu7j6uxfni68te822j71.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuu7j6uxfni68te822j71.png" alt="Example of Scanned documents/Returned ratio" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;IOPS&lt;/strong&gt;: I/O operations per second; that is, operations on disk. It is correlated with scanned documents, as they are read from disk, but there are more sources of IOPS, for example writes.&lt;br&gt;
Your disk sets a limit on your maximum IOPS. Ours is 3000. &lt;/p&gt;

&lt;p&gt;Your goal is to keep IOPS below that threshold, and as low as possible. It is hard to know how many IOPS your query consumes, but it is easy to know the scanned docs/returned ratio. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ri7onm4eb585u0e10o3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ri7onm4eb585u0e10o3.png" alt="Example of IOPS" width="800" height="336"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Tools
&lt;/h2&gt;

&lt;p&gt;How can you analyze why your database is behaving as it is?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MongoDB Profiler&lt;/strong&gt;&lt;br&gt;
If you can afford to enable it, do it now. It's the best source of info. We use the Atlas MongoDB Profiler and it is worth every penny. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq1a0hocgm95nk9z3h1s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpq1a0hocgm95nk9z3h1s.png" alt="Atlas MongoDB Profiler" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Explain planner&lt;/strong&gt;&lt;br&gt;
The planner is the key to understanding your access patterns.&lt;br&gt;
There are several ways of calling it, but you can start with &lt;code&gt;explain()&lt;/code&gt; after your cursor:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.our_collection.find(query).explain()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can add &lt;code&gt;executionStats&lt;/code&gt; to get more data about the query.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.our_collection.find(query).explain("executionStats")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://docs.mongodb.com/manual/reference/explain-results/" rel="noopener noreferrer"&gt;There are many stages&lt;/a&gt;, but these ones are the most important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COLLSCAN: the query is scanning the collection on disk. Pretty bad, as no index covered the search, so MongoDB has to read the whole collection. &lt;/li&gt;
&lt;li&gt;IXSCAN: the query is using an index to filter. It doesn't mean that the whole query is covered by the index, but at least some part is.&lt;/li&gt;
&lt;li&gt;FETCH: the planner is reading the documents from the collection. If your query returns documents, you will probably get a FETCH stage (&lt;a href="https://docs.mongodb.com/manual/indexes/#covered-queries" rel="noopener noreferrer"&gt;unless your query is covered by the index&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is an example of one of our queries:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;winningPlan: 
      { stage: 'COLLSCAN',
        filter: 
         { '$and': 
            [ { '$or': 
                 [ { 'invitaed_user_id': { '$eq': '1' } },
                   { owner_id: { '$eq': '1' } },
                   { 'player_id': { '$eq': '1' } } ] },
              { is_canceled: { '$eq': false } },
             ] } },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is not good, as it is scanning the whole collection.&lt;/p&gt;

&lt;p&gt;Another one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.our_collection.find(query).explain("executionStats")
executionStats: 
   { executionSuccess: true,
     nReturned: 22,
     executionTimeMillis: 0,
     totalKeysExamined: 24,
     totalDocsExamined: 22,
     executionStages: 
      { stage: 'FETCH',
        nReturned: 22,
        docsExamined: 22,
        inputStage: 
         { stage: 'OR',
           nReturned: 22,
           inputStages: 
            [ { stage: 'IXSCAN',
                nReturned: 0,
                indexName: 'example-index-1',
                indexBounds: 
                 { owner_id: [ '["1", "1"]' ],
                   start_date: [ '[MaxKey, MinKey]' ] } },
              { stage: 'IXSCAN',
                nReturned: 22,
                indexName: 'example-index-2',
                indexBounds: 
                 { 'player_id': [ '["1", "1"]' ],
                   start_date: [ '[MaxKey, MinKey]' ] },
                keysExamined: 23,
                dupsDropped: 0 }] } } },
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you can see two IXSCANs, merged by an OR. After that, the query fetches the documents. I am reading the plan inside-out: &lt;code&gt;example-index-1&lt;/code&gt; is used to resolve one part of the query, and &lt;code&gt;example-index-2&lt;/code&gt; the other part.&lt;/p&gt;

&lt;p&gt;Sometimes you will get a FETCH just after an IXSCAN: it means that the index covers only part of the filter. After that, the planner needs to read the documents from disk to finish the filter.&lt;/p&gt;
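For instance (a mongosh sketch using the field names from the examples above), you can make the FETCH disappear entirely by projecting only indexed fields, turning it into a covered query:

```js
// Sketch: with the index {owner_id: 1, start_date: 1}, filtering on owner_id
// and projecting only indexed fields (excluding _id) covers the query,
// so the plan is an IXSCAN with no FETCH on top.
db.our_collection
  .find({ owner_id: "1" }, { _id: 0, owner_id: 1, start_date: 1 })
  .explain("executionStats")
```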

&lt;p&gt;&lt;a href="https://github.com/mongodb/mongo/blob/master/src/mongo/db/query/stage_types.h#L49" rel="noopener noreferrer"&gt;The complete list of stages? You need to check the code.&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Simplify your queries
&lt;/h1&gt;

&lt;h2&gt;
  
  
  $ors
&lt;/h2&gt;

&lt;p&gt;$ors are the devil. You, as a programmer, are used to thinking in $ors. You add a few $ors and your condition gets much more expressive. But guess what? Your query has become exponentially more complex: with every condition you add to the $or, you add one more combination of parameters.&lt;/p&gt;

&lt;p&gt;How does the planner resolve all those combinations? It needs an index for each of them. &lt;/p&gt;

&lt;p&gt;Do you remember the &lt;code&gt;explain()&lt;/code&gt; above? The query was:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    $or: [{"owner_id: "1"}, {"player_id: "1"}].
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The indexes used?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;example-index-1: {"owner_id: "1", "start_date": 1}
example-index-2: {"player_id: "1", "start_date": 1}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here is another tip: a bigger index can cover a smaller query, as long as the queried fields are a prefix of the index.&lt;br&gt;
We are not using start_date to filter here.&lt;/p&gt;

&lt;p&gt;What would happen if I add an extra $or?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    $or: [{"owner_id: "1"}, {"player_id: "1"}].
    $or: [{"is_canceled": true}, "start_date": {$gt: ISODate("2022-02-02T00:00:00)}]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then the planner would need four combinations:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{owner_id, is_canceled}, 
{owner_id, start_date}, 
{player_id, is_canceled], 
{player_id, start_date}, 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
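The growth is just the cartesian product of the $or branches. A toy Node.js enumeration (using the field names above) makes that explicit:

```javascript
// Enumerate the index combinations the planner must cover when several
// $or groups are ANDed together: the cartesian product of their branches.
function orCombinations(orGroups) {
  return orGroups.reduce(
    (acc, group) => acc.flatMap((combo) => group.map((field) => [...combo, field])),
    [[]]
  );
}

const combos = orCombinations([
  ["owner_id", "player_id"],      // first $or
  ["is_canceled", "start_date"],  // second $or
]);

console.log(combos.length); // 4 combinations, one candidate index each
```

Add a third two-branch $or and you are at eight combinations; the blow-up really is exponential.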



&lt;h2&gt;
  
  
  Denormalized fields
&lt;/h2&gt;

&lt;p&gt;If you find yourself filtering by several fields within an $or, or sorting by several fields, consider adding a denormalized field (based on the others).&lt;/p&gt;

&lt;p&gt;Yeah, my apologies to the &lt;a href="https://en.wikipedia.org/wiki/Third_normal_form" rel="noopener noreferrer"&gt;third normal form&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Keep reading, we will give an example pretty soon using the so-called &lt;code&gt;Summary Pattern&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  nulls are always the minimum value
&lt;/h2&gt;

&lt;p&gt;This is a minor trick, but still useful. Let's say you have a nullable field, and you have to filter (or sort descending) by that field.&lt;/p&gt;

&lt;p&gt;You will probably find yourself using a query similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
 $or: [
        field: {$exists: false}, 
        field: {$lte: value}
      ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is, if the field is null, then it passes the filter. Otherwise, compare. We used this filter to check if a player could join a match given their level and the level restrictions of the match.&lt;/p&gt;

&lt;p&gt;That's an $or. We don't like $ors. What would happen if we compare our value to null? Let's check the &lt;a href="https://docs.mongodb.com/manual/reference/operator/aggregation/sort/#ascending-descending-sort" rel="noopener noreferrer"&gt;sorting rules in MongoDB&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;MinKey (internal type)
Null
Numbers (ints, longs, doubles, decimals)
Symbol, String
Object
...
MaxKey (internal type)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That is, null always compares as the lowest value (except for the internal MinKey type).&lt;/p&gt;
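If you want to convince yourself, here is a toy Node.js comparator that mimics that type ordering for the null/number case (this is a simplification for illustration, not MongoDB's actual comparison code):

```javascript
// Toy comparator mimicking MongoDB's cross-type sort order for the types
// we care about here: null sorts below any number or string.
const typeRank = (v) =>
  v === null ? 0 : typeof v === "number" ? 1 : 2;

function mongoishCompare(a, b) {
  const ra = typeRank(a), rb = typeRank(b);
  if (ra !== rb) return ra - rb; // different type brackets
  if (a < b) return -1;
  if (a > b) return 1;
  return 0;
}

// Descending sort pushes null-valued documents to the end:
const levels = [3.5, null, 1.0, 7.25];
levels.sort((a, b) => mongoishCompare(b, a));
console.log(levels); // [7.25, 3.5, 1, null]
```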

&lt;p&gt;Our query can then be simplified to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  field: {$lte: value}
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It works for $lt and $lte (less than, and less than or equal).&lt;br&gt;
It also works if you are sorting by &lt;code&gt;field&lt;/code&gt; in descending order (&lt;code&gt;{field: -1}&lt;/code&gt;), because the documents with null will be at the end of the sort.&lt;/p&gt;
&lt;h2&gt;
  
  
  Counts are costly in MongoDB
&lt;/h2&gt;

&lt;p&gt;Counting seems like an easy operation, but it is not. Even if you have an index, MongoDB needs to traverse it due to the way &lt;a href="https://docs.mongodb.com/manual/indexes" rel="noopener noreferrer"&gt;MongoDB builds its B-trees&lt;/a&gt;: nodes don't store the number of leaves their sub-tree has, so MongoDB has to walk the index all the way to the end.&lt;/p&gt;

&lt;p&gt;Again, if you are counting using a query with an $or, the counting gets even more complex: the query needs to take possible duplicate documents into account.&lt;/p&gt;

&lt;p&gt;For example, we used counts to compute the position of a player in a ranking (the original query was even more complex).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ranking_id: ?0}, 
{
 $or: [
        { value: {$gt: ?1}},
    { value: {$eq: ?1}, last_modified: {$gt: ?2} }
 ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It required several indexes to count:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ranking_id: 1, value: -1}
{ranking_id: 1, value: -1, last_modified: -1}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;How can we avoid counting over several indexes? Add a single field that summarizes the fields you are filtering on.&lt;/p&gt;

&lt;p&gt;For example: &lt;code&gt;weight = append(value, last_modified)&lt;/code&gt;&lt;br&gt;
With that new field, we only need one single index:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{ranking_id: 1, weight: -1}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is called the Summary Pattern.&lt;/p&gt;
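A sketch of how such a weight field could work, in Node.js. The encoding (zero-padded value concatenated with the timestamp) is an illustration, not the article's production code; the point is that sorting by the single field matches sorting by (value, last_modified):

```javascript
// Hypothetical summary field: zero-padded score concatenated with an
// ISO timestamp, so one string field sorts like (value, last_modified).
function weight(value, lastModified) {
  return String(value).padStart(10, "0") + "|" + lastModified;
}

const players = [
  { id: "a", value: 120, last_modified: "2022-01-10T00:00:00Z" },
  { id: "b", value: 120, last_modified: "2022-02-01T00:00:00Z" },
  { id: "c", value: 300, last_modified: "2021-12-01T00:00:00Z" },
];

// Rank by the single weight field, descending (as the index {weight: -1} would):
const ranked = players
  .map((p) => ({ ...p, weight: weight(p.value, p.last_modified) }))
  .sort((x, y) => (x.weight < y.weight ? 1 : -1))
  .map((p) => p.id);

console.log(ranked); // ["c", "b", "a"]: highest value first, ties broken by recency
```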

&lt;h1&gt;
  
  
  How to build indexes
&lt;/h1&gt;

&lt;p&gt;Ok, indexes are our best tool to keep MongoDB as performant as possible. So, next step: how do we know which indexes we should build?&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance advisor
&lt;/h2&gt;

&lt;p&gt;If you are in Atlas, use the &lt;a href="https://docs.atlas.mongodb.com/performance-advisor/" rel="noopener noreferrer"&gt;Performance advisor&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;At some point, you will know your system better than the Performance Advisor, but it is a good starting point.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clone your production collections and run explain() locally
&lt;/h2&gt;

&lt;p&gt;Test your indexes thoroughly before you put them in production:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It takes a lot of IOPS to build them.&lt;/li&gt;
&lt;li&gt;Once an index is built, the planner takes it into account too, even if it is not ultimately used.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Remember what we said before:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COLLSCAN: bad.&lt;/li&gt;
&lt;li&gt;IXSCAN: good.&lt;/li&gt;
&lt;li&gt;FETCH: good if it is the final step; bad in between.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Others that you will see:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;COUNT: ok.&lt;/li&gt;
&lt;li&gt;MERGE: ok-ish; you could probably do better.&lt;/li&gt;
&lt;li&gt;MERGE_COUNT: good.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the way, stages can be blocking or non-blocking: a blocking stage needs to wait for the previous one to finish before it can start producing results.&lt;/p&gt;

&lt;h2&gt;
  
  
  ESR: Equal-Sort-Range
&lt;/h2&gt;

&lt;p&gt;Have you ever wondered what fields should go first in an index? You need to follow this rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fields filtered by equality ($eq, $in in some cases, ...) go first.&lt;/li&gt;
&lt;li&gt;Then fields used in the sort stage. Remember that the index has to be built in the same order as you are sorting.&lt;/li&gt;
&lt;li&gt;Then fields filtered by a range ($lt, $lte, $gt, ...).&lt;/li&gt;
&lt;/ul&gt;
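A quick mongosh illustration with hypothetical fields: equality first, then the sort field, then the range field:

```js
// Query: Equality on sport_id, Sort on created_at, Range on start_date.
db.matches
  .find({ sport_id: "PADEL", start_date: { $gt: ISODate("2022-01-01") } })
  .sort({ created_at: -1 })

// ESR-ordered index: Equality, then Sort, then Range.
db.matches.createIndex({ sport_id: 1, created_at: -1, start_date: 1 })
```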

&lt;p&gt;ESR is the most useful rule you will find to build indexes. You should read as much as you can about it until you understand it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.alexbevi.com/blog/2020/05/16/optimizing-mongodb-compound-indexes-the-equality-sort-range-esr-rule/" rel="noopener noreferrer"&gt;This post by Alex Belilacqua is a gem&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disambiguate equals
&lt;/h2&gt;

&lt;p&gt;If you have two fields that are going to be filtered by $eq, what should go first?&lt;/p&gt;

&lt;p&gt;The answer is that it doesn't matter. You don't need to worry about having a more balanced tree. &lt;/p&gt;

&lt;p&gt;Just keep the ESR rule in mind: if one of the fields goes in a &lt;code&gt;sort&lt;/code&gt; or a &lt;code&gt;range&lt;/code&gt;, it goes later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rollover process
&lt;/h2&gt;

&lt;p&gt;Building an index is one of the most costly operations: your IOPS will go nuts. If you need to do it in your production environment, and your collection is big enough, we recommend &lt;a href="https://docs.mongodb.com/manual/tutorial/build-indexes-on-replica-sets/" rel="noopener noreferrer"&gt;a rolling process&lt;/a&gt;: the index is built on one secondary at a time, and the primary is stepped down to build it last. This way you can build any index even when your database load is high.&lt;/p&gt;

&lt;p&gt;In Atlas, it's just one click. &lt;/p&gt;

&lt;h2&gt;
  
  
  Remove / Hide indexes
&lt;/h2&gt;

&lt;p&gt;The more indexes you have, the worse it is for the planner. The planner runs the query against the candidate indexes and then picks the most promising one. &lt;/p&gt;

&lt;p&gt;Again, counting when you have several indexes is pretty bad. &lt;/p&gt;

&lt;p&gt;Sometimes you cannot just remove an index in production. Your profiler can suggest that an index is removable, but you might not be 100% sure: one infrequent query might launch a COLLSCAN, and then you would miss that index.&lt;/p&gt;

&lt;p&gt;Luckily, since MongoDB 4.4 you can &lt;a href="https://docs.mongodb.com/manual/core/index-hidden/" rel="noopener noreferrer"&gt;hide indexes&lt;/a&gt;. We use this feature to detect which indexes we can remove safely. &lt;/p&gt;
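In mongosh the toggle is a one-liner (using the index name from the explain() example above):

```js
db.our_collection.hideIndex("example-index-1")    // planner ignores it; the index is still maintained
// ...watch the profiler for regressions for a while...
db.our_collection.unhideIndex("example-index-1")  // instant rollback if something breaks
db.our_collection.dropIndex("example-index-1")    // drop it once you are sure
```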

&lt;h1&gt;
  
  
  Limit your queries
&lt;/h1&gt;

&lt;p&gt;Have you set a maximum execution time for your queries? Why not? Do your clients have a request timeout? Then a server-side limit can be a healthy practice to avoid unexpected uses of your APIs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;db.our_collection.countDocuments(query, {maxTimeMS: 100})
MongoServerError: Error in $cursor stage :: caused by :: operation exceeded time limit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Do you see all these orange dots? No one was waiting for the backend to reply.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7pleurw87fvoogewkin.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7pleurw87fvoogewkin.png" alt="maxTimeMS effect" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  References
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.mongodb.com/blog/post/building-with-patterns-a-summary" rel="noopener noreferrer"&gt;https://www.mongodb.com/blog/post/building-with-patterns-a-summary&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.alexbevi.com/blog/2020/05/16/optimizing-mongodb-compound-indexes-the-equality-sort-range-esr-rule/" rel="noopener noreferrer"&gt;https://www.alexbevi.com/blog/2020/05/16/optimizing-mongodb-compound-indexes-the-equality-sort-range-esr-rule/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mongodb</category>
    </item>
    <item>
      <title>Flux vs ArgoCD</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Thu, 07 Oct 2021 08:01:03 +0000</pubDate>
      <link>https://dev.to/playtomic/flux-vs-argocd-2aj0</link>
      <guid>https://dev.to/playtomic/flux-vs-argocd-2aj0</guid>
      <description>&lt;p&gt;Hey there!&lt;/p&gt;

&lt;p&gt;Do you have any war stories about Flux or ArgoCD? Any time they failed miserably? Did you have to fix your cluster manually? We want to hear those stories!&lt;/p&gt;

&lt;h1&gt;
  
  
   Why do we ask?
&lt;/h1&gt;

&lt;p&gt;We are exploring this GitOps trend at Playtomic. We have two k8s clusters running with Flux and ArgoCD. We have a &lt;a href="https://github.com/fluxcd/flux2-multi-tenancy" rel="noopener noreferrer"&gt;multi-tenancy setup&lt;/a&gt;; that is, one repo for the cluster itself and one repo for every application we deploy in it. &lt;a href="https://github.com/argoproj-labs/argocd-image-updater" rel="noopener noreferrer"&gt;Flux&lt;/a&gt; and &lt;a href="https://github.com/argoproj-labs/argocd-image-updater" rel="noopener noreferrer"&gt;ArgoCD&lt;/a&gt; image updaters are in place.&lt;/p&gt;

&lt;p&gt;To be honest, both are pretty similar and either of them would work for us. If you search for comparisons between Flux and ArgoCD, you will only get a few boring feature lists. So... we are looking for first-hand experiences. &lt;/p&gt;

&lt;p&gt;Why did you choose one over the other? Did you have the chance to try the other one?&lt;/p&gt;

</description>
      <category>help</category>
      <category>discuss</category>
      <category>flux</category>
      <category>argocd</category>
    </item>
    <item>
      <title>Hip hop to code</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 14 Sep 2021 20:35:21 +0000</pubDate>
      <link>https://dev.to/sgmoratilla/music-to-code-1ee4</link>
      <guid>https://dev.to/sgmoratilla/music-to-code-1ee4</guid>
      <description>&lt;p&gt;I have read tons of posts about music to code. &lt;/p&gt;

&lt;p&gt;Some people find focus in the &lt;a href="https://rainymood.com/" rel="noopener noreferrer"&gt;sound of the rain&lt;/a&gt;. Some prefer electronic music (techno, house, ...). Some use &lt;a href="https://open.spotify.com/playlist/1PKYiQbbX3Fak5c9SiYpFQ?si=d72d082449004942" rel="noopener noreferrer"&gt;lo-fi&lt;/a&gt; (I am thinking of you &lt;a class="mentioned-user" href="https://dev.to/angelolloqui"&gt;@angelolloqui&lt;/a&gt;). I have a few crazy friends listening constantly to heavy metal (maybe they want to &lt;a href="https://www.youtube.com/watch?v=DBwgX8yBqsw" rel="noopener noreferrer"&gt;destroy everything&lt;/a&gt; after all). &lt;/p&gt;

&lt;p&gt;I have never heard anyone propose my new favorite genre: instrumental hip hop. Specifically, &lt;a href="https://open.spotify.com/album/3GuOSbmbLphob1qkJ5aj80?si=UEqVNLilSLid5iuPiWJrkA&amp;amp;dl_branch=1" rel="noopener noreferrer"&gt;freestyle beats&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;If you are going to give this kind of music just one shot, &lt;a href="https://www.youtube.com/watch?v=DbPAtxJdhCE" rel="noopener noreferrer"&gt;check this one&lt;/a&gt;.&lt;br&gt;
&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/DbPAtxJdhCE"&gt;
&lt;/iframe&gt;
&lt;br&gt;
If it doesn't make you move your neck, you're already dead inside 🤪&lt;/p&gt;

&lt;p&gt;Jokes aside, what kind of music do you listen to while coding? Do you have something new that can surprise us? Would you like to share some beats with me? 😇&lt;/p&gt;

&lt;p&gt;Header: &lt;a href="https://unsplash.com/photos/Kj4o6jCPulI" rel="noopener noreferrer"&gt;Kaysha&lt;/a&gt;&lt;/p&gt;

</description>
      <category>music</category>
      <category>hiphop</category>
      <category>focus</category>
    </item>
    <item>
      <title>The path to observability</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Mon, 02 Aug 2021 09:21:01 +0000</pubDate>
      <link>https://dev.to/playtomic/the-path-to-observability-25e7</link>
      <guid>https://dev.to/playtomic/the-path-to-observability-25e7</guid>
      <description>&lt;p&gt;o11y = observability = logs, metrics, traces&lt;/p&gt;

&lt;p&gt;This post is a summary of the steps we unconsciously followed in Playtomic when digging into the world of SRE (Site Reliability Engineering).&lt;/p&gt;

&lt;h2&gt;
  
  
  Logs
&lt;/h2&gt;

&lt;p&gt;Most people are already comfortable with logs. I have always been a fan of the 12-factor app design, and it &lt;a href="https://12factor.net/logs" rel="noopener noreferrer"&gt;has a chapter about logs&lt;/a&gt;: just write them to stdout and treat them as a stream. As our services run in Docker containers, we simply read the logs from Docker. With Filebeat + Logstash you can massage the logs and forward them somewhere else (Elasticsearch, Logz.io, Datadog, ...).&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;anemone_configuration.1.ip.eu-central-1.compute.internal    | 2021-05-25 08:44:44.080  INFO [configuration-service,65360f43df5797ee,65360f43df5797ee,3187176603182004136,7845016010104016623,767246] 1 --- [  XNIO-1 task-5] c.p.a.s.IgnorableRequestLoggingFilter    : After request [GET uri=/v2/status/version_control?app_name=playtomic&amp;amp;app_version=3.13.0&amp;amp;device_model=iPhone&amp;amp;os_version=14.5.1&amp;amp;platform=ios;user=one.user@gmail.com;agent=iOS 14.5.1;ms=1]
anemone_configuration.1.ip.eu-central-1.compute.internal    | 2021-05-25 08:44:44.087  INFO [configuration-service,5fe21e342fb8cac9,5fe21e342fb8cac9,4868272858418826990,131548911514375389,767246] 1 --- [  XNIO-1 task-6] c.p.a.s.IgnorableRequestLoggingFilter    : After request [GET uri=/v2/configuration;user=one.user@gmail.com;agent=iOS 14.5.1;ms=8]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
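
&lt;p&gt;As a sketch, shipping logs this way could look like the following hypothetical Filebeat fragment (the input type and output section are standard Filebeat options, but the paths, hostname, and port are placeholders, not our actual config):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;filebeat.inputs:
  - type: container            # read logs written by the Docker runtime
    paths:
      - /var/lib/docker/containers/*/*.log
output.logstash:
  hosts: ["logstash:5044"]     # Logstash massages and forwards downstream
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;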



&lt;h2&gt;
  
  
  Metrics
&lt;/h2&gt;

&lt;p&gt;Metrics are also well known in the community, thanks to Prometheus and Grafana. What is less known, and super useful, is having them correlated somehow,&lt;br&gt;
typically via tags/attributes that identify hosts, containers, environments, ...&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbibntz09wd5dcq8bztkq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbibntz09wd5dcq8bztkq.png" title="Grafana Memory" alt="Grafana Memory Chart" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Traces
&lt;/h2&gt;

&lt;p&gt;Traces are probably the newest member of the trio. &lt;/p&gt;

&lt;p&gt;In Playtomic, we followed the standard path: we started with an in-house ELK stack (for logs) and Prometheus (+ Grafana). But we developers hate maintaining that infra: we run out of space, indexes get corrupted, ... it is quite distracting. So we got rid of the in-house versions and moved those services to Datadog. &lt;/p&gt;

&lt;p&gt;At that moment, we were already using distributed tracing to inject the trace and span ids into our logs, so we could follow requests across our microservices. With the tracer already in place and Datadog in use, the natural next step was using traces to learn more about our platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hr5b7zuh1j887536ni2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hr5b7zuh1j887536ni2.png" title="Traces" alt="Datadog Traces" width="800" height="550"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  What's next?
&lt;/h2&gt;

&lt;p&gt;This SRE world is huge. Where are we heading to?&lt;/p&gt;

&lt;h3&gt;
  
  
  SLIs, SLOs, SLAs
&lt;/h3&gt;

&lt;p&gt;Once you have your data (especially metrics), you can monitor your system. Welcome to the world of SLIs, SLOs, and SLAs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SLI (Service Level Indicator): the number, for the tech team. Example: error_rate = % of requests with status code &amp;gt;= 500.&lt;/li&gt;
&lt;li&gt;SLO (Service Level Objective): the internal target, for the tech team. Example: keep error_rate &amp;lt; 1%.&lt;/li&gt;
&lt;li&gt;SLA (Service Level Agreement): the agreement with the customer, usually looser than the SLO. Example: keep error_rate &amp;lt; 5%.&lt;/li&gt;
&lt;/ul&gt;
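
&lt;p&gt;As an illustration, the error_rate SLI could be computed with a Prometheus query similar to this one (assuming a request counter named http_requests_total with a code label; the metric and label names are hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sum(rate(http_requests_total{code=~"5.."}[5m]))
  / sum(rate(http_requests_total[5m]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;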

&lt;p&gt;SLOs are great because they give you a hint about when to spend more time on stability (paying down technical debt, investigating problems, ...).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvztg8w0tsvvb9o5tey1v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvztg8w0tsvvb9o5tey1v.png" title="Monitor" alt="Datadog Monitor" width="800" height="257"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  RUM: Real User Monitoring
&lt;/h3&gt;

&lt;p&gt;One step further: what if you could connect the traces in your clients with the traces in your server? That is where we are right now.  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Liquibase: Don't use ISO dates as changeset ids with yml</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 21 Jan 2020 10:09:01 +0000</pubDate>
      <link>https://dev.to/sgmoratilla/liquibase-don-t-use-iso-dates-as-changeset-ids-4c9e</link>
      <guid>https://dev.to/sgmoratilla/liquibase-don-t-use-iso-dates-as-changeset-ids-4c9e</guid>
      <description>&lt;p&gt;After upgrading a service from java 8 to java 11, our liquibase got crazy and tried to apply every change to the database from the beginning. But they were already applied and no changes were made to the database. &lt;br&gt;
We discovered that using ISO dates as identifiers of the changeset was a bad idea. The hard way.&lt;/p&gt;
&lt;h2&gt;
  
  
   Liquibase Changesets
&lt;/h2&gt;

&lt;p&gt;Changesets in Liquibase are, as the name implies, sets of changes that must be applied to a database all at once (or not at all).&lt;/p&gt;

&lt;p&gt;For Liquibase to be able to tell whether a changeset has already been applied, it requires you to give each one a unique identifier.&lt;/p&gt;

&lt;p&gt;By convention, we started using the ISO date on which the change was written (format yyyy-mm-dd). &lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;databaseChangeLog:
- changeSet:
    id: 2019-06-06
    author: sergiomoratilla
    comments: Adding new value to know which client was used to book.
    changes:
    - modifyDataType:
        tableName: reservation
        columnName: client
        newDataType: ENUM('WEB_DESKTOP', 'WEB_MOBILE', 'APP_ANDROID', 'APP_IOS', 'UNKNOWN') NOT NULL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Liquibase stores the applied changesets in a table called &lt;em&gt;DATABASECHANGELOG&lt;/em&gt; within your schema. So we were expecting a row like&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"2019-06-06"    sergiomoratilla /liquibase/changelogs/this-example.yml ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  It's a trap
&lt;/h2&gt;

&lt;p&gt;Liquibase's YAML parser parses the changeset id as a Date, and then uses toString() to generate the id. That smells, because you have no control over that format... and that is exactly what happened.&lt;/p&gt;

&lt;p&gt;Instead of storing "2019-06-06" we got "Wed Jun 06 00:00:00 GMT 2019". &lt;/p&gt;

&lt;p&gt;After upgrading to Java 11, the behaviour of toString() changed, and it now returns "Wed Jun 06 02:00:00 CEST 2019". It is exactly the same instant, so the value is not wrong, but it is a bit weak to trust your ids to default formatting.&lt;/p&gt;
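
&lt;p&gt;The underlying problem can be reproduced without Liquibase: the very same java.util.Date renders differently depending on the JVM's default time zone at the moment toString() is called (a minimal sketch; the epoch value below encodes 2019-06-06T00:00:00Z):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.util.Date;
import java.util.TimeZone;

public class DateIdDemo {
    public static void main(String[] args) {
        // Same instant, rendered twice under different default time zones
        Date id = new Date(1559779200000L); // 2019-06-06T00:00:00Z
        TimeZone.setDefault(TimeZone.getTimeZone("GMT"));
        System.out.println(id); // e.g. Thu Jun 06 00:00:00 GMT 2019
        TimeZone.setDefault(TimeZone.getTimeZone("Europe/Madrid"));
        System.out.println(id); // e.g. Thu Jun 06 02:00:00 CEST 2019
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;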

&lt;h2&gt;
  
  
   Solutions
&lt;/h2&gt;

&lt;p&gt;Don't use ISO dates as ids (if you are using the yml format for your changelogs). Probably most of you weren't doing that anyway.&lt;/p&gt;

&lt;p&gt;When we started to have several changes on the same date, we decided to change the format to yyyymmdd-n, where n is an incrementing integer.&lt;/p&gt;
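
&lt;p&gt;For example, the changeset from the beginning of this post would get an id like this (yyyymmdd-n is parsed as a plain string, not as a date):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;databaseChangeLog:
- changeSet:
    id: 20190606-1
    author: sergiomoratilla
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;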

&lt;p&gt;What if you are already using them? I suggest replacing the ids in your changelog files with the ids already stored in your database. And change your convention for new files!&lt;/p&gt;

</description>
      <category>liquibase</category>
      <category>java</category>
    </item>
    <item>
      <title>Weather data providers: which one?</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Fri, 27 Dec 2019 11:46:38 +0000</pubDate>
      <link>https://dev.to/playtomic/weather-data-providers-which-one-96d</link>
      <guid>https://dev.to/playtomic/weather-data-providers-which-one-96d</guid>
      <description>&lt;p&gt;We are planning to provide our users with weather forecast before they book at Playtomic.&lt;/p&gt;

&lt;p&gt;To do that, we need a weather data provider O:) We have looked into these three (probably the best known):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.accuweather.com/" rel="noopener noreferrer"&gt;https://www.accuweather.com/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openweathermap.org/" rel="noopener noreferrer"&gt;https://openweathermap.org/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://darksky.net/" rel="noopener noreferrer"&gt;https://darksky.net/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pricing is not an issue right now. We intend to cache our queries. What I'm worried about is &lt;em&gt;data accuracy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Any experience that you would like to share? It would be really appreciated!&lt;/p&gt;

</description>
      <category>help</category>
      <category>api</category>
      <category>weather</category>
    </item>
    <item>
      <title>A way of testing code based on time in a Spring Boot Application</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Fri, 20 Dec 2019 12:42:48 +0000</pubDate>
      <link>https://dev.to/sgmoratilla/a-way-of-testing-code-based-on-time-in-a-spring-boot-application-3n3f</link>
      <guid>https://dev.to/sgmoratilla/a-way-of-testing-code-based-on-time-in-a-spring-boot-application-3n3f</guid>
      <description>&lt;p&gt;Every time I see a &lt;code&gt;Instant.now() / new Date() / something that creates a date based on the current time&lt;/code&gt; I tremble. How do you expect to test that in a simple / coherent / easy to follow way?&lt;/p&gt;

&lt;p&gt;When I see that code, I usually see tests using now() + duration to check that everything works. So the test is not the same when run today as when run tomorrow.&lt;/p&gt;

&lt;p&gt;Wouldn't it be better to be able to "fix" the time and test your code with exact times/periods as well?&lt;/p&gt;

&lt;p&gt;So I've been fighting in my team for everyone to use a ClockProvider to get the "current" time. That way I can instantiate my own ClockProvider in unit tests, and override the application's default ClockProvider in integration tests. &lt;/p&gt;

&lt;p&gt;The former is pretty easy to understand, so I'm not writing an example of it. This is an example of the latter.&lt;/p&gt;

&lt;p&gt;For instance, my application would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@SpringBootApplication&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BookingsApplication&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;AbstractAnemoneApplication&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="nd"&gt;@ConditionalOnMissingBean&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;ClockProvider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="c1"&gt;// to allow tests to overwrite it&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;ClockProvider&lt;/span&gt; &lt;span class="nf"&gt;getClockProvider&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Clock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;systemDefaultZone&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// An example of method using ClockProvider&lt;/span&gt;
    &lt;span class="kt"&gt;boolean&lt;/span&gt; &lt;span class="nf"&gt;isItJanuary&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@Nonnull&lt;/span&gt; &lt;span class="nc"&gt;ClockProvider&lt;/span&gt; &lt;span class="n"&gt;clockProvider&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;clockProvider&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getClock&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;instant&lt;/span&gt;&lt;span class="o"&gt;().&lt;/span&gt;&lt;span class="na"&gt;atZone&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;UTC&lt;/span&gt;&lt;span class="o"&gt;).&lt;/span&gt;&lt;span class="na"&gt;getMonth&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nc"&gt;Month&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;JANUARY&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And my IT tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@SpringBootTest&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;webEnvironment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SpringBootTest&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;WebEnvironment&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;RANDOM_PORT&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;classes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nc"&gt;FixingClockConfiguration&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;BookingsApplication&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;})&lt;/span&gt;
&lt;span class="nd"&gt;@ExtendWith&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;SpringExtension&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BookingsApplicationIT&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@TestConfiguration&lt;/span&gt;
    &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;FixingClockConfiguration&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="nd"&gt;@Bean&lt;/span&gt;
        &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;ClockProvider&lt;/span&gt; &lt;span class="nf"&gt;getClockProvider&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;ZonedDateTime&lt;/span&gt; &lt;span class="n"&gt;fixedInstant&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ZonedDateTime&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2019&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;04&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;14&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;00&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;00&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mo"&gt;00&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="no"&gt;UTC&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nc"&gt;Clock&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;fixed&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;fixedInstant&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;toInstant&lt;/span&gt;&lt;span class="o"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;fixedInstant&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;getZone&lt;/span&gt;&lt;span class="o"&gt;());&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// ... your tests based on that date&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Image: &lt;a href="https://unsplash.com/photos/FfbVFLAVscw" rel="noopener noreferrer"&gt;https://unsplash.com/photos/FfbVFLAVscw&lt;/a&gt;&lt;/p&gt;

</description>
      <category>spring</category>
      <category>test</category>
      <category>time</category>
    </item>
    <item>
      <title>Concurrency in Spring's StreamListener and Kafka</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 17 Dec 2019 19:08:20 +0000</pubDate>
      <link>https://dev.to/playtomic/concurrency-in-spring-s-streamlistener-and-kafka-4lf1</link>
      <guid>https://dev.to/playtomic/concurrency-in-spring-s-streamlistener-and-kafka-4lf1</guid>
      <description>&lt;p&gt;TL;DR: go to Use Configuration&lt;/p&gt;

&lt;p&gt;Another too fast, too furious post. I have spent a few hours trying to make my event processor multi-threaded, and it's so damn easy that I don't want anyone else to spend more than a few minutes on it.&lt;/p&gt;

&lt;p&gt;We are using the Spring Cloud Stream layer to configure our Kafka consumers.&lt;/p&gt;

&lt;p&gt;For example, the configuration for a processor named 'reservations-input' connected to a Kafka topic 'reservations-topic' would look similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spring.cloud.stream&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;bindings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;reservations-input&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;content-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/json&lt;/span&gt;
      &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reservations-topic&lt;/span&gt;
      &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;consumer-service-group&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And your class to start processing those events:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@EnableBinding&lt;/span&gt;&lt;span class="o"&gt;({&lt;/span&gt;
    &lt;span class="nc"&gt;MessagingConfiguration&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ReservationTopic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;class&lt;/span&gt;&lt;span class="o"&gt;})&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MessagingConfiguration&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;interface&lt;/span&gt; &lt;span class="nc"&gt;ReservationTopic&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="nc"&gt;String&lt;/span&gt; &lt;span class="no"&gt;INPUT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"reservations-channel"&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

        &lt;span class="nd"&gt;@Input&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="nc"&gt;SubscribableChannel&lt;/span&gt; &lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="nd"&gt;@Service&lt;/span&gt;
&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ReservationProcessor&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="nd"&gt;@StreamListener&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;MessagingConfiguration&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;ReservationTopic&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;INPUT&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;handle&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nd"&gt;@Nonnull&lt;/span&gt; &lt;span class="nc"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;ReservationEvent&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;reservationMessage&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// your stuff&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Easy peasy. The only problem here is concurrency. &lt;/p&gt;

&lt;p&gt;If you have used Kafka before, you know that the number of partitions in your topic limits the concurrency:&lt;br&gt;
each partition has a single consumer (within a consumer group).&lt;/p&gt;

&lt;p&gt;I don't know whether (or where) I read it, but I assumed that my application would spawn as many threads/consumers as my topic has partitions. I was wrong: by default, Spring generates a single-threaded processor. &lt;/p&gt;

&lt;p&gt;Solutions? Run more instances of your application, or configure &lt;code&gt;ConcurrentKafkaListenerContainerFactory&lt;/code&gt; to spawn more threads (see &lt;a href="https://docs.spring.io/spring-kafka/docs/2.3.x/reference/html/#container-factory" rel="noopener noreferrer"&gt;https://docs.spring.io/spring-kafka/docs/2.3.x/reference/html/#container-factory&lt;/a&gt;).&lt;/p&gt;
&lt;h1&gt;
  
  
  Option 1: create your own instance of ConcurrentKafkaListenerContainerFactory.
&lt;/h1&gt;

&lt;p&gt;The only hint I found in the documentation and on Stack Overflow was to declare your own bean of type &lt;code&gt;ConcurrentKafkaListenerContainerFactory&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;    &lt;span class="nd"&gt;@Bean&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentKafkaListenerContainerFactory&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;kafkaListenerContainerFactory&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
        &lt;span class="nd"&gt;@Nonnull&lt;/span&gt; &lt;span class="nc"&gt;ConsumerFactory&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;consumerFactory&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="nc"&gt;ConcurrentKafkaListenerContainerFactory&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nc"&gt;String&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Object&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;factory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ConcurrentKafkaListenerContainerFactory&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
        &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setConsumerFactory&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;consumerFactory&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
        &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;setConcurrency&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;factory&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I am not keen on instantiating my own beans to configure something that seems this basic: it is easy to accidentally overwrite Spring defaults I am relying on, and it is more code to maintain...&lt;/p&gt;

&lt;p&gt;There has to be a way through configuration.&lt;/p&gt;

&lt;h1&gt;
  
  
  Option 2: use configuration
&lt;/h1&gt;

&lt;p&gt;Getting back to configuration: what we write under &lt;code&gt;spring.cloud.stream.bindings.channel-name.consumer&lt;/code&gt; ends up in the Kafka configuration. So I tried to set the concurrency property. That is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;spring.cloud.stream&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;bindings&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;reservations-input&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;content-type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;application/json&lt;/span&gt;
      &lt;span class="na"&gt;consumer.concurrency&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
      &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;reservations-topic&lt;/span&gt;
      &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;consumer-service-group&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Starting our application, we see that we have 3 consumer containers, one per partition. &lt;br&gt;
Profit!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;    December 17th 2019, 14:22:57.274    2019-12-17 13:22:57.274  INFO &lt;span class="o"&gt;[&lt;/span&gt;consumer-service,,,] 1 &lt;span class="nt"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;container-1-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder&lt;span class="nv"&gt;$1&lt;/span&gt;  : partitions assigned: &lt;span class="o"&gt;[&lt;/span&gt;reservations-topic-1]
    December 17th 2019, 14:22:57.259    2019-12-17 13:22:57.259  INFO &lt;span class="o"&gt;[&lt;/span&gt;consumer-service,,,] 1 &lt;span class="nt"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;container-2-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder&lt;span class="nv"&gt;$1&lt;/span&gt;  : partitions assigned: &lt;span class="o"&gt;[&lt;/span&gt;reservations-topic-2]
    December 17th 2019, 14:22:57.256    2019-12-17 13:22:57.256  INFO &lt;span class="o"&gt;[&lt;/span&gt;consumer-service,,,] 1 &lt;span class="nt"&gt;---&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;container-3-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder&lt;span class="nv"&gt;$1&lt;/span&gt;  : partitions assigned: &lt;span class="o"&gt;[&lt;/span&gt;reservations-topic-3]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>spring</category>
      <category>kafka</category>
      <category>concurrency</category>
      <category>stream</category>
    </item>
    <item>
      <title>BlockingQueue and ExecutorService</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Mon, 16 Dec 2019 13:27:23 +0000</pubDate>
      <link>https://dev.to/playtomic/linkedblockingqueue-and-executorservice-1pc5</link>
      <guid>https://dev.to/playtomic/linkedblockingqueue-and-executorservice-1pc5</guid>
      <description>&lt;p&gt;This is a quick and dirty post, but I have promised to publish everything I research at Playtomic.&lt;/p&gt;

&lt;p&gt;We were having a discussion about how to limit the number of tasks an ExecutorService can enqueue. We were trying to control how much memory the service can use, to avoid out-of-memory errors. This service accepts messages from a Kafka topic and from an API; both operations end in the same internal, multi-threaded logic.&lt;/p&gt;

&lt;p&gt;There is a kind of Queue, BlockingQueue, whose insert operation can wait until a spot in the queue is free. You might expect that an ExecutorService built on a BlockingQueue would block on submit until the queue has room. But it does not: the ExecutorService rejects the task instead.&lt;/p&gt;

&lt;p&gt;You know that hours of trial and error can save you hours of reading the manual. I'm proud to say that I have read the manual first this time.&lt;/p&gt;

&lt;p&gt;This test shows what happens:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BlockingQueueExecutorServiceTest&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;submitTest&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Worst case scenario: accept only 1 thread in the queue.&lt;/span&gt;
        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;nThreads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;


        &lt;span class="nc"&gt;ExecutorService&lt;/span&gt; &lt;span class="n"&gt;exService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ThreadPoolExecutor&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="mi"&gt;0L&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="nc"&gt;TimeUnit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;MILLISECONDS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LinkedBlockingQueue&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;));&lt;/span&gt;


        &lt;span class="c1"&gt;// Full this with tasks&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;WaitingTask&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WaitingTask&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;exService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;submit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WaitingTask&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;Runnable&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;WaitingTask&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

            &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Running task {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
                &lt;span class="nc"&gt;Thread&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sleep&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;InterruptedException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@550dbc7a[Not completed, task = java.util.concurrent.Executors$RunnableAdapter@4dbb42b7[Wrapped task = com.playtomic.anemone.matchmaker.service.BlockingQueueExecutorServiceTest$WaitingTask@66f57048]] rejected from java.util.concurrent.ThreadPoolExecutor@21282ed8[Running, pool size = 1, active threads = 1, queued tasks = 1, completed tasks = 0]

    at java.base/java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2055)
    at java.base/java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:825)
    at java.base/java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1355)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
    at com.playtomic.anemone.matchmaker.service.BlockingQueueExecutorServiceTest.submitTest(BlockingQueueExecutorServiceTest.java:28)
... more boring stacktrace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you want submit() to wait until the queue has room, you have to provide a RejectedExecutionHandler that does that, for example Spring Integration's CallerBlocksPolicy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;BlockingQueueExecutorServiceTest&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

    &lt;span class="nd"&gt;@Test&lt;/span&gt;
    &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;submitTest&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Worst case scenario: accept only 1 thread in the queue.&lt;/span&gt;
        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;nThreads&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;

        &lt;span class="nc"&gt;CallerBlocksPolicy&lt;/span&gt; &lt;span class="n"&gt;policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;CallerBlocksPolicy&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// 10secs&lt;/span&gt;
        &lt;span class="nc"&gt;ExecutorService&lt;/span&gt; &lt;span class="n"&gt;exService&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ThreadPoolExecutor&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="mi"&gt;0L&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="nc"&gt;TimeUnit&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;MILLISECONDS&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
            &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LinkedBlockingQueue&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class="n"&gt;nThreads&lt;/span&gt;&lt;span class="o"&gt;),&lt;/span&gt; 
            &lt;span class="n"&gt;policy&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;


        &lt;span class="c1"&gt;// Full this with tasks&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt; &lt;span class="o"&gt;++&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="nc"&gt;WaitingTask&lt;/span&gt; &lt;span class="n"&gt;t&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;WaitingTask&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="n"&gt;exService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;submit&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;t&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;

    &lt;span class="kd"&gt;private&lt;/span&gt; &lt;span class="kd"&gt;static&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;WaitingTask&lt;/span&gt; &lt;span class="kd"&gt;implements&lt;/span&gt; &lt;span class="nc"&gt;Runnable&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

        &lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="nf"&gt;WaitingTask&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;int&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
            &lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;index&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;;&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;

        &lt;span class="nd"&gt;@Override&lt;/span&gt;
        &lt;span class="kd"&gt;public&lt;/span&gt; &lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

            &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
                &lt;span class="n"&gt;log&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;info&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Running task {}"&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;index&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
                &lt;span class="nc"&gt;Thread&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;sleep&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
            &lt;span class="o"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;InterruptedException&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;

            &lt;span class="o"&gt;}&lt;/span&gt;
        &lt;span class="o"&gt;}&lt;/span&gt;
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And this time we get:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;12:21:32.409 [pool-1-thread-1] INFO com.playtomic.anemone.matchmaker.service.BlockingQueueExecutorServiceTest - Running task 0
12:21:32.422 [main] DEBUG org.springframework.integration.util.CallerBlocksPolicy - Attempting to queue task execution for 10000 milliseconds
12:21:33.420 [pool-1-thread-1] INFO com.playtomic.anemone.matchmaker.service.BlockingQueueExecutorServiceTest - Running task 1
12:21:33.420 [main] DEBUG org.springframework.integration.util.CallerBlocksPolicy - Task execution queued
12:21:33.421 [main] DEBUG org.springframework.integration.util.CallerBlocksPolicy - Attempting to queue task execution for 10000 milliseconds
12:21:34.423 [pool-1-thread-1] INFO com.playtomic.anemone.matchmaker.service.BlockingQueueExecutorServiceTest - Running task 2
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
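&lt;p&gt;If you'd rather not pull Spring Integration in just for this, the same idea fits in a few lines of plain JDK: a RejectedExecutionHandler that parks the submitting thread on the queue's blocking put(). A sketch under my own naming (BlockingSubmissionPolicy is not a JDK class), with the known caveat that a task parked this way skips the executor's shutdown checks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Blocks the submitting thread until the executor's queue has room,
// instead of throwing RejectedExecutionException.
public class BlockingSubmissionPolicy implements RejectedExecutionHandler {

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (executor.isShutdown()) {
            throw new RejectedExecutionException("Executor has been shut down");
        }
        try {
            // put() waits for a free slot, unlike the offer() used internally by execute().
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("Interrupted while waiting for queue space", e);
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;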



&lt;p&gt;Header: &lt;a href="https://unsplash.com/photos/Kj2SaNHG-hg" rel="noopener noreferrer"&gt;https://unsplash.com/photos/Kj2SaNHG-hg&lt;/a&gt;&lt;/p&gt;

</description>
      <category>java</category>
      <category>threading</category>
    </item>
    <item>
      <title>Choosing a graph database</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 03 Dec 2019 18:39:36 +0000</pubDate>
      <link>https://dev.to/playtomic/choosing-a-graph-database-54dn</link>
      <guid>https://dev.to/playtomic/choosing-a-graph-database-54dn</guid>
      <description>&lt;p&gt;Disclaimer: &lt;em&gt;I am writing this because I throw a question &lt;a href="https://dev.to/sgmoratilla/graph-databases-which-one-2m20"&gt;about what graph database we should use at Playtomic&lt;/a&gt; to dev.to, and I would like to give something back to the community. This is the process of how we decided.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When starting with a new piece of tech, the question is always the same: which of the alternatives? Most of you have had to choose a relational database: MySQL vs PostgreSQL vs Oracle... Maybe, if you were lucky enough, a non-relational one: MongoDB (document) vs Redis (key-value) vs Cassandra (columns) vs HBase (columns)...&lt;/p&gt;

&lt;p&gt;What about graph databases (which are non-relational as well)? I only knew Neo4j. With every cloud vendor offering its own proprietary solution (e.g. Amazon Neptune), the choice is even harder.&lt;/p&gt;

&lt;p&gt;We prefer open-source solutions over proprietary ones: more community, less vendor lock-in. We tend to host on cloud services; we don't have an infrastructure team and we don't want to spend time on maintenance.&lt;/p&gt;

&lt;p&gt;In our philosophy, experiments must be goal-oriented, not tech-oriented. What's the purpose of testing a graph database at Playtomic? We want to explore whether we can model relations between players better than we already do with a relational database. The final aim is a recommendation system: new players to meet, new venues to play at... all based on the relations with the players you already know.&lt;/p&gt;
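&lt;p&gt;To make that concrete, the kind of query we have in mind, sketched here in Neo4j's Cypher over an invented Player/PLAYED_WITH schema, would recommend people two hops away:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Players my partners have played with, but I haven't (schema is illustrative)
MATCH (me:Player {id: $playerId})-[:PLAYED_WITH]-(friend)-[:PLAYED_WITH]-(candidate)
WHERE candidate &amp;lt;&amp;gt; me AND NOT (me)-[:PLAYED_WITH]-(candidate)
RETURN candidate.name, count(DISTINCT friend) AS mutualPlayers
ORDER BY mutualPlayers DESC
LIMIT 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;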

&lt;p&gt;As our team is small, the time we can spend on experiments is limited too, so I have to narrow down the options: Neptune is proprietary and pretty unknown to me, so I will drop it. I'm not very seduced by OrientDB either, as it looks like too general-purpose a database. &lt;/p&gt;

&lt;p&gt;OrientDB:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema-less&lt;/li&gt;
&lt;li&gt;SQL for queries (big win IMHO).&lt;/li&gt;
&lt;li&gt;Great web console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Janusgraph:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema-based.&lt;/li&gt;
&lt;li&gt;Gremlin for queries (a functional graph traversal language).&lt;/li&gt;
&lt;li&gt;Drawback: you have to choose a storage backend: HBase vs Cassandra. I'm not sure about the implications.&lt;/li&gt;
&lt;li&gt;No web console; you have to use a third-party application to visualise data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Neo4j:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Schema-less (I'd rather say it only supports key-value properties).&lt;/li&gt;
&lt;li&gt;Cypher for queries (its own query language).&lt;/li&gt;
&lt;li&gt;Web console.&lt;/li&gt;
&lt;li&gt;Hosted by GraphStory and GrapheneDB (cheaper).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All three are available in the AWS Marketplace. Neo4j has an online sandbox, which the others don't. You can play with all of them using Docker images.&lt;/p&gt;

&lt;p&gt;At this point, I think any of the three would give us all we need, but Neo4j seems the easiest to test.&lt;br&gt;
Searching for how to integrate them with Micronaut and Spring Boot, Neo4j is the most popular. That doesn't mean it is better, just that when I hit a problem (and I will), someone has already been there.&lt;/p&gt;

&lt;p&gt;Thinking about putting this experiment into production, we found some managed Neo4j hostings that allow us to start cheap and scale later if we are happy with it.&lt;/p&gt;

&lt;p&gt;So Neo4j was the chosen one for our experiment. We will probably write about the result in a few weeks x)&lt;/p&gt;

</description>
      <category>graph</category>
      <category>neo4j</category>
      <category>playtomic</category>
    </item>
    <item>
      <title>Graph databases: which one?</title>
      <dc:creator>Sergio Garcia Moratilla</dc:creator>
      <pubDate>Tue, 15 Oct 2019 10:44:32 +0000</pubDate>
      <link>https://dev.to/sgmoratilla/graph-databases-which-one-2m20</link>
      <guid>https://dev.to/sgmoratilla/graph-databases-which-one-2m20</guid>
      <description>&lt;p&gt;I have been considering graph databases for a while. I have minor experience with Neo4j, but I am a total newbie about what alternatives are in the market.&lt;/p&gt;

&lt;p&gt;I see Amazon has Neptune, there's OrientDB, MongoDB has incorporated graph functions...&lt;/p&gt;

&lt;p&gt;Has anyone built a product on top of a graph database and would like to share how the experience was? Should I stick to Neo4j?&lt;/p&gt;

</description>
      <category>help</category>
    </item>
  </channel>
</rss>
