<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rubén Martín Pozo</title>
    <description>The latest articles on DEV Community by Rubén Martín Pozo (@rmarpozo).</description>
    <link>https://dev.to/rmarpozo</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F69551%2F677cfcc8-b2d4-4df9-b20a-de7fc22a971e.jpeg</url>
      <title>DEV Community: Rubén Martín Pozo</title>
      <link>https://dev.to/rmarpozo</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rmarpozo"/>
    <language>en</language>
    <item>
      <title>Enhancing Data Security with MongoDB: A Dive into Cryptography and CSFLE at Ovianta</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Mon, 02 Dec 2024 10:20:00 +0000</pubDate>
      <link>https://dev.to/ovianta/enhancing-data-security-with-mongodb-a-dive-into-cryptography-and-csfle-at-ovianta-47d</link>
      <guid>https://dev.to/ovianta/enhancing-data-security-with-mongodb-a-dive-into-cryptography-and-csfle-at-ovianta-47d</guid>
      <description>&lt;p&gt;In the digital age, safeguarding sensitive information is not optional. It's essential. At Ovianta, a SaaS solution empowering doctors with streamlined workflows and intelligent insights, protecting patient data is a top priority. MongoDB's cryptographic tools, particularly Client-Side Field Level Encryption (CSFLE), offer powerful methods to secure data in-use.&lt;/p&gt;

&lt;p&gt;In this article, we'll explore MongoDB's CSFLE and share how Ovianta leverages encryption to meet stringent data protection requirements while working within the constraints of serverless environments like Vercel.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Client-Side Field Level Encryption?
&lt;/h2&gt;

&lt;p&gt;MongoDB's CSFLE encrypts specific fields on the client side, ensuring sensitive data remains inaccessible to unauthorized parties, even if the database itself is compromised. The approach aligns with compliance standards like GDPR and HIPAA, making it an excellent choice for industries handling sensitive information, such as healthcare.&lt;/p&gt;

&lt;p&gt;CSFLE Highlights:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data confidentiality: Data is encrypted before it leaves the client.&lt;/li&gt;
&lt;li&gt;Field-level granularity: Only sensitive fields are encrypted, leaving the rest of the database searchable.&lt;/li&gt;
&lt;li&gt;Compliance-friendly: Helps meet data protection regulations.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Automatic vs. Manual Encryption
&lt;/h2&gt;

&lt;p&gt;MongoDB supports two CSFLE modes: Automatic Encryption and Manual Encryption.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Automatic Encryption:

&lt;ul&gt;
&lt;li&gt;Simplifies implementation by using MongoDB drivers to handle encryption.&lt;/li&gt;
&lt;li&gt;Requires the installation of an extra library.&lt;/li&gt;
&lt;li&gt;Not compatible with all hosting environments, including serverless platforms like Vercel.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Manual Encryption:

&lt;ul&gt;
&lt;li&gt;Offers fine-grained control by letting developers manage encryption and decryption explicitly.&lt;/li&gt;
&lt;li&gt;Does not rely on additional libraries, making it suitable for environments with strict resource constraints, including serverless platforms like Vercel.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;At Ovianta, we chose manual encryption because automatic encryption's library is incompatible with Vercel's serverless architecture. This decision ensures we maintain robust security without compromising the performance or scalability of our platform.&lt;/p&gt;

&lt;h2&gt;
  
  
  Manual Encryption: How Ovianta Secures Data
&lt;/h2&gt;

&lt;p&gt;At Ovianta, we handle sensitive patient information, such as medical histories and consultation records. Using manual encryption allows us to encrypt this data securely before storing it in MongoDB. Here's how we do it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Key Management:

&lt;ul&gt;
&lt;li&gt;We generate and manage Data Encryption Keys (DEKs) using a secure Key Management System (KMS).&lt;/li&gt;
&lt;li&gt;Our KMS integrates seamlessly with MongoDB, providing a secure mechanism for key storage.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Encryption and Decryption:

&lt;ul&gt;
&lt;li&gt;Data is encrypted using the MongoDB Client Encryption Library before it is sent to the database.&lt;/li&gt;
&lt;li&gt;Authorized services decrypt data when needed, ensuring only specific application workflows can access sensitive information.
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ClientEncryption&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mongodb-client-encryption&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Initialize encryption settings&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ClientEncryption&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;keyVaultNamespace&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;encryption.__keyVault&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;kmsProviders&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;aws&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;accessKeyId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;AWS_ACCESS_KEY_ID&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;secretAccessKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;&amp;lt;AWS_SECRET_ACCESS_KEY&amp;gt;&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Encrypt sensitive patient data&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;encryptedValue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;clientEncryption&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;encrypt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;patientSensitiveData&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;keyId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;keyId&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;algorithm&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// Store encrypted data in MongoDB&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;insertOne&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;sensitiveField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;encryptedValue&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It's also possible to decrypt using the MongoClient directly, without activating full automatic encryption, by using the &lt;strong&gt;bypassAutoEncryption&lt;/strong&gt; property.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secureClient&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MongoClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;uri&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;autoEncryption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;keyVaultNamespace&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="nx"&gt;kmsProviders&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;bypassAutoEncryption&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;collection&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;toArray&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why Ovianta Chose Manual Encryption
&lt;/h2&gt;

&lt;p&gt;Manual encryption provides us with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Flexibility: By managing encryption directly in our code, we avoid dependencies on libraries incompatible with serverless environments.&lt;/li&gt;
&lt;li&gt;Granular control: We can tailor encryption to specific fields and workflows, ensuring efficiency and compliance. Although similar behavior can be achieved with schemas, that would force us into automatic mode, which does not work in serverless environments such as Vercel.&lt;/li&gt;
&lt;li&gt;Portability: Since no special libraries are required, our encryption setup can be easily replicated across various environments.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How CSFLE Benefits Ovianta's Users
&lt;/h2&gt;

&lt;p&gt;For our customers—doctors and healthcare providers—CSFLE means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enhanced privacy: Patient data is encrypted before leaving the client, ensuring it remains confidential even in the unlikely event of a breach.&lt;/li&gt;
&lt;li&gt;Regulatory compliance: By implementing advanced cryptographic measures, Ovianta adheres to stringent healthcare data protection standards, building trust with users.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At Ovianta, securing patient data is central to our mission of empowering healthcare providers with seamless, AI-driven workflows. MongoDB's CSFLE, particularly through manual encryption, allows us to achieve high levels of security while maintaining the flexibility needed for our serverless architecture.&lt;/p&gt;

&lt;p&gt;Whether you're building a healthcare app or managing sensitive user data, MongoDB's encryption options offer a reliable path to compliance and trust. For environments like ours, where automatic encryption isn't an option, manual encryption ensures robust security without compromise.&lt;/p&gt;

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;MongoDB Documentation: &lt;a href="https://www.mongodb.com/docs/manual/core/csfle/fundamentals/automatic-encryption/" rel="noopener noreferrer"&gt;Automatic Encryption&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;MongoDB Documentation: &lt;a href="https://www.mongodb.com/docs/manual/core/csfle/fundamentals/manual-encryption/" rel="noopener noreferrer"&gt;Manual Encryption&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;At &lt;a href="https://ovianta.com" rel="noopener noreferrer"&gt;Ovianta&lt;/a&gt;, we're building a next-generation product for doctors to streamline their consultations using NextJS. Follow us on this journey to learn more about how we're building it.&lt;/p&gt;

</description>
      <category>cryptography</category>
      <category>mongodb</category>
    </item>
    <item>
      <title>Why the JavaScript ecosystem is so vibrant (and a bit chaotic) for a backend dev</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Wed, 30 Oct 2024 11:00:00 +0000</pubDate>
      <link>https://dev.to/ovianta/why-the-javascript-ecosystem-is-so-vibrant-and-a-bit-chaotic-for-a-backend-dev-oge</link>
      <guid>https://dev.to/ovianta/why-the-javascript-ecosystem-is-so-vibrant-and-a-bit-chaotic-for-a-backend-dev-oge</guid>
      <description>&lt;h3&gt;
  
  
  Why the JavaScript Ecosystem is So Vibrant (and a Bit Chaotic) for a Backend Dev
&lt;/h3&gt;

&lt;p&gt;As a backend developer with a background in Java and Spring Boot, stepping into the world of JavaScript felt like entering a parallel universe. JavaScript's ecosystem is dynamic, brimming with creativity, and driven by innovation. In contrast to Java, which is structured and stable, JavaScript thrives in a state of constant flux, fueled by new ideas and ever-evolving tools. For a backend developer used to a world of well-defined patterns and practices, the JavaScript world can feel like a bit of a wild ride, but that's what makes it so exciting.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. JavaScript: A Breath of Fresh Air for Backend Developers
&lt;/h3&gt;

&lt;p&gt;Coming from a Java and Spring Boot background, JavaScript was a bit of a shock to the system. Java offers reliability and structure. There's a defined way to approach most problems and a certain consistency in how frameworks evolve over time. JavaScript, on the other hand, feels like an open playground. In JavaScript, there are often multiple ways to approach a problem, and sometimes, no clear “right” way at all.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Contrast with Java&lt;/strong&gt;: Where Java feels familiar and consistent, JavaScript’s freedom opens up possibilities to experiment with new patterns and creative approaches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptability&lt;/strong&gt;: JavaScript gives developers the flexibility to break free from traditional constraints, offering a range of tools and techniques that keep things fresh and exciting.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;JavaScript's flexibility isn’t just about syntax. It’s a mindset shift. The language encourages innovation and quick pivots, often leading developers to discover more efficient solutions than they might have imagined in a more rigid backend environment. This freedom allows for a sense of creativity that can be incredibly rewarding.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Pros of a Fast-Moving Ecosystem
&lt;/h3&gt;

&lt;p&gt;One of the most fascinating aspects of JavaScript is the sheer speed at which it evolves. The ecosystem is a hub of innovation, with a steady stream of new libraries, frameworks, and tools being released and adopted by the community. JavaScript is in a constant state of reinvention, pushing the envelope to make development faster, easier, and more efficient. Java, in contrast, requires everything to go through a heavier, more complex process before it is adopted by users.&lt;/p&gt;

&lt;p&gt;For a backend developer, this fast-moving ecosystem is a breath of fresh air. It means there's always something new to learn, whether it's a framework like React, Vue, or a server-side solution like Node.js. The community is constantly experimenting and finding better ways to solve common problems, pushing developers to stay up-to-date with the latest advancements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk59h5zgb4np2z6p6tk1.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk59h5zgb4np2z6p6tk1.jpg" alt="Twp devs working on Javascript" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Cons: Chaotic, Unstable, and Ever-Changing
&lt;/h3&gt;

&lt;p&gt;However, the pace of JavaScript's evolution also has its downsides. While Java's stability allows developers to build on a reliable foundation, JavaScript’s constant change can make it feel unstable. Frameworks and libraries rise and fall in popularity, sometimes within just a few months, making it challenging to commit to a particular stack or tool for long-term projects.&lt;/p&gt;

&lt;p&gt;Coming from a much more stable environment, it's difficult to understand what library or solution you should use to solve a particular problem, and that might increase your anxiety while trying out different approaches.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Constantly Changing Tools&lt;/strong&gt;: The fast pace of updates and new releases can make JavaScript feel like a moving target. Just when you've mastered one library or framework, a new version or a whole new approach might come along.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Steep Learning Curve for New Tools&lt;/strong&gt;: With so many options and regular updates, developers are always learning, which can be exhilarating but also overwhelming.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project Abandonment&lt;/strong&gt;: It’s not uncommon for tools or libraries to lose community support or be quickly abandoned, which can be risky for production projects that need long-term reliability.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation&lt;/strong&gt;: Frequently, the documentation is not as deep as I'm used to seeing in Java. That means more exploration and testing until you fully understand how the framework works. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;JavaScript’s experimental nature means that while the ecosystem is highly innovative, it can also be unpredictable. Developers may invest time learning a specific tool only to find that it’s no longer relevant or actively supported. It’s a landscape where you need to stay flexible and be prepared to switch gears when necessary.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;4. Why Embrace JavaScript’s Vibrancy?&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Despite its challenges, JavaScript’s vibrant ecosystem has a lot to offer backend developers. It’s an environment that encourages a different kind of problem-solving, one that’s creative, flexible, and always evolving. Working in JavaScript has made me a more versatile developer. And also, the journey is a lot of fun!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Broader Career Opportunities&lt;/strong&gt;: JavaScript's popularity across both frontend and backend roles (thanks to frameworks like Node.js) creates career flexibility.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fresh Perspective on Development&lt;/strong&gt;: The experience of working in JavaScript provides new insights that can enhance backend development, encouraging a more agile, creative approach.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In the end, the JavaScript ecosystem is a thrilling place to be. It’s unpredictable and sometimes chaotic, but for those who are willing to embrace the changes, it’s also incredibly rewarding. For a backend developer stepping into JavaScript, it’s a journey that promises to challenge, inspire, and expand your horizons—if you’re up for the ride.&lt;/p&gt;

&lt;p&gt;Note: Everything said here applies to TypeScript, too. In fact, it’s even wilder and more fun if you choose to go down the TypeScript path.&lt;/p&gt;




&lt;p&gt;At &lt;a href="https://ovianta.com" rel="noopener noreferrer"&gt;Ovianta&lt;/a&gt;, we're building a next-generation product for doctors to streamline their consultations using NextJS. Follow us on this journey to learn more about how we're building it.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>typescript</category>
      <category>backend</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Redis vs MongoDB, fight!</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Wed, 13 Oct 2021 07:36:28 +0000</pubDate>
      <link>https://dev.to/playtomic/redis-vs-mongodb-fight-1481</link>
      <guid>https://dev.to/playtomic/redis-vs-mongodb-fight-1481</guid>
      <description>&lt;p&gt;Redis is an in-memory, distributed database that has typically been used as a cache in multiple projects. The advantage over other types of databases is that it doesn't persist data to disk, so all the elements are always in memory. That should speed up reads and writes.   &lt;/p&gt;

&lt;p&gt;To understand what having a Redis cluster could offer us, we've run some experiments.&lt;/p&gt;

&lt;h1&gt;
  
  
  Weather experiment
&lt;/h1&gt;

&lt;p&gt;The idea behind this experiment is to test if MongoDB and Redis have a significant performance difference when accessing a cache by key. That means writing and reading to and from the cache by a key without further manipulating the value.    &lt;/p&gt;

&lt;p&gt;The weather service uses MongoDB as a cache to store weather forecasts, not to call the weather provider in every single request. We also added a Redis cache using AWS Elasticache. Let's see how those two compare. &lt;/p&gt;
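&lt;p&gt;The access pattern under test is a plain read-through cache keyed by location. As a rough JavaScript sketch (the &lt;code&gt;fetchForecast&lt;/code&gt; callback stands in for the real weather-provider call, and the in-memory &lt;code&gt;Map&lt;/code&gt; stands in for MongoDB or Redis):&lt;/p&gt;

```javascript
// Read-through cache sketch of the weather service's access pattern:
// look the forecast up by key, and only call the provider on a miss.
const cache = new Map();

function getForecast(city, fetchForecast) {
  if (cache.has(city)) {
    return cache.get(city); // cache hit: no provider call
  }
  const forecast = fetchForecast(city);
  cache.set(city, forecast); // in MongoDB/Redis this write would carry a TTL
  return forecast;
}
```

&lt;p&gt;In the real service the cached entry also carries a TTL, so stale forecasts expire on their own.&lt;/p&gt;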

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjenm55sjb7po1q8hh59u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjenm55sjb7po1q8hh59u.png" alt="MongoDB and Redis read time"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This image shows MongoDB and Redis read time when retrieving forecasts from the cache. MongoDB shows 12ms for p99, 5ms for p90, and 2ms for p50. Redis shows 5ms for p99, 1ms for p90, and around 600µs for p50.&lt;/p&gt;

&lt;p&gt;Although Redis performs almost 2x as fast as MongoDB, would that difference justify the added complexity of running another database? Let's see if we can answer this question with a new experiment.&lt;/p&gt;

&lt;h1&gt;
  
  
  Ranking experiment
&lt;/h1&gt;

&lt;p&gt;In this experiment, we will use Redis to cache the rankings using one of Redis' data structures. In this case, we're going to use a sorted set. This kind of structure allows us to keep a set of elements ordered by a score. In our case, the score will be either the level or the points of a player.  &lt;/p&gt;
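&lt;p&gt;In Redis terms, the operations involved are &lt;code&gt;ZADD&lt;/code&gt; to insert a player with a score and &lt;code&gt;ZREVRANGE&lt;/code&gt; to read a page in descending score order. Here is a plain-JavaScript analog of what the sorted set gives us; it's a sketch of the behavior, not of Redis' actual implementation:&lt;/p&gt;

```javascript
// Plain-JavaScript analog of the Redis sorted-set operations used for rankings.
const ranking = new Map();

// ZADD: insert or update a member with a score.
function zadd(player, score) {
  ranking.set(player, score);
}

// ZREVRANGE: read a page of members ordered by score, highest first
// (start and stop are inclusive indexes, as in Redis).
function zrevrange(start, stop) {
  const entries = Array.from(ranking.entries());
  entries.sort(function (a, b) { return b[1] - a[1]; });
  return entries.slice(start, stop + 1).map(function (e) { return e[0]; });
}

zadd('alice', 1200);
zadd('bob', 900);
zadd('carol', 1500);
```

&lt;p&gt;Redis keeps the set permanently sorted, so reading a page is cheap; this analog re-sorts on every read, which is exactly the work the sorted set saves us.&lt;/p&gt;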

&lt;p&gt;What we did was add ranking pages to Redis as they are read from Mongo. Then, every time a ranking request is received, we calculate the ranking using both Mongo and Redis. Since we don't have all players in Redis, the page calculated using Mongo is always the one returned to the client; the same calculation is done using Redis only to compare the difference in time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh5845fq4gmxtsigt91z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh5845fq4gmxtsigt91z.png" alt="MongoDB and Redis calculating ranking"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This image shows MongoDB and Redis read time when calculating the ranking. MongoDB shows 350ms for p99, 40ms for p90, 15ms for p50, while Redis shows 9ms for p99, 2.5ms for p90, and 1.5ms for p50. That means almost 40x for p99, 16x for p90, and 10x for p50 for Redis compared to MongoDB.&lt;/p&gt;

&lt;h1&gt;
  
  
  Conclusions
&lt;/h1&gt;

&lt;p&gt;We've seen with these experiments that MongoDB can work pretty decently as a cache when the element is accessed by a key and MongoDB has an index on that key with a particular TTL. However, Redis' times are always better.&lt;/p&gt;

&lt;p&gt;For those use cases where Redis offers a purpose-built data structure, Redis has no rival, and we strongly recommend its use.&lt;/p&gt;

&lt;h1&gt;
  
  
  Next steps
&lt;/h1&gt;

&lt;p&gt;Should we go then and use Redis in production? That's still to be seen. Using Redis as a cache would imply that we have to store the data somewhere else to ensure we can recreate the cache if something goes wrong. We would also need to keep the cache and the underlying storage system in sync.&lt;/p&gt;

&lt;p&gt;What's clear is that Redis is a useful tool that we could end up needing. We're not sure if we need to start using it just right now, though.&lt;/p&gt;

&lt;p&gt;Cover image by &lt;a href="https://unsplash.com/@wacalke?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Mateusz Wacławek&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/fight?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText" rel="noopener noreferrer"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>mongodb</category>
      <category>redis</category>
      <category>database</category>
      <category>performance</category>
    </item>
    <item>
      <title>Time to say goodbye to Docker Swarm</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Fri, 17 Sep 2021 08:12:37 +0000</pubDate>
      <link>https://dev.to/playtomic/time-to-say-goodbye-to-docker-swarm-2iej</link>
      <guid>https://dev.to/playtomic/time-to-say-goodbye-to-docker-swarm-2iej</guid>
      <description>&lt;p&gt;We've been using Docker Swarm almost from the beginning of Playtomic's history. It has performed astonishingly well from day one. We haven't had a significant issue in four years. But, with tears in our eyes, it's time to say goodbye.&lt;/p&gt;

&lt;p&gt;Why are we moving away from Docker Swarm? Well, the future of Docker Swarm is not clear. Although Docker Swarm is part of the docker-ce distribution, Mirantis (owner of Docker Enterprise since November 2019) said that their main orchestrator would be Kubernetes.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The primary orchestrator going forward is Kubernetes. Mirantis is committed to providing an excellent experience to all Docker Enterprise platform customers and currently expects to support Swarm for at least two years, depending on customer input into the roadmap. Mirantis is also evaluating options for making the transition to Kubernetes easier for Swarm users.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It is true that in a later &lt;a href="https://www.mirantis.com/blog/mirantis-will-continue-to-support-and-develop-docker-swarm/"&gt;post&lt;/a&gt;, Mirantis said that they were going to support Docker Swarm in the future, but clearly, the orchestrator competition is led by Kubernetes. &lt;/p&gt;

&lt;p&gt;However, the main reason why we're moving away from Docker Swarm is that we need capabilities that are not easy to get with Docker Swarm at the moment. Playtomic is growing pretty fast, and we need tools that allow us to scale up and down automatically as required. &lt;/p&gt;

&lt;p&gt;In a containerized environment, scaling means automatically adjusting both the number of instances and the number of nodes in the cluster. That's something that is not easily achievable with Docker Swarm, unfortunately.&lt;/p&gt;

&lt;p&gt;Also, we've grown into different teams, and these teams need a way to manage their services in the cluster autonomously. On the other hand, giving every team full access to the cluster is risky. That's why we looked for a way of granting access to just the resources each team needs to manage. Again, Docker Swarm doesn't offer a straightforward solution to this problem.&lt;/p&gt;

&lt;p&gt;For all these reasons, we've been testing Kubernetes for a while, and although Kubernetes is definitely more complex than Docker Swarm, it is also more powerful. Once you understand how Kubernetes operates and how it is designed, it opens up many exciting new possibilities for us here at Playtomic.&lt;/p&gt;

&lt;p&gt;We're running our Kubernetes cluster on EKS. That gives us kind of the same feeling we had with Docker Swarm since setting up the system is pretty straightforward. We don't need to fully understand the internals of the cluster to operate it. &lt;/p&gt;

&lt;p&gt;Farewell Docker Swarm, you served us well. &lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://unsplash.com/@carrier_lost?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Ian Taylor&lt;/a&gt; on &lt;a href="https://unsplash.com/s/photos/docker?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText"&gt;Unsplash&lt;/a&gt;&lt;/p&gt;

</description>
      <category>infrastructure</category>
      <category>k8s</category>
      <category>kubernetes</category>
      <category>swarm</category>
    </item>
    <item>
      <title>We need to integrate a chat solution in our app. What do you recommend?</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Fri, 22 Feb 2019 15:27:54 +0000</pubDate>
      <link>https://dev.to/playtomic/we-need-to-integrate-a-chat-solution-in-our-app-what-do-you-recommend-5g06</link>
      <guid>https://dev.to/playtomic/we-need-to-integrate-a-chat-solution-in-our-app-what-do-you-recommend-5g06</guid>
      <description>&lt;p&gt;We are looking for a chat solution for our app. We are currently evaluating the following products:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://chatsdk.co/"&gt;Chat SDK&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.twilio.com/chat"&gt;Twilio Chat&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://sendbird.com/"&gt;Sendbird&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://pusher.com/chatkit"&gt;Chatkit&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Do you have any experience with these frameworks that you can share with us? Would you recommend any others? I'll be extending this post with our experience and findings.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you want to know how the story ended up, take a look at this post:&lt;/p&gt;
&lt;div class="ltag__link"&gt;
  &lt;a href="/playtomic" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rdjPolXg--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--Giw9HNuA--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/organization/profile_image/201/eee12a98-255d-45d4-a517-0bf58a08a192.png" alt="Playtomic" width="150" height="150"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--3pkhHgNF--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://res.cloudinary.com/practicaldev/image/fetch/s--iepSAOpX--/c_fill%2Cf_auto%2Cfl_progressive%2Ch_150%2Cq_auto%2Cw_150/https://dev-to-uploads.s3.amazonaws.com/uploads/user/profile_image/89672/9f31089c-3a98-456c-a897-d6eecbc8a424.jpeg" alt="" width="150" height="150"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="/playtomic/playtomic-s-chat-solution-with-firebase-realtime-db-e8p" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Playtomic's chat solution with Firebase Realtime DB&lt;/h2&gt;
      &lt;h3&gt;Angel G. Olloqui for Playtomic ・ Jun 24 '19 ・ 8 min read&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#firebase&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#chat&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#architecture&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;





&lt;p&gt;Cover image by &lt;a href="https://www.pexels.com/@padrinan?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Miguel Á. Padriñán&lt;/a&gt; from &lt;a href="https://www.pexels.com/photo/two-white-message-balloons-1111368/?utm_content=attributionCopyText&amp;amp;utm_medium=referral&amp;amp;utm_source=pexels"&gt;Pexels&lt;/a&gt;&lt;/p&gt;

</description>
      <category>chat</category>
      <category>help</category>
      <category>discuss</category>
    </item>
    <item>
      <title>How we built our stack with Docker Swarm</title>
      <dc:creator>Rubén Martín Pozo</dc:creator>
      <pubDate>Mon, 07 May 2018 15:37:17 +0000</pubDate>
      <link>https://dev.to/playtomic/how-we-built-our-stack-with-docker-swarm-3md5</link>
      <guid>https://dev.to/playtomic/how-we-built-our-stack-with-docker-swarm-3md5</guid>
      <description>&lt;p&gt;When we started to think about the kind of architecture we wanted to build for Playtomic we knew what we didn't want to end up with: a huge monolith. Whether we call them micro services or just plain old boring services we wanted a decentralized system that allows us to survive if one or some of the pieces goes down. &lt;/p&gt;

&lt;p&gt;We also wanted to be able to independently build and test different parts of the system, so different teams could focus on solving problems that matter to us without having to worry too much about every small detail of the whole system. Being able to compose features by creating and reusing services is key to growing at the pace we currently need. &lt;/p&gt;

&lt;p&gt;So, we started to think about the different components we would need in order to put together such a system. Let's see some of them.&lt;/p&gt;

&lt;h3&gt;
  
  
  How are we going to build services?
&lt;/h3&gt;

&lt;p&gt;We decided to build our services on &lt;a href="https://projects.spring.io/spring-boot/" rel="noopener noreferrer"&gt;Spring Boot&lt;/a&gt;, using both Java and Kotlin. Most of the people on the team had previous experience with it from building monolithic applications, and we thought it would be a good starting point. Spring Boot has proven itself to be a good tool for building services because of the hundreds of frameworks and tools you can use out of the box (security, monitoring, logging, testing, ...). The main problems we've found are startup time and memory footprint, and we're currently trying different things to reduce both. &lt;/p&gt;

&lt;h3&gt;
  
  
  Where are we going to run our stuff?
&lt;/h3&gt;

&lt;p&gt;We need something that gives us the ability to deploy services independently, that guarantees us that everything will be continuously up and running without too much pain.&lt;/p&gt;

&lt;p&gt;We also don't want to depend too much on manual provisioning every time we come up with a new service (or at least we want to minimize it as much as possible). &lt;a href="https://www.docker.com/" rel="noopener noreferrer"&gt;Docker&lt;/a&gt; and &lt;a href="https://docs.docker.com/get-started/part5/" rel="noopener noreferrer"&gt;Docker Stacks&lt;/a&gt; seemed like a great solution for us. &lt;/p&gt;

&lt;p&gt;We looked at &lt;a href="https://docs.docker.com/engine/swarm/" rel="noopener noreferrer"&gt;Docker Swarm&lt;/a&gt; and found it simple to set up, easy to understand and able to scale pretty well. I personally had some experience with Docker Swarm in the past, and although I ran into serious problems with stability and quality of service, those problems were mostly due to the system's lack of maturity at the time, so we decided to give it another try.&lt;/p&gt;

&lt;p&gt;We set up a Docker Swarm of 3 manager nodes and 4 worker nodes deployed on our own servers. It's been working smoothly so far without any problems, so we're pretty happy with our decision. &lt;/p&gt;
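
&lt;p&gt;For anyone curious, a cluster like ours can be bootstrapped with just a few commands. This is only a sketch with placeholder addresses and stack names, not our exact provisioning scripts:&lt;/p&gt;

```shell
# On the first manager node: initialize the swarm
docker swarm init --advertise-addr 10.0.0.1

# Retrieve the join commands for the remaining nodes
docker swarm join-token manager   # for the other 2 managers
docker swarm join-token worker    # for the 4 workers

# On each remaining node, run the printed join command, e.g.:
docker swarm join --token $TOKEN 10.0.0.1:2377

# Back on a manager: verify all 7 nodes are up
docker node ls

# Deploy a stack (a docker-compose v3 file) onto the swarm
docker stack deploy -c docker-compose.yml mystack
```

&lt;p&gt;That last command is what makes deploying a new service so painless: adding a service to the stack file and re-running the deploy is usually all it takes.&lt;/p&gt;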

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufv126sw0ss2juqlguc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufv126sw0ss2juqlguc1.png" alt="Docker Swarm" width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For us, &lt;a href="https://kubernetes.io/" rel="noopener noreferrer"&gt;Kubernetes&lt;/a&gt; seems too complicated for what we need. We're glad that Docker now supports Kubernetes natively though, so we could take advantage of it in the future if we need to. &lt;/p&gt;

&lt;p&gt;We're also using &lt;a href="https://proxy.dockerflow.com/" rel="noopener noreferrer"&gt;Docker Flow Proxy&lt;/a&gt; to route traffic from the different applications and APIs to the corresponding service in the Swarm.&lt;/p&gt;
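
&lt;p&gt;Routing with Docker Flow Proxy is driven by service labels: the proxy reconfigures itself when a labeled service is deployed. A minimal sketch of how a service might announce itself in a stack file (the service name, path and port here are made up for illustration):&lt;/p&gt;

```yaml
version: "3"
services:
  bookings-api:              # hypothetical service
    image: playtomic/bookings-api:latest
    networks:
      - proxy                # overlay network shared with docker-flow-proxy
    deploy:
      replicas: 2
      labels:
        com.df.notify: "true"        # tell the swarm listener to reconfigure the proxy
        com.df.servicePath: /bookings  # requests under this path go to this service
        com.df.port: "8080"            # container port the proxy forwards to

networks:
  proxy:
    external: true
```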

&lt;h3&gt;
  
  
  How can we be sure that everything is working properly?
&lt;/h3&gt;

&lt;p&gt;In all systems, but especially in a distributed one, it's very important to be able to see the whole picture to understand how the system is behaving. We've built a monitoring system based on &lt;a href="https://prometheus.io/" rel="noopener noreferrer"&gt;Prometheus&lt;/a&gt; to gather real-time metrics and &lt;a href="https://grafana.com/" rel="noopener noreferrer"&gt;Grafana&lt;/a&gt; to visualize services and how they are working together.&lt;/p&gt;
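
&lt;p&gt;The Prometheus side of this is mostly configuration: Prometheus pulls metrics from each service on a schedule. A minimal scrape config sketch (the job name, endpoint and target are illustrative, not our actual setup):&lt;/p&gt;

```yaml
# prometheus.yml (sketch)
global:
  scrape_interval: 15s          # how often to pull metrics from each target

scrape_configs:
  - job_name: "bookings-api"    # hypothetical service
    metrics_path: /prometheus   # endpoint where the app exposes its metrics
    static_configs:
      - targets: ["bookings-api:8080"]
```

&lt;p&gt;Grafana then just needs Prometheus added as a data source to build dashboards on top of those metrics.&lt;/p&gt;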

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k3wmp06gd139irb6egb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2k3wmp06gd139irb6egb.png" alt="Monitoring system" width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Logging is also very important for finding out quickly where a problem is. We started with our own &lt;a href="https://www.elastic.co/elk-stack" rel="noopener noreferrer"&gt;ELK stack&lt;/a&gt; but recently decided to move to &lt;a href="https://logz.io/" rel="noopener noreferrer"&gt;logz.io&lt;/a&gt;. Setting up our own Elasticsearch cluster and keeping it running in production was too much for us. We were spending too much time doing things we felt we shouldn't be doing (not our focus), so we moved that piece out.&lt;/p&gt;

&lt;p&gt;Logz.io also has a great insights feature. Logs are analysed by their AI, which compares them to thousands of articles on different sites to give you valuable insights into your system. Using this feature, we've been able to find bugs in production that would otherwise very likely have slipped under our radar. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzk2mimbpdvuq8fik46o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzk2mimbpdvuq8fik46o.png" alt="Logz.io insight feature" width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We know we still have a lot of things to explore and improve, but we're pretty happy with our current setup and wanted to share it with you. What kind of infrastructure are you using? What ideas do you have to improve ours? Let us know in the discussion below. &lt;/p&gt;

</description>
      <category>docker</category>
      <category>monitoring</category>
      <category>infrastructure</category>
      <category>ourstack</category>
    </item>
  </channel>
</rss>
