<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Sergio Rodriguez Freire</title>
    <description>The latest articles on DEV Community by Sergio Rodriguez Freire (@sergiorf).</description>
    <link>https://dev.to/sergiorf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2994207%2Fa9e01bd4-eaa0-4bf0-a2bf-a45a9ec150ef.jpg</url>
      <title>DEV Community: Sergio Rodriguez Freire</title>
      <link>https://dev.to/sergiorf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/sergiorf"/>
    <language>en</language>
    <item>
      <title>Designing for Flexibility: Reducing Cloud Lock-In in Legacy Systems</title>
      <dc:creator>Sergio Rodriguez Freire</dc:creator>
      <pubDate>Sun, 30 Mar 2025 17:59:18 +0000</pubDate>
      <link>https://dev.to/sergiorf/designing-for-flexibility-reducing-cloud-lock-in-in-legacy-systems-3obj</link>
      <guid>https://dev.to/sergiorf/designing-for-flexibility-reducing-cloud-lock-in-in-legacy-systems-3obj</guid>
      <description>&lt;p&gt;As businesses increasingly rely on cloud services, one critical challenge emerges: avoiding vendor lock-in. The ability to switch cloud providers without a massive multi-year effort is crucial for maintaining flexibility and controlling costs. In this article, we explore how to address this challenge, particularly for large, legacy monolithic systems—those with hundreds or thousands of files and hundreds of thousands of lines of code.&lt;/p&gt;

&lt;h2&gt;Typical Lock-In Issues&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dependency on Provider Services&lt;/strong&gt;: Systems often become tightly coupled to a provider's functionality. Even when equivalent services exist on other providers, subtle differences in APIs or features can make migration complex and time-consuming.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep Integration with Third-Party APIs&lt;/strong&gt;: Without careful design, third-party classes can become deeply enmeshed in your system, making API version upgrades or deprecations a significant challenge.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Version Conflict Hell&lt;/strong&gt;: Multiple versions of the same vendor library in a monolithic system can lead to the infamous "version conflict hell," a problem most engineers encounter at some point.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;Techniques to Reduce Lock-In&lt;/h2&gt;

&lt;p&gt;Fortunately, there are strategies to mitigate these challenges and ensure your systems remain flexible and adaptable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Encapsulation&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Encapsulate what varies (Gang of Four 1995)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Create an abstraction layer that hides the implementation details of cloud services from the rest of your system. This ensures that only specific classes interact with the cloud provider's SDK, while the rest of your project remains cloud-agnostic.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The cloud-dependent code (highlighted in red below) implements the abstraction layer's interface and directly interacts with the cloud provider's SDK.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The rest of the system interacts only with the abstraction layer (cloud-agnostic code), ensuring flexibility and reducing the risk of vendor lock-in.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
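
&lt;p&gt;The abstraction layer above can be sketched as follows. This is a minimal illustration, not code from a real system: the &lt;code&gt;BlobStorage&lt;/code&gt; interface and the class names are hypothetical, and the provider client (e.g. a boto3 S3 client) is assumed to be created elsewhere and passed in.&lt;/p&gt;

```python
from abc import ABC, abstractmethod


class BlobStorage(ABC):
    """Cloud-agnostic abstraction layer: the rest of the system sees only this."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class S3BlobStorage(BlobStorage):
    """Cloud-dependent code (the 'red' boxes): the only place that touches the SDK."""

    def __init__(self, s3_client, bucket: str):
        self._s3 = s3_client  # e.g. boto3.client("s3"); injected, not constructed here
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()


class InMemoryBlobStorage(BlobStorage):
    """Drop-in substitute for tests, or a template for another provider's adapter."""

    def __init__(self):
        self._store = {}

    def put(self, key: str, data: bytes) -> None:
        self._store[key] = data

    def get(self, key: str) -> bytes:
        return self._store[key]
```

&lt;p&gt;Migrating providers then means writing one new adapter class, not touching every call site.&lt;/p&gt;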

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuawrtwyqlm7kppkx5j0l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuawrtwyqlm7kppkx5j0l.png" alt="System using Encapsulation: Cloud-dependent code in red, cloud-agnostic code in blue" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dependency Injection with Interface Wrappers&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Program to an interface, not an implementation (Gang of Four 1995)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;This approach involves defining provider-agnostic interfaces within your project and injecting specific implementations from outside your project. By ensuring that your classes depend only on interfaces, the actual implementations can be provided at runtime, typically through dependency injection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;This solution enforces a strict separation between cloud-agnostic interfaces and cloud-specific implementations, making your system more flexible, maintainable, and testable. It also allows you to swap one implementation of a cloud service with another at runtime, reducing vendor lock-in and providing adaptability.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
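
&lt;p&gt;Constructor injection can be sketched like this. The names (&lt;code&gt;Storage&lt;/code&gt;, &lt;code&gt;ReportService&lt;/code&gt;) are illustrative assumptions; the point is that business logic depends only on a provider-agnostic interface, and the concrete implementation is chosen in one place.&lt;/p&gt;

```python
from typing import Protocol


class Storage(Protocol):
    """Provider-agnostic interface: business code depends only on this shape."""

    def put(self, key: str, data: bytes) -> None: ...


class FakeStorage:
    """In-memory implementation, handy for unit tests."""

    def __init__(self):
        self.items = {}

    def put(self, key: str, data: bytes) -> None:
        self.items[key] = data


class ReportService:
    """Business logic: never constructs its own cloud client."""

    def __init__(self, storage: Storage):
        self._storage = storage

    def save_report(self, name: str, body: str) -> None:
        self._storage.put(f"reports/{name}", body.encode("utf-8"))


# Composition root: the single place where a concrete implementation is wired in.
# In production this would be, say, ReportService(S3Storage(...)).
service = ReportService(FakeStorage())
```

&lt;p&gt;Because &lt;code&gt;ReportService&lt;/code&gt; only sees the interface, swapping one cloud implementation for another is a one-line change at the composition root.&lt;/p&gt;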

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4mhznfh3u4kde5yw3jf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4mhznfh3u4kde5yw3jf.png" alt="Dependency Injection: Cloud-dependent code in red, cloud-agnostic code in blue" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Encapsulation vs. Dependency Injection Tradeoff Table&lt;/h2&gt;

&lt;p&gt;As with any architectural decision, there is no perfect solution. However, understanding the trade-offs can help you choose the right approach for your system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3f55yhtqtin0uuc9uta.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3f55yhtqtin0uuc9uta.png" alt="Encapsulation vs. Dependency Injection Tradeoff Table" width="750" height="276"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Reducing cloud provider lock-in is essential for maintaining flexibility and avoiding future migration challenges. By implementing techniques like encapsulation and dependency injection, you can ensure that your legacy monolithic systems remain adaptable and resilient in an ever-evolving cloud landscape.&lt;/p&gt;

&lt;p&gt;In my opinion, Dependency Injection is the best solution in most cases. It keeps your code cloud-agnostic and testable, and it enforces a clear separation between general third-party services and your custom business logic. An often-overlooked advantage of this approach is reusability: when service interfaces are truly generic, they can be reused across different projects, saving time and effort.&lt;/p&gt;

&lt;p&gt;What about you? Which solution do you prefer? Or perhaps you’ve found other techniques that work better? I’d love to hear your thoughts—don’t hesitate to share your experiences!&lt;/p&gt;




&lt;p&gt;This article is also available on &lt;a href="https://www.linkedin.com/pulse/designing-flexibility-reducing-cloud-lock-in-legacy-sergio-rodriguez-pfdpc" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>legacy</category>
      <category>architecture</category>
      <category>designpatterns</category>
    </item>
    <item>
      <title>Using a Cache to Optimize Query Processing</title>
      <dc:creator>Sergio Rodriguez Freire</dc:creator>
      <pubDate>Sun, 30 Mar 2025 17:42:50 +0000</pubDate>
      <link>https://dev.to/sergiorf/using-a-cache-to-optimize-query-processing-4ei</link>
      <guid>https://dev.to/sergiorf/using-a-cache-to-optimize-query-processing-4ei</guid>
      <description>&lt;p&gt;Consider a setup where the data backend consists of a farm of Query Processors. Each Query Processor serves complex queries from a Web Server, using raw data from an underlying database. These requests typically involve compute-heavy, multi-second operations. The challenge is to leverage a cache mechanism to reduce request latency with minimal changes to the original design.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1i0x7d180h16e4dl86l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1i0x7d180h16e4dl86l.png" alt="Query Processor farm serving queries from a Web Server using raw data from a database" width="359" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The main bottleneck is the Query Processor. The quick way to reduce latency and improve the user experience would typically be to add more hardware to each Query Processor (scaling vertically), since horizontal scaling is not viable without significant refactoring of the software architecture.&lt;/p&gt;




&lt;h2&gt;Introducing a Cache and Design Trade-offs&lt;/h2&gt;

&lt;p&gt;Assuming query results can tolerate some staleness (e.g., a few hours), we can introduce a distributed Key-Value cache, such as Redis, between the Query Processor and the raw data database. In this architecture, we build a unique key for each query and store the key and the query result (value) in the cache.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsy6v60zb1zuiswwoqeg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsy6v60zb1zuiswwoqeg.png" alt="Distributed key-value cache between the Query Processors and the raw-data database" width="762" height="507"&gt;&lt;/a&gt;&lt;/p&gt;
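
&lt;p&gt;The lookup flow is the classic cache-aside pattern: derive a key from the query, check the cache, and only fall through to the expensive computation on a miss. A minimal sketch (the class names are illustrative, and &lt;code&gt;DictCache&lt;/code&gt; stands in for a client like &lt;code&gt;redis.Redis&lt;/code&gt;, which exposes the same &lt;code&gt;get&lt;/code&gt;/&lt;code&gt;set&lt;/code&gt; calls):&lt;/p&gt;

```python
import hashlib
import json


class DictCache:
    """In-memory stand-in for Redis with the two calls this sketch needs."""

    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key)

    def set(self, key, value):
        self._d[key] = value


class QueryProcessor:
    """Cache-aside: check the key-value store before doing the expensive work."""

    def __init__(self, cache, compute_fn):
        self._cache = cache        # e.g. a redis.Redis client
        self._compute = compute_fn  # the multi-second query against raw data

    @staticmethod
    def _key(query: dict) -> str:
        # Canonical serialization so logically equal queries map to the same key.
        blob = json.dumps(query, sort_keys=True).encode("utf-8")
        return "qp:" + hashlib.sha256(blob).hexdigest()

    def run(self, query: dict):
        key = self._key(query)
        hit = self._cache.get(key)
        if hit is not None:
            return json.loads(hit)        # cache hit: skip the heavy computation
        result = self._compute(query)      # cache miss: do the multi-second work
        self._cache.set(key, json.dumps(result))
        return result
```

&lt;p&gt;Note that results must be serializable and that two textually different but equivalent queries only share a cache entry if the key derivation canonicalizes them.&lt;/p&gt;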

&lt;p&gt;As always in software engineering, the devil is in the details. Here are some key design trade-offs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Staleness vs. Latency:&lt;/strong&gt; Keep queries in the cache long enough to achieve a high hit ratio and reduce latency, but not so long that the results become too stale.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Insertion Cost&lt;/strong&gt;: Inserting a key in the cache can be costly, especially if the cache grows too fast and eviction kicks in. A common technique is to insert new keys with a TTL (Time-To-Live), so that keys are reclaimed as soon as the TTL expires. Choosing the right TTL is a delicate balance between avoiding eviction costs and maintaining a good hit ratio.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Choosing the Query Key&lt;/strong&gt;: Selecting the query key is crucial. A pure hash spreads values evenly across cache shards, avoiding hot spots. However, adding a discriminating component to the key (for example, the product version) makes operational tasks such as purging all keys for a given release much easier. The risk is skewing the load toward one shard, but the benefits can outweigh it.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
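
&lt;p&gt;The last two trade-offs can be illustrated together: a hashed key with a version prefix, and TTL-based expiry in the style of Redis &lt;code&gt;SETEX&lt;/code&gt;. This is a sketch under assumptions: the key layout (&lt;code&gt;qp:&amp;lt;version&amp;gt;:&amp;lt;digest&amp;gt;&lt;/code&gt;) and the &lt;code&gt;TTLCache&lt;/code&gt; class are hypothetical, mimicking only the semantics discussed above.&lt;/p&gt;

```python
import hashlib
import json
import time


def make_key(query: dict, product_version: str) -> str:
    # The hashed body spreads keys evenly across shards; the version prefix
    # lets operators purge every key belonging to an old release.
    digest = hashlib.sha256(json.dumps(query, sort_keys=True).encode()).hexdigest()
    return f"qp:{product_version}:{digest}"


class TTLCache:
    """Minimal in-memory cache mimicking Redis SETEX/GET expiry semantics."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}

    def setex(self, key, ttl_seconds, value):
        # Store the value together with its absolute expiry time.
        self._store[key] = (value, self._clock() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self._clock() >= expires_at:  # TTL expired: the key is reclaimed
            del self._store[key]
            return None
        return value
```

&lt;p&gt;Tuning &lt;code&gt;ttl_seconds&lt;/code&gt; is exactly the staleness-vs-latency balance described above: a longer TTL raises the hit ratio but serves older results.&lt;/p&gt;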

&lt;p&gt;I encourage you to explore these concepts further and consider how they might apply to your own systems. Implementing a well-designed cache can be a game-changer for performance optimization without requiring major architectural changes.&lt;/p&gt;




&lt;p&gt;This article is also available on &lt;a href="https://www.linkedin.com/pulse/using-cache-optimize-query-processing-sergio-rodriguez-ivsve" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;, where you can join the discussion.&lt;/p&gt;

</description>
      <category>performance</category>
      <category>redis</category>
    </item>
  </channel>
</rss>
